1
Fan X, Tang E, Zhang M, Lin Y, Ding H, Zhang Y. Decline of Affective Prosody Recognition With a Positivity Bias Among Older Adults: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2024; 67:3862-3879. [PMID: 39324838] [DOI: 10.1044/2024_jslhr-23-00775]
Abstract
PURPOSE Understanding how older adults perceive and interpret emotional cues in speech prosody contributes to our knowledge of cognitive aging. This study provides a systematic review with meta-analysis to investigate the extent of the decline in affective prosody recognition (APR) among older adults in terms of overall and emotion-specific performance and explore potential moderators that may cause between-studies heterogeneity. METHOD The literature search encompassed five electronic databases, with a specific emphasis on studies comparing the APR performance of older adults with that of younger adults. This comparison was focused on basic emotions. Meta-regression analyses were executed to pinpoint potential moderators related to demographic and methodological characteristics. RESULTS A total of 19 studies were included in the meta-analysis, involving 560 older adults with a mean age of 69.15 years and 751 younger adults with a mean age of 23.02 years. The findings indicated a substantial negative effect size (g = -1.21). Furthermore, the magnitude of aggregated effect sizes showed a distinct valence-related recognition pattern with positive prosody exhibiting smaller effect sizes. Language background and years of education were found to moderate the overall and emotion-specific (i.e., disgust and surprise) performance effect estimate, and age and gender significantly influenced the effect estimate of happiness. CONCLUSIONS The results confirmed a significant decline in APR ability among older adults compared to younger adults, but this decline was unbalanced across basic emotions. Language background and educational level emerged as significant factors influencing older adults' APR ability. Moreover, participants with a higher mean age exhibited notably poorer performance in recognizing happy prosody. These findings underscore the need to further investigate the neurobiological mechanisms for APR decline associated with aging. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.26407888.
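The pooled estimate reported above (g = -1.21) is a bias-corrected standardized mean difference aggregated over studies. As an illustration only, and not the review's actual analysis code, the Python sketch below computes Hedges' g per study from group means, SDs, and sample sizes and pools the estimates with a DerSimonian-Laird random-effects model; all study values are invented placeholders.

```python
# Hedged sketch of random-effects pooling behind a summary effect such as Hedges' g = -1.21.
# The per-study numbers below are invented placeholders, not the review's data.
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Bias-corrected standardized mean difference (older minus younger group)."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)                   # small-sample correction
    g = j * d
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))    # approximate variance of g
    return g, v

# toy per-study summaries: (mean, SD, n) for older adults, then younger adults
studies = [
    (62.0, 12.0, 30, 75.0, 10.0, 40),
    (58.0, 15.0, 25, 74.0, 11.0, 35),
    (65.0, 13.0, 32, 78.0,  9.0, 42),
]
g = np.array([hedges_g(*s)[0] for s in studies])
v = np.array([hedges_g(*s)[1] for s in studies])

# DerSimonian-Laird random-effects pooling
w = 1 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w))**2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)                   # between-study variance
w_star = 1 / (v + tau2)
pooled = np.sum(w_star * g) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled g = {pooled:.2f}, 95% CI [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")
```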
Affiliation(s)
- Xinran Fan
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Enze Tang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Minyue Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis
2
Yıldırım C, Düzenli-Öztürk S, Parlak MM. Assessing the perception of emotional prosody in healthy ageing. Int J Lang Commun Disord 2024. [PMID: 39137279] [DOI: 10.1111/1460-6984.13097]
Abstract
BACKGROUND Emotional prosody is the reflection of emotion types such as happiness, sadness, fear and anger in the speaker's tone of voice. Accurately perceiving, interpreting and expressing emotional prosody is an inseparable part of successful communication and social interaction. There are few studies on emotional prosody, which is crucial for communication, and their results are inconsistent regarding age and gender. AIMS The primary aim of this study is to assess the perception of emotional prosody in healthy ageing. The other aim is to examine the effects of variables such as age, gender, language and neurocognitive capacity on the prediction of emotional prosody recognition skills. METHODS AND PROCEDURES Sixty-nine participants aged 18-75 years were included in the study. Participants were grouped as the young group aged 18-35 (n = 26), the middle-aged group aged 36-55 (n = 24) and the elderly group aged 56-75 (n = 19). A perceptual emotional prosody test, a motor response time test, and neuropsychological test batteries were administered to the participants. Participants were asked to recognise the emotion in sentences played on a computer. Natural (neutral, containing neither positive nor negative emotion), happy, angry, surprised and panic emotions were evaluated with sentences composed of pseudoword stimuli. RESULTS AND OUTCOMES The elderly group performed worse in recognising angry, panic, natural and happy emotions and in total recognition (i.e., correct recognition performance across all emotions). There was no age-related difference in recognition of the emotion of surprise. Women were more successful than men in recognising angry, panic and happy emotions and in total recognition. Age and Motor Reaction Time Test scores were found to be significant predictors in the emotional response time regression model. Age, language, attention and gender variables were found to have a significant effect on the regression model created for the success of total recognition of emotions (p < 0.05). CONCLUSIONS AND IMPLICATIONS This was a novel study in which emotional prosody was assessed in the elderly by eliminating lexical-semantic cues related to emotional prosody and associating emotional prosody results with neuropsychiatric tests. All our findings revealed the importance of age for the perception of emotional prosody. In addition, the effects of cognitive functions such as attention, which decline with age, were found to be important. Therefore, it should not be forgotten that many factors contribute to the success of recognising emotional prosody correctly. In this context, clinicians should consider variables such as cognitive health and education when assessing the perception of emotional prosody in elderly individuals. WHAT THIS PAPER ADDS What is already known on the subject Most studies compare young and old groups and evaluate the perception of emotional prosody using sentences formed according to the speech sounds, syllables, words and grammar rules of the language. It has been reported that the perception of emotional prosody is lower, mostly in the elderly group, but the findings are inconsistent in terms of age and gender.
What this paper adds to existing knowledge Perceptual prosody recognition was evaluated with an experimental design in which sentence structures consisting of lexemes were used as stimuli and neurocognitive tests were included, taking into account the phonological and syntactic rules of the language. This was a novel study in assessing emotional prosody across different age groups and determining the factors affecting multidimensional emotional prosody, including neuropsychiatric features. What are the clinical implications of this work? All our findings revealed the importance of age for the perception of emotional prosody. In addition, the effects of cognitive functions such as attention, alongside age, were found to be important.
Affiliation(s)
- Cansu Yıldırım
- Department of Speech and Language Therapy, Faculty of Health Sciences, İzmir Bakırçay University, Izmir, Turkey
- Seren Düzenli-Öztürk
- Department of Speech and Language Therapy, Faculty of Health Sciences, İzmir Bakırçay University, Izmir, Turkey
- Mümüne Merve Parlak
- Department of Speech and Language Therapy, Faculty of Health Sciences, Ankara Yıldırım Beyazıt University, Ankara, Turkey
3
Yue L, Hu P, Zhu J. Advanced differential evolution for gender-aware English speech emotion recognition. Sci Rep 2024; 14:17696. [PMID: 39085418] [PMCID: PMC11291894] [DOI: 10.1038/s41598-024-68864-z]
Abstract
Speech emotion recognition (SER) technology involves feature extraction and prediction models. However, recognition efficiency tends to decrease because of gender differences and the large number of extracted features. Consequently, this paper introduces an SER system based on gender. First, gender and emotion features are extracted from speech signals to develop gender recognition and emotion classification models. Second, according to gender differences, distinct emotion recognition models are established for male and female speakers. The gender of speakers is determined before executing the corresponding emotion model. Third, the accuracy of these emotion models is enhanced by utilizing an advanced differential evolution algorithm (ADE) to select optimal features. ADE incorporates new difference vectors, mutation operators, and position learning, which effectively balance global and local searches. A new position repairing method is proposed to address gender differences. Finally, experiments on four English datasets demonstrate that ADE is superior to comparison algorithms in recognition accuracy, recall, precision, F1-score, the number of selected features, and execution time. The findings highlight the significance of gender in refining emotion models, while mel-frequency cepstral coefficients are important factors in gender differences.
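A minimal sketch of the gender-routed scheme described above: a gender model is applied first, and a gender-specific emotion model is then selected. The features are random placeholders standing in for MFCC-based descriptors, and plain mutual-information feature selection stands in for the paper's ADE search; none of this reproduces the authors' implementation.

```python
# Gender-branched SER routing sketch (assumptions: placeholder features, SelectKBest
# as a stand-in for the advanced differential evolution feature selection).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, d = 400, 120                               # utterances x acoustic features (placeholder)
X = rng.normal(size=(n, d))
gender = rng.integers(0, 2, size=n)           # 0 = male, 1 = female
emotion = rng.integers(0, 4, size=n)          # e.g., angry / happy / sad / neutral

# stage 1: gender recognition model
gender_clf = SVC().fit(X, gender)

# stage 2: one emotion model per gender, each with its own feature subset
emotion_clf = {}
for g in (0, 1):
    idx = gender == g
    emotion_clf[g] = make_pipeline(
        SelectKBest(mutual_info_classif, k=30),   # stand-in for ADE feature selection
        SVC(),
    ).fit(X[idx], emotion[idx])

# inference: predict gender, then route to the matching emotion model
def predict_emotion(x):
    g = int(gender_clf.predict(x.reshape(1, -1))[0])
    return int(emotion_clf[g].predict(x.reshape(1, -1))[0])

print(predict_emotion(X[0]))
```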
Affiliation(s)
- Liya Yue
- Fanli Business School, Nanyang Institute of Technology, Nanyang, 473004, China
- Pei Hu
- School of Computer and Software, Nanyang Institute of Technology, Nanyang, 473004, China
- Jiulong Zhu
- Fanli Business School, Nanyang Institute of Technology, Nanyang, 473004, China
4
Trinite B, Zdanovica A, Kurme D, Lavrane E, Magazeina I, Jansone A. The role of the age and gender, and the complexity of the syntactic unit in the perception of affective emotions in voice. Codas 2024; 36:e20240009. [PMID: 39046026] [PMCID: PMC11340876] [DOI: 10.1590/2317-1782/20242024009en]
Abstract
PURPOSE The study aimed to identify (1) whether the age and gender of listeners and the length of vocal stimuli affect emotion discrimination accuracy in voice; and (2) whether the determined level of expression of perceived affective emotions is age and gender-dependent. METHODS Thirty-two age-matched listeners listened to 270 semantically neutral voice samples produced in neutral, happy, and angry intonation by ten professional actors. The participants were required to categorize the auditory stimulus based on three options and judge the intensity of emotional expression in the sample using a customized tablet web interface. RESULTS The discrimination accuracy of happy and angry emotions decreased with age, while accuracy in discriminating neutral emotions increased with age. Females rated the intensity level of perceived affective emotions higher than males across all linguistic units. These were: for angry emotions in words (z = -3.599, p < .001), phrases (z = -3.218, p = .001), and texts (z = -2.272, p = .023), for happy emotions in words (z = -5.799, p < .001), phrases (z = -4.706, p < .001), and texts (z = -2.699, p = .007). CONCLUSION Accuracy in perceiving vocal expressions of emotions varies according to age and gender. Young adults are better at distinguishing happy and angry emotions than middle-aged adults, while middle-aged adults tend to categorize perceived affective emotions as neutral. Gender also plays a role, with females rating expressions of affective emotions in voices higher than males. Additionally, the length of voice stimuli impacts emotion discrimination accuracy.
Affiliation(s)
- Baiba Trinite
- Voice and Speech Research Laboratory, Riga Technical University Liepaja Academy – RTU LA - Liepaja, Latvia
- Anita Zdanovica
- Voice and Speech Research Laboratory, Riga Technical University Liepaja Academy – RTU LA - Liepaja, Latvia
- Daiga Kurme
- Voice and Speech Research Laboratory, Riga Technical University Liepaja Academy – RTU LA - Liepaja, Latvia
- Evija Lavrane
- Voice and Speech Research Laboratory, Riga Technical University Liepaja Academy – RTU LA - Liepaja, Latvia
- Ilva Magazeina
- Voice and Speech Research Laboratory, Riga Technical University Liepaja Academy – RTU LA - Liepaja, Latvia
- Anita Jansone
- Voice and Speech Research Laboratory, Riga Technical University Liepaja Academy – RTU LA - Liepaja, Latvia
5
Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024:1-14. [PMID: 38785380] [DOI: 10.1080/02699931.2024.2357656]
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or its lack thereof) in processing emotional speech. To test the effect of vision and early visual experience on processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification and selective-attention of semantic and prosodic spoken-emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working-memory). Namely, the advantage was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for processing of emotional speech, but early visual experience could improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of deficiencies/enhancements of blindness.
Affiliation(s)
- Boaz M Ben-David
- Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
6
Sinvani RT, Fogel-Grinvald H, Sapir S. Self-Rated Confidence in Vocal Emotion Recognition Ability: The Role of Gender. J Speech Lang Hear Res 2024; 67:1413-1423. [PMID: 38625128] [DOI: 10.1044/2024_jslhr-23-00373]
Abstract
PURPOSE We studied the role of gender in metacognition of voice emotion recognition ability (ERA), reflected by self-rated confidence (SRC). To this end, we took two approaches: first, examining the role of gender in voice ERA and SRC independently, and second, looking for gender effects on the association between ERA and SRC. METHOD We asked 100 participants (50 men, 50 women) to interpret a set of vocal expressions portrayed by 30 actors (16 men, 14 women) as defined by their emotional meaning. Targets were 180 repetitive lexical sentences articulated in congruent emotional voices (anger, sadness, surprise, happiness, fear) and neutral expressions. Trial by trial, participants provided retrospective SRC ratings based on their emotion recognition performance. RESULTS A binomial generalized linear mixed model (GLMM) estimating ERA accuracy revealed a significant gender effect, with women encoders (speakers) yielding higher accuracy levels than men. There was no significant effect of the decoder's (listener's) gender. A second GLMM estimating SRC found a significant effect of encoder and decoder genders, with women outperforming men. Gamma correlations were significantly greater than zero for women and men decoders. CONCLUSIONS In spite of varying interpretations of gender in each independent rating (ERA and SRC), our results suggest that both men and women decoders were accurate in their metacognition regarding voice emotion recognition. Further research is needed to study how individuals of both genders use metacognitive knowledge in their emotional recognition and whether and how such knowledge contributes to effective social communication.
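The gamma correlations mentioned above (Goodman-Kruskal gamma between trial-level confidence and accuracy) are a standard metacognition index. The sketch below is only a worked illustration of that statistic; the trial data are invented and the study's actual analysis may differ.

```python
# Illustrative Goodman-Kruskal gamma between trial-level confidence ratings and
# recognition accuracy (1 = correct, 0 = incorrect); the data are invented placeholders.
from itertools import combinations

def goodman_kruskal_gamma(x, y):
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1        # pairs tied on either variable are ignored
    return (concordant - discordant) / (concordant + discordant)

confidence = [5, 4, 2, 3, 5, 1, 4, 2]   # self-rated confidence per trial
correct    = [1, 1, 0, 1, 1, 0, 1, 1]   # recognition accuracy per trial
print(goodman_kruskal_gamma(confidence, correct))  # > 0 means confidence tracks accuracy
```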
Affiliation(s)
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Israel
7
Zhang L, Liang H, Bjureberg J, Xiong F, Cai Z. The Association Between Emotion Recognition and Internalizing Problems in Children and Adolescents: A Three-Level Meta-Analysis. J Youth Adolesc 2024; 53:1-20. [PMID: 37991601] [DOI: 10.1007/s10964-023-01891-7]
Abstract
Numerous studies have explored the link between how well youth recognize emotions and their internalizing problems, but a consensus remains elusive. This study used a three-level meta-analysis model to quantitatively synthesize the findings of existing studies to assess the relationship. A moderation analysis was also conducted to explore the sources of research heterogeneity. Through a systematic literature search, a total of 42 studies with 201 effect sizes were retrieved for the current meta-analysis, and 7579 participants were included. Emotion recognition was negatively correlated with internalizing problems. Children and adolescents with weaker emotion recognition skills were more likely to have internalizing problems. In addition, this meta-analysis found that publication year had a significant moderating effect. The correlation between emotion recognition and internalizing problems decreased over time. The degree of internalizing problems was also found to be a significant moderator. The correlation between emotion recognition and internalizing disorders was higher than the correlation between emotion recognition and internalizing symptoms. Deficits in emotion recognition might be relevant for the development and/or maintenance of internalizing problems in children and adolescents. The overall effect was small and future research should explore the clinical relevance of the association.
Affiliation(s)
- Lin Zhang
- School of Psychology, Central China Normal University, Wuhan, China
- Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan, China
- Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan, China
- Heting Liang
- School of Psychology, Central China Normal University, Wuhan, China
- Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan, China
- Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan, China
- Johan Bjureberg
- Centre for Psychiatry Research, Karolinska Institutet and Stockholm Health Care Services, Stockholm County Council, Stockholm, Sweden
- Department of Psychology, Stanford University, Stanford, CA, USA
- Fen Xiong
- School of Psychology, Central China Normal University, Wuhan, China
- Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan, China
- Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan, China
- Zhihui Cai
- School of Psychology, Central China Normal University, Wuhan, China
- Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan, China
- Key Laboratory of Human Development and Mental Health of Hubei Province, Wuhan, China
8
McDonald B, Kanske P. Gender differences in empathy, compassion, and prosocial donations, but not theory of mind in a naturalistic social task. Sci Rep 2023; 13:20748. [PMID: 38007569] [PMCID: PMC10676355] [DOI: 10.1038/s41598-023-47747-9]
Abstract
Despite broad interest, experimental evidence for gender differences in social abilities remains inconclusive. Two important factors may have limited previous results: (i) a lack of clear distinctions between empathy (sharing another's feelings), compassion (a feeling of concern toward others), and Theory of Mind (ToM; inferring others' mental states), and (ii) the absence of robust, naturalistic social tasks. Overcoming these limitations, in Study 1 (N = 295) we integrate three independent, previously published datasets, each using a dynamic and situated, video-based paradigm which disentangles ToM, empathy, and compassion, to examine gender differences in social abilities. We observed greater empathy and compassion in women compared to men, but found no evidence that either gender performed better in ToM. In Study 2 (n = 226) we extend this paradigm to allow participants to engage in prosocial donations. Along with replicating the findings of Study 1, we also observed greater prosocial donations in women compared to men. Additionally, we discuss an exploratory, novel finding, namely that ToM performance is positively associated with prosocial donations in women, but not men. Overall, these results emphasize the importance of establishing experimental designs that incorporate dynamic, complex stimuli to better capture the social realities that men and women experience in their daily lives.
Affiliation(s)
- Brennan McDonald
- Clinical Psychology and Behavioral Neuroscience, Faculty of Psychology, Technische Universität Dresden, Chemnitzer Straße 46, 01187, Dresden, Germany
- Philipp Kanske
- Clinical Psychology and Behavioral Neuroscience, Faculty of Psychology, Technische Universität Dresden, Chemnitzer Straße 46, 01187, Dresden, Germany
9
Chang Y, Chen X, Chen F, Li M. Roles of Phonation Types and Decoders' Gender in Recognizing Mandarin Emotional Speech. J Speech Lang Hear Res 2023; 66:4363-4379. [PMID: 37861384] [DOI: 10.1044/2023_jslhr-23-00356]
Abstract
PURPOSE Capturing phonation types such as breathy, modal, and pressed voices precisely can facilitate the recognition of human emotions. However, little is known about how exactly phonation types and decoders' gender influence the perception of emotional speech. Based on the modified Brunswikian lens model, this article aims to examine the roles of phonation types and decoders' gender in Mandarin emotional speech recognition by virtue of articulatory speech synthesis. METHOD Fifty-five participants (28 male and 27 female) completed a recognition task of Mandarin emotional speech, with 200 stimuli representing five emotional categories (happiness, anger, fear, sadness, and neutrality) and five types (original, copied, breathy, modal, and pressed). Repeated-measures analyses of variance were performed to analyze recognition accuracy and confusion data. RESULTS For male and female decoders, the recognition accuracy of anger from pressed stimuli and fear from breathy stimuli was high; across all phonation-type stimuli, the recognition accuracy of sadness was also high, but that of happiness was low. The confusion data revealed that in recognizing fear from all phonation-type stimuli, female decoders chose fear responses more frequently and neutral responses less frequently than male decoders. In recognizing neutrality from breathy stimuli, female decoders significantly reduced their choice of neutral responses and misidentified neutrality as anger, while male decoders mistook neutrality from pressed stimuli for anger. CONCLUSIONS This study revealed that, in Mandarin, phonation types play crucial roles in recognizing anger, fear, and neutrality, while the recognition of sadness and happiness seems not to depend heavily on phonation types. Moreover, the decoders' gender affects their recognition of neutrality and fear. These findings support the modified Brunswikian lens model and have significance for diagnosis and intervention among clinical populations with hearing impairment or gender-related psychiatric disorders. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24302221.
Affiliation(s)
- Yajie Chang
- School of Foreign Languages, Hunan University, Changsha, China
- Xiaoxiang Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Manhong Li
- School of Foreign Languages, Hunan University, Changsha, China
- School of Foreign Languages, Hunan First Normal University, Changsha, China
10
Ziereis A, Schacht A. Gender congruence and emotion effects in cross-modal associative learning: Insights from ERPs and pupillary responses. Psychophysiology 2023; 60:e14380. [PMID: 37387451] [DOI: 10.1111/psyp.14380]
Abstract
Social and emotional cues from faces and voices are highly relevant and have been reliably demonstrated to attract attention involuntarily. However, there are mixed findings as to which degree associating emotional valence to faces occurs automatically. In the present study, we tested whether inherently neutral faces gain additional relevance by being conditioned with either positive, negative, or neutral vocal affect bursts. During learning, participants performed a gender-matching task on face-voice pairs without explicit emotion judgments of the voices. In the test session on a subsequent day, only the previously associated faces were presented and had to be categorized regarding gender. We analyzed event-related potentials (ERPs), pupil diameter, and response times (RTs) of N = 32 subjects. Emotion effects were found in auditory ERPs and RTs during the learning session, suggesting that task-irrelevant emotion was automatically processed. However, ERPs time-locked to the conditioned faces were mainly modulated by the task-relevant information, that is, the gender congruence of the face and voice, but not by emotion. Importantly, these ERP and RT effects of learned congruence were not limited to learning but extended to the test session, that is, after removing the auditory stimuli. These findings indicate successful associative learning in our paradigm, but it did not extend to the task-irrelevant dimension of emotional relevance. Therefore, cross-modal associations of emotional relevance may not be completely automatic, even though the emotion was processed in the voice.
Affiliation(s)
- Annika Ziereis
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
- Anne Schacht
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Institute of Psychology, Georg-August-University of Göttingen, Göttingen, Germany
11
Gong B, Li N, Li Q, Yan X, Chen J, Li L, Wu X, Wu C. The Mandarin Chinese auditory emotions stimulus database: A validated set of Chinese pseudo-sentences. Behav Res Methods 2023; 55:1441-1459. [PMID: 35641682] [DOI: 10.3758/s13428-022-01868-7]
Abstract
Emotional prosody is fully embedded in language and can be influenced by the linguistic properties of a specific language. Considering the limitations of existing Chinese auditory stimulus database studies, we developed and validated an emotional auditory stimulus database composed of Chinese pseudo-sentences, recorded by six professional actors in Mandarin Chinese. Emotional expressions included happiness, sadness, anger, fear, disgust, pleasant surprise, and neutrality. All emotional categories were vocalized into two types of sentence patterns, declarative and interrogative. In addition, all emotional pseudo-sentences, except for neutral, were vocalized at two levels of emotional intensity: normal and strong. Each recording was validated with 40 native Chinese listeners in terms of the recognition accuracy of the intended emotion portrayal; finally, 4361 pseudo-sentence stimuli were included in the database. Validation of the database using a forced-choice recognition paradigm revealed high rates of emotional recognition accuracy. The detailed acoustic attributes of the vocalizations are provided and connected to the emotion recognition rates. This corpus could be a valuable resource for researchers and clinicians to explore the behavioral and neural mechanisms underlying emotion processing in the general population and emotional disturbances in neurological, psychiatric, and developmental disorders. The Mandarin Chinese auditory emotion stimulus database is available at the Open Science Framework (https://osf.io/sfbm6/?view_only=e22a521e2a7d44c6b3343e11b88f39e3).
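Validation of such a corpus boils down to tabulating forced-choice responses into a confusion matrix and computing per-emotion recognition accuracy. The sketch below only illustrates that bookkeeping with invented judgments; it is not the database's validation code.

```python
# Sketch of a forced-choice validation summary: confusion matrix over intended vs. chosen
# emotion and per-emotion recognition accuracy. The judgments below are invented.
import pandas as pd

# one row per (stimulus, listener) judgment
ratings = pd.DataFrame({
    "intended": ["anger", "anger", "fear", "happiness", "sadness", "fear"],
    "chosen":   ["anger", "fear",  "fear", "happiness", "sadness", "anger"],
})

confusion = pd.crosstab(ratings["intended"], ratings["chosen"])
accuracy = (ratings["intended"] == ratings["chosen"]).groupby(ratings["intended"]).mean()
print(confusion)
print(accuracy)   # proportion of listeners choosing the intended category, per emotion
```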
Affiliation(s)
- Bingyan Gong
- School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
- Na Li
- Theatre Pedagogy Department, Central Academy of Drama, Beijing, 100710, China
- Qiuhong Li
- School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
- Xinyuan Yan
- School of Computing, University of Utah, Salt Lake City, UT, USA
- Jing Chen
- Department of Machine Intelligence, Peking University, 5 Yiheyuan Road, Haidian District, Beijing, 100871, China
- Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Liang Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, 100871, China
- Xihong Wu
- Department of Machine Intelligence, Peking University, 5 Yiheyuan Road, Haidian District, Beijing, 100871, China
- Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China
- Chao Wu
- School of Nursing, Peking University Health Science Center, Room 510, 38 Xueyuan Road, Haidian District, Beijing, 100191, China
12
Lyakso E, Ruban N, Frolova O, Mekala MA. The children's emotional speech recognition by adults: Cross-cultural study on Russian and Tamil language. PLoS One 2023; 18:e0272837. [PMID: 36791129] [PMCID: PMC9931107] [DOI: 10.1371/journal.pone.0272837]
Abstract
The current study investigated the features of cross-cultural recognition of four basic emotions "joy-neutral (calm state)-sad-anger" in the spontaneous and acting speech of Indian and Russian children aged 8-12 years across Russian and Tamil languages. The research tasks were to examine the ability of Russian and Indian experts to recognize the state of Russian and Indian children by their speech, determine the acoustic features of correctly recognized speech samples, and specify the influence of the expert's language on the cross-cultural recognition of the emotional states of children. The study included a perceptual auditory study by listeners and instrumental spectrographic analysis of child speech. Different accuracy and agreement between Russian and Indian experts were shown in recognizing the emotional states of Indian and Russian children by their speech, with more accurate recognition of the emotional state of children in their native language and in acting speech vs. spontaneous speech. Both groups of experts recognized the state of anger in acting speech with high agreement. The groups of experts differed in how they identified joy, sadness, and neutral states, with agreement varying by test material. Speech signals with emphasized differences in acoustic patterns were more accurately classified by experts as belonging to emotions of different activation. The data showed that, despite the universality of basic emotions, on the one hand, the cultural environment affects their expression and perception, and on the other hand, there are universal non-linguistic acoustic features of the voice that allow us to identify emotions via speech.
Affiliation(s)
- Elena Lyakso
- The Child Speech Research Group, St. Petersburg State University, St. Petersburg, Russia
- Nersisson Ruban
- School of Electrical Engineering, Vellore Institute of Technology, Vellore, India
- Olga Frolova
- The Child Speech Research Group, St. Petersburg State University, St. Petersburg, Russia
- Mary A. Mekala
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
13
Vos S, Collignon O, Boets B. The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG. Brain Sci 2023; 13:162. [PMID: 36831705] [PMCID: PMC9954097] [DOI: 10.3390/brainsci13020162]
Abstract
Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as gender, identity, and the emotional state of the speaker. We tested whether our brain can systematically and automatically differentiate and track a periodic stream of emotional utterances among a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, hence at a 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fear) were presented as different conditions in different streams. To control the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances. This scrambling preserves low-level acoustic characteristics but ensures that the emotional character is no longer recognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response for the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and is not merely driven by low-level perceptual features. Eventually, here, we present a new database for vocal emotion research with short emotional utterances (EVID) together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
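The core of the frequency-tagging logic is that a periodic brain response to the emotional oddballs appears as a spectral peak at the oddball frequency (1.333 Hz) and its harmonics, distinct from the 4 Hz base response. The sketch below simulates that situation with noise plus an injected 1.333 Hz component and quantifies the peak against neighbouring frequency bins; the cited study's actual EEG preprocessing and statistics are not reproduced here.

```python
# Hedged frequency-tagging sketch: stimuli at a 4 Hz base rate, emotional oddballs every
# third stimulus (1.333 Hz). The "EEG" is simulated noise plus an injected 1.333 Hz signal.
import numpy as np

fs, dur = 256.0, 60.0                        # sampling rate (Hz), recording length (s)
t = np.arange(0, dur, 1 / fs)
oddball_f = 4.0 / 3.0                        # every third 4 Hz stimulus -> 1.333 Hz
eeg = np.random.randn(t.size) + 0.4 * np.sin(2 * np.pi * oddball_f * t)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def snr_at(f, n_neighbours=10, skip=1):
    """Amplitude at f divided by the mean amplitude of surrounding frequency bins."""
    i = np.argmin(np.abs(freqs - f))
    neighbours = np.r_[i - skip - n_neighbours:i - skip,
                       i + skip + 1:i + skip + 1 + n_neighbours]
    return spectrum[i] / spectrum[neighbours].mean()

print(f"SNR at oddball frequency ({oddball_f:.3f} Hz): {snr_at(oddball_f):.1f}")
print(f"SNR at base frequency (4 Hz): {snr_at(4.0):.1f}")   # ~1 here, since no 4 Hz signal was injected
```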
Affiliation(s)
- Silke Vos
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium
- Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
- Olivier Collignon
- Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, 1348 Louvain-La-Neuve, Belgium
- School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, 1007 Lausanne and 1950 Sion, Switzerland
- Bart Boets
- Center for Developmental Psychiatry, Department of Neurosciences, KU Leuven, 3000 Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, 3000 Leuven, Belgium
- Leuven Brain Institute (LBI), KU Leuven, 3000 Leuven, Belgium
14
Kanwal S, Asghar S, Ali H. Feature selection enhancement and feature space visualization for speech-based emotion recognition. PeerJ Comput Sci 2022; 8:e1091. [PMID: 36426263] [PMCID: PMC9680882] [DOI: 10.7717/peerj-cs.1091]
Abstract
Robust speech emotion recognition relies on the quality of the speech features. We present a speech feature enhancement strategy that improves speech emotion recognition. We used the INTERSPEECH 2010 challenge feature set. We identified subsets from the feature set and applied principal component analysis to the subsets. Finally, the features are fused horizontally. The resulting feature set is analyzed using t-distributed stochastic neighbour embedding (t-SNE) before the features are applied to emotion recognition. The method is compared with the state-of-the-art methods used in the literature. The empirical evidence is drawn using two well-known datasets: Berlin Emotional Speech Dataset (EMO-DB) and Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) for two languages, German and English, respectively. Our method achieved an average recognition gain of 11.5% for six out of seven emotions for the EMO-DB dataset, and 13.8% for seven out of eight emotions for the RAVDESS dataset as compared to the baseline study.
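A minimal sketch of the described pipeline, under stated assumptions: the feature matrix is a random stand-in for INTERSPEECH 2010 functionals, and the subset definitions, component counts, and downstream classifier are illustrative choices rather than the paper's settings.

```python
# Subset-wise PCA, horizontal fusion, and t-SNE inspection of the fused feature space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 1582))          # 300 utterances x 1582 IS10-style features (placeholder)
labels = rng.integers(0, 7, size=300)     # placeholder emotion labels

# partition the columns into feature subsets (e.g., energy-, spectral-, voicing-related blocks)
subsets = np.array_split(np.arange(X.shape[1]), 4)

# PCA within each subset, then horizontal fusion of the reduced blocks
reduced_blocks = [PCA(n_components=20).fit_transform(X[:, cols]) for cols in subsets]
X_fused = np.hstack(reduced_blocks)       # shape: (300, 4 * 20)

# 2-D t-SNE embedding of the fused feature space for visual inspection
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_fused)
print(X_fused.shape, embedding.shape)
```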
Affiliation(s)
- Sofia Kanwal
- Department of Computer Science, Islamabad Campus, Comsats University, Islamabad, Pakistan
- Department of Computer Science, University of Poonch Rawalakot, Rawalakot, Azad Kashmir, Pakistan
- Sohail Asghar
- Department of Computer Science, Islamabad Campus, Comsats University, Islamabad, Pakistan
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
15
Quesque F, Coutrot A, Cox S, de Souza Leonardo C, Baez S, Cardona JF, Mulet-Perreault H, Flanagan E, Neely-Prado A, Clarens MF, Cassimiro L, Musa G, Kemp J, Botzung A, Philippi N, Cosseddu M, Trujillo C, Grisales JS, Fittipaldi S, Magrath Guimet N, Calandri IL, Crivelli L, Sedeno L, Garcia AM, Moreno F, Indakoetxea B, Benussi A, Brandão Moura MV, Santamaria-Garcia H, Matallana D, Prianishnikova G, Morozova A, Iakovleva O, Veryugina N, Levin O, Zhao L, Liang J, Duning T, Lebouvier T, Pasquier F, Huepe D, Barandiaran M, Johnen A, Lyashenko E, Allegri RF, Borroni B, Blanc F, Wang F, Yassuda MS, Lillo P, Teixeira AL, Caramelli P, Hudon C, Slachevsky A, Ibáñez A, Hornberger M, Bertoux M. Does culture shape our understanding of others' thoughts and emotions? An investigation across 12 countries. Neuropsychology 2022; 36:664-682. [PMID: 35834208] [PMCID: PMC11186050] [DOI: 10.1037/neu0000817]
Abstract
Measures of social cognition have now become central in neuropsychology, being essential for early and differential diagnoses, follow-up, and rehabilitation in a wide range of conditions. With the scientific world becoming increasingly interconnected, international neuropsychological and medical collaborations are burgeoning to tackle the global challenges that are mental health conditions. These initiatives commonly merge data across a diversity of populations and countries, while ignoring their specificity. OBJECTIVE In this context, we aimed to estimate the influence of participants' nationality on social cognition evaluation. This issue is of particular importance as most cognitive tasks are developed in highly specific contexts, not representative of that encountered by the world's population. METHOD Through a large international study across 18 sites, neuropsychologists assessed core aspects of social cognition in 587 participants from 12 countries using traditional and widely used tasks. RESULTS Age, gender, and education were found to impact measures of mentalizing and emotion recognition. After controlling for these factors, differences between countries accounted for more than 20% of the variance on both measures. Importantly, it was possible to isolate participants' nationality from potential translation issues, which classically constitute a major limitation. CONCLUSIONS Overall, these findings highlight the need for important methodological shifts to better represent social cognition in both fundamental research and clinical practice, especially within emerging international networks and consortia. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Affiliation(s)
- François Quesque
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, LiCEND, F-59000 Lille, France
- Sharon Cox
- Department of Behavioural Science and Health, Institute of Epidemiology and Healthcare, University College London, London, UK
- Emma Flanagan
- Norwich Medical School, University of East Anglia, UK
- Department of Clinical Neurosciences, University of Cambridge, UK
- Alejandra Neely-Prado
- Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Luciana Cassimiro
- School of Arts, Sciences and Humanities, University of São Paulo, Department of Neurology, São Paulo, Brazil
- Gada Musa
- Universidad de Chile, Santiago, Chile
- Sol Fittipaldi
- Universidad de San Andrés, Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), Argentina
- Lucia Crivelli
- FLENI Fondation, Department of Neurology, Buenos Aires, Argentina
- Lucas Sedeno
- National Scientific and Technical Research Council (CONICET), Argentina
- Adolfo M Garcia
- Universidad de San Andrés, Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), Argentina
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Santiago, Chile
- Global Brain Health Institute (GBHI), University of California-San Francisco (UCSF), San Francisco, California, United States
- Fermin Moreno
- Department of Neurology, Unit of Cognitive Disorders, Hospital Universitario Donostia, San Sebastian, Spain
- Begoña Indakoetxea
- Department of Neurology, Unit of Cognitive Disorders, Hospital Universitario Donostia, San Sebastian, Spain
- Alberto Benussi
- Centre for Neurodegenerative Disorders, Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- Hernando Santamaria-Garcia
- School of Medicine, Neuroscience Doctorate. Aging Institute, Physiology and Psychiatry Department. Pontificia Universidad Javeriana, Bogotá, Colombia
- Diana Matallana
- School of Medicine, Neuroscience Doctorate. Aging Institute, Physiology and Psychiatry Department. Pontificia Universidad Javeriana, Bogotá, Colombia
- Anna Morozova
- Central Clinic No 1 of the Ministry of Internal Affairs, Moskva, Russia
- Olga Iakovleva
- Central Clinic No 1 of the Ministry of Internal Affairs, Moskva, Russia
- Nadezda Veryugina
- Central Clinic No 1 of the Ministry of Internal Affairs, Moskva, Russia
- Oleg Levin
- Central Clinic No 1 of the Ministry of Internal Affairs, Moskva, Russia
- Lina Zhao
- Innovation center for neurological disorders, Department of Neurology, Xuan Wu Hospital, Capital Medical University, 45 Changchun Street, Beijing
- Junhua Liang
- Innovation center for neurological disorders, Department of Neurology, Xuan Wu Hospital, Capital Medical University, 45 Changchun Street, Beijing
- Thomas Duning
- Clinic of Neurology with Institute for Translational Neurology, University Hospital Münster, Münster, Germany
- Thibaud Lebouvier
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, LiCEND, F-59000 Lille, France
- Florence Pasquier
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, LiCEND, F-59000 Lille, France
- David Huepe
- Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Myriam Barandiaran
- Department of Neurology, Unit of Cognitive Disorders, Hospital Universitario Donostia, San Sebastian, Spain
- Andreas Johnen
- Clinic of Neurology with Institute for Translational Neurology, University Hospital Münster, Münster, Germany
- Elena Lyashenko
- Central Clinic No 1 of the Ministry of Internal Affairs, Moskva, Russia
- Barbara Borroni
- Centre for Neurodegenerative Disorders, Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- Fen Wang
- Innovation center for neurological disorders, Department of Neurology, Xuan Wu Hospital, Capital Medical University, 45 Changchun Street, Beijing
- Monica Sanches Yassuda
- School of Arts, Sciences and Humanities, University of São Paulo, Department of Neurology, São Paulo, Brazil
- Carol Hudon
- Université Laval and CERVO Brain Research Centre, Québec, Canada
- Andrea Slachevsky
- Geroscience Center for Brain Health and Metabolism (GERO), Faculty of Medicine, University of Chile, Santiago, Chile
- Neuropsychology and Clinical Neuroscience Laboratory (LANNEC), Physiopathology Department - ICBM, Neurocience and East Neuroscience Departments, Faculty of Medicine, University of Chile, Santiago, Chile
- Memory and Neuropsychiatric Clinic (CMYN) Neurology Department, Hospital del Salvador and Faculty of Medicine, University of Chile, Santiago, Chile
- Servicio de Neurología, Departamento de Medicina, Clínica Alemana-Universidad del Desarrollo, Santiago, Chile
- Agustin Ibáñez
- Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Universidad de San Andrés, Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), Argentina
- Global Brain Health Institute (GBHI), University of California-San Francisco (UCSF), San Francisco, California, United States
- Universidad Autónoma del Caribe, Barranquilla, Colombia
- Michael Hornberger
- Norwich Medical School, University of East Anglia, UK
- Department of Clinical Neurosciences, University of Cambridge, UK
- Maxime Bertoux
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, LiCEND, F-59000 Lille, France
- Department of Clinical Neurosciences, University of Cambridge, UK
16
Płaza M, Trusz S, Kęczkowska J, Boksa E, Sadowski S, Koruba Z. Machine Learning Algorithms for Detection and Classifications of Emotions in Contact Center Applications. Sensors (Basel) 2022; 22:5311. [PMID: 35890994] [PMCID: PMC9321989] [DOI: 10.3390/s22145311]
Abstract
Over the past few years, virtual assistant solutions used in Contact Center systems have been gaining popularity. One of the main tasks of the virtual assistant is to recognize the intentions of the customer. It is important to note that quite often the actual intention expressed in a conversation is also directly influenced by the emotions that accompany that conversation. Unfortunately, the scientific literature has not identified which specific types of emotions are relevant to the activities performed in Contact Center applications. Therefore, the main objective of this work was to develop an Emotion Classification for Machine Detection of Affect-Tinged Conversational Contents dedicated directly to the Contact Center industry. In the conducted study, Contact Center voice and text channels were considered, taking into account the following families of emotions: anger, fear, happiness, and sadness vs. affective neutrality of the statements. The obtained results confirmed the usefulness of the proposed classification: for the voice channel, the highest efficiency was obtained using the Convolutional Neural Network (accuracy, 67.5%; precision, 80.3%; F1-score, 74.5%), while for the text channel, the Support Vector Machine algorithm proved to be the most efficient (accuracy, 65.9%; precision, 58.5%; F1-score, 61.7%).
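A minimal sketch of the kind of text-channel setup reported above (an SVM scored with accuracy, precision, and F1): TF-IDF features feeding a linear SVM. The toy utterances, labels, and hyperparameters are placeholders, not the contact-center corpus or the configuration used in the paper.

```python
# Text-channel emotion classification sketch: TF-IDF + linear SVM, with standard metrics.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score, precision_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train_texts = ["thank you so much", "this is unacceptable", "I am worried about the charge",
               "great, that solved it", "I want to cancel right now", "please just send the invoice"]
train_labels = ["happiness", "anger", "fear", "happiness", "anger", "neutral"]
test_texts = ["that is really helpful", "why was I charged twice"]
test_labels = ["happiness", "anger"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(train_texts, train_labels)
pred = model.predict(test_texts)

print("accuracy ", accuracy_score(test_labels, pred))
print("precision", precision_score(test_labels, pred, average="macro", zero_division=0))
print("F1-score ", f1_score(test_labels, pred, average="macro", zero_division=0))
```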
Affiliation(s)
- Mirosław Płaza
- Faculty of Electrical Engineering, Automatic Control and Computer Science, Kielce University of Technology, Al. Tysiąclecia P.P. 7, 25-314 Kielce, Poland
- Sławomir Trusz
- Institute of Educational Sciences, Pedagogical University in Kraków, ul. 4 Ingardena, 30-060 Cracow, Poland
- Justyna Kęczkowska
- Faculty of Electrical Engineering, Automatic Control and Computer Science, Kielce University of Technology, Al. Tysiąclecia P.P. 7, 25-314 Kielce, Poland
- Ewa Boksa
- Faculty of Humanities, Jan Kochanowski University, ul. Żeromskiego 5, 25-369 Kielce, Poland
- Zbigniew Koruba
- Faculty of Mechatronics and Mechanical Engineering, Kielce University of Technology, Al. Tysiąclecia P.P. 7, 25-314 Kielce, Poland
17
Farley SD, Carson D, Hughes SM. Just Seconds of Laughter Reveals Relationship Status: Laughter with Friends Sounds More Authentic and Less Vulnerable than Laughter with Romantic Partners. J Nonverbal Behav 2022; 46:421-448. [PMID: 35791311] [PMCID: PMC9247916] [DOI: 10.1007/s10919-022-00406-5]
Abstract
The dual pathway model posits that spontaneous and volitional laughter are voiced using distinct production systems, and perceivers rely upon these system-related cues to make accurate judgments about relationship status. Yet, to our knowledge, no empirical work has examined whether raters can differentiate laughter directed at friends and romantic partners and the cues driving this accuracy. In Study 1, raters (N = 50), who listened to 52 segments of laughter, identified conversational partner (friend versus romantic partner) with greater than chance accuracy (M = 0.57) and rated laughs directed at friends to be more pleasant-sounding than laughs directed at romantic partners. Study 2, which involved 58 raters, revealed that prototypical friendship laughter sounded more spontaneous (e.g., natural) and less "vulnerable" (e.g., submissive) than prototypical romantic laughter. Study 3 replicated the findings of the first two studies using a large cross-cultural sample (N = 252). Implications for the importance of laughter as a subtle relational signal of affiliation are discussed.
Affiliation(s)
- Sally D. Farley
- Division of Applied Behavioral Sciences, University of Baltimore, Baltimore, MD USA
- Deborah Carson
- Division of Applied Behavioral Sciences, University of Baltimore, Baltimore, MD USA
18
Sinvani RT, Sapir S. Sentence vs. Word Perception by Young Healthy Females: Toward a Better Understanding of Emotion in Spoken Language. Front Glob Womens Health 2022; 3:829114. [PMID: 35692948] [PMCID: PMC9174644] [DOI: 10.3389/fgwh.2022.829114]
Abstract
Expression and perception of emotions by voice are fundamental for basic mental health stability. Since findings differ across languages, studies should be guided by the relationship between speech complexity and emotional perception. The aim of our study was therefore to analyze the efficiency of speech stimuli, word vs. sentence, as it relates to the accuracy of four different categories of emotions: anger, sadness, happiness, and neutrality. To this end, a total of 2,235 audio clips were presented to 49 female native Hebrew speakers aged 20–30 years (M = 23.7; SD = 2.13). Participants were asked to judge audio utterances according to one of four emotional categories: anger, sadness, happiness, and neutrality. The simulated voice samples consisted of words and meaningful sentences, produced by 15 healthy young female native Hebrew speakers. Word and sentence stimuli were not originally expected to differ as a means of vocal emotion recognition; however, introducing a variety of speech utterances revealed a different pattern of perception. The type of utterance added precision to our findings: anger was recognized more accurately from a single word (χ2 = 10.21, p < 0.01) than from a sentence, while sadness was identified more accurately from a sentence (χ2 = 3.83, p = 0.05). Our findings contribute to a better understanding of how speech type shapes emotional perception, as a component of mental health.
Affiliation(s)
- Rachel-Tzofia Sinvani
- School of Occupational Therapy, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
19
Cosme G, Tavares V, Nobre G, Lima C, Sá R, Rosa P, Prata D. Cultural differences in vocal emotion recognition: a behavioural and skin conductance study in Portugal and Guinea-Bissau. Psychol Res 2022; 86:597-616. [PMID: 33718984] [PMCID: PMC8885546] [DOI: 10.1007/s00426-021-01498-2]
Abstract
Cross-cultural studies of emotion recognition in nonverbal vocalizations support not only the universality hypothesis for its innate features but also an in-group advantage for culture-dependent features. Nevertheless, such studies have not always accounted for differences in socio-economic-educational status, idiomatic translation of emotional concepts remains a limitation, and the underlying psychophysiological mechanisms are still unresearched. We set out to investigate whether native residents of Guinea-Bissau (a West African culture) and Portugal (a Western European culture), matched for socio-economic-educational status, sex, and language, differed in behavioural and autonomic responses during emotion recognition of nonverbal vocalizations produced by Portuguese individuals. Overall, Guinea-Bissauans (the out-group) responded significantly less accurately (corrected p < .05) and more slowly, and showed a trend toward higher concomitant skin conductance, compared with the Portuguese (the in-group); these findings may indicate greater cognitive effort stemming from the difficulty of discerning emotions from another culture. Accuracy differences were found particularly for pleasure, amusement, and anger rather than for sadness, relief, or fear. Nevertheless, both cultures recognized all emotions at above-chance levels. Perceived authenticity of the same vocalizations, measured for the first time in cross-cultural research on nonverbal vocalizations, showed no difference in accuracy between cultures, but the out-group still responded more slowly. Lastly, we provide, to our knowledge, a first account of how skin conductance response varies between nonverbally vocalized emotions, with significant differences (p < .05). In sum, we provide behavioural and psychophysiological data, demographically and language matched, that support cultural and emotion effects on vocal emotion recognition and perceived authenticity, as well as the universality hypothesis.
Collapse
Affiliation(s)
- Gonçalo Cosme
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Campo Grande 016, 1749-016, Lisboa, Portugal
| | - Vânia Tavares
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Campo Grande 016, 1749-016, Lisboa, Portugal
- Faculdade de Medicina, Universidade de Lisboa, Lisboa, Portugal
| | - Guilherme Nobre
- Faculdade de Medicina, Universidade de Lisboa, Lisboa, Portugal
| | - César Lima
- Centro de Investigação e Intervenção Social, Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, Lisboa, Portugal
| | - Rui Sá
- CAPP-Centre for Public Administration & Public Policies, ISCSP, Universidade de Lisboa, Lisboa, Portugal
- Environmental Sciences Department, Universidade Lusófona da Guiné, Bissau, Guinea-Bissau
| | - Pedro Rosa
- Centro de Investigação e Intervenção Social, Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, Lisboa, Portugal
- HEI-LAB: Human-Environment Interaction Lab/Universidade Lusófona de Humanidades e Tecnologias, Lisboa, Portugal
| | - Diana Prata
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Campo Grande 016, 1749-016, Lisboa, Portugal.
- Centro de Investigação e Intervenção Social, Instituto Universitário de Lisboa (ISCTE-IUL), CIS-IUL, Lisboa, Portugal.
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK.
| |
Collapse
|
20
|
Lin Y, Ding H, Zhang Y. Unisensory and Multisensory Stroop Effects Modulate Gender Differences in Verbal and Nonverbal Emotion Perception. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4439-4457. [PMID: 34469179 DOI: 10.1044/2021_jslhr-20-00338] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception. Method Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels. Results The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception. Conclusions These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material https://doi.org/10.23641/asha.16435599.
Collapse
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
| | - Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
| |
Collapse
|
21
|
Richards SE, Hughes ME, Woodward TS, Rossell SL, Carruthers SP. External speech processing and auditory verbal hallucinations: A systematic review of functional neuroimaging studies. Neurosci Biobehav Rev 2021; 131:663-687. [PMID: 34517037 DOI: 10.1016/j.neubiorev.2021.09.006] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 08/31/2021] [Accepted: 09/03/2021] [Indexed: 12/23/2022]
Abstract
It has been documented that individuals who hear auditory verbal hallucinations (AVH) exhibit diminished capabilities in processing external speech. While functional neuroimaging studies have attempted to characterise the cortical regions and networks underlying these deficits in a bid to understand AVH, considerable methodological heterogeneity has prevented a consensus from being reached. The current systematic review investigated the neurobiological underpinnings of external speech processing deficits in voice-hearers across 38 studies published between January 1990 and June 2020. AVH-specific deviations in the activity and lateralisation of the temporal auditory regions were apparent when processing speech sounds, words and sentences. During active or affective listening tasks, functional connectivity changes arose within the language, limbic and default mode networks. However, poor study quality and a lack of replicable results plague the field. A detailed list of recommendations is provided to improve the quality of future research on this topic.
Collapse
Affiliation(s)
- Sophie E Richards
- Centre for Mental Health, Faculty of Health, Arts & Design, Swinburne University of Technology, VIC, 3122, Australia.
| | - Matthew E Hughes
- Centre for Mental Health, Faculty of Health, Arts & Design, Swinburne University of Technology, VIC, 3122, Australia
| | - Todd S Woodward
- Department of Psychiatry, University of British Colombia, Vancouver, BC, Canada; BC Mental Health and Addictions Research Institute, Vancouver, BC, Canada
| | - Susan L Rossell
- Centre for Mental Health, Faculty of Health, Arts & Design, Swinburne University of Technology, VIC, 3122, Australia; Department of Psychiatry, St Vincent's Hospital, Melbourne, VIC, Australia
| | - Sean P Carruthers
- Centre for Mental Health, Faculty of Health, Arts & Design, Swinburne University of Technology, VIC, 3122, Australia
| |
Collapse
|
22
|
Lin Y, Ding H, Zhang Y. Gender Differences in Identifying Facial, Prosodic, and Semantic Emotions Show Category- and Channel-Specific Effects Mediated by Encoder's Gender. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:2941-2955. [PMID: 34310173 DOI: 10.1044/2021_jslhr-20-00553] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose The nature of gender differences in emotion processing has remained unclear due to the discrepancies in existing literature. This study examined the modulatory effects of emotion categories and communication channels on gender differences in verbal and nonverbal emotion perception. Method Eighty-eight participants (43 females and 45 males) were asked to identify three basic emotions (i.e., happiness, sadness, and anger) and neutrality encoded by female or male actors from verbal (i.e., semantic) or nonverbal (i.e., facial and prosodic) channels. Results While women showed an overall advantage in performance, their superiority was dependent on specific types of emotion and channel. Specifically, women outperformed men in regard to two basic emotions (happiness and sadness) in the nonverbal channels and only the anger category with verbal content. Conversely, men did better for the anger category in the nonverbal channels and for the other two emotions (happiness and sadness) in verbal content. There was an emotion- and channel-specific interaction effect between the two types of gender differences, with male subjects showing higher sensitivity to sad faces and prosody portrayed by the female encoders. Conclusion These findings reveal explicit emotion processing as a highly dynamic complex process with significant gender differences tied to specific emotion categories and communication channels. Supplemental Material https://doi.org/10.23641/asha.15032583.
Collapse
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
| | - Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai, China
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota Twin Cities, Minneapolis
| |
Collapse
|
23
|
Cole M, Murray K, St‐Onge E, Risk B, Zhong J, Schifitto G, Descoteaux M, Zhang Z. Surface-Based Connectivity Integration: An atlas-free approach to jointly study functional and structural connectivity. Hum Brain Mapp 2021; 42:3481-3499. [PMID: 33956380 PMCID: PMC8249904 DOI: 10.1002/hbm.25447] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 03/03/2021] [Accepted: 04/06/2021] [Indexed: 01/29/2023] Open
Abstract
There has been increasing interest in jointly studying structural connectivity (SC) and functional connectivity (FC) derived from diffusion and functional MRI. Previous connectome integration studies almost exclusively required predefined atlases. However, there are many potential atlases to choose from and this choice heavily affects all subsequent analyses. To avoid such an arbitrary choice, we propose a novel atlas-free approach, named Surface-Based Connectivity Integration (SBCI), to more accurately study the relationships between SC and FC throughout the intra-cortical gray matter. SBCI represents both SC and FC in a continuous manner on the white surface, avoiding the need for prespecified atlases. The continuous SC is represented as a probability density function and is smoothed for better facilitation of its integration with FC. To infer the relationship between SC and FC, three novel sets of SC-FC coupling (SFC) measures are derived. Using data from the Human Connectome Project, we introduce the high-quality SFC measures produced by SBCI and demonstrate the use of these measures to study sex differences in a cohort of young adults. Compared with atlas-based methods, this atlas-free framework produces more reproducible SFC features and shows greater predictive power in distinguishing biological sex. This opens promising new directions for all connectomics studies.
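SBCI's coupling measures quantify, at each cortical location, how similar the continuous structural and functional connectivity profiles are. The authors' surface-based implementation is considerably more involved; the sketch below only illustrates the general idea of a vertex-wise SC-FC coupling score as the correlation between a vertex's (smoothed) SC profile and its FC profile, with random matrices standing in for real data.

# Conceptual sketch of a vertex-wise SC-FC coupling (SFC) score, not the SBCI implementation.
import numpy as np

def sfc_coupling(sc: np.ndarray, fc: np.ndarray) -> np.ndarray:
    """sc, fc: (n_vertices, n_vertices) connectivity matrices; one coupling value per vertex."""
    n = sc.shape[0]
    coupling = np.zeros(n)
    for v in range(n):
        mask = np.arange(n) != v                 # exclude the self-connection
        coupling[v] = np.corrcoef(sc[v, mask], fc[v, mask])[0, 1]
    return coupling

rng = np.random.default_rng(0)
sc_demo = rng.random((100, 100))                              # stand-in for smoothed SC density
fc_demo = 0.5 * sc_demo + 0.5 * rng.random((100, 100))        # stand-in FC loosely related to SC
print(sfc_coupling(sc_demo, fc_demo)[:5])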
Collapse
Affiliation(s)
- Martin Cole
- Department of Biostatistics and Computational BiologyUniversity of RochesterRochesterNew YorkUSA
| | - Kyle Murray
- Department of Physics and AstronomyUniversity of RochesterRochesterNew YorkUSA
| | - Etienne St‐Onge
- Sherbrooke Connectivity Imaging Laboratory (SCIL)Université de SherbrookeQuébecCanada
| | - Benjamin Risk
- Department of Biostatistics and BioinformaticsEmory UniversityAtlantaGeorgiaUSA
| | - Jianhui Zhong
- Department of Physics and AstronomyUniversity of RochesterRochesterNew YorkUSA
- Department of Imaging SciencesUniversity of RochesterRochesterNew YorkUSA
| | - Giovanni Schifitto
- Department of Imaging SciencesUniversity of RochesterRochesterNew YorkUSA
- Department of NeurologyUniversity of RochesterRochesterNew YorkUSA
| | - Maxime Descoteaux
- Sherbrooke Connectivity Imaging Laboratory (SCIL)Université de SherbrookeQuébecCanada
| | - Zhengwu Zhang
- Department of Statistics and Operations ResearchUniversity of North Carolina at Chapel HillNorth CarolinaUSA
| |
Collapse
|
24
|
Voice Emotion Recognition by Mandarin-Speaking Children with Cochlear Implants. Ear Hear 2021; 43:165-180. [PMID: 34288631 DOI: 10.1097/aud.0000000000001085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Objectives Emotional expressions are very important in social interactions. Children with cochlear implants can have voice emotion recognition deficits due to device limitations. Mandarin-speaking children with cochlear implants may face greater challenges than those speaking nontonal languages; the pitch information is not well preserved in cochlear implants, and such children could benefit from child-directed speech, which carries more exaggerated distinctive acoustic cues for different emotions. This study investigated voice emotion recognition, using both adult-directed and child-directed materials, in Mandarin-speaking children with cochlear implants compared with normal hearing peers. The authors hypothesized that both the children with cochlear implants and those with normal hearing would perform better with child-directed materials than with adult-directed materials. Design Thirty children (7.17-17 years of age) with cochlear implants and 27 children with normal hearing (6.92-17.08 years of age) were recruited in this study. Participants completed a nonverbal reasoning test, speech recognition tests, and a voice emotion recognition task. Children with cochlear implants over the age of 10 years also completed the Chinese version of the Nijmegen Cochlear Implant Questionnaire to evaluate the health-related quality of life. The voice emotion recognition task was a five-alternative, forced-choice paradigm, which contains sentences spoken with five emotions (happy, angry, sad, scared, and neutral) in a child-directed or adult-directed manner. Results Acoustic analyses showed substantial variations across emotions in all materials, mainly on measures of mean fundamental frequency and fundamental frequency range. Mandarin-speaking children with cochlear implants displayed a significantly poorer performance than normal hearing peers in voice emotion perception tasks, regardless of whether the performance is measured in accuracy scores, Hu value, or reaction time. Children with cochlear implants and children with normal hearing were mainly affected by the mean fundamental frequency in speech emotion recognition tasks. Chronological age had a significant effect on speech emotion recognition in children with normal hearing; however, there was no significant correlation between chronological age and accuracy scores in speech emotion recognition in children with implants. Significant effects of specific emotion and test materials (better performance with child-directed materials) in both groups of children were observed. Among the children with cochlear implants, age at implantation, percentage scores of nonverbal intelligence quotient test, and sentence recognition threshold in quiet could predict recognition performance in both accuracy scores and Hu values. Time wearing cochlear implant could predict reaction time in emotion perception tasks among children with cochlear implants. No correlation was observed between the accuracy score in voice emotion perception and the self-reported scores of health-related quality of life; however, the latter were significantly correlated with speech recognition skills among Mandarin-speaking children with cochlear implants. Conclusions Mandarin-speaking children with cochlear implants could have significant deficits in voice emotion recognition tasks compared with their normally hearing peers and can benefit from the exaggerated prosody of child-directed speech. 
Age at cochlear implantation, speech and language development, and cognition could play an important role in voice emotion perception by Mandarin-speaking children with cochlear implants.
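The Hu values mentioned above refer to Wagner's unbiased hit rate, which corrects each emotion's hit rate for response bias by weighting it by the precision of that response category. A minimal sketch with a hypothetical confusion matrix follows; it is not the authors' code, only the standard formula Hu = hits^2 / (stimulus total x response total).

# Hedged sketch: Wagner's unbiased hit rate (Hu) from a confusion matrix.
# Rows = presented emotion, columns = chosen emotion; all counts are hypothetical.
import numpy as np

def unbiased_hit_rate(confusion: np.ndarray) -> np.ndarray:
    hits = np.diag(confusion).astype(float)
    stimulus_totals = confusion.sum(axis=1)      # how often each emotion was presented
    response_totals = confusion.sum(axis=0)      # how often each emotion was chosen
    return hits ** 2 / (stimulus_totals * response_totals)

emotions = ["happy", "angry", "sad", "scared", "neutral"]
demo = np.array([[30,  5,  2,  1,  2],
                 [ 4, 28,  3,  3,  2],
                 [ 2,  3, 25,  6,  4],
                 [ 1,  4,  7, 24,  4],
                 [ 3,  2,  5,  3, 27]])
for emotion, hu in zip(emotions, unbiased_hit_rate(demo)):
    print(f"{emotion}: Hu = {hu:.3f}")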
Collapse
|
25
|
Exploring the Meanings of the “Heartfelt” Gesture: A Nonverbal Signal of Heartfelt Emotion and Empathy. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-021-00371-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
|
26
|
Madison A, Vasey M, Emery CF, Kiecolt-Glaser JK. Social anxiety symptoms, heart rate variability, and vocal emotion recognition in women: evidence for parasympathetically-mediated positivity bias. ANXIETY STRESS AND COPING 2020; 34:243-257. [PMID: 33156720 DOI: 10.1080/10615806.2020.1839733] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
BACKGROUND AND OBJECTIVES Individuals with social anxiety disorder show pronounced perceptual biases in social contexts, such as being hypervigilant to threat and discounting positive social cues. Parasympathetic activity influences responses to the social environment and may underlie these biases. This study examined the associations among social anxiety symptoms, heart rate variability (HRV), and vocal emotion recognition. DESIGN AND METHOD Female undergraduate students (N = 124) self-reported their social anxiety symptoms using the Social Anxiety Disorder Dimensional Scale and completed a computerized vocal emotion recognition task using stimuli from the Ryerson Audio-Visual Database of Emotional Speech and Song stimulus set. HRV was measured at baseline and during the emotion recognition task. RESULTS Women with more social anxiety symptoms had higher emotion recognition accuracy (p = .021) and rated positive stimuli as less intense (p = .032). Additionally, although those with greater social anxiety symptoms did not have lower resting HRV (p = .459), they did have lower task HRV (p = .026), which mediated their lower positivity bias and greater recognition accuracy. CONCLUSIONS A parasympathetically-mediated positivity bias may indicate or facilitate normal social functioning in women. Additionally, HRV during a symptom- or disorder-relevant task may predict task performance and reveal parasympathetic differences that are not found at baseline.
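The mediation result above (social anxiety relating to recognition performance through task HRV) can be illustrated with a standard product-of-coefficients mediation model. The sketch below uses simulated placeholder data and a simple bootstrap for the indirect effect; the variable names and the direction of the simulated effects are assumptions for illustration, not the authors' analysis pipeline.

# Hedged sketch of a simple mediation analysis (predictor -> mediator -> outcome) with a
# bootstrapped indirect effect. All data here are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 124
anxiety = rng.standard_normal(n)                                   # predictor (placeholder)
task_hrv = -0.3 * anxiety + rng.standard_normal(n)                 # mediator (path a)
positivity = 0.4 * task_hrv + rng.standard_normal(n)               # outcome (path b)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                          # x -> m
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]    # m -> y, controlling for x
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(anxiety[idx], task_hrv[idx], positivity[idx]))
print("indirect effect:", round(indirect_effect(anxiety, task_hrv, positivity), 3),
      "95% bootstrap CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))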
Collapse
Affiliation(s)
- Annelise Madison
- Institute for Behavioral Medicine Research, The Ohio State University, Columbus, OH, USA.,Department of Psychology, The Ohio State University, Columbus, OH, USA
| | - Michael Vasey
- Department of Psychology, The Ohio State University, Columbus, OH, USA
| | - Charles F Emery
- Institute for Behavioral Medicine Research, The Ohio State University, Columbus, OH, USA.,Department of Psychology, The Ohio State University, Columbus, OH, USA
| | - Janice K Kiecolt-Glaser
- Institute for Behavioral Medicine Research, The Ohio State University, Columbus, OH, USA.,Department of Psychiatry and Behavioral Health, The Ohio State University Harding Hospital, Columbus, OH, USA
| |
Collapse
|
27
|
Prasetio BH, Tamura H, Tanno K. Deep time-delay Markov network for prediction and modeling the stress and emotions state transition. Sci Rep 2020; 10:18071. [PMID: 33093631 PMCID: PMC7581816 DOI: 10.1038/s41598-020-75155-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Accepted: 10/12/2020] [Indexed: 11/09/2022] Open
Abstract
To recognize stress and emotion, most existing methods observe and analyze only present-time features of the speech signal. However, an emotional state (especially stress) can change when it is triggered by an event during speaking. To address this issue, we propose a novel method for predicting stress and emotions by analyzing prior emotional states. We named this method the deep time-delay Markov network (DTMN). Structurally, the proposed DTMN contains a hidden Markov model (HMM) and a time-delay neural network (TDNN). We evaluated the effectiveness of the proposed DTMN by comparing it with several state transition methods in predicting an emotional state from time-series (sequential) speech data of the SUSAS dataset. The experimental results show that the proposed DTMN can accurately predict present emotional states, outperforming the baseline systems in terms of the prediction error rate (PER). We then modeled the emotional state transitions as a finite Markov chain based on the prediction results. We also conducted an ablation experiment to observe the effect of different HMM values and TDNN parameters on the prediction results and the computational training time of the proposed DTMN.
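The final modelling step described above, representing emotional state transitions as a finite Markov chain, can be sketched generically: count how often each predicted state is followed by each other state and row-normalise. The snippet below shows only that post-hoc step, with placeholder state labels and a made-up sequence; it does not implement the DTMN (HMM plus TDNN) itself.

# Generic sketch: estimate a finite Markov chain transition matrix from a sequence of
# predicted emotional/stress states. Placeholder labels and sequence; not the DTMN.
import numpy as np

states = ["neutral", "stress", "anger", "happiness"]     # placeholder state set
index = {s: i for i, s in enumerate(states)}

# Hypothetical sequence of states predicted for consecutive speech segments.
sequence = ["neutral", "neutral", "stress", "stress", "anger",
            "stress", "neutral", "happiness", "neutral"]

counts = np.zeros((len(states), len(states)))
for prev, nxt in zip(sequence[:-1], sequence[1:]):
    counts[index[prev], index[nxt]] += 1

transition = counts / np.clip(counts.sum(axis=1, keepdims=True), 1, None)   # row-normalise
print(np.round(transition, 2))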
Collapse
Affiliation(s)
- Barlian Henryranu Prasetio
- Interdisciplinary Graduate School of Agriculture and Engineering, University of Miyazaki, Miyazaki, 889-2192, Japan.
| | - Hiroki Tamura
- Faculty of Engineering, University of Miyazaki, Miyazaki, 889-2192, Japan
| | - Koichi Tanno
- Faculty of Engineering, University of Miyazaki, Miyazaki, 889-2192, Japan
| |
Collapse
|
28
|
Karlsen AS, Futris TG, Richardson EW. The Dyadic Effects of Relationship Uncertainty on Relationship Maintenance and Damaging Behaviors. JOURNAL OF COUPLE & RELATIONSHIP THERAPY 2020. [DOI: 10.1080/15332691.2020.1837323] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Annika S. Karlsen
- Department of Human Development and Family Science, University of Georgia, Athens, Georgia, USA
| | - Ted G. Futris
- Department of Human Development and Family Science, University of Georgia, Athens, Georgia, USA
| | - Evin W. Richardson
- Department of Human Development and Family Science, University of Georgia, Athens, Georgia, USA
| |
Collapse
|
29
|
Lausen A, Broering C, Penke L, Schacht A. Hormonal and modality specific effects on males' emotion recognition ability. Psychoneuroendocrinology 2020; 119:104719. [PMID: 32544773 DOI: 10.1016/j.psyneuen.2020.104719] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2019] [Revised: 05/11/2020] [Accepted: 05/13/2020] [Indexed: 01/10/2023]
Abstract
Successful emotion recognition is a key component of human socio-emotional communication skills. However, little is known about the factors impacting males' accuracy in emotion recognition tasks. This pre-registered study examined potential candidates, focusing on the modality of stimulus presentation, emotion category and individual baseline hormone levels. In an additional exploratory analysis, we examined the association of testosterone x cortisol interaction with recognition accuracy and reaction times. We obtained accuracy and reaction time scores from 282 males who categorized voice, face and voice-face stimuli for nonverbal emotional content. Results showed that recognition accuracy was significantly higher in the audio-visual than in the auditory or visual modality. While Spearman's rank correlations showed no significant association of testosterone (T) with recognition accuracy or with response times for specific emotions, the logistic and linear regression models uncovered some evidence for a positive association between T and recognition accuracy as well as between cortisol (C) and reaction time. In addition, the overall effect size of T by C interaction with recognition accuracy and reaction time was significant, but small. Our results establish that audio-visual congruent stimuli enhance recognition accuracy and provide novel empirical support by showing that the interaction of testosterone and cortisol relates to males' accuracy and response times in emotion recognition tasks.
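The regression models mentioned above relate recognition accuracy to testosterone, cortisol, and their interaction. A hedged sketch of one such model, a logistic regression of trial-level accuracy on standardised hormone levels and their product, is given below using statsmodels; the data are simulated and the authors' exact model specification may differ.

# Hedged sketch: logistic regression of trial accuracy on testosterone, cortisol and their
# interaction, fitted to simulated placeholder data (not the study's data or code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "testosterone": rng.standard_normal(n),     # standardised T (placeholder)
    "cortisol": rng.standard_normal(n),         # standardised C (placeholder)
})
linpred = 0.8 + 0.15 * df["testosterone"] - 0.05 * df["cortisol"] \
          - 0.10 * df["testosterone"] * df["cortisol"]
df["correct"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

model = smf.logit("correct ~ testosterone * cortisol", data=df).fit(disp=False)
print(model.summary().tables[1])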
Collapse
Affiliation(s)
- Adi Lausen
- Department of Affective Neuroscience and Psychophysiology, Institute of Psychology, University of Goettingen, 37073 Goettingen, Germany; Department of Mathematical Sciences, University of Essex, Colchester CO4 3SQ, United Kingdom; Leibniz ScienceCampus "Primate Cognition", 37077 Goettingen, Germany.
| | - Christina Broering
- Department of Affective Neuroscience and Psychophysiology, Institute of Psychology, University of Goettingen, 37073 Goettingen, Germany; Department of Psychology, Private University of Applied Sciences (PFH) Goettingen, 37073 Goettingen, Germany
| | - Lars Penke
- Department of Biological Personality Psychology, Institute of Psychology, University of Goettingen, 37073 Goettingen, Germany; Leibniz ScienceCampus "Primate Cognition", 37077 Goettingen, Germany
| | - Annekathrin Schacht
- Department of Affective Neuroscience and Psychophysiology, Institute of Psychology, University of Goettingen, 37073 Goettingen, Germany; Leibniz ScienceCampus "Primate Cognition", 37077 Goettingen, Germany
| |
Collapse
|
30
|
Gender Differences in Familiar Face Recognition and the Influence of Sociocultural Gender Inequality. Sci Rep 2019; 9:17884. [PMID: 31784547 PMCID: PMC6884510 DOI: 10.1038/s41598-019-54074-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Accepted: 11/07/2019] [Indexed: 01/05/2023] Open
Abstract
Are gender differences in face recognition influenced by familiarity and socio-cultural factors? Previous studies have reported gender differences in processing unfamiliar faces, consistently finding a female advantage and a female own-gender bias. However, researchers have recently highlighted that unfamiliar faces are processed less efficiently than familiar faces, which have more robust, invariant representations. To date, no study has examined whether gender differences exist for familiar face recognition. The current study addressed this by using a famous faces task in a large, web-based sample of more than 2,000 participants across different countries. We also sought to examine whether differences varied by socio-cultural gender equality within countries. Both when examining raw accuracy and when controlling for fame, the results demonstrated no participant gender differences in overall famous face accuracy, in contrast to studies of unfamiliar faces. There was also a consistent own-gender bias in male but not female participants. In countries with low gender equality, including the USA, female participants showed significantly better recognition of famous female faces than male participants, whereas this difference was abolished in countries with high gender equality. Together, this suggests that gender differences in recognizing unfamiliar faces can be attenuated when there is sufficient face learning and that sociocultural gender equality can drive gender differences in familiar face recognition.
Collapse
|
31
|
Kokkinaki T, Vasdekis V, Devouche E. Maternal and paternal infant-directed speech to girls and boys: An exploratory study. EUROPEAN JOURNAL OF DEVELOPMENTAL PSYCHOLOGY 2019. [DOI: 10.1080/17405629.2019.1646123] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Theano Kokkinaki
- Laboratory of Applied Psychology, Department of Psychology, University of Crete, Rethymno, Greece
| | | | - Emmanuel Devouche
- Laboratoire de Psychopathologie et Processus de Santé, Paris Descartes University, Paris, France
| |
Collapse
|
32
|
Engelberg JWM, Schwartz JW, Gouzoules H. Do human screams permit individual recognition? PeerJ 2019; 7:e7087. [PMID: 31275746 PMCID: PMC6596410 DOI: 10.7717/peerj.7087] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Accepted: 05/07/2019] [Indexed: 11/20/2022] Open
Abstract
The recognition of individuals through vocalizations is a highly adaptive ability in the social behavior of many species, including humans. However, the extent to which nonlinguistic vocalizations such as screams permit individual recognition in humans remains unclear. Using a same-different vocalizer discrimination task, we investigated participants' ability to correctly identify whether pairs of screams were produced by the same person or two different people, a critical prerequisite to individual recognition. Despite prior theory-based contentions that screams are not acoustically well-suited to conveying identity cues, listeners discriminated individuals at above-chance levels by their screams, including both acoustically modified and unmodified exemplars. We found that vocalizer gender explained some variation in participants' discrimination abilities and response times, but participant attributes (gender, experience, empathy) did not. Our findings are consistent with abundant evidence from nonhuman primates, suggesting that both human and nonhuman screams convey cues to caller identity, thus supporting the thesis of evolutionary continuity in at least some aspects of scream function across primate species.
Collapse
Affiliation(s)
| | - Jay W Schwartz
- Department of Psychology, Emory University, Atlanta, GA, USA
| | | |
Collapse
|
33
|
Zhao C, Chronaki G, Schiessl I, Wan MW, Abel KM. Is infant neural sensitivity to vocal emotion associated with mother-infant relational experience? PLoS One 2019; 14:e0212205. [PMID: 30811431 PMCID: PMC6392422 DOI: 10.1371/journal.pone.0212205] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Accepted: 01/29/2019] [Indexed: 12/20/2022] Open
Abstract
An early understanding of others' vocal emotions provides infants with a distinct advantage for eliciting appropriate care from caregivers and for navigating their social world. Consistent with this notion, an emerging literature suggests that a temporal cortical response to the prosody of emotional speech is observable in the first year of life. Furthermore, neural specialisation to vocal emotion in infancy may vary according to early experience. Neural sensitivity to emotional non-speech vocalisations was investigated in 29 six-month-old infants using functional near-infrared spectroscopy (fNIRS). Both angry and happy vocalisations evoked increased activation in the temporal cortices (relative to neutral and angry vocalisations, respectively), and the strength of the angry-minus-neutral effect was positively associated with the degree of directiveness in the mothers' play interactions with their infants. This first fNIRS study of infant vocal emotion processing implicates bilateral temporal mechanisms similar to those found in adults and suggests that infants who experience more directive caregiving or social play may more strongly or preferentially process vocal anger by six months of age.
Collapse
Affiliation(s)
- Chen Zhao
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
| | - Georgia Chronaki
- Developmental Cognitive Neuroscience (DCN) Laboratory, School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Developmental Brain-Behaviour Laboratory, Psychology, University of Southampton, United Kingdom
| | - Ingo Schiessl
- Division of Neuroscience & Experimental Psychology, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
| | - Ming Wai Wan
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
| | - Kathryn M. Abel
- Centre for Women’s Mental Health, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, United Kingdom
- Greater Manchester Mental Health NHS Foundation Trust, Manchester, United Kingdom
| |
Collapse
|