1. Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024:1-14. [PMID: 38785380] [DOI: 10.1080/02699931.2024.2357656]
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or lack thereof) in processing emotional speech. To test the effect of vision and early visual experience on the processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification of, and selective attention to, semantic and prosodic spoken emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory): it was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for processing emotional speech, but early visual experience can improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of blindness as deficiency or enhancement.
Affiliation(s)
- Boaz M Ben-David
- Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
2. Sinvani RT, Fogel-Grinvald H, Sapir S. Self-Rated Confidence in Vocal Emotion Recognition Ability: The Role of Gender. J Speech Lang Hear Res 2024; 67:1413-1423. [PMID: 38625128] [DOI: 10.1044/2024_jslhr-23-00373]
Abstract
PURPOSE We studied the role of gender in metacognition of voice emotion recognition ability (ERA), as reflected by self-rated confidence (SRC). To this end, we took two approaches: first, examining the role of gender in voice ERA and SRC independently, and second, looking for gender effects on the association between ERA and SRC. METHOD We asked 100 participants (50 men, 50 women) to interpret a set of vocal expressions portrayed by 30 actors (16 men, 14 women) according to their emotional meaning. Targets were 180 repetitive lexical sentences articulated in congruent emotional voices (anger, sadness, surprise, happiness, fear) and neutral expressions. Trial by trial, participants gave retrospective SRC ratings of their emotion recognition performance. RESULTS A binomial generalized linear mixed model (GLMM) estimating ERA accuracy revealed a significant gender effect, with women encoders (speakers) yielding higher accuracy levels than men. There was no significant effect of the decoder's (listener's) gender. A second GLMM estimating SRC found significant effects of both encoder and decoder gender, with women outperforming men. Gamma correlations between accuracy and SRC were significantly greater than zero for both women and men decoders. CONCLUSIONS Although gender played out differently in each measure considered independently (ERA and SRC), our results suggest that both men and women decoders were accurate in their metacognition regarding voice emotion recognition. Further research is needed to study how individuals of both genders use metacognitive knowledge in their emotion recognition and whether and how such knowledge contributes to effective social communication.
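The gamma correlations between trial-level recognition accuracy and retrospective confidence described above can be made concrete with a short sketch. This is a hypothetical illustration with invented data and helper names, not the authors' analysis code (their accuracy and SRC models were binomial GLMMs):

```python
# Minimal sketch: Goodman-Kruskal gamma between trial-level recognition
# accuracy (0/1) and retrospective self-rated confidence (ordinal).
# Hypothetical data; not the authors' analysis pipeline.
import numpy as np

def goodman_kruskal_gamma(x, y):
    """Gamma = (concordant - discordant) / (concordant + discordant),
    counted over all pairs of trials; tied pairs contribute to neither."""
    x, y = np.asarray(x), np.asarray(y)
    concordant = discordant = 0
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            product = (x[i] - x[j]) * (y[i] - y[j])
            if product > 0:
                concordant += 1
            elif product < 0:
                discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Example: one listener's 12 trials (accuracy, confidence on a 1-5 scale)
accuracy   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
confidence = [5, 4, 2, 5, 3, 4, 5, 1, 3, 4, 2, 5]
print(f"gamma = {goodman_kruskal_gamma(accuracy, confidence):.2f}")
```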
Affiliation(s)
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Israel
3. Leung FYN, Stojanovik V, Jiang C, Liu F. Investigating implicit emotion processing in autism spectrum disorder across age groups: A cross-modal emotional priming study. Autism Res 2024; 17:824-837. [PMID: 38488319] [DOI: 10.1002/aur.3124]
Abstract
Accumulating evidence suggests that atypical emotion processing in autism may generalize across different stimulus domains. However, this evidence comes from studies examining explicit emotion recognition. It remains unclear whether domain-general atypicality also applies to implicit emotion processing in autism and what this implies for real-world social communication. To investigate this, we employed a novel cross-modal emotional priming task to assess implicit emotion processing of spoken/sung words (primes) through their influence on subsequent emotional judgment of faces/face-like objects (targets). We assessed whether implicit emotional priming differed between 38 autistic and 38 neurotypical individuals across age groups as a function of prime and target type. Results indicated no overall group differences across age groups, prime types, and target types. However, differential, domain-specific developmental patterns emerged for the autism and neurotypical groups. For neurotypical individuals, speech but not song primed the emotional judgment of faces across ages. This speech-orienting tendency was not observed across ages in the autism group, as priming of speech on faces was not seen in autistic adults. These results highlight the importance of the delicate weighting between speech versus song orientation in implicit emotion processing throughout development, providing more nuanced insights into the emotion processing profile of autistic individuals.
Affiliation(s)
- Florence Y N Leung
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Department of Psychology, University of Bath, Bath, UK
- Vesna Stojanovik
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
4. Baglione H, Coulombe V, Martel-Sauvageau V, Monetta L. The impacts of aging on the comprehension of affective prosody: A systematic review. Appl Neuropsychol Adult 2023:1-16. [PMID: 37603689] [DOI: 10.1080/23279095.2023.2245940]
Abstract
Recent clinical reports have suggested a possible decline with aging in the ability to understand emotions in speech (affective prosody comprehension). The present study aims to further examine the differences in performance between older and younger adults in affective prosody comprehension. Following a recent cognitive model dividing affective prosody comprehension into perceptual and lexico-semantic components, a cognitive approach targeting these components was adopted. The influence of emotion valence and category on aging performance was also investigated. A systematic review of the literature was carried out using six databases. Twenty-one articles, presenting 25 experiments, were included. All experiments analyzed the affective prosody comprehension performance of older versus younger adults. The results confirmed that older adults' performance in identifying emotions in speech was reduced compared to younger adults. The results also indicated that affective prosody comprehension may be modulated by emotion category but not by emotional valence. Several accounts may explain this difference in performance, namely auditory perception, brain aging, and socioemotional selectivity theory, which suggests that older people tend to neglect negative emotions. However, the explanation of the deficits underlying the decline in affective prosody comprehension remains limited.
Affiliation(s)
- Héloïse Baglione
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
- Valérie Coulombe
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
- Vincent Martel-Sauvageau
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
- Laura Monetta
- Département de réadaptation, Université Laval, Québec City, Quebec, Canada
- Département de réadaptation, Centre interdisciplinaire de recherche en réadaptation et intégration sociale (CIRRIS), Québec City, Quebec, Canada
5. Sinvani RT, Sapir S. Sentence vs. Word Perception by Young Healthy Females: Toward a Better Understanding of Emotion in Spoken Language. Front Glob Womens Health 2022; 3:829114. [PMID: 35692948] [PMCID: PMC9174644] [DOI: 10.3389/fgwh.2022.829114]
Abstract
Expression and perception of emotions by voice are fundamental for basic mental health stability. Because findings vary across languages, studies should consider the relationship between the complexity of the speech stimulus and emotion perception. The aim of our study was therefore to analyze how the type of speech stimulus, a single word versus a sentence, relates to recognition accuracy for four emotional categories: anger, sadness, happiness, and neutrality. To this end, a total of 2,235 audio clips were presented to 49 female native Hebrew speakers, aged 20–30 years (M = 23.7; SD = 2.13). Participants were asked to judge each audio utterance as belonging to one of the four emotional categories. The simulated voice samples consisted of words and meaningful sentences produced by 15 healthy young female native Hebrew speakers. Overall, stimulus type (word vs. sentence) alone did not determine the accuracy of vocal emotion recognition; examining the individual emotions, however, revealed a different picture: anger was identified more accurately from a single word (χ2 = 10.21, p < 0.01) than from a sentence, whereas sadness was identified more accurately from a sentence (χ2 = 3.83, p = 0.05). Our findings contribute to a better understanding of how the type of speech stimulus shapes emotion perception, as a part of mental health.
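To illustrate the kind of chi-square comparison reported above (word vs. sentence accuracy within one emotion), here is a minimal sketch with invented counts; it is not the study's data or analysis script:

```python
# Minimal sketch: a 2 x 2 chi-square test comparing recognition accuracy for a
# single emotion across stimulus types (word vs. sentence).
# The counts below are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: stimulus type (word, sentence); columns: correct, incorrect responses
anger_counts = np.array([
    [240,  60],   # word trials
    [195, 105],   # sentence trials
])

chi2, p, dof, expected = chi2_contingency(anger_counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```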
Affiliation(s)
- Rachel-Tzofia Sinvani
- School of Occupational Therapy, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Correspondence: Rachel-Tzofia Sinvani
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
6. Zhang M, Chen Y, Lin Y, Ding H, Zhang Y. Multichannel Perception of Emotion in Speech, Voice, Facial Expression, and Gesture in Individuals With Autism: A Scoping Review. J Speech Lang Hear Res 2022; 65:1435-1449. [PMID: 35316079] [DOI: 10.1044/2022_jslhr-21-00438]
Abstract
PURPOSE Numerous studies have identified individuals with autism spectrum disorder (ASD) with deficits in unichannel emotion perception and multisensory integration. However, only limited research is available on multichannel emotion perception in ASD. The purpose of this review was to seek conceptual clarification, identify knowledge gaps, and suggest directions for future research. METHOD We conducted a scoping review of the literature published between 1989 and 2021, following the 2005 framework of Arksey and O'Malley. Data relating to study characteristics, task characteristics, participant information, and key findings on multichannel processing of emotion in ASD were extracted for the review. RESULTS Discrepancies were identified regarding multichannel emotion perception deficits, which are related to participant age, developmental level, and task demand. Findings are largely consistent regarding the facilitation and compensation of congruent multichannel emotional cues and the interference and disruption of incongruent signals. Unlike controls, ASD individuals demonstrate an overreliance on semantics rather than prosody to decode multichannel emotion. CONCLUSIONS The existing literature on multichannel emotion perception in ASD is limited, dispersed, and disassociated, focusing on a variety of topics with a wide range of methodologies. Further research is necessary to quantitatively examine the impact of methodological choice on performance outcomes. An integrated framework of emotion, language, and cognition is needed to examine the mutual influences between emotion and language as well as the cross-linguistic and cross-cultural differences. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.19386176.
Affiliation(s)
- Minyue Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minneapolis
7. Chen F, Lian J, Zhang G, Guo C. Semantics-Prosody Stroop Effect on English Emotion Word Processing in Chinese College Students With Trait Depression. Front Psychiatry 2022; 13:889476. [PMID: 35733799] [PMCID: PMC9207235] [DOI: 10.3389/fpsyt.2022.889476]
Abstract
This study explored how Chinese college students with different severities of trait depression process English emotional speech under a complete semantics-prosody Stroop paradigm in quiet and noisy conditions. A total of 24 college students with high trait depression and 24 students with low trait depression participated. They were required to selectively attend to either the prosodic emotion (happy, sad) or the semantic valence (positive, negative) of the English words they heard and then respond quickly. Both the prosody task and the semantic task were performed in quiet and noisy listening conditions. Results showed that the high-trait group reacted more slowly than the low-trait group in the prosody task, consistent with blunted and less sensitive emotional processing. In addition, both groups reacted faster in the congruent condition, showing a clear congruency-induced facilitation effect and a pervasive Stroop effect in both tasks. The Stroop effect played a larger role during emotional prosody identification only in the quiet condition; noise eliminated this effect. Regardless of congruency and listening condition, both groups spent less time on the prosody task than on the semantic task, indicating that basic emotion identification is comparatively easy, whereas semantic judgment is difficult for second-language learners. These findings suggest that college students' mood and external noise have non-negligible effects on emotion word processing.
Affiliation(s)
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Jing Lian
- School of Foreign Languages, Hunan University, Changsha, China
- Gaode Zhang
- School of Foreign Languages, Hunan University, Changsha, China
- Chengyu Guo
- School of Foreign Languages, Hunan University, Changsha, China
8. Lin Y, Ding H, Zhang Y. Unisensory and Multisensory Stroop Effects Modulate Gender Differences in Verbal and Nonverbal Emotion Perception. J Speech Lang Hear Res 2021; 64:4439-4457. [PMID: 34469179] [DOI: 10.1044/2021_jslhr-20-00338]
Abstract
Purpose This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception. Method Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels. Results The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception. Conclusions These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material https://doi.org/10.23641/asha.16435599.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
9. Liu P, Rigoulot S, Jiang X, Zhang S, Pell MD. Unattended Emotional Prosody Affects Visual Processing of Facial Expressions in Mandarin-Speaking Chinese: A Comparison With English-Speaking Canadians. J Cross Cult Psychol 2021; 52:275-294. [PMID: 33958813] [PMCID: PMC8053741] [DOI: 10.1177/0022022121990897]
Abstract
Emotional cues from different modalities have to be integrated during communication, a process that can be shaped by an individual's cultural background. We explored this issue in 25 Chinese participants by examining how listening to emotional prosody in Mandarin influenced participants' gazes at emotional faces in a modified visual search task. We also conducted a cross-cultural comparison between the data of this study and those of our previous work in English-speaking Canadians, which used analogous methodology. In both studies, eye movements were recorded as participants scanned an array of four faces portraying fearful, angry, happy, and neutral expressions, while passively listening to a pseudo-utterance expressing one of the four emotions (a Mandarin utterance in this study; an English utterance in our previous study). The frequency and duration of fixations to each face were analyzed during the 5 seconds after the onset of the faces, both during the presence of the speech (early time window) and after the utterance ended (late time window). During the late window, Chinese participants looked more frequently and longer at faces conveying the same emotion as the speech, consistent with findings from English-speaking Canadians. The cross-cultural comparison further showed that Chinese, but not Canadian, participants looked more frequently and longer at angry faces, which may signal potential conflicts and social threats. We hypothesize that socio-cultural norms related to harmony maintenance in Eastern cultures promoted Chinese participants' heightened sensitivity to, and deeper processing of, angry cues, highlighting culture-specific patterns in how individuals scan their social environment during emotion processing.
Affiliation(s)
- Pan Liu
- McGill University, Montréal, QC, Canada
- Western University, London, ON, Canada
- Simon Rigoulot
- McGill University, Montréal, QC, Canada
- Université du Québec à Trois-Rivières, QC, Canada
- Xiaoming Jiang
- McGill University, Montréal, QC, Canada
- Tongji University, Shanghai, China
10. de Simone J, Cevasco J. The Role of the Establishment of Causal Connections and the Modality of Presentation of Discourse in the Generation of Emotion Inferences by Argentine College Students. Read Psychol 2020. [DOI: 10.1080/02702711.2020.1837314]
Affiliation(s)
- Jazmín Cevasco
- Department of Psychology, University of Buenos Aires, Buenos Aires, Argentina
- National Scientific and Technical Research Council, Buenos Aires, Argentina
11. Kao C, Zhang Y. Differential Neurobehavioral Effects of Cross-Modal Selective Priming on Phonetic and Emotional Prosodic Information in Late Second Language Learners. J Speech Lang Hear Res 2020; 63:2508-2521. [PMID: 32658561] [DOI: 10.1044/2020_jslhr-19-00329]
Abstract
Purpose Spoken language is inherently multimodal and multidimensional in natural settings, but very little is known about how second language (L2) learners process multilayered speech signals carrying both phonetic and affective cues. This study investigated how late L2 learners carry out parallel processing of linguistic and affective information in the speech signal at behavioral and neurophysiological levels. Method Behavioral and event-related potential measures were taken in a selective cross-modal priming paradigm to examine how late L2 learners (N = 24, M age = 25.54 years) assessed the congruency of phonetic (target vowel: /a/ or /i/) and emotional (target affect: happy or angry) information between visual primes (facial pictures) and auditory targets (spoken syllables). Results Behavioral accuracy data showed a significant congruency effect in affective (but not phonetic) priming. Unlike a previous report on monolingual first language (L1) users, the L2 users showed no facilitation in reaction time for congruency detection in either selective priming task. The neurophysiological results revealed a robust N400 response that was stronger in the phonetic condition but without clear lateralization; the N400 effect was weaker in late L2 listeners than in monolingual L1 listeners. Following the N400, late L2 learners showed a weaker late positive response than the monolingual L1 users, particularly over left central to posterior electrode regions. Conclusions The results demonstrate distinct patterns of behavioral and neural processing of phonetic and affective information in L2 speech, with reduced neural representations at both the N400 and the later processing stage, and they provide an impetus for further research on similarities and differences in L1 and L2 multisensory speech perception in bilingualism.
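As a rough illustration of how an N400 congruency effect like the one described above can be quantified, the sketch below measures mean amplitude in a 300-500 ms window on simulated subject-level ERP waveforms and compares conditions with a paired t-test. All numbers, array shapes, and condition names are hypothetical; this is not the study's EEG pipeline:

```python
# Minimal sketch: extract mean N400 amplitude (300-500 ms) per condition from
# already-averaged, per-subject ERP waveforms and test the congruency effect.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_subjects, n_times = 24, 700          # 700 samples at 1000 Hz, epoch 0-700 ms
times = np.arange(n_times) / 1000.0    # time axis in seconds

# Simulated data: subjects x time, one array per priming condition
erp_congruent = rng.normal(0, 1, (n_subjects, n_times))
erp_incongruent = rng.normal(0, 1, (n_subjects, n_times)) - 0.5  # more negative

# Mean amplitude in the canonical N400 window (300-500 ms)
window = (times >= 0.300) & (times <= 0.500)
n400_congruent = erp_congruent[:, window].mean(axis=1)
n400_incongruent = erp_incongruent[:, window].mean(axis=1)

t, p = ttest_rel(n400_incongruent, n400_congruent)
print(f"N400 congruency effect: t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```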
Affiliation(s)
- Chieh Kao
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis
- Center for Neurobehavioral Development, University of Minnesota, Minneapolis
12. Lin Y, Ding H, Zhang Y. Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects. J Speech Lang Hear Res 2020; 63:896-912. [PMID: 32186969] [DOI: 10.1044/2020_jslhr-19-00258]
Abstract
Purpose Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in cross-channel auditory alone task (i.e., semantics-prosody Stroop task) and cross-modal audiovisual task (i.e., semantics-prosody-face Stroop task). Method Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expression during auditory stimulus presentation. Participants were asked to judge emotional information for each test trial according to the instruction of selective attention. Results Accuracy and reaction time data indicated that, despite an increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2. Conclusion Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and congruence facilitation effect in multisensory integration. Our study contributes tonal language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration with potential clinical applications.
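The congruency facilitation effect on reaction times described above can be computed from trial-level data along these lines. The tiny data frame and column names below are hypothetical, offered only as a sketch of the logic rather than the authors' analysis script:

```python
# Minimal sketch: quantify the congruency facilitation effect in a
# semantics-prosody Stroop task from trial-level data (invented example).
import pandas as pd
from scipy.stats import ttest_rel

trials = pd.DataFrame({
    "subject":    [1, 1, 1, 1, 2, 2, 2, 2],
    "congruency": ["congruent", "incongruent"] * 4,
    "rt_ms":      [612, 688, 598, 701, 645, 720, 630, 695],
    "correct":    [1, 1, 1, 0, 1, 1, 1, 1],
})

# Per-subject mean RT on correct trials, separately by congruency
rt = (trials[trials["correct"] == 1]
      .groupby(["subject", "congruency"])["rt_ms"].mean()
      .unstack("congruency"))

facilitation = rt["incongruent"] - rt["congruent"]   # positive = facilitation
t, p = ttest_rel(rt["incongruent"], rt["congruent"])
print(rt.assign(facilitation=facilitation))
print(f"t({len(rt) - 1}) = {t:.2f}, p = {p:.3f}")
```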
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Science & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
13. Leshem R, van Lieshout PHHM, Ben-David S, Ben-David BM. Does emotion matter? The role of alexithymia in violent recidivism: A systematic literature review. Crim Behav Ment Health 2019; 29:94-110. [PMID: 30916846] [DOI: 10.1002/cbm.2110]
Abstract
BACKGROUND Several variables have been shown to be associated with violent reoffending, and interventions targeting them have been proposed, yet the rate of recidivism remains high. Alexithymia, characterised by deficits in emotion processing and verbal expression of emotion, might interact with these other risk factors to affect outcomes. AIM Our goal was to examine the role of alexithymia as a possible moderator of risk factors for violent offender recidivism. Our hypothesis was that alexithymia, together with other risk factors, increases the risk of violent reoffending. METHOD We conducted a systematic literature review, using terms for alexithymia and violent offending and their intersection. RESULTS (a) No study directly testing the role of alexithymia in conjunction with other potential risk factors for recidivism and actual violent recidivism was found. (b) Researchers focusing on alexithymia and researchers focusing on violence have separately identified several clinical features common to aspects of alexithymia and violence, such as impulsivity (total n = 24 studies). (c) Other researchers have established a relationship between alexithymia and both dynamic and static risk factors for violent recidivism (n = 16 studies). CONCLUSION Alexithymia may be a moderator of the risk of violent offence recidivism. Supplementing offenders' rehabilitation efforts with assessments of alexithymia may assist in designing individually tailored interventions to promote desistance among violent offenders.
Affiliation(s)
- Rotem Leshem
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Pascal H H M van Lieshout
- Oral Dynamics Lab, Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Boaz M Ben-David
- Communication, Aging and Neuropsychology lab (CANlab), Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel
- Oral Dynamics Lab, Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
14. Lausen A, Schacht A. Gender Differences in the Recognition of Vocal Emotions. Front Psychol 2018; 9:882. [PMID: 29922202] [PMCID: PMC5996252] [DOI: 10.3389/fpsyg.2018.00882]
Abstract
The conflicting findings from the few studies conducted on gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice: the group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern of emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody.
Affiliation(s)
- Adi Lausen
- Department of Affective Neuroscience and Psychophysiology, Institute for Psychology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus "Primate Cognition", Goettingen, Germany
- Annekathrin Schacht
- Department of Affective Neuroscience and Psychophysiology, Institute for Psychology, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus "Primate Cognition", Goettingen, Germany
15. Meconi F, Doro M, Schiano Lomoriello A, Mastrella G, Sessa P. Neural measures of the role of affective prosody in empathy for pain. Sci Rep 2018; 8:291. [PMID: 29321532] [PMCID: PMC5762917] [DOI: 10.1038/s41598-017-18552-y]
Abstract
Emotional communication often requires the integration of affective prosodic and semantic components from speech with the speaker's facial expression. Affective prosody may have a special role by virtue of its dual nature: pre-verbal on one side and accompanying semantic content on the other. This consideration led us to hypothesize that it could act transversely, encompassing a wide temporal window that involves the processing of facial expressions and of the semantic content expressed by the speaker. This would allow powerful communication in contexts of potential urgency, such as witnessing the speaker's physical pain. Seventeen participants were shown faces preceded by verbal reports of pain. Facial expressions, the intelligibility of the semantic content of the report (i.e., participants' mother tongue vs. a fictional language), and the affective prosody of the report (neutral vs. painful) were manipulated. We monitored event-related potentials (ERPs) time-locked to the onset of the faces as a function of the semantic content intelligibility and affective prosody of the verbal reports. We found that affective prosody may interact with facial expressions and semantic content in two successive temporal windows, supporting its role as a transverse communication cue.
Affiliation(s)
- Federica Meconi
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
- Mattia Doro
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
- Giulia Mastrella
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
- Paola Sessa
- Department of Developmental and Social Psychology, University of Padova, Padova, Italy
16. Filippi P, Ocklenburg S, Bowling DL, Heege L, Güntürkün O, Newen A, de Boer B. More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing. Cogn Emot 2016; 31:879-891. [DOI: 10.1080/02699931.2016.1177489]
Affiliation(s)
- Piera Filippi
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Brussels, Belgium
- Center for Mind, Brain and Cognitive Evolution, Ruhr-University Bochum, Bochum, Germany
- Daniel L. Bowling
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Larissa Heege
- Department of General and Biological Psychology, University of Wuppertal, Wuppertal, Germany
- Onur Güntürkün
- Center for Mind, Brain and Cognitive Evolution, Ruhr-University Bochum, Bochum, Germany
- Department of Biopsychology, Ruhr-University Bochum, Bochum, Germany
- Albert Newen
- Center for Mind, Brain and Cognitive Evolution, Ruhr-University Bochum, Bochum, Germany
- Institute of Philosophy II, Ruhr-University Bochum, Bochum, Germany
- Bart de Boer
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Brussels, Belgium
17. Ben-David BM, Multani N, Shakuf V, Rudzicz F, van Lieshout PHHM. Prosody and Semantics Are Separate but Not Separable Channels in the Perception of Emotional Speech: Test for Rating of Emotions in Speech. J Speech Lang Hear Res 2016; 59:72-89. [PMID: 26903033] [DOI: 10.1044/2015_jslhr-h-14-0323]
Abstract
PURPOSE Our aim is to explore the complex interplay of prosody (tone of speech) and semantics (verbal content) in the perception of discrete emotions in speech. METHOD We implement a novel tool, the Test for Rating of Emotions in Speech. Eighty native English speakers were presented with spoken sentences made of different combinations of 5 discrete emotions (anger, fear, happiness, sadness, and neutral) presented in prosody and semantics. Listeners were asked to rate the sentence as a whole, integrating both speech channels, or to focus on one channel only (prosody or semantics). RESULTS We observed supremacy of congruency, failure of selective attention, and prosodic dominance. Supremacy of congruency means that a sentence that presents the same emotion in both speech channels was rated highest; failure of selective attention means that listeners were unable to selectively attend to one channel when instructed; and prosodic dominance means that prosodic information plays a larger role than semantics in processing emotional speech. CONCLUSIONS Emotional prosody and semantics are separate but not separable channels, and it is difficult to perceive one without the influence of the other. Our findings indicate that the Test for Rating of Emotions in Speech can reveal specific aspects in the processing of emotional speech and may in the future prove useful for understanding emotion-processing deficits in individuals with pathologies.
18. Pinheiro AP, Vasconcelos M, Dias M, Arrais N, Gonçalves ÓF. The music of language: an ERP investigation of the effects of musical training on emotional prosody processing. Brain Lang 2015; 140:24-34. [PMID: 25461917] [DOI: 10.1016/j.bandl.2014.10.009]
Abstract
Recent studies have demonstrated the positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content, differing in prosody (one third with neutral, one third with happy and one third with angry intonation), with intelligible semantic content (semantic content condition--SCC) and unintelligible semantic content (pure prosody condition--PPC). Reduced P50 amplitude was found in musicians. A difference between SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that auditory expertise characterizing extensive musical training may impact different stages of vocal emotional processing.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Margarida Vasconcelos
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Marcelo Dias
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Nuno Arrais
- Music Department, Institute of Arts and Human Sciences, University of Minho, Braga, Portugal
- Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
19. Aguert M, Laval V, Lacroix A, Gil S, Le Bigot L. Inferring emotions from speech prosody: not so easy at age five. PLoS One 2013; 8:e83657. [PMID: 24349539] [PMCID: PMC3857318] [DOI: 10.1371/journal.pone.0083657]
Abstract
Previous research has suggested that children do not rely on prosody to infer a speaker's emotional state because of biases toward lexical content or situational context. We hypothesized that there are actually no such biases and that young children simply have trouble in using emotional prosody. Sixty children from 5 to 13 years of age had to judge the emotional state of a happy or sad speaker and then to verbally explain their judgment. Lexical content and situational context were devoid of emotional valence. Results showed that prosody alone did not enable the children to infer emotions at age 5, and was still not fully mastered at age 13. Instead, they relied on contextual information despite the fact that this cue had no emotional valence. These results support the hypothesis that prosody is difficult to interpret for young children and that this cue plays only a subordinate role up until adolescence to infer others' emotions.
Affiliation(s)
- Marc Aguert
- Université de Caen Basse-Normandie, PALM (EA 4649), Caen, France
- Virginie Laval
- Université de Poitiers, CeRCA (UMR CNRS 7295), Poitiers, France
- Agnès Lacroix
- Université européenne de Bretagne - Rennes 2, CRP2C (EA 1285), Rennes, France
- Sandrine Gil
- Université de Poitiers, CeRCA (UMR CNRS 7295), Poitiers, France