1
Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024:1-14. [PMID: 38785380 DOI: 10.1080/02699931.2024.2357656]
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or its absence) in processing emotional speech. To test the effect of vision and early visual experience on the processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification of, and selective attention to, semantic and prosodic spoken emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory): it was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for the processing of emotional speech, but that early visual experience can improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of the deficiencies/enhancements of blindness.
Affiliation(s)
- Boaz M Ben-David
- Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
2
Sinvani RT, Fogel-Grinvald H, Sapir S. Self-Rated Confidence in Vocal Emotion Recognition Ability: The Role of Gender. J Speech Lang Hear Res 2024; 67:1413-1423. [PMID: 38625128 DOI: 10.1044/2024_jslhr-23-00373]
Abstract
PURPOSE We studied the role of gender in metacognition of vocal emotion recognition ability (ERA), as reflected by self-rated confidence (SRC). To this end, we took two approaches: first, examining the role of gender in vocal ERA and SRC independently, and second, looking for gender effects on the association of ERA with SRC. METHOD We asked 100 participants (50 men, 50 women) to interpret the emotional meaning of a set of vocal expressions portrayed by 30 actors (16 men, 14 women). Targets were 180 repetitive lexical sentences articulated in congruent emotional voices (anger, sadness, surprise, happiness, fear) and neutral expressions. Trial by trial, participants assigned retrospective SRC ratings based on their emotion recognition performance. RESULTS A binomial generalized linear mixed model (GLMM) estimating ERA accuracy revealed a significant gender effect, with women encoders (speakers) yielding higher accuracy levels than men. There was no significant effect of the decoder's (listener's) gender. A second GLMM estimating SRC found significant effects of both encoder and decoder gender, with women outperforming men. Gamma correlations were significantly greater than zero for both women and men decoders. CONCLUSIONS Despite varying effects of gender on each independent rating (ERA and SRC), our results suggest that both men and women decoders were accurate in their metacognition regarding vocal emotion recognition. Further research is needed to study how individuals of both genders use metacognitive knowledge in emotion recognition, and whether and how such knowledge contributes to effective social communication.
Affiliation(s)
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Israel
3
Chatterjee M, Gajre S, Kulkarni AM, Barrett KC, Limb CJ. Predictors of Emotional Prosody Identification by School-Age Children With Cochlear Implants and Their Peers With Normal Hearing. Ear Hear 2024; 45:411-424. [PMID: 37811966 PMCID: PMC10922148 DOI: 10.1097/aud.0000000000001436]
Abstract
OBJECTIVES Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but understanding them will be crucial for designing technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare to children with normal hearing who are listening to either normal emotional speech or to degraded speech. DESIGN We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing, based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences that were recorded by 4 talkers in child-directed and adult-directed prosody, corresponding to five emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested on the same tasks; they listened to both original speech and to versions that had been noise-vocoded to simulate CI information processing. RESULTS Group comparison confirmed the expected deficit in CI participants' emotion identification relative to participants with normal hearing. Within the CI group, increasing hearing age (correlated with developmental age) and nonverbal cognition outcomes predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and were involved in interactions with hearing age and cognition. Age at implantation was not predictive of emotion identification.
Unlike the CI participants, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether listening to original speech or CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in their scores for different talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits compared with their performance with original speech materials, and their scores also showed the least effect of talker- and emotion-based variability. CI participants showed more variation in their scores with different talkers and emotions than participants with normal hearing listening to CI-simulated speech, but less so than participants with normal hearing listening to original speech. CONCLUSIONS Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but they improve with age and experience at a rate that is similar to peers with normal hearing. Unlike participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more expressive of emotions than others, and this effect was greater in CI users who had less experience with their device (or were younger) than CI users who had more experience with their device (or were older). Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or than children with normal hearing.
Affiliation(s)
- Monita Chatterjee
- Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30 St., Omaha, NE 68131, USA
- Shivani Gajre
- Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30 St., Omaha, NE 68131, USA
- Aditya M Kulkarni
- Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30 St., Omaha, NE 68131, USA
- Karen C Barrett
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
4
Lingelbach K, Vukelić M, Rieger JW. GAUDIE: Development, validation, and exploration of a naturalistic German AUDItory Emotional database. Behav Res Methods 2024; 56:2049-2063. [PMID: 37221343 PMCID: PMC10991051 DOI: 10.3758/s13428-023-02135-z]
Abstract
Since thoroughly validated naturalistic affective German speech stimulus databases are rare, we present a novel validated database of speech sequences assembled for the purpose of emotion induction. The database comprises 37 audio speech sequences with a total duration of 92 minutes for the induction of positive, neutral, and negative emotion: comedy shows intended to elicit humorous and amusing feelings, weather forecasts, and arguments between couples and relatives from movies or television series. Multiple continuous and discrete ratings were used to validate the database, capturing the time course and variability of valence and arousal. We analyse and quantify how well the audio sequences fulfil the quality criteria of differentiation, salience/strength, and generalizability across participants. Hence, we provide a validated speech database of naturalistic scenarios suitable for investigating emotion processing and its time course with German-speaking participants. Information on using the stimulus database for research purposes can be found in the OSF project repository GAUDIE: https://osf.io/xyr6j/.
Affiliation(s)
- Katharina Lingelbach
- Fraunhofer Institute for Industrial Engineering IAO, Nobelstraße 12, 70569, Stuttgart, Germany.
- Department of Psychology, University of Oldenburg, Oldenburg, Germany.
- Mathias Vukelić
- Fraunhofer Institute for Industrial Engineering IAO, Nobelstraße 12, 70569, Stuttgart, Germany
- Jochem W Rieger
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
5
Carmichael CL, Mizrahi M. Connecting cues: The role of nonverbal cues in perceived responsiveness. Curr Opin Psychol 2023; 53:101663. [PMID: 37572551 DOI: 10.1016/j.copsyc.2023.101663]
Abstract
Nonverbal cues powerfully shape interpersonal experiences with close others; yet, there has been minimal cross-fertilization between the nonverbal behavior and close relationships literatures. Using examples of responsive nonverbal behavior conveyed across vocal, tactile, facial, and bodily channels of communication, we illustrate the utility of assessing and isolating their effects to differentiate the contributions of verbal and nonverbal displays of listening and responsiveness to relationship outcomes. We offer suggestions for methodological approaches to better capture responsive behavior across verbal and nonverbal channels, and discuss theoretical and practical implications of carrying out this work to better clarify what makes people feel understood, validated, listened to, and cared for.
Affiliation(s)
- Cheryl L Carmichael
- Department of Psychology, Brooklyn College, CUNY, 2900 Bedford Avenue, Brooklyn, NY 11210, USA
- Moran Mizrahi
- Department of Psychology, Ariel University, 3 Kiryat HaMada, Ariel 40700, Israel.
6
Kao C, Zhang Y. Detecting Emotional Prosody in Real Words: Electrophysiological Evidence From a Modified Multifeature Oddball Paradigm. J Speech Lang Hear Res 2023; 66:2988-2998. [PMID: 37379567 DOI: 10.1044/2023_jslhr-22-00652]
Abstract
PURPOSE Emotional voice conveys important social cues that demand listeners' attention and timely processing. This event-related potential study investigated the feasibility of a multifeature oddball paradigm to examine adult listeners' neural responses to detecting emotional prosody changes in nonrepeating naturally spoken words. METHOD Thirty-three adult listeners completed the experiment by passively listening to the words in neutral and three alternating emotions while watching a silent movie. Previous research documented preattentive change-detection electrophysiological responses (e.g., mismatch negativity [MMN], P3a) to emotions carried by fixed syllables or words. Given that the MMN and P3a have also been shown to reflect extraction of abstract regularities over repetitive acoustic patterns, this study employed a multifeature oddball paradigm to compare listeners' MMN and P3a to emotional prosody change from neutral to angry, happy, and sad emotions delivered with hundreds of nonrepeating words in a single recording session. RESULTS Both MMN and P3a were successfully elicited by the emotional prosodic change over the varying linguistic context. Angry prosody elicited the strongest MMN compared with happy and sad prosodies. Happy prosody elicited the strongest P3a in the centro-frontal electrodes, and angry prosody elicited the smallest P3a. CONCLUSIONS The results demonstrated that listeners were able to extract the acoustic patterns for each emotional prosody category over constantly changing spoken words. The findings confirm the feasibility of the multifeature oddball paradigm in investigating emotional speech processing beyond simple acoustic change detection, which may potentially be applied to pediatric and clinical populations.
Affiliation(s)
- Chieh Kao
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Center for Cognitive Sciences, University of Minnesota, Twin Cities
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities
- Masonic Institute for the Developing Brain, University of Minnesota, Twin Cities
7
Eschenauer S, Tsao R, Legou T, Tellier M, André C, Brugnoli I, Tortel A, Pasquier A. Performing for Better Communication: Creativity, Cognitive-Emotional Skills and Embodied Language in Primary Schools. J Intell 2023; 11:140. [PMID: 37504783 PMCID: PMC10381105 DOI: 10.3390/jintelligence11070140]
Abstract
While the diversity and complexity of the links between creativity and emotional skills as well as their effects on cognitive processes are now established, few approaches to implementing them in schools have been evaluated. Within the framework of the enactive paradigm, which considers the complexity and dynamics of language as a cognitive process, we study how an approach based on performative theatre can synergistically stimulate creativity (artistic, bodily and linguistic), emotional skills (identifying and understanding emotions) and executive functions (especially inhibition, cognitive flexibility and emotional control), all as components defined in the context of oral communication. Stimulating this synergy in the context of foreign language teaching may be especially beneficial for children with communication disorders. This paper presents the first results of the CELAVIE pilot study (Creativity, Empathy and Emotions in Language learning with Autism for an Inclusive Education) through a case study of a pupil with a neurodevelopmental disorder included in a 4th-grade class. The results show a progression in oral communication in English as a Foreign Language (EFL), in emotional skills and creativity.
Affiliation(s)
- Sandrine Eschenauer
- Aix-Marseille Univ, CNRS, LPL, 13100 Aix-en-Provence, France
- Aix-Marseille Univ, Pôle Pilote AMPIRIC, 13013 Marseille, France
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- SFERE-Provence, 13013 Marseille, France
- Aix-Marseille Univ, Institute for Language, Communication and the Brain, ILCB, 13100 Aix-en-Provence, France
- Raphaële Tsao
- Aix-Marseille Univ, Pôle Pilote AMPIRIC, 13013 Marseille, France
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- SFERE-Provence, 13013 Marseille, France
- Aix-Marseille Univ, PSYCLE, 13100 Aix-en-Provence, France
- Thierry Legou
- Aix-Marseille Univ, CNRS, LPL, 13100 Aix-en-Provence, France
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- Aix-Marseille Univ, Institute for Language, Communication and the Brain, ILCB, 13100 Aix-en-Provence, France
- Marion Tellier
- Aix-Marseille Univ, CNRS, LPL, 13100 Aix-en-Provence, France
- Aix-Marseille Univ, Pôle Pilote AMPIRIC, 13013 Marseille, France
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- SFERE-Provence, 13013 Marseille, France
- Carine André
- Aix-Marseille Univ, CNRS, LPL, 13100 Aix-en-Provence, France
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- Isabelle Brugnoli
- University Paris-Est Créteil, IMAGER-Languenact, 94100 Créteil, France
- Anne Tortel
- Aix-Marseille Univ, CNRS, LPL, 13100 Aix-en-Provence, France
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- Aurélie Pasquier
- Institute of Creativity and Innovation from Aix-Marseille Univ-InCIAM, 13100 Aix-en-Provence, France
- SFERE-Provence, 13013 Marseille, France
- Aix-Marseille Univ, ADEF-GCAF, 13013 Marseille, France
8
Lin Y, Li C, Hu R, Zhou L, Ding H, Fan Q, Zhang Y. Impaired emotion perception in schizophrenia shows sex differences with channel- and category-specific effects: A pilot study. J Psychiatr Res 2023; 161:150-157. [PMID: 36924569 DOI: 10.1016/j.jpsychires.2023.03.011]
Abstract
Individuals with schizophrenia reportedly demonstrate deficits in emotion perception. Relevant studies on the effects of the decoder's sex, communication channel, and emotion category have produced mixed findings and have seldom explored the interactions among these three key factors. The present pilot study examined how male and female individuals with schizophrenia and healthy controls perceived emotional (e.g., angry, happy, and sad) and neutral expressions from the verbal semantic channel and the nonverbal prosodic and facial channels. Twenty-eight individuals with schizophrenia (11 females) and 30 healthy controls (13 females) were asked to recognize emotional facial expressions, emotional prosody, and emotional semantics. Both accuracy and response time showed subpar performance in the schizophrenia group across all communication channels and emotional categories. More severe emotion perception deficits were found with the nonverbal (not the verbal) materials. There was also a reduced level of impairment in anger perception, especially among female individuals with schizophrenia, while a biased perception towards emotional semantics was more pronounced in male individuals with schizophrenia. These findings, although preliminary, indicate the channel- and category-specific nature of emotion perception, with potential sex differences, among people with schizophrenia, which has important theoretical and practical implications.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China.
- Chuoran Li
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ruozhen Hu
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China
- Leqi Zhou
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, 800 Dong Chuan Rd., Minhang District, Shanghai, 200240, China.
- Qing Fan
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Shanghai Key Laboratory of Psychotic Disorders, Shanghai, China.
- Yang Zhang
- Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, USA.
9
Icht M, Zukerman G, Ben-Itzchak E, Ben-David BM. Response to McKenzie et al. 2021: Keep It Simple; Young Adults With Autism Spectrum Disorder Without Intellectual Disability Can Process Basic Emotions. J Autism Dev Disord 2023; 53:1269-1272. [PMID: 35507295 PMCID: PMC9066386 DOI: 10.1007/s10803-022-05574-3]
Abstract
We recently read the interesting and informative paper entitled "Empathic accuracy and cognitive and affective empathy in young adults with and without autism spectrum disorder" (McKenzie et al. in Journal of Autism and Developmental Disorders 52: 1-15, 2021). This paper expands recent findings from our lab (Ben-David in Journal of Autism and Developmental Disorders 50: 741-756, 2020a; International Journal of Audiology 60: 319-321, 2020b) and a recent theoretical framework (Icht et al. in Autism Research 14: 1948-1964, 2021) that may suggest a new purview for McKenzie et al.'s results. Namely, these papers suggest that young adults with autism spectrum disorder without intellectual disability can successfully recruit their cognitive abilities to distinguish between different simple spoken emotions, but may still face difficulties processing complex, subtle emotions. McKenzie et al. (Journal of Autism and Developmental Disorders 52: 1-15, 2021) extended these findings to the processing of emotions in video clips, with both visual and auditory information.
Affiliation(s)
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Gil Zukerman
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Esther Ben-Itzchak
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Department of Communication Disorders, The Bruckner Center for Research in Autism, Ariel University, Ariel, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC, Herzliya), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
10
Shakuf V, Ben-David B, Wegner TGG, Wesseling PBC, Mentzel M, Defren S, Allen SEM, Lachmann T. Processing emotional prosody in a foreign language: the case of German and Hebrew. J Cult Cogn Sci 2022; 6:251-268. [PMID: 35996660 PMCID: PMC9386669 DOI: 10.1007/s41809-022-00107-x]
Abstract
This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. In two experiments, we investigated the perception of emotional prosody in Hebrew and German by listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed us to conduct controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native Israeli speakers of Hebrew rated Hebrew sentences spoken with four different emotional prosodies (anger, fear, happiness, sadness) or neutrally. The Hebrew version of the Test for Rating Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants' agreement on how much each sentence conveyed each of four discrete emotions (anger, fear, happiness, and sadness). In Experiment 2, 30 native speakers of German and 24 native Israeli speakers of Hebrew with no knowledge of German rated sentences from the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings were similar between the groups. These findings show that individuals can identify emotions in a foreign language even without access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for "wrong" perceptions. This adds to accumulating evidence in the literature for the universality of emotional prosody.
11
Dor YI, Algom D, Shakuf V, Ben-David BM. Age-Related Changes in the Perception of Emotions in Speech: Assessing Thresholds of Prosody and Semantics Recognition in Noise for Young and Older Adults. Front Neurosci 2022; 16:846117. [PMID: 35546888 PMCID: PMC9082150 DOI: 10.3389/fnins.2022.846117]
Abstract
Older adults process emotions in speech differently than do young adults. However, it is unclear whether these age-related changes impact all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults' sensory thresholds for emotion recognition in two channels of spoken emotions: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics. They were asked to recognize the prosodic or semantic emotion, in separate tasks. Sentences were presented against a background of speech-spectrum noise ranging from an SNR of −15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus a sensory source for age-related differences in spoken-emotion processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults experienced greater difficulty inhibiting irrelevant information. Taken together, the results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotion processing.
Affiliation(s)
- Yehuda I Dor
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Communication, Aging and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Daniel Algom
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Vered Shakuf
- Department of Communications Disorders, Achva Academic College, Arugot, Israel
- Boaz M Ben-David
- Communication, Aging and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
12
Carl M, Icht M, Ben-David BM. A Cross-Linguistic Validation of the Test for Rating Emotions in Speech: Acoustic Analyses of Emotional Sentences in English, German, and Hebrew. J Speech Lang Hear Res 2022; 65:991-1000. [PMID: 35171689 DOI: 10.1044/2021_jslhr-21-00205]
Abstract
PURPOSE The Test for Rating Emotions in Speech (T-RES) was developed to assess the processing of emotions in spoken language. In this tool, listeners rate spoken sentences that convey emotional content (anger, happiness, sadness, and neutral) in both semantics and prosody, in different combinations. To date, English, German, and Hebrew versions have been developed, as well as online versions (iT-RES) created to adapt to COVID-19 social restrictions. Since the perception of spoken emotions may be affected by linguistic (and cultural) variables, it is important to compare the acoustic characteristics of the stimuli within and between languages. The goal of the current report was to provide cross-linguistic acoustic validation of the T-RES. METHOD T-RES sentences in the aforementioned languages were acoustically analyzed in terms of mean F0, F0 range, and speech rate to obtain profiles of acoustic parameters for the different emotions. RESULTS Significant within-language discriminability of prosodic emotions was found for both mean F0 and speech rate. These measures were also associated with comparable patterns of prosodic emotions across the tested languages and with the emotional ratings. CONCLUSIONS The results demonstrate the independence of prosody and semantics within the T-RES stimuli. These findings illustrate listeners' ability to clearly distinguish between the different prosodic emotions in each language, providing a cross-linguistic validation of the T-RES and iT-RES.
Affiliation(s)
- Micalle Carl
- Department of Communication Disorders, Ariel University, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network (UHN), Ontario, Canada
13
More Than Words: The Relative Roles of Prosody and Semantics in the Perception of Emotions in Spoken Language by Postlingual Cochlear Implant Users. Ear Hear 2022; 43:1378-1389. [PMID: 35030551] [DOI: 10.1097/aud.0000000000001199]
Abstract
OBJECTIVES The processing of emotional speech calls for the perception and integration of semantic and prosodic cues. Although cochlear implants allow for significant auditory improvements, they are limited in the transmission of spectro-temporal fine-structure information that may not support the processing of voice pitch cues. The goal of the current study is to compare the performance of postlingual cochlear implant (CI) users and a matched control group on perception, selective attention, and integration of emotional semantics and prosody. DESIGN Fifteen CI users and 15 normal hearing (NH) peers (age range, 18-65 years) listened to spoken sentences composed of different combinations of four discrete emotions (anger, happiness, sadness, and neutrality) presented in prosodic and semantic channels-T-RES: Test for Rating Emotions in Speech. In three separate tasks, listeners were asked to attend to the sentence as a whole, thus integrating both speech channels (integration), or to focus on one channel only (rating of target emotion) and ignore the other (selective attention). Their task was to rate how much they agreed that the sentence conveyed each of the predefined emotions. In addition, all participants performed standard tests of speech perception. RESULTS When asked to focus on one channel, semantics or prosody, both groups rated emotions similarly with comparable levels of selective attention. When the task called for channel integration, group differences were found. CI users appeared to use semantic emotional information more than did their NH peers. CI users assigned higher ratings than did their NH peers to sentences that did not present the target emotion, indicating some degree of confusion. In addition, for CI users, individual differences in speech comprehension over the phone and identification of intonation were significantly related to emotional semantic and prosodic ratings, respectively.
CONCLUSIONS CI users and NH controls did not differ in perception of prosodic and semantic emotions and in auditory selective attention. However, when the task called for integration of prosody and semantics, CI users overused the semantic information (as compared with NH). We suggest that as CI users adopt diverse cue weighting strategies with device experience, their weighting of prosody and semantics differs from those used by NH. Finally, CI users may benefit from rehabilitation strategies that strengthen perception of prosodic information to better understand emotional speech.
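One simple way to quantify "selective attention" in this rating paradigm, sketched here with hypothetical numbers (not the study's data or scoring code), is the gap between target-emotion ratings when the to-be-ignored channel agrees versus conflicts with the attended one; a larger gap means the ignored channel intruded more.

```python
def mean(xs):
    return sum(xs) / len(xs)

def selective_attention_failure(congruent, incongruent):
    """Difference in mean target-emotion ratings between trials where the
    to-be-ignored channel matched the attended one and trials where it
    conflicted. Larger values = greater intrusion of the ignored channel."""
    return mean(congruent) - mean(incongruent)

# Hypothetical ratings on an agreement scale (illustrative only).
ci_users = selective_attention_failure([5.1, 5.4, 4.9], [3.2, 3.0, 3.4])
controls = selective_attention_failure([5.0, 5.2, 5.1], [4.1, 4.4, 4.2])
```

With these invented numbers the CI group shows the larger gap, i.e., more intrusion from the unattended channel.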
14
Kikutani M, Ikemoto M. Detecting emotion in speech expressing incongruent emotional cues through voice and content: investigation on dominant modality and language. Cogn Emot 2022; 36:492-511. [PMID: 34978263] [DOI: 10.1080/02699931.2021.2021144]
Abstract
This research investigated how we detect emotion in speech when the emotional cues in the sound of the voice do not match the semantic content. It examined the dominance of the voice or semantics in the perception of emotion from incongruent speech and the influence of language on the interaction between the two modalities. Japanese participants heard a voice emoting anger, happiness or sadness while saying "I'm angry", "I'm pleased" or "I'm sad", which were in their native language, in their second language (English) and in unfamiliar languages (Khmer and Swedish). They reported how much they agreed that the speaker was expressing each of the three emotions. Two experiments were conducted with different numbers of voice stimuli, and both found consistent results. Strong reliance on the voice was found for speech in the participants' second and unfamiliar languages, but this dominance was weakened for speech in their native language. Among the three emotions, voice was most important for the perception of sadness. This research concludes that the impact of the emotional cues expressed by the voice and semantics varies depending on the expressed emotions and the language.
Affiliation(s)
- Mariko Kikutani
- Institute of Liberal Arts and Science, Kanazawa University, Ishikawa, Japan
15
Leshem R, Icht M, Ben-David BM. Processing of Spoken Emotions in Schizophrenia: Forensic and Non-forensic Patients Differ in Emotional Identification and Integration but Not in Selective Attention. Front Psychiatry 2022; 13:847455. [PMID: 35386523] [PMCID: PMC8977511] [DOI: 10.3389/fpsyt.2022.847455]
Abstract
Patients with schizophrenia (PwS) typically demonstrate deficits in visual processing of emotions. Less is known about auditory processing of spoken-emotions, as conveyed by the prosodic (tone) and semantics (words) channels. In a previous study, forensic PwS (who committed violent offenses) identified spoken-emotions and integrated the emotional information from both channels similarly to controls. However, their performance indicated larger failures of selective-attention, and lower discrimination between spoken-emotions, than controls. Given that forensic schizophrenia represents a special subgroup, the current study compared forensic and non-forensic PwS. Forty-five PwS listened to sentences conveying four basic emotions presented in semantic or prosodic channels, in different combinations. They were asked to rate how much they agreed that the sentences conveyed a predefined emotion, focusing on one channel or on the sentence as a whole. Their performance was compared to that of 21 forensic PwS (previous study). The two groups did not differ in selective-attention. However, better emotional identification and discrimination, as well as better channel integration were found for the forensic PwS. Results have several clinical implications: difficulties in spoken-emotions processing might not necessarily relate to schizophrenia; attentional deficits might not be a risk factor for aggression in schizophrenia; and forensic schizophrenia might have unique characteristics as related to spoken-emotions processing (motivation, stimulation).
Affiliation(s)
- Rotem Leshem
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada
16
Lin Y, Ding H, Zhang Y. Unisensory and Multisensory Stroop Effects Modulate Gender Differences in Verbal and Nonverbal Emotion Perception. J Speech Lang Hear Res 2021; 64:4439-4457. [PMID: 34469179] [DOI: 10.1044/2021_jslhr-20-00338]
Abstract
Purpose This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception. Method Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels. Results The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception. Conclusions These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material https://doi.org/10.23641/asha.16435599.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
17
Icht M, Zukerman G, Ben-Itzchak E, Ben-David BM. Keep it simple: Identification of basic versus complex emotions in spoken language in individuals with autism spectrum disorder without intellectual disability: A meta-analysis study. Autism Res 2021; 14:1948-1964. [PMID: 34101373] [DOI: 10.1002/aur.2551]
Abstract
Daily functioning involves identifying emotions in spoken language, a fundamental aspect of social interactions. To date, there is inconsistent evidence in the literature on whether individuals with autism spectrum disorder without intellectual disability (ASD-without-ID) experience difficulties in identification of spoken emotions. We conducted a meta-analysis (literature search following the PRISMA guidelines), with 26 data sets (taken from 23 peer-reviewed journal articles) comparing individuals with ASD-without-ID (N = 614) and typically-developed (TD) controls (N = 640), from nine countries and in seven languages (published until February 2020). In our analyses there was insufficient evidence to suggest that individuals with ASD-without-ID differ from matched controls in the identification of simple prosodic emotions (e.g., sadness, happiness). However, individuals with ASD-without-ID were found to perform significantly worse than controls in identification of complex prosodic emotions (e.g., envy and boredom). The level of the semantic content of the stimuli presented (e.g., sentences vs. strings of digits) was not found to have an impact on the results. In conclusion, the difference in findings between simple and complex emotions calls for a new look at emotion processing in ASD-without-ID. Intervention programs may rely on the intact abilities of individuals with ASD-without-ID to process simple emotions and target improved performance with complex emotions. LAY SUMMARY: Individuals with autism spectrum disorder without intellectual disability (ASD-without-ID) do not differ from matched controls in the identification of simple prosodic emotions (e.g., sadness, happiness). However, they were found to perform significantly worse than controls in the identification of complex prosodic emotions (e.g., envy, boredom). This was found in a meta-analysis of 26 data sets with 1254 participants from nine countries and in seven languages.
Intervention programs may rely on the intact abilities of individuals with ASD-without-ID to process simple emotions.
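The effect-size machinery behind such a meta-analysis can be sketched in a few lines. The numbers below are illustrative only, not values from the article, and the pooling shown is a plain fixed-effect inverse-variance combination (published meta-analyses typically also report random-effects models and heterogeneity statistics).

```python
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) with Hedges'
    small-sample correction factor J."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)      # Hedges' correction
    return j * d

def pooled_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1 / v for v in variances]
    return sum(w * g for w, g in zip(weights, effects)) / sum(weights)

# Illustrative values (not from the article):
g_null = hedges_g(10, 10, 2, 2, 20, 20)      # identical groups -> 0
g_two = hedges_g(12, 10, 2, 2, 20, 20)       # one-SD difference, shrunk by J
pooled = pooled_effect([0.5, 0.5], [0.1, 0.1])
```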
Affiliation(s)
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Gil Zukerman
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Esther Ben-Itzchak
- The Bruckner Center for Research in Autism, Department of Communication Disorders, Ariel University, Ariel, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel
- Department of Speech-Language Pathology, and Rehabilitation Sciences Institute (RSI), University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Ontario, Canada
18
Weighting of Prosodic and Lexical-Semantic Cues for Emotion Identification in Spectrally Degraded Speech and With Cochlear Implants. Ear Hear 2021; 42:1727-1740. [PMID: 34294630] [PMCID: PMC8545870] [DOI: 10.1097/aud.0000000000001057]
Abstract
OBJECTIVES Normally-hearing (NH) listeners rely more on prosodic cues than on lexical-semantic cues for emotion perception in speech. In everyday spoken communication, the ability to decipher conflicting information between prosodic and lexical-semantic cues to emotion can be important: for example, in identifying sarcasm or irony. Speech degradation in cochlear implants (CIs) can be sufficiently overcome to identify lexical-semantic cues, but the distortion of voice pitch cues makes it particularly challenging to hear prosody with CIs. The purpose of this study was to examine changes in relative reliance on prosodic and lexical-semantic cues in NH adults listening to spectrally degraded speech and adult CI users. We hypothesized that, compared with NH counterparts, CI users would show increased reliance on lexical-semantic cues and reduced reliance on prosodic cues for emotion perception. We predicted that NH listeners would show a similar pattern when listening to CI-simulated versions of emotional speech. DESIGN Sixteen NH adults and 8 postlingually deafened adult CI users participated in the study. Sentences were created to convey five lexical-semantic emotions (angry, happy, neutral, sad, and scared), with five sentences expressing each category of emotion. Each of these 25 sentences was then recorded with the 5 (angry, happy, neutral, sad, and scared) prosodic emotions by 2 adult female talkers. The resulting stimulus set included 125 recordings (25 Sentences × 5 Prosodic Emotions) per talker, of which 25 were congruent (consistent lexical-semantic and prosodic cues to emotion) and the remaining 100 were incongruent (conflicting lexical-semantic and prosodic cues to emotion). The recordings were processed to create three levels of spectral degradation: full-spectrum, and CI-simulated (noise-vocoded) with 8 or 16 channels of spectral information.
Twenty-five recordings (one sentence per lexical-semantic emotion recorded in all five prosodies) were used for a practice run in the full-spectrum condition. The remaining 100 recordings were used as test stimuli. For each talker and condition of spectral degradation, listeners indicated the emotion associated with each recording in a single-interval, five-alternative forced-choice task. The responses were scored as proportion correct, where "correct" responses corresponded to the lexical-semantic emotion. CI users heard only the full-spectrum condition. RESULTS The results showed a significant interaction between hearing status (NH, CI) and congruency in identifying the lexical-semantic emotion associated with the stimuli. This interaction was as predicted, that is, CI users showed increased reliance on lexical-semantic cues in the incongruent conditions, while NH listeners showed increased reliance on the prosodic cues in the incongruent conditions. As predicted, NH listeners showed increased reliance on lexical-semantic cues to emotion when the stimuli were spectrally degraded. CONCLUSIONS The present study confirmed previous findings of prosodic dominance for emotion perception by NH listeners in the full-spectrum condition. Further, novel findings with CI patients and NH listeners in the CI-simulated conditions showed reduced reliance on prosodic cues and increased reliance on lexical-semantic cues to emotion. These results have implications for CI listeners' ability to perceive conflicts between prosodic and lexical-semantic cues, with repercussions for their identification of sarcasm and humor. Understanding sarcasm or humor can impact a person's ability to develop relationships, follow conversations, understand a speaker's vocal emotion and intended message, follow jokes, and communicate in everyday life.
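The cue-weighting result can be summarized by a simple score: on incongruent trials, the share of responses matching the lexical-semantic (rather than prosodic) emotion. This is a hypothetical sketch with invented trial data, not the study's scoring code.

```python
def semantic_reliance(responses, semantic_labels, prosodic_labels):
    """On incongruent trials (semantic label != prosodic label), return the
    proportion of responses that matched the lexical-semantic emotion.
    Higher values mean heavier weighting of semantics over prosody."""
    incongruent = [(r, s) for r, s, p in
                   zip(responses, semantic_labels, prosodic_labels) if s != p]
    if not incongruent:
        return 0.0
    hits = sum(1 for r, s in incongruent if r == s)
    return hits / len(incongruent)

# Hypothetical trials (illustrative only):
resp = ["happy", "sad", "angry", "sad"]
sem  = ["happy", "sad", "sad",   "sad"]
pros = ["sad",   "sad", "angry", "happy"]
score = semantic_reliance(resp, sem, pros)
```

Here three of the four trials are incongruent, and two of those responses follow the semantic label, giving a score of 2/3.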
19
The brain mechanism of explicit and implicit processing of emotional prosodies: An fNIRS study. Acta Psychologica Sinica 2021. [DOI: 10.3724/sp.j.1041.2021.00015]
20
Ben-David BM, Mentzel M, Icht M, Gilad M, Dor YI, Ben-David S, Carl M, Shakuf V. Challenges and opportunities for telehealth assessment during COVID-19: iT-RES, adapting a remote version of the test for rating emotions in speech. Int J Audiol 2020; 60:319-321. [PMID: 33063553] [DOI: 10.1080/14992027.2020.1833255]
Abstract
OBJECTIVE COVID-19 social isolation restrictions have accelerated the need to adapt clinical assessment tools to telemedicine. Remote adaptations are of special importance for populations at risk, e.g. older adults and individuals with chronic medical comorbidities. In response to this urgent clinical and scientific need, we describe a remote adaptation of the T-RES (Oron et al. 2020; IJA), designed to assess the complex processing of spoken emotions, based on identification and integration of the semantics and prosody of spoken sentences. DESIGN We present iT-RES, an online version of the speech-perception assessment tool, detailing the challenges considered and solutions chosen when designing the telehealth tool. We show a preliminary validation of performance against the original lab-based T-RES. STUDY SAMPLE A between-participants design, within two groups of 78 young adults (T-RES, n = 39; iT-RES, n = 39). RESULTS iT-RES performance closely followed that of T-RES, with no group differences found in the main trends, identification of emotions, selective attention, and integration. CONCLUSIONS The design of iT-RES mapped the main challenges for remote auditory assessments, and the solutions taken to address them. We hope that this will encourage further efforts for telehealth adaptations of clinical services, to meet the needs of special populations and avoid halting scientific research.
Affiliation(s)
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada
- Maya Mentzel
- Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
- School of Psychological Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Maya Gilad
- Efi Arazi School of Computer Sciences, Interdisciplinary Center (IDC), Herzliya, Israel
- Yehuda I Dor
- Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
- School of Psychological Sciences, Tel-Aviv University, Tel-Aviv, Israel
- Sarah Ben-David
- Department of Criminology, Ariel University, Ariel, Israel
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Micalle Carl
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Vered Shakuf
- Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
- Department of Communication Disorders, Achva Academic College, Shikmim, Israel
21
Kao C, Zhang Y. Differential Neurobehavioral Effects of Cross-Modal Selective Priming on Phonetic and Emotional Prosodic Information in Late Second Language Learners. J Speech Lang Hear Res 2020; 63:2508-2521. [PMID: 32658561] [DOI: 10.1044/2020_jslhr-19-00329]
Abstract
Purpose Spoken language is inherently multimodal and multidimensional in natural settings, but very little is known about how second language (L2) learners process multilayered speech signals with both phonetic and affective cues. This study investigated how late L2 learners undertake parallel processing of linguistic and affective information in the speech signal at behavioral and neurophysiological levels. Method Behavioral and event-related potential measures were taken in a selective cross-modal priming paradigm to examine how late L2 learners (N = 24, M age = 25.54 years) assessed the congruency of phonetic (target vowel: /a/ or /i/) and emotional (target affect: happy or angry) information between the visual primes of facial pictures and the auditory targets of spoken syllables. Results Behavioral accuracy data showed a significant congruency effect in affective (but not phonetic) priming. Unlike a previous report on monolingual first language (L1) users, the L2 users showed no facilitation in reaction time for congruency detection in either selective priming task. The neurophysiological results revealed a robust N400 response that was stronger in the phonetic condition but without clear lateralization, and the N400 effect was weaker in late L2 listeners than in monolingual L1 listeners. Following the N400, late L2 learners showed a weaker late positive response than the monolingual L1 users, particularly in the left central to posterior electrode regions. Conclusions The results demonstrate distinct patterns of behavioral and neural processing of phonetic and affective information in L2 speech with reduced neural representations in both the N400 and the later processing stage, and they provide an impetus for further research on similarities and differences in L1 and L2 multisensory speech perception in bilingualism.
Affiliation(s)
- Chieh Kao
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis
- Center for Neurobehavioral Development, University of Minnesota, Minneapolis
22
Lin Y, Ding H, Zhang Y. Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects. J Speech Lang Hear Res 2020; 63:896-912. [PMID: 32186969] [DOI: 10.1044/2020_jslhr-19-00258]
Abstract
Purpose Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies on linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in cross-channel auditory alone task (i.e., semantics-prosody Stroop task) and cross-modal audiovisual task (i.e., semantics-prosody-face Stroop task). Method Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by introducing visual facial expression during auditory stimulus presentation. Participants were asked to judge emotional information for each test trial according to the instruction of selective attention. Results Accuracy and reaction time data indicated that, despite an increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing and did not take precedence over facial expression. While congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2. Conclusion Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and congruence facilitation effect in multisensory integration. Our study contributes tonal language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/modal emotion integration with potential clinical applications.
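A Stroop interference effect of the kind measured in these tasks is just the reaction-time cost of conflicting channels relative to agreeing ones. A toy sketch with invented reaction times (not data from the article):

```python
def mean(xs):
    return sum(xs) / len(xs)

def stroop_effect(rt_congruent, rt_incongruent):
    """Classic Stroop interference: how much slower (in ms) responses are
    when the two channels conflict than when they agree."""
    return mean(rt_incongruent) - mean(rt_congruent)

# Invented per-trial reaction times in milliseconds.
effect = stroop_effect([620, 650, 640], [720, 760, 730])
```

A positive value indicates interference from the to-be-ignored channel; comparing the effect across channel pairings (e.g., prosody ignored vs. semantics ignored) is what reveals which cue dominates.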
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Science & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
23
Leshem R, Icht M, Bentzur R, Ben-David BM. Processing of Emotions in Speech in Forensic Patients With Schizophrenia: Impairments in Identification, Selective Attention, and Integration of Speech Channels. Front Psychiatry 2020; 11:601763. [PMID: 33281649] [PMCID: PMC7691229] [DOI: 10.3389/fpsyt.2020.601763]
Abstract
Individuals with schizophrenia show deficits in the recognition of emotions, which may increase the risk of violence. This study explored how forensic patients with schizophrenia process spoken emotion by: (a) identifying emotions expressed in prosodic and semantic content separately, (b) selectively attending to one speech channel while ignoring the other, and (c) integrating the prosodic and the semantic channels, compared to non-clinical controls. Twenty-one forensic patients with schizophrenia and 21 matched controls listened to sentences conveying four emotions (anger, happiness, sadness, and neutrality) presented in semantic or prosodic channels, in different combinations. They were asked to rate how much they agreed that the sentences conveyed a predefined emotion, focusing on one channel or on the sentence as a whole. Forensic patients with schizophrenia performed with intact identification and integration of spoken emotions, but their ratings indicated reduced discrimination, larger failures of selective attention, and under-ratings of negative emotions, compared to controls. This finding does not support previous reports of an inclination to interpret social situations in a negative way among individuals with schizophrenia. Finally, current results may guide rehabilitation approaches matched to the pattern of auditory emotional processing presented by forensic patients with schizophrenia, improving social interactions and quality of life.
Affiliation(s)
- Rotem Leshem
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Roni Bentzur
- Psychiatric Division, Sheba Medical Center, Tel Hashomer, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
24
Ben-David BM, Ben-Itzchak E, Zukerman G, Yahav G, Icht M. The Perception of Emotions in Spoken Language in Undergraduates with High Functioning Autism Spectrum Disorder: A Preserved Social Skill. J Autism Dev Disord 2019; 50:741-756. [DOI: 10.1007/s10803-019-04297-2]
25
Oron Y, Levy O, Avivi-Reich M, Goldfarb A, Handzel O, Shakuf V, Ben-David BM. Tinnitus affects the relative roles of semantics and prosody in the perception of emotions in spoken language. Int J Audiol 2019; 59:195-207. [DOI: 10.1080/14992027.2019.1677952]
Affiliation(s)
- Yahav Oron
- Department of Otolaryngology, Head, Neck and Maxillofacial Surgery, Tel-Aviv Sourasky Medical Center, Sackler School of Medicine, Tel Aviv University, Tel-Aviv, Israel
- Oren Levy
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzliya, Israel
- Meital Avivi-Reich
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzliya, Israel
- Communication Arts, Sciences and Disorders, Brooklyn College, City University of New York, New York, NY, USA
- Abraham Goldfarb
- Department of Otolaryngology, Head and Neck Surgery, The Edith Wolfson Medical Center, Sackler School of Medicine, Tel Aviv University, Tel-Aviv, Israel
- Ophir Handzel
- Department of Otolaryngology, Head, Neck and Maxillofacial Surgery, Tel-Aviv Sourasky Medical Center, Sackler School of Medicine, Tel Aviv University, Tel-Aviv, Israel
- Vered Shakuf
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzliya, Israel
- Boaz M. Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
26
The Jena Speaker Set (JESS): A database of voice stimuli from unfamiliar young and old adult speakers. Behav Res Methods 2019; 52:990-1007. [PMID: 31637667] [DOI: 10.3758/s13428-019-01296-0]
Abstract
Here we describe the Jena Speaker Set (JESS), a free database for unfamiliar adult voice stimuli, comprising voices from 61 young (18-25 years) and 59 old (60-81 years) female and male speakers uttering various sentences, syllables, read text, semi-spontaneous speech, and vowels. Listeners rated two voice samples (short sentences) per speaker for attractiveness, likeability, two measures of distinctiveness ("deviation"-based [DEV] and "voice in the crowd"-based [VITC]), regional accent, and age. Interrater reliability was high, with Cronbach's α between .82 and .99. Young voices were generally rated as more attractive than old voices, but particularly so when male listeners judged female voices. Moreover, young female voices were rated as more likeable than both young male and old female voices. Young voices were judged to be less distinctive than old voices according to the DEV measure, with no differences in the VITC measure. In age ratings, listeners almost perfectly discriminated young from old voices; additionally, young female voices were perceived as being younger than young male voices. Correlations between the rating dimensions above demonstrated (among other things) that DEV-based distinctiveness was strongly negatively correlated with rated attractiveness and likeability. By contrast, VITC-based distinctiveness was uncorrelated with rated attractiveness and likeability in young voices, although a moderate negative correlation was observed for old voices. Overall, the present results demonstrate systematic effects of vocal age and gender on impressions based on the voice and inform as to the selection of suitable voice stimuli for further research into voice perception, learning, and memory.
27
Ben-David BM, Gal-Rosenblum S, van Lieshout PHHM, Shakuf V. Age-Related Differences in the Perception of Emotion in Spoken Language: The Relative Roles of Prosody and Semantics. J Speech Lang Hear Res 2019; 62:1188-1202. [PMID: 31026192 DOI: 10.1044/2018_jslhr-h-ascc7-18-0166] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Purpose: We aim to identify the possible sources for age-related differences in the perception of emotion in speech, focusing on the distinct roles of semantics (words) and prosody (tone of speech) and their interaction. Method: We implement the Test for Rating of Emotions in Speech (Ben-David, Multani, Shakuf, Rudzicz, & van Lieshout, 2016). Forty older and 40 younger adults were presented with spoken sentences made of different combinations of 5 emotional categories (anger, fear, happiness, sadness, and neutral) presented in the prosody and the semantics. In separate tasks, listeners were asked to attend to the sentence as a whole, integrating both speech channels, or to focus on 1 channel only (prosody/semantics). Their task was to rate how much they agreed that the sentence conveyed a predefined emotion. Results: (a) Identification of emotions: Both age groups identified the presented emotions. (b) Failure of selective attention: Both age groups were unable to selectively attend to 1 channel when instructed, with slightly larger failures for older adults. (c) Integration of channels: Younger adults showed a bias toward prosody, whereas older adults showed a slight bias toward semantics. Conclusions: Three possible sources are suggested for age-related differences: (a) underestimation of the emotional content of speech, (b) slightly larger failures to selectively attend to 1 channel, and (c) different weights assigned to the 2 speech channels.
Affiliation(s)
- Boaz M Ben-David
- Communication Aging and Neuropsychology Lab, Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Networks, Ontario, Canada
- Sarah Gal-Rosenblum
- Communication Aging and Neuropsychology Lab, Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
- Pascal H H M van Lieshout
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Networks, Ontario, Canada
- Vered Shakuf
- Communication Aging and Neuropsychology Lab, Baruch Ivcher School of Psychology, Interdisciplinary Center, Herzliya, Israel
28
Leshem R, van Lieshout PHHM, Ben-David S, Ben-David BM. Does emotion matter? The role of alexithymia in violent recidivism: A systematic literature review. Crim Behav Ment Health 2019; 29:94-110. [PMID: 30916846 DOI: 10.1002/cbm.2110] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/07/2018] [Revised: 01/28/2019] [Accepted: 02/08/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND: Several variables have been shown to be associated with violent reoffending, and resultant interventions have been suggested, yet the rate of recidivism remains high. Alexithymia, characterised by deficits in emotion processing and verbal expression, might interact with these other risk factors to affect outcomes. AIM: Our goal was to examine the role of alexithymia as a possible moderator of risk factors for violent offender recidivism. Our hypothesis was that, together with other risk factors, alexithymia increases the risk of violent reoffending. METHOD: We conducted a systematic literature review, using terms for alexithymia and violent offending and their intersection. RESULTS: (a) No study was uncovered that directly tests the role of alexithymia, in conjunction with other potential risk factors for recidivism, in actual violent recidivism. (b) Researchers focused primarily on alexithymia and researchers focused primarily on violence have separately found several clinical features in common between aspects of alexithymia and violence, such as impulsivity (total n = 24 studies). (c) Other researchers have established a relationship between alexithymia and both dynamic and static risk factors for violent recidivism (n = 16 studies). CONCLUSION: Alexithymia may be a possible moderator of risk of violent offence recidivism. Supplementing offenders' rehabilitation efforts with assessments of alexithymia may assist in designing individually tailored interventions to promote desistance among violent offenders.
Affiliation(s)
- Rotem Leshem
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Pascal H H M van Lieshout
- Oral Dynamics Lab, Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
- Boaz M Ben-David
- Communication, Aging and Neuropsychology lab (CANlab), Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel
- Oral Dynamics Lab, Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University of Toronto, Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, Ontario, Canada
29
Ferrari C, Vecchi T, Merabet LB, Cattaneo Z. Blindness and social trust: The effect of early visual deprivation on judgments of trustworthiness. Conscious Cogn 2017; 55:156-164. [PMID: 28869844 DOI: 10.1016/j.concog.2017.08.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2017] [Revised: 08/17/2017] [Accepted: 08/18/2017] [Indexed: 11/17/2022]
Abstract
The impact of early visual deprivation on evaluations related to social trust has received little attention to date, despite consistent evidence suggesting that early-onset blindness may interfere with the normal development of social skills. In this study, we investigated whether early blindness affects judgments of trustworthiness regarding the actions of an agent, with trustworthiness representing a fundamental dimension of social evaluation. Specifically, we compared the performance of a group of early blind individuals with that of sighted controls in their evaluation of the trustworthiness of an agent after hearing two positive or two negative social behaviors (impression formation). Participants then repeated the same evaluation following the presentation of a third (consistent or inconsistent) behavior regarding the same agent (impression updating). Overall, blind individuals tended to give evaluations similar to those of their sighted counterparts. However, they also valued positive behaviors significantly more than sighted controls when forming their impression of an agent's trustworthiness. Moreover, when inconsistent information was provided, blind individuals were more prone than controls to revise their initial evaluation. These results suggest that early visual deprivation may have a dramatic effect on the evaluation of social factors such as trustworthiness.
Affiliation(s)
- C Ferrari
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy.
- T Vecchi
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia 27100, Italy
- Brain Connectivity Center, C. Mondino National Neurological Institute, Pavia 27100, Italy
- L B Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Z Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milan 20126, Italy
- Brain Connectivity Center, C. Mondino National Neurological Institute, Pavia 27100, Italy
30
Kim SK, Sumner M. Beyond lexical meaning: The effect of emotional prosody on spoken word recognition. J Acoust Soc Am 2017; 142:EL49. [PMID: 28764484 DOI: 10.1121/1.4991328] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This study employs an auditory-visual associative priming paradigm to test whether non-emotional words uttered in emotional prosody (e.g., pineapple spoken in angry prosody or happy prosody) facilitate recognition of semantically emotional words (e.g., mad, upset or smile, joy). The results show an affective priming effect between emotional prosody and emotional words independent of lexical carriers of the prosody. Learned acoustic patterns in speech (e.g., emotional prosody) map directly to social concepts and representations, and this social information influences the spoken word recognition process.
Affiliation(s)
- Seung Kyung Kim
- Department of Linguistics, Stanford University, Stanford, California 94305, USA
- Meghan Sumner
- Department of Linguistics, Stanford University, Stanford, California 94305, USA
31
McGilton KS, Rochon E, Sidani S, Shaw A, Ben-David BM, Saragosa M, Boscart VM, Wilson R, Galimidi-Epstein KK, Pichora-Fuller MK. Can We Help Care Providers Communicate More Effectively With Persons Having Dementia Living in Long-Term Care Homes? Am J Alzheimers Dis Other Demen 2016; 32:41-50. [PMID: 27899433 PMCID: PMC5302128 DOI: 10.1177/1533317516680899] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Background: Effective communication between residents with dementia and care providers in long-term care homes (LTCHs) is essential to resident-centered care. Purpose: To determine the effects of a communication intervention on residents’ quality of life (QOL) and care, as well as care providers’ perceived knowledge, mood, and burden. Method: The intervention included (1) individualized communication plans, (2) a dementia care workshop, and (3) a care provider support system. Pre- and postintervention scores were compared to evaluate the effects of the intervention. A total of 12 residents and 20 care providers in an LTCH participated in the feasibility study. Results: The rate of care providers’ adherence to the communication plans was 91%. Postintervention, residents experienced a significant increase in overall QOL. Care providers had significant improvement in mood and perceived reduced burden. Conclusion: The results suggest that the communication intervention demonstrates preliminary evidence of positive effects on residents’ QOL and care providers’ mood and burden.
Affiliation(s)
- Katherine S McGilton
- Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada
- Elizabeth Rochon
- Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Faculty of Medicine, Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Souraya Sidani
- School of Nursing, Ryerson University, Toronto, Ontario, Canada
- Alexander Shaw
- Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- School of English and Liberal Studies, Seneca College Newnham Campus, Toronto, Ontario, Canada
- Boaz M Ben-David
- Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Faculty of Medicine, Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Communication, Aging and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel
- Rehabilitation Sciences Institute (RSI), University of Toronto, Toronto, Ontario, Canada
- St Michael's Hospital, Toronto, Ontario, Canada
- Veronique M Boscart
- School of Health & Life Sciences and Community Services, Conestoga College Institute of Technology and Advanced Learning, Kitchener, Ontario, Canada
- Rozanne Wilson
- School of Nursing, Trinity Western University, Langley, British Columbia, Canada
- Centre for Health Evaluation & Outcome Sciences (CHÉOS), University of British Columbia, Vancouver, British Columbia, Canada
- Patient-Centred Performance Measurement & Improvement, Providence Health Care, Vancouver, British Columbia, Canada
- M Kathleen Pichora-Fuller
- Toronto Rehabilitation Institute - University Health Network, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Mississauga, Ontario, Canada
32
Tolmacz R, Efrati Y, Ben-David BM. The sense of relational entitlement among adolescents toward their parents (SREap) - Testing an adaptation of the SRE. J Adolesc 2016; 53:127-140. [PMID: 27718380 DOI: 10.1016/j.adolescence.2016.09.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2016] [Revised: 09/04/2016] [Accepted: 09/07/2016] [Indexed: 11/15/2022]
Abstract
The quality of the adolescent-parent relationship is closely related to the adolescent's sense of entitlement. Study 1 (458 central-Israel adolescents, 69% girls, ages 11-16) developed the sense of relational entitlement among adolescents toward their parents (SREap) scale, adapted from the original SRE on adults' romantic relationships, and provided initial validity evidence for its three-factor structure: exaggerated, restricted, and assertive, replicating the SRE's factor structure. Studies 2-5 (1237 adolescents, 56% girls) examined the link between the SREap factors and relevant psychological measures. The exaggerated and restricted SREap factors were associated with attachment insecurities. Restricted and exaggerated entitlement factors were related to higher levels of emotional problems and to lower levels of wellbeing, positive mood, and life satisfaction. Conversely, assertive entitlement was related to higher life satisfaction and self-efficacy and to lower levels of emotional problems. The findings also indicate that SREap is not merely a form of narcissism. The implications of SREap are discussed.
Affiliation(s)
- Rami Tolmacz
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel
- Yaniv Efrati
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC) Herzliya, Herzliya, Israel