1.
Communicating Emotion: Vocal Expression of Linguistic and Emotional Prosody in Children With Mild to Profound Hearing Loss Compared With That of Normal Hearing Peers. Ear Hear 2024; 45:72-80. PMID: 37316994; PMCID: PMC10718210; DOI: 10.1097/aud.0000000000001399.
Abstract
OBJECTIVES Emotional prosody is known to play an important role in social communication. Research has shown that children with cochlear implants (CCIs) may face challenges in their ability to express prosody, as their expressions may have less distinct acoustic contrasts and therefore may be judged less accurately. The prosody of children with milder degrees of hearing loss, wearing hearing aids, has sparsely been investigated. A better understanding of prosodic expression by children with hearing loss, hearing aid users in particular, could create more awareness among healthcare professionals and parents of limitations in social communication, awareness that may lead to more targeted rehabilitation. This study aimed to compare the prosodic expression potential of children wearing hearing aids (CHA) with that of CCIs and children with normal hearing (CNH). DESIGN In this prospective experimental study, utterances of pediatric hearing aid users, cochlear implant users, and CNH containing emotional expressions (happy, sad, and angry) were recorded during a reading task. Three acoustic properties of the utterances were calculated: fundamental frequency (F0), variance in fundamental frequency (SD of F0), and intensity. Acoustic properties of the utterances were compared within subjects and between groups. RESULTS A total of 75 children were included (CHA: 26, CCI: 23, and CNH: 26). Participants were between 7 and 13 years of age. The 15 CCIs with congenital hearing loss had received their cochlear implant at a median age of 8 months. The acoustic patterns of emotions uttered by CHA were similar to those of CCI and CNH. Only in CCI did we find no difference in F0 variation between happiness and anger, although an intensity difference was present. In addition, CCI and CHA produced poorer happy-sad contrasts than did CNH.
CONCLUSIONS The findings of this study suggest that, on a fundamental acoustic level, both CHA and CCI have a prosodic expression potential that is almost on par with that of normal hearing peers. However, some minor limitations were observed in these children's prosodic expression, and it is important to determine whether these differences are perceptible to listeners and could affect social communication. This study sets the groundwork for further research that will help us fully understand the implications of these findings and how they may affect the communication abilities of these children. With a clearer understanding of these factors, we can develop effective ways to help improve their communication skills.
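The three per-utterance measures used in the study above (mean F0, SD of F0, and intensity) can be sketched as follows. This is a minimal illustration on a synthetic tone, not the authors' actual analysis pipeline; the autocorrelation-based pitch estimator, frame length, and pitch bounds are assumptions.

```python
import numpy as np

def f0_autocorr(frame, sr, fmin=75.0, fmax=500.0):
    """Estimate F0 (Hz) of one frame from the autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)  # search plausible pitch lags
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

def prosody_features(signal, sr, frame_len=0.04):
    """Per-utterance mean F0, F0 variation (SD of F0), and RMS intensity (dB)."""
    n = int(sr * frame_len)
    f0s = [f0_autocorr(signal[i:i + n], sr)
           for i in range(0, len(signal) - n, n)]
    rms = np.sqrt(np.mean(signal ** 2))
    return np.mean(f0s), np.std(f0s), 20 * np.log10(rms + 1e-12)

# Sanity check on a 200 Hz tone: F0 near 200 Hz, SD of F0 near 0.
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
mean_f0, sd_f0, db = prosody_features(tone, sr)
```

On real recordings one would additionally need voicing detection so that silent or unvoiced frames do not contaminate the F0 statistics.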
2.
Evaluating the Relative Perceptual Salience of Linguistic and Emotional Prosody in Quiet and Noisy Contexts. Behav Sci (Basel) 2023; 13:800. PMID: 37887450; PMCID: PMC10603920; DOI: 10.3390/bs13100800.
Abstract
How people recognize linguistic and emotional prosody in different listening conditions is essential for understanding the complex interplay between social context, cognition, and communication. The perception of both lexical tones and emotional prosody depends on prosodic features including pitch, intensity, duration, and voice quality. However, it is unclear which aspect of prosody is perceptually more salient and resistant to noise. This study aimed to investigate the relative perceptual salience of emotional prosody and lexical tone recognition in quiet and in the presence of multi-talker babble noise. Forty young adults, randomly sampled from a pool of native Mandarin Chinese speakers with normal hearing, listened to monosyllables either with or without background babble noise and completed two identification tasks, one for emotion recognition and the other for lexical tone recognition. Accuracy and speed were recorded and analyzed using generalized linear mixed-effects models. Compared with emotional prosody, lexical tones were more perceptually salient in multi-talker babble noise. Native Mandarin Chinese participants identified lexical tones more accurately and quickly than vocal emotions at the same signal-to-noise ratio. Acoustic and cognitive dissimilarities between linguistic prosody and emotional prosody may explain this phenomenon, which calls for further exploration of the underlying psychobiological and neurophysiological mechanisms.
3.
Erratum: Influence of bodily resonances on emotional prosody perception. Front Psychol 2023; 14:1170276. PMID: 36949924; PMCID: PMC10026019; DOI: 10.3389/fpsyg.2023.1170276.
Abstract
[This corrects the article DOI: 10.3389/fpsyg.2022.1061930.]
4.
Abstract
OBJECTIVES The aim of this systematic review was to identify the presence and nature of relationships between specific forms of aprosodia (i.e., expressive and receptive emotional and linguistic prosodic deficits) and other cognitive-communication deficits and disorders in individuals with right hemisphere damage (RHD) due to stroke. METHODS One hundred and ninety articles from 1970 to February 2020 investigating receptive and expressive prosody in patients with relatively focal right hemisphere brain damage were identified via database searches. RESULTS Fourteen articles were identified that met inclusion criteria, passed quality reviews, and included sufficient information about prosody and potential co-occurring deficits. Twelve articles investigated receptive emotional aprosodia, and two articles investigated receptive linguistic aprosodia. Across the included studies, receptive emotional prosody was not systematically associated with hemispatial neglect, but did co-occur with deficits in emotional facial recognition, interpersonal interactions, or emotional semantics. Receptive linguistic processing was reported to co-occur with amusia and hemispatial neglect. No studies were found that investigated the co-occurrence of expressive emotional or linguistic prosodic deficits with other cognitive-communication impairments. CONCLUSIONS This systematic review revealed significant gaps in the research literature regarding the co-occurrence of common right hemisphere disorders with prosodic deficits. More rigorous empirical inquiry is required to identify specific patient profiles based on clusters of deficits associated with right hemisphere stroke. Future research may determine whether the co-occurrences identified are due to shared cognitive-linguistic processes, and may inform the development of evidence-based assessment and treatment recommendations for individuals with cognitive-communication deficits subsequent to RHD.
5.
Emotional Prosodies Processing and Its Relationship With Neurodevelopment Outcome at 24 Months in Infants of Diabetic Mothers. Front Pediatr 2022; 10:861432. PMID: 35664869; PMCID: PMC9159506; DOI: 10.3389/fped.2022.861432.
Abstract
Gestational diabetes mellitus (GDM) is one of the most common complications of pregnancy. Hyperglycemia during pregnancy is a risk factor not only for later obesity in the offspring but also for impaired neurodevelopment beginning in the fetal period. ERP research has shown that children with autism spectrum disorder (ASD) are characterized by impaired semantic processing. In this study, we used event-related potentials (ERPs) to assess the processing of different emotional prosodies (happy, fearful, and angry) in neonates of diabetic mothers compared with healthy term infants, and to explore whether the ERP measure has potential value for evaluating neurodevelopmental outcome in later childhood. A total of 43 full-term neonates were recruited from the neonatology department of Peking University First Hospital from December 1, 2017 to April 30, 2019. They were assigned to the infants of diabetic mothers (IDM) group (n = 23) or the control group (n = 20) according to their mothers' oral glucose tolerance test (OGTT) results during pregnancy. Using an oddball paradigm, ERP data were recorded while subjects listened to deviant stimuli (20%, happy/fearful/angry prosodies) and standard stimuli (80%, neutral prosody) to evaluate the potential prognostic value of ERP indexes for neurodevelopment at 24 months of age. Results showed that (1) mismatch response (MMR) amplitudes in the IDM group were lower than in the control group; and (2) a lower MMR amplitude to fearful prosody at the frontal lobe indicated a high risk for increased Modified Checklist for Autism in Toddlers (M-CHAT) scores at 24 months. These findings suggest that hyperglycemia of pregnancy may influence the ability to process emotional prosodies in the neonatal brain, which could be reflected by a decreased MMR amplitude in response to fearful prosody. Moreover, the decreased MMR amplitude at the frontal lobe may indicate an increased risk of ASD.
6.
Reduced impact of nonverbal cues during integration of verbal and nonverbal emotional information in adults with high-functioning autism. Front Psychiatry 2022; 13:1069028. PMID: 36699473; PMCID: PMC9868406; DOI: 10.3389/fpsyt.2022.1069028.
Abstract
BACKGROUND When receiving mismatching nonverbal and verbal signals, most people tend to base their judgment of the current emotional state of others primarily on nonverbal information. However, individuals with high-functioning autism (HFA) have been described as having difficulties interpreting nonverbal signals. Recognizing emotional states correctly is highly important for successful social interaction, and alterations in the perception of nonverbal emotional cues presumably contribute to misunderstandings and impairments in social interactions. METHODS To evaluate autism-specific differences in the relative impact of nonverbal and verbal cues, 18 adults with HFA [14 male and 4 female subjects, mean age 36.7 years (SD 11.4)] and 18 age-, gender-, and IQ-matched typically developed controls [14 male and 4 female subjects, mean age 36.4 years (SD 12.2)] rated the emotional state of speakers in video sequences with partly mismatching emotional signals. Standardized linear regression coefficients were calculated as a measure of each participant's reliance on the nonverbal and verbal components of the videos. Regression coefficients were then compared between groups to test the hypothesis that autistic adults base their social evaluations less strongly on nonverbal information. Further exploratory analyses were performed for differences in valence ratings and response times. RESULTS Compared to the typically developed control group, nonverbal cue reliance was reduced in adults with high-functioning autism [t(23.14) = -2.44, p = 0.01 (one-sided)]. Furthermore, the exploratory analyses showed a tendency to avoid extreme answers in the HFA group, observable as less positive as well as less negative valence ratings in response to emotional expressions of increasingly strong valence. In addition, response time was generally longer in the HFA group than in the control group [F(1, 33) = 10.65, p = 0.004].
CONCLUSION These findings suggest reduced impact of nonverbal cues and longer processing times in the analysis of multimodal emotional information, which may be associated with a subjectively lower relevance of this information and/or more processing difficulties for people with HFA. The less extreme answering tendency may indicate a lower sensitivity for nonverbal valence expression in HFA or result from a tendency to avoid incorrect answers when confronted with greater uncertainty in interpreting emotional states.
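The cue-reliance measure described in the study above (standardized regression coefficients per participant) can be sketched as follows. The data here are hypothetical and the exact model specification of the original study is not reproduced; z-scoring predictors and outcome simply makes the two coefficients comparable as relative cue weights.

```python
import numpy as np

def standardized_betas(X, y):
    """OLS coefficients after z-scoring predictors and outcome, so the
    betas are directly comparable as relative cue weights."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

# Hypothetical participant: 40 video ratings driven mostly by the
# nonverbal channel, plus noise (all values are made up).
rng = np.random.default_rng(0)
nonverbal = rng.uniform(-1, 1, 40)  # valence of the nonverbal channel
verbal = rng.uniform(-1, 1, 40)     # valence of the verbal channel
rating = 0.8 * nonverbal + 0.2 * verbal + rng.normal(0, 0.1, 40)

b_nonverbal, b_verbal = standardized_betas(
    np.column_stack([nonverbal, verbal]), rating)
```

A participant relying mainly on nonverbal cues yields `b_nonverbal` well above `b_verbal`; comparing such coefficients across groups is then an ordinary between-group test.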
7.
Influence of bodily resonances on emotional prosody perception. Front Psychol 2022; 13:1061930. PMID: 36571062; PMCID: PMC9773097; DOI: 10.3389/fpsyg.2022.1061930.
Abstract
Introduction Emotional prosody is defined as suprasegmental and segmental changes in the human voice and related acoustic parameters that can inform the listener about the emotional state of the speaker. While the processing of emotional prosody is well represented in the literature, the mechanism of embodied cognition in emotional voice perception has been studied very little. This study aimed to investigate the influence of induced bodily vibrations (through a vibrator placed close to the vocal cords) on the perception of emotional vocalizations. The main hypothesis was that induced body vibrations would constitute a potential interoceptive feedback that can influence the auditory perception of emotions. It was also expected that these effects would be greater for stimuli that are more ambiguous. Methods Participants were presented with emotional vocalizations expressing joy or anger, which varied from low-intensity vocalizations, considered ambiguous, to high-intensity ones, considered non-ambiguous. Vibrations were induced simultaneously in half of the trials and expressed joy or anger congruently with the voice stimuli. Participants evaluated each voice stimulus using four visual analog scales (joy and anger, with surprise and sadness as control scales). Results A significant effect of the vibrations was observed on the three behavioral indexes (discrimination, confusion, and accuracy), with vibrations confusing rather than facilitating vocal emotion processing. Conclusion Overall, this study sheds new light on a poorly documented topic, namely the potential use of vocal cord vibrations as an interoceptive feedback allowing humans to modulate voice production and perception during social interactions.
8.
How does self name influence the neural processing of emotional prosody? An ERP study. Psych J 2021; 11:30-42. PMID: 34856651; DOI: 10.1002/pchj.499.
Abstract
In this study, we investigated whether self-relevant information can accelerate the processing of emotional information. Our experiment, based on a passive auditory oddball paradigm, involved recording electroencephalography while participants listened to stimuli comprising their own names (ONs) and unfamiliar names (UNs) spoken with varying emotional prosody. At 220-300 ms, mismatch negativity (MMN) was more negative for ONs and angry prosody than for UNs and neutral prosody, respectively. These results suggest that attention is involuntarily attracted by ONs and emotional prosody, and that both types of information are given priority processing, even under pre-attentive conditions. Importantly, ONs with angry prosody induced more negative MMN than did similar UNs and ONs with neutral prosody, which indicates that the motivational significance embedded in angry prosody promotes the self-reference effect and, thus, involves more attention resources. At 300-500 ms, ONs triggered a smaller P3a than did UNs, suggesting that fewer cognitive resources are required to process self-relevant information. These results suggest that self-relevant and emotional information, both of which receive preferential processing, interact with each other during the pre-attentive stage, with self-reference enhancing the processing of emotional information.
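The MMN quantity reported above is, schematically, the deviant-minus-standard difference wave averaged over a latency window (here 220-300 ms). The sketch below runs on synthetic single-trial epochs; the sampling rate, trial counts, and effect size are all assumptions for illustration.

```python
import numpy as np

def mismatch_negativity(epochs, labels, sr, window=(0.220, 0.300)):
    """Mean amplitude of the deviant-minus-standard difference wave
    within a latency window (in seconds from stimulus onset)."""
    epochs, labels = np.asarray(epochs), np.asarray(labels)
    diff = (epochs[labels == "deviant"].mean(axis=0)
            - epochs[labels == "standard"].mean(axis=0))
    i0, i1 = int(window[0] * sr), int(window[1] * sr)
    return diff[i0:i1].mean()

# Synthetic oddball data: 80 standards (noise only) and 20 deviants
# carrying a -2 uV deflection between 220 and 300 ms.
sr, n = 250, 125  # 0.5 s epochs at 250 Hz
rng = np.random.default_rng(1)
t = np.arange(n) / sr
bump = np.where((t >= 0.22) & (t < 0.30), -2.0, 0.0)
epochs = [rng.normal(0, 0.5, n) for _ in range(80)] + \
         [bump + rng.normal(0, 0.5, n) for _ in range(20)]
labels = ["standard"] * 80 + ["deviant"] * 20
mmn = mismatch_negativity(epochs, labels, sr)
```

With these synthetic trials, `mmn` recovers a value close to the injected -2 uV deflection; a more negative value for one condition than another is the kind of contrast reported in the study.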
9.
Auditory Processing Disorders in Elderly Persons vs. Linguistic and Emotional Prosody. Int J Environ Res Public Health 2021; 18:6427. PMID: 34198537; PMCID: PMC8296237; DOI: 10.3390/ijerph18126427.
Abstract
Background: Language communication, one of the basic forms of building and maintaining interpersonal relationships, deteriorates in older age. One probable cause is a decline in auditory functioning, including central auditory processing. The aim of the present study is to evaluate the profile of central auditory processing disorders in the elderly, as well as the relationship between these disorders and the perception of emotional and linguistic prosody. Methods: The Right Hemisphere Language Battery (RHLB-PL) and the Brain-Boy Universal Professional (BUP) were used. Results: There are statistically significant relationships between emotional prosody and: spatial hearing (r(18) = 0.46, p = 0.04); reaction time (r(18) = 0.49, p = 0.03); recognition of the frequency pattern (r(18) = 0.49, p = 0.03); and recognition of the duration pattern (r(18) = 0.45, p = 0.05). There are statistically significant correlations between linguistic prosody and: pitch discrimination (r(18) = 0.5, p = 0.02); recognition of the frequency pattern (r(18) = 0.55, p = 0.01); recognition of the temporal pattern; and emotional prosody (r(18) = 0.58, p = 0.01). Conclusions: The analysis of the disturbed components of central auditory processing among the tested samples showed a reduction in functions related to frequency differentiation, recognition of the temporal pattern, discrimination between important sounds, and speed of reaction. The de-automation of basic central auditory processing functions observed in older age lowers the perception of both emotional and linguistic prosody, thus reducing the quality of communication in older people.
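The r(18) values above denote Pearson correlations with 18 degrees of freedom, i.e. n = 20 participants (df = n - 2). A minimal sketch of the statistic, on made-up data, showing how r and the associated t value are computed:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical scores for 20 participants with a noisy linear relation.
x = list(range(20))
y = [2 * v + (1 if v % 2 else -1) for v in x]

r = pearson_r(x, y)
df = len(x) - 2                       # the "18" in r(18)
t = r * math.sqrt(df / (1 - r * r))   # t statistic used for the p-value
```

The p-value is then read from the t distribution with df = 18; with only 20 participants, correlations near 0.45 sit right at the p = 0.05 boundary, which is why several of the reported effects are borderline.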
10.
A novel beamformer-based imaging of phase-amplitude coupling (BIPAC) unveiling the inter-regional connectivity of emotional prosody processing in women with primary dysmenorrhea. J Neural Eng 2021; 18. PMID: 33691295; DOI: 10.1088/1741-2552/abed83.
Abstract
Objective. Neural communication, or the interaction of brain regions, plays a key role in the formation of functional neural networks. One type of neural communication can be measured in the form of phase-amplitude coupling (PAC), the coupling between the phase of low-frequency oscillations and the amplitude of high-frequency oscillations. This paper presents a beamformer-based imaging method, beamformer-based imaging of PAC (BIPAC), to quantify the strength of PAC between a seed region and other brain regions. Approach. A dipole is used to model the ensemble of neural activity within a group of nearby neurons and represents a mixture of multiple source components of cortical activity. From the ensemble activity at each brain location, the source component with the strongest coupling to the seed activity is extracted, while unrelated components are suppressed to enhance the sensitivity of coupled-source estimation. Main results. In evaluations using simulated data sets, BIPAC proved advantageous with regard to estimation accuracy in source localization, orientation, and coupling strength. BIPAC was also applied to the analysis of magnetoencephalographic signals recorded from women with primary dysmenorrhea in an implicit emotional prosody experiment. In response to negative emotional prosody, auditory areas revealed strong PAC with the ventral auditory stream and occipitoparietal areas in the theta-gamma and alpha-gamma bands, which may respectively indicate the recruitment of auditory sensory memory and attention reorientation. Moreover, patients with more severe pain experience appeared to have stronger coupling between auditory areas and temporoparietal regions. Significance. Our findings indicate that the implicit processing of emotional prosody is altered by menstrual pain experience. The proposed BIPAC is feasible and applicable to imaging inter-regional connectivity based on cross-frequency coupling estimates.
The experimental results also demonstrate that BIPAC is capable of revealing autonomous brain processing and neurodynamics, which are more subtle than active and attended task-driven processing.
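BIPAC itself is a beamformer method, but the quantity it images, phase-amplitude coupling, can be illustrated with a much simpler mean-vector-length estimator on synthetic theta-gamma signals. Everything below (sampling rate, frequencies, modulation depth) is an assumption for illustration, not the BIPAC algorithm.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (a minimal stand-in for
    scipy.signal.hilbert; assumes an even-length input)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:n // 2] = 2
    h[n // 2] = 1
    return np.fft.ifft(X * h)

def pac_mvl(phase_sig, amp_sig):
    """Mean-vector-length PAC: magnitude of the mean amplitude-weighted
    phase vector, normalized by mean amplitude (0 = no coupling)."""
    phase = np.angle(analytic(phase_sig))
    amp = np.abs(analytic(amp_sig))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Synthetic theta-gamma coupling: a 40 Hz carrier whose amplitude
# follows the 6 Hz (theta) cycle, vs. an unmodulated control.
sr = 500
t = np.arange(2 * sr) / sr
theta = np.sin(2 * np.pi * 6 * t)
gamma_coupled = (1 + 0.9 * theta) * np.sin(2 * np.pi * 40 * t)
gamma_flat = np.sin(2 * np.pi * 40 * t)

pac_c = pac_mvl(theta, gamma_coupled)
pac_f = pac_mvl(theta, gamma_flat)
```

The coupled pair yields a clearly non-zero PAC value while the unmodulated control stays near zero; real pipelines additionally band-pass filter both signals and assess significance with surrogate data.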
11.
Influence of emotional prosody, content, and repetition on memory recognition of speaker identity. Q J Exp Psychol (Hove) 2021; 74:1185-1201. PMID: 33586530; DOI: 10.1177/1747021821998557.
Abstract
Recognising individuals through their voice requires listeners to form an invariant representation of the speaker's identity, immune to episodic changes that may occur between encounters. We conducted two experiments to investigate to what extent within-speaker stimulus variability influences different behavioural indices of implicit and explicit identity recognition memory, using short sentences with semantically neutral content. In Experiment 1, we assessed how speaker recognition was affected by changes in prosody (fearful to neutral, and vice versa in a between-group design) and speech content. Results revealed that, regardless of encoding prosody, changes in prosody, independent of content, or changes in content, when prosody was kept unchanged, led to a reduced accuracy in explicit voice recognition. In contrast, both groups exhibited the same pattern of response times (RTs) for correctly recognised speakers: faster responses to fearful than neutral stimuli, and a facilitating effect for same-content stimuli only for neutral sentences. In Experiment 2, we investigated whether an invariant representation of a speaker's identity benefitted from exposure to different exemplars varying in emotional prosody (fearful and happy) and content (Multi condition), compared to repeated presentations of a single sentence (Uni condition). We found a significant repetition priming effect (i.e., reduced RTs over repetitions of the same voice identity) only for speakers in the Uni condition during encoding, but faster RTs when correctly recognising old speakers from the Multi, compared to the Uni, condition. Overall, our findings confirm that changes in emotional prosody and/or speech content can affect listeners' implicit and explicit recognition of newly familiarised speakers.
12.
Significance of the ability to differentiate emotional prosodies for the early diagnosis and prognostic prediction of mild hypoxic-ischemic encephalopathy in neonates. Int J Dev Neurosci 2020; 81:51-59. PMID: 33118216; DOI: 10.1002/jdn.10074.
Abstract
BACKGROUND Perinatal brain injury affects around 300,000 neonates in China each year; early diagnosis and active intervention are crucial for timely treatment and better prognoses. As hearing is the earliest and most sensitive sense to develop in neonates, we propose that the ability to differentiate among different emotional prosodies may differ between neonates with and without brain injuries. METHODS We enrolled full-term neonates admitted to the neonatology department of Peking University First Hospital from January 2016 to December 2016, conducted functional near-infrared spectroscopy (fNIRS) monitoring within 24 hr of admission, and analyzed changes in oxyhemoglobin (ΔHbO2) and deoxyhemoglobin (ΔHb) to study the ability of neonates to differentiate among emotional prosodies. The neonates were followed up to 36 months for neurological outcome evaluation. RESULTS AND CONCLUSIONS We found that neonates showed an early ability to differentiate among emotional prosodies, responding most sensitively to positive emotions, and that this ability may be impaired following brain injury.
13.
Vasopressin and parental expressed emotion in the transition to fatherhood. Attach Hum Dev 2020; 23:257-273. PMID: 31997704; DOI: 10.1080/14616734.2020.1719427.
Abstract
In recent decades, parenting researchers have increasingly focused on the role of fathers in child development. However, it is still largely unknown which factors contribute to fathers' beliefs about their child, which may be crucial in the transition to fatherhood. In the current randomized within-subject experiment, the effect of nasal administration of vasopressin (AVP) on both Five Minute Speech Sample-based (FMSS) expressed emotion and emotional content or prosody was explored in 25 prospective fathers. Moreover, we explored how the transition to fatherhood affected these FMSS-based parameters, using prenatal and early postnatal measurements. Analyses revealed that FMSS-based expressed emotion and emotional content were correlated but not affected by prenatal AVP administration. However, the child's birth was associated with an increase in positivity and a decrease in emotional prosody, suggesting that the child's birth is more influential with regard to paternal thoughts and feelings than prenatal AVP administration.
14.
How Therapeutic Tapping Can Alter Neural Correlates of Emotional Prosody Processing in Anxiety. Brain Sci 2019; 9:206. PMID: 31430984; PMCID: PMC6721443; DOI: 10.3390/brainsci9080206.
Abstract
Anxiety disorders are the most common psychological disorders worldwide, resulting in a great demand for adequate and cost-effective treatment. New short-term interventions can be used as an effective adjunct or alternative to pharmaco- and psychotherapy. One of these approaches is therapeutic tapping, which combines somatic stimulation of acupressure points with elements from Cognitive Behavioral Therapy (CBT). Tapping reduces anxiety symptoms after only one session. Anxiety is associated with deficient emotion regulation for threatening stimuli, deficits that are compensated by, e.g., CBT. Whether Tapping can elicit similar modulations, and which dynamic neural correlates are affected, was the subject of this study. Patients with anxiety were assessed while listening to pseudowords with different emotional prosodies (happy, angry, fearful, and neutral) before and after one Tapping session. The emotion-related component Late Positive Potential (LPP) was investigated via electroencephalography. Progressive Muscle Relaxation (PMR) served as the control intervention. Results showed LPP reductions for negative stimuli after the interventions. Interestingly, PMR influenced responses to fearful prosody, while Tapping altered responses to angry prosody. While PMR generally reduced arousal for fearful prosody, Tapping specifically affected fear-eliciting, angry stimuli and might thus be able to reduce anxiety symptoms. These findings highlight the efficacy of Tapping and its impact on neural correlates of emotion regulation.
15.
Exploring the Effects of Personality Traits on the Perception of Emotions From Prosody. Front Psychol 2019; 10:184. PMID: 30828312; PMCID: PMC6385770; DOI: 10.3389/fpsyg.2019.00184.
Abstract
It has repeatedly been argued that individual differences in personality influence emotion processing, but findings from both the facial and vocal emotion recognition literature are contradictory, suggesting a lack of reliability across studies. To explore this relationship further in a more systematic manner using the Big Five Inventory, we designed two studies employing different research paradigms. Study 1 explored the relationship between personality traits and vocal emotion recognition accuracy, while Study 2 examined how personality traits relate to vocal emotion recognition speed. The combined results did not indicate a pairwise linear relationship between self-reported individual differences in personality and vocal emotion processing, suggesting that the frequently proposed influence of personality characteristics on vocal emotion processing may have been overemphasized previously.
16.
Abstract
Being able to appropriately process different emotional prosodies is an important cognitive ability normally present at birth. In this study, we used event-related potentials (ERPs) to assess whether brain injury impacts the ability to process different emotional prosodies (happy, fearful, and neutral) in neonates, and whether the ERP measure has potential value for evaluating neurodevelopmental outcome in later childhood. A total of 42 full-term neonates were recruited from the neonatology department of Peking University First Hospital from June 2014 to January 2015. They were assigned to the brain injury group (n = 20) or the control group (n = 22) according to their clinical manifestations, physical examinations, cranial images, and routine EEG outcomes. Using an oddball paradigm, ERP data were recorded while subjects listened to happy (20%, deviant stimulus), fearful (20%, deviant stimulus), and neutral (80%, standard stimulus) prosodies to evaluate the potential prognostic value of ERP indexes for neurodevelopment at 30 months of age. Results showed that while the mismatch responses (MMRs) at the frontal lobe were larger for fearful than for happy prosody in control neonates, this difference was not observed in neonates with brain injuries. This finding suggests that perinatal brain injury may influence the cognitive ability to process different emotional prosodies in the neonatal brain; this deficit could be reflected by decreased MMR amplitudes in response to fearful prosody. Moreover, decreased MMRs at the frontal lobe were associated with impaired neurodevelopment at 30 months of age.
17
Abstract
Conveying emotions in spoken poetry may be based on a poem's semantic content and/or on emotional prosody, i.e., on acoustic features above the level of single speech sounds. However, hypotheses of more direct sound–emotion relations in poetry, such as those based on the frequency of occurrence of certain phonemes, have not withstood empirical (re)testing. Therefore, we investigated sound–emotion associations based on prosodic features as a potential alternative route for the, at least partially, non-semantic expression and perception of emotions in poetry. We first conducted a pre-study designed to validate relevant parameters of joy- and sadness-supporting prosody in the recitation, i.e., acoustic production, of poetry. The parameters thus obtained guided the experimental modification of recordings of German joyful and sad poems such that for each poem, three prosodic variants were constructed: one with a joy-supporting prosody, one with a sadness-supporting prosody, and a neutral variant. In the subsequent experiment, native German speakers and participants with no command of German rated the joyfulness and sadness of these three variants. This design allowed us to investigate the role of emotional prosody, operationalized in terms of sound–emotion parameters, both in combination with and dissociated from semantic access to the emotional content of the poems. The findings from our pre-study showed that the emotional content of the poems (based on pre-classifications into joyful and sad) indeed predicted the prosodic features of pitch and articulation rate. The subsequent perception experiment revealed that cues provided by joyful and sad prosody specifically affect non-German-speaking listeners' emotion ratings of the poems. Thus, the present investigation lends support to the hypothesis of prosody-based iconic relations between perceived emotion and sound qualia.
At the same time, our findings also highlight that semantic access substantially decreases the role of cross-language sound–emotion associations and indicate that non-German-speaking participants may also use phonetic and prosodic cues other than the ones that were targeted and manipulated here.
18
Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: evidence for acoustic universals. Proc Biol Sci 2018; 284:rspb.2017.0990. [PMID: 28747478] [DOI: 10.1098/rspb.2017.0990]
Abstract
Writing over a century ago, Darwin hypothesized that vocal expression of emotion dates back to our earliest terrestrial ancestors. If this hypothesis is true, we should expect to find cross-species acoustic universals in emotional vocalizations. Studies suggest that acoustic attributes of aroused vocalizations are shared across many mammalian species, and that humans can use these attributes to infer emotional content. But do these acoustic attributes extend to non-mammalian vertebrates? In this study, we asked human participants to judge the emotional content of vocalizations of nine vertebrate species representing three different biological classes: Amphibia, Reptilia (non-aves and aves), and Mammalia. We found that humans are able to identify higher levels of arousal in vocalizations across all species. This result was consistent across different language groups (English, German, and Mandarin native speakers), suggesting that this ability is biologically rooted in humans. Our findings indicate that humans use multiple acoustic parameters to infer relative arousal in vocalizations for each species, but mainly rely on fundamental frequency and spectral centre of gravity to identify higher-arousal vocalizations across species. These results suggest that fundamental mechanisms of vocal emotional expression are shared among vertebrates and could represent a homologous signalling system.
19
Consequences of brain tumour resection on emotion recognition. J Neuropsychol 2017; 13:1-21. [PMID: 28700143] [DOI: 10.1111/jnp.12130]
Abstract
Emotion processing impairments are common in patients undergoing brain surgery for fronto-temporal tumour resection, with potential consequences for social interactions. However, evidence is controversial concerning the side and site of the lesions causing such deficits. This study investigates visual and auditory emotion recognition in brain tumour patients with the aim of clarifying which lesion sites are related to impairments in emotion processing in different modalities. Thirty-four patients were evaluated, before and after surgery, on facial expression and emotional prosody recognition; voxel-based lesion-symptom mapping (VLSM) analyses were performed on patients' post-surgery MRI images. Results showed that patients' performance decreased after surgery in both visual and auditory modalities but, in general, recovered 3 months after surgery. In facial expression recognition, left brain-damaged patients showed greater post-surgery deterioration than right brain-damaged ones, whose performance specifically decreased for sadness and fear. VLSM analysis revealed two segregated areas in the left hemisphere accounting for post-surgery scores for happy (fronto-temporo-insular region) and surprised (middle frontal gyrus and inferior fronto-occipital fasciculus) facial expressions. Our findings demonstrate that surgical removal of tumours in the fronto-temporal region produces impairment in facial emotion recognition with overall recovery at 3 months, suggesting a partially different representation of positive and negative emotions in the left and right hemispheres for visually, but not auditorily, presented emotions; moreover, we show that deficits in the recognition of specific expressions are associated with discrete lesion locations.
20
Altered emotional prosody processing in patients with Parkinson's disease after subthalamic nucleus stimulation. Neuropsychiatr Dis Treat 2017; 13:2965-2975. [PMID: 29270014] [PMCID: PMC5729839] [DOI: 10.2147/ndt.s153505]
Abstract
BACKGROUND Patients with Parkinson's disease (PD) exhibit deficits in recognizing and expressing vocal emotional prosody. The aim of this study was to explore emotional prosody processing in patients with PD shortly after subthalamic nucleus (STN) deep brain stimulation (DBS). METHODS Two groups of patients with PD (pre-DBS and post-DBS) and one healthy control (HC) group were recruited as participants. All participants (PD and HC) were assessed on emotional prosody recognition using a 50-voice recognition test based on the Montreal Affective Voices database. All participants were asked to nonverbally express five basic emotions (happiness, anger, fear, sadness, and neutral) to test emotional prosody expression. Fifteen native Chinese speakers were recruited as raters. We recorded the accuracy rate, reaction time, confidence level, and two acoustic parameters (mean pitch and mean intensity). RESULTS The PD groups scored lower than the HC group in recognizing and expressing emotional prosody. STN DBS had no significant effect on the recognition of emotional prosody but had a significant effect on fear prosody expression. Pearson's correlation analysis revealed significant correlations between performance on emotional prosody recognition tests and performance on emotional prosody expression tests in both the pre-DBS PD and post-DBS PD groups. CONCLUSION Shortly after STN DBS, the ability to recognize emotional prosody was not altered, but fear expression was impaired. We identified associations between abnormalities in emotional prosody recognition and expression deficits both before and after STN DBS, indicating that the processes involved in recognizing and expressing emotional prosody may share a common system.
21
Abstract
The majority of evidence on social anxiety (SA)-linked attentional biases to threat comes from research using facial expressions. Emotions are, however, communicated through other channels, such as voice. Despite its importance in the interpretation of social cues, emotional prosody processing in SA has been barely explored. This study investigated whether SA is associated with enhanced processing of task-irrelevant angry prosody. Fifty-three participants with high and low SA performed a dichotic listening task in which pairs of male/female voices were presented, one to each ear, with either the same or different prosody (neutral or angry). Participants were instructed to focus on either the left or right ear and to identify the speaker's gender in the attended side. Our main results show that, once attended, task-irrelevant angry prosody elicits greater interference than does neutral prosody. Surprisingly, high socially anxious participants were less prone to distraction from attended-angry (compared to attended-neutral) prosody than were low socially anxious individuals. These findings emphasise the importance of examining SA-related biases across modalities.
22
Recruitment of Language-, Emotion- and Speech-Timing Associated Brain Regions for Expressing Emotional Prosody: Investigation of Functional Neuroanatomy with fMRI. Front Hum Neurosci 2016; 10:518. [PMID: 27803656] [PMCID: PMC5067951] [DOI: 10.3389/fnhum.2016.00518]
Abstract
We aimed to progress understanding of prosodic emotion expression by establishing brain regions active when expressing specific emotions, those activated irrespective of the target emotion, and those whose activation intensity varied depending on individual performance. BOLD contrast data were acquired whilst participants spoke nonsense words in happy, angry, or neutral tones, or performed jaw movements. Emotion-specific analyses demonstrated that when expressing angry prosody, activated brain regions included the inferior frontal and superior temporal gyri, the insula, and the basal ganglia. When expressing happy prosody, the activated brain regions also included the superior temporal gyrus, insula, and basal ganglia, with additional activation in the anterior cingulate. Conjunction analysis confirmed that the superior temporal gyrus and basal ganglia were activated regardless of the specific emotion concerned. Nevertheless, disjunctive comparisons between the expression of angry and happy prosody established that anterior cingulate activity was significantly higher for angry than for happy prosody production. The degree of inferior frontal gyrus activity correlated with the ability to express the target emotion through prosody. We conclude that expressing prosodic emotions (vs. neutral intonation) requires generic brain regions involved in comprehending numerous aspects of language, in emotion-related processes such as experiencing emotions, and in the time-critical integration of speech information.
23
Abstract
To explore how cultural immersion modulates emotion processing, this study examined how Chinese immigrants to Canada process multisensory emotional expressions, compared with existing data from two groups, Chinese and North Americans. Stroop and Oddball paradigms were employed to examine different stages of emotion processing. The Stroop task presented face-voice pairs expressing congruent/incongruent emotions, and participants actively judged the emotion of one modality while ignoring the other. A significant effect of cultural immersion was observed in the immigrants' behavioral performance, which showed greater interference from to-be-ignored faces, comparable with what was observed in North Americans. However, this effect was absent in their N400 data, which retained the same pattern as the Chinese. In the Oddball task, where immigrants passively viewed facial expressions with/without simultaneous vocal emotions, they exhibited a larger visual MMN for faces accompanied by voices, again mirroring patterns observed in Chinese. Correlation analyses indicated that the immigrants' duration of residence in Canada was associated with neural patterns (N400 and visual mismatch negativity) more closely resembling those of North Americans. Our data suggest that in multisensory emotion processing, adapting to a new culture first leads to behavioral accommodation, followed by alterations in brain activity, providing new evidence of humans' neurocognitive plasticity in communication.
24
Cultural differences in on-line sensitivity to emotional voices: comparing East and West. Front Hum Neurosci 2015; 9:311. [PMID: 26074808] [PMCID: PMC4448034] [DOI: 10.3389/fnhum.2015.00311]
Abstract
Evidence that culture modulates on-line neural responses to the emotional meanings encoded by vocal and facial expressions was demonstrated recently in a study comparing English North Americans and Chinese (Liu et al., 2015). Here, we compared how individuals from these two cultures passively respond to emotional cues from faces and voices using an Oddball task. Participants viewed in-group emotional faces, with or without simultaneous vocal expressions, while performing a face-irrelevant visual task as the EEG was recorded. A significantly larger visual Mismatch Negativity (vMMN) was observed for Chinese vs. English participants when faces were accompanied by voices, suggesting that Chinese were influenced to a larger extent by task-irrelevant vocal cues. These data highlight further differences in how adults from East Asian vs. Western cultures process socio-emotional cues, arguing that distinct cultural practices in communication (e.g., display rules) shape neurocognitive activity associated with the early perception and integration of multi-sensory emotional cues.
25
[Emotional and language prosody and working memory in patients with depression]. Pol Merkur Lekarski 2015; 38:269-272. [PMID: 26039021]
Abstract
AIM The aim of the study was to verify the hypothesis of a relationship between the efficiency of executive functions and emotional and linguistic prosody among patients with recurrent depressive disorder (rDD). MATERIALS AND METHODS The study comprised 80 subjects, all patients with rDD. Assessment of cognitive function was based on performance on the Trail Making Test (TMT), the Stroop Test, the Verbal Fluency Test (VFT), the Auditory Verbal Learning Test (AVLT), and the Ruff Figural Fluency Test (RFFT), together with tests of emotional and linguistic prosody. RESULTS Efficiency of emotional prosody was linked to performance on one part of the VFT. Efficiency of linguistic prosody was associated with the speed of execution of both parts of the TMT and with the correctness of performance on the VFT and RFFT. A negative impact of depressive symptoms was observed only for linguistic prosody. CONCLUSIONS Deficits in the recognition of emotional stimuli in depression are not necessarily limited to visual information, but may also apply to non-verbal auditory stimuli (prosody). The severity of depressive symptoms impairs the efficiency of linguistic prosody. The efficiency of frontal functions (both visual-spatial and verbal-auditory) is related to patients' ability to use non-verbal communication of emotional information.
26
[Altered identification with relative preservation of emotional prosody production in patients with Alzheimer's disease]. Geriatr Psychol Neuropsychiatr Vieil 2015; 13:106-15. [PMID: 25786430] [DOI: 10.1684/pnv.2015.0524]
Abstract
Patients with Alzheimer's disease (AD) show cognitive and behavioral disorders that they and their caregivers have difficulty coping with in daily life. Psychological symptoms seem to be increased by impaired emotion processing in patients, this ability being linked to social cognition and thus essential to maintaining good interpersonal relationships. Non-verbal emotion processing is a genuine communication channel, especially for patients whose language may be rapidly impaired. Many studies focus on emotion identification in AD patients, mostly by means of facial expressions rather than emotional prosody; even fewer consider emotional prosody production, despite its playing a key role in interpersonal exchanges. The literature on this subject is scarce, with contradictory results. The present study compares the performance of 14 AD patients (88.4±4.9 yrs; MMSE: 19.9±2.7) with that of 14 control subjects (87.5±5.1 yrs; MMSE: 28.1±1.4) in tasks of emotion identification through faces and voices (non-linguistic vocal emotion or emotional prosody) and in a task of emotional prosody production (12 sentences were to be pronounced in a neutral, positive, or negative tone after a context was read). The AD patients showed weaker performance than control subjects on all emotional recognition tasks, particularly when identifying emotional prosody. A negative relation between the identification scores and the NPI (professional caregivers) scores was found, underlining their link to psychological and behavioral disorders. The production of emotional prosody seems relatively preserved at the mild to moderate stage of the disease: we found subtle differences in acoustic parameters, but judges rated the patients' productions qualitatively as good as those of control subjects. These results suggest interesting new directions for improving patients' care.
27
Social cognition in schizophrenic patients: the effect of semantic content and emotional prosody in the comprehension of emotional discourse. Front Psychiatry 2014; 5:120. [PMID: 25309458] [PMCID: PMC4159994] [DOI: 10.3389/fpsyt.2014.00120]
Abstract
BACKGROUND The recognition of the emotion expressed during conversation relies on the integration of both semantic processing and the decoding of emotional prosody. The integration of both types of elements is necessary for social interaction. Since patients with schizophrenia have difficulty in daily interactions, it is of great interest to investigate how these processes are impaired, yet no study has examined them during the comprehension of emotional speech in this population. We tested the hypothesis that patients perform worse on both semantic and emotional prosodic processing during emotional speech comprehension than healthy participants. METHODS The paradigm is based on sentences built with emotional (anger, happiness, or sadness) semantic content uttered with or without congruent emotional prosody. The study participants had to decide which of the emotional categories each sentence corresponded to. RESULTS Patients performed significantly worse than their matched controls, even in the presence of emotional prosody, showing that their ability to understand emotional semantic content was impaired. Although prosody improved performance in both groups, it benefited the patients more than the controls. CONCLUSION Patients exhibited impairments in both semantic and emotional prosodic comprehension. However, they took greater advantage of the addition of emotional prosody than healthy participants. Consequently, focusing on emotional prosody during care may improve social communication.
28
Blunted feelings: alexithymia is associated with a diminished neural response to speech prosody. Soc Cogn Affect Neurosci 2013; 9:1108-17. [PMID: 23681887] [DOI: 10.1093/scan/nst075]
Abstract
How we perceive emotional signals from our environment depends on our personality. Alexithymia, a personality trait characterized by difficulties in emotion regulation, has been linked to aberrant brain activity during visual emotional processing. Whether alexithymia also affects the brain's perception of emotional speech prosody is currently unknown. We used functional magnetic resonance imaging to investigate the impact of alexithymia on hemodynamic activity in three a priori regions of the prosody network: the superior temporal gyrus (STG), the inferior frontal gyrus, and the amygdala. Twenty-two subjects performed an explicit task (emotional prosody categorization) and an implicit task (metrical stress evaluation) on the same prosodic stimuli. Irrespective of task, alexithymia was associated with a blunted response of the right STG and the bilateral amygdalae to angry, surprised, and neutral prosody. Individuals with difficulty describing feelings deactivated the left STG and the bilateral amygdalae to a lesser extent in response to angry compared with neutral prosody, suggesting that they perceived angry prosody as relatively more salient than neutral prosody. In conclusion, alexithymia may be associated with a generally blunted neural response to speech prosody. Such restricted prosodic processing may contribute to the problems in social communication associated with this personality trait.
29
Abstract
OBJECTIVE To evaluate the influence of depressive symptoms on the recognition of emotional prosody in Parkinson's disease (PD) patients, and to identify types of emotion in spoken sentences. METHODS Thirty-five PD patients and 65 normal participants were studied. Dementia was screened for with the Mini Mental State Examination, the Clinical Dementia Rating scale, and DSM-IV criteria. Recognition of emotional prosody was tested by asking subjects to listen to 12 recorded statements with neutral affective content that were read with a strong affective expression. Subjects had to recognize the correct emotion from one of four descriptors (angry, sad, cheerful, and neutral). The Beck Depression Inventory (BDI) was employed to rate depressive symptoms, with a cutoff of 14. RESULTS Total ratings of emotions correctly recognized by participants below and above the BDI cutoff were similar among PD patients and normal individuals. PD patients who correctly identified neutral and anger inflections presented higher rates of depressive symptoms (p = 0.011 and 0.044, respectively). No significant differences were observed in the normal group. CONCLUSIONS Depression may modify some modalities of emotional prosody perception in PD by increasing the perception of unpleasant emotions or a lack of affect, such as anger or indifference.