1
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. [PMID: 38123404] [DOI: 10.1016/j.cortex.2023.11.005]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
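To illustrate the kind of component analysis summarized above, here is a minimal sketch (not the authors' pipeline) of extracting mean ERP amplitudes in assumed N1, P2, and LPP time windows from epoched data and comparing two groups; the array shapes, sampling rate, window boundaries, and simulated data are all assumptions made for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical epoched ERP data: trials x time samples, already averaged over a
# fronto-central channel cluster; 500 Hz sampling, epoch from -100 to 900 ms.
sfreq = 500
times = np.arange(-0.1, 0.9, 1 / sfreq)

# Assumed component windows (seconds); the study's exact windows may differ.
windows = {"N1": (0.08, 0.13), "P2": (0.15, 0.25), "LPP": (0.40, 0.80)}

def mean_amplitude(epochs, window):
    """Mean voltage over a time window, averaged across trials (one value per subject)."""
    lo, hi = window
    mask = (times >= lo) & (times < hi)
    return epochs[:, mask].mean()

rng = np.random.default_rng(0)
# Simulated per-subject epochs for two groups of 17 subjects each.
early_blind = [rng.normal(0, 1, size=(60, times.size)) for _ in range(17)]
sighted = [rng.normal(0, 1, size=(60, times.size)) for _ in range(17)]

for name, win in windows.items():
    eb = np.array([mean_amplitude(s, win) for s in early_blind])
    sc = np.array([mean_amplitude(s, win) for s in sighted])
    t, p = stats.ttest_ind(eb, sc)
    print(f"{name}: early-blind {eb.mean():.2f} vs sighted {sc.mean():.2f} µV, t={t:.2f}, p={p:.3f}")
```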
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
2
Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. [PMID: 35501427] [DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds was also distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians did. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
3
Zhao S, Liu Y, Wei K. Pupil-linked arousal response reveals aberrant attention regulation among children with autism spectrum disorder. J Neurosci 2022; 42:5427-5437. [PMID: 35641188] [PMCID: PMC9270919] [DOI: 10.1523/jneurosci.0223-22.2022]
Abstract
Autism spectrum disorder (ASD) is a developmental disorder characterized by difficulties with social interaction and interpersonal communication. It has been argued that abnormal attentional function to exogenous stimuli precedes and contributes to the core ASD symptoms. Notably, the locus ceruleus (LC) and its noradrenergic projections throughout the brain modulate attentional function, but the extent to which this locus ceruleus-norepinephrine (LC-NE) system influences attention in individuals with ASD, who frequently exhibit dysregulated alerting and attention orienting, is unknown. We examined dynamic attention control in girls and boys with ASD at rest using the pupil dilation response (PDR) as a noninvasive measure of LC-NE activity. When gender- and age-matched neurotypical participants were passively exposed to an auditory stream, their PDR decreased for recurrent stimuli but remained sensitive to surprising deviant stimuli. In contrast, children with ASD showed less habituation to recurrent stimuli as well as a diminished phasic response to deviants, particularly those containing social information. Their tonic habituation impairment predicted their phasic orienting impairment, and both impairments correlated with the severity of ASD symptoms. Because these pupil-linked responses were observed while individuals passively listened without any task engagement, our findings imply that the intricate and dynamic attention allocation mechanism, mediated by the subcortical LC-NE system, is impaired in ASD. SIGNIFICANCE STATEMENT: Autistic individuals show attentional abnormalities to even simple sensory inputs, which emerge even before formal diagnosis. One possible mechanism behind these abnormalities is a malfunctioning pacemaker of their attention system, the locus ceruleus-norepinephrine pathway. Here we found that, according to the pupillary response (a noradrenergic activity proxy), autistic children are hypersensitive to repeated sounds but hyposensitive to surprising deviant sounds when compared with age-matched controls. Importantly, hypersensitivity to repetitions predicts hyposensitivity to deviant sounds, and both abnormalities correlate positively with the severity of autistic symptoms. This provides strong evidence that autistic children have faulty noradrenergic regulation, which might underlie the attentional atypicalities previously evidenced in various cortical responses in autistic individuals.
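The two pupil measures implied by this abstract, a tonic habituation slope across repeated standards and a phasic response to deviants, can be sketched roughly as below and then correlated with a symptom score. The trial structure, response values, and severity scores are simulated assumptions, not the authors' data or pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_children = 30

# Hypothetical per-trial pupil dilation responses (baseline-corrected peak, in mm)
# for 40 repeated standard sounds and 10 deviants per child.
standards = rng.normal(0.20, 0.05, size=(n_children, 40)) * np.linspace(1.0, 0.6, 40)
deviants = rng.normal(0.35, 0.08, size=(n_children, 10))
symptom_severity = rng.normal(50, 10, size=n_children)  # placeholder severity score

# Tonic habituation: slope of the standard-trial response over trial number
# (more negative = stronger habituation).
trial_idx = np.arange(40)
habituation = np.array([stats.linregress(trial_idx, s).slope for s in standards])

# Phasic orienting: mean deviant response minus mean standard response.
phasic = deviants.mean(axis=1) - standards.mean(axis=1)

print("habituation vs phasic:", stats.pearsonr(habituation, phasic))
print("habituation vs severity:", stats.pearsonr(habituation, symptom_severity))
```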
Affiliation(s)
- Sijia Zhao
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6AE, United Kingdom
- Yajie Liu
- Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, Beijing 100080, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100080, China
- Kunlin Wei
- Beijing Key Laboratory of Behavior and Mental Health, School of Psychological and Cognitive Sciences, Peking University, Beijing 100080, China
- Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing 100080, China
4
The Time Course of Emotional Authenticity Detection in Nonverbal Vocalizations. Cortex 2022; 151:116-132. [DOI: 10.1016/j.cortex.2022.02.016]
5
Martingano AJ, Konrath S. How cognitive and emotional empathy relate to rational thinking: Empirical evidence and meta-analysis. J Soc Psychol 2022; 162:143-160. [PMID: 35083952] [DOI: 10.1080/00224545.2021.1985415]
Abstract
Empathy is frequently described in opposition to rationality. Yet in two studies, we demonstrate that the relationship between rationality and empathy is nuanced and likely context dependent. Study 1 reports correlational data from two American samples and Study 2 presents a meta-analysis of existing literature (k = 22). We demonstrate that various types of cognitive empathy (perspective-taking, emotion recognition, and fantasy) are positively correlated with self-reported rationality, but unrelated to rational performance. In contrast, types of emotional empathy (empathic concern, personal distress, and emotion contagion) are generally negatively correlated with performance measures of rationality, but their relationships with self-reported rationality are divergent. Although these results do not settle the debate on empathy and rationality, they challenge the opposing domains hypothesis and provide tentative support for a dual-process model of empathy. Overall, these results indicate that the relationship between rationality and empathy differs depending upon how rationality and empathy are measured.
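As a rough illustration of the meta-analytic step (k = 22 correlations), the sketch below pools correlation coefficients with a standard Fisher-z random-effects model (DerSimonian-Laird). The example correlations and sample sizes are invented, not the studies analyzed in the paper.

```python
import numpy as np

def random_effects_meta(r, n):
    """DerSimonian-Laird random-effects pooling of Pearson correlations via Fisher z."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                 # Fisher transform
    v = 1.0 / (n - 3.0)               # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(r) - 1)) / c)   # between-study variance
    w_star = 1.0 / (v + tau2)
    z_re = np.sum(w_star * z) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
    return np.tanh(z_re), ci, tau2

# Invented study-level correlations between emotional empathy and rational performance.
r_example = [-0.12, -0.05, -0.20, 0.02, -0.15, -0.09]
n_example = [120, 85, 240, 60, 150, 95]
print(random_effects_meta(r_example, n_example))
```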
Affiliation(s)
- Sara Konrath
- Indiana University; University of Notre Dame, Institute for Advanced Study
6
Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals. PLoS One 2022; 17:e0261354. [PMID: 34995305] [PMCID: PMC8740977] [DOI: 10.1371/journal.pone.0261354]
Abstract
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with mild-to-moderate sensorineural hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition more generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
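The planned confusion analysis can be sketched as a response-by-stimulus cross-tabulation: for each presented emotion, tally how often each response category was chosen (the same table would be built separately for amplified and non-amplified listening). The emotion labels and simulated responses below are placeholders, not the study's stimuli.

```python
import numpy as np
import pandas as pd

emotions = ["anger", "fear", "happiness", "sadness", "neutral"]
rng = np.random.default_rng(2)

# Simulated trials: the presented emotion and the listener's forced-choice response.
presented = rng.choice(emotions, size=400)
responses = np.where(rng.random(400) < 0.6, presented, rng.choice(emotions, size=400))

confusion = pd.crosstab(pd.Series(presented, name="presented"),
                        pd.Series(responses, name="response"),
                        normalize="index")  # row-wise proportions
print(confusion.round(2))  # diagonal = accuracy; off-diagonal = confusion rates
```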
7
Abstract
OBJECTIVE: The ability to recognize others' emotions is a central aspect of socioemotional functioning. Emotion recognition impairments are well documented in Alzheimer's disease and other dementias, but it is less understood whether they are also present in mild cognitive impairment (MCI). Results on facial emotion recognition are mixed, and crucially, it remains unclear whether the potential impairments are specific to faces or extend across sensory modalities. METHOD: In the current study, 32 MCI patients and 33 cognitively intact controls completed a comprehensive neuropsychological assessment and two forced-choice emotion recognition tasks, including visual and auditory stimuli. The emotion recognition tasks required participants to categorize emotions in facial expressions and in nonverbal vocalizations (e.g., laughter, crying) expressing neutrality, anger, disgust, fear, happiness, pleasure, surprise, or sadness. RESULTS: MCI patients performed worse than controls for both facial expressions and vocalizations. The effect was large, similar across tasks and individual emotions, and it was not explained by sensory losses or affective symptomatology. Emotion recognition impairments were more pronounced among patients with lower global cognitive performance, but they did not correlate with the ability to perform activities of daily living. CONCLUSIONS: These findings indicate that MCI is associated with emotion recognition difficulties and that such difficulties extend beyond vision, plausibly reflecting a failure at supramodal levels of emotional processing. This highlights the importance of considering emotion recognition abilities as part of standard neuropsychological testing in MCI, and as a target of interventions aimed at improving social cognition in these patients.
8
Pinheiro AP, Anikin A, Conde T, Sarzedas J, Chen S, Scott SK, Lima CF. Emotional authenticity modulates affective and social trait inferences from voices. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200402. [PMID: 34719249] [PMCID: PMC8558771] [DOI: 10.1098/rstb.2020.0402]
Abstract
The human voice is a primary tool for verbal and nonverbal communication. Studies on laughter emphasize a distinction between spontaneous laughter, which reflects a genuinely felt emotion, and volitional laughter, associated with more intentional communicative acts. Listeners can reliably differentiate the two. It remains unclear, however, if they can detect authenticity in other vocalizations, and whether authenticity determines the affective and social impressions that we form about others. Here, 137 participants listened to laughs and cries that could be spontaneous or volitional and rated them on authenticity, valence, arousal, trustworthiness and dominance. Bayesian mixed models indicated that listeners detect authenticity similarly well in laughter and crying. Speakers were also perceived to be more trustworthy, and in a higher arousal state, when their laughs and cries were spontaneous. Moreover, spontaneous laughs were evaluated as more positive than volitional ones, and we found that the same acoustic features predicted perceived authenticity and trustworthiness in laughter: high pitch, spectral variability and less voicing. For crying, associations between acoustic features and ratings were less reliable. These findings indicate that emotional authenticity shapes affective and social trait inferences from voices, and that the ability to detect authenticity in vocalizations is not limited to laughter. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Ana P. Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, 42023 Saint-Etienne, France
- Division of Cognitive Science, Lund University, 221 00 Lund, Sweden
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013 Lisboa, Portugal
- Sinead Chen
- National Taiwan University, Taipei City, 10617, Taiwan
- Sophie K. Scott
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- César F. Lima
- Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, UK
- Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, 1649-026 Lisboa, Portugal
9
Lima CF, Arriaga P, Anikin A, Pires AR, Frade S, Neves L, Scott SK. Authentic and posed emotional vocalizations trigger distinct facial responses. Cortex 2021; 141:280-292. [PMID: 34102411] [DOI: 10.1016/j.cortex.2021.04.015]
Abstract
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
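A simplified frequentist stand-in for the comparison described above (the paper itself used Bayesian mixed models): compare mean zygomaticus activity for authentic versus posed laughs within participants with a paired test. The EMG values below are simulated and the effect size is illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 100  # participants

# Hypothetical baseline-corrected zygomaticus EMG (% change from baseline),
# averaged over trials within each condition for each participant.
authentic_laughs = rng.normal(8.0, 3.0, n)
posed_laughs = rng.normal(6.5, 3.0, n)

t, p = stats.ttest_rel(authentic_laughs, posed_laughs)
diff = authentic_laughs - posed_laughs
dz = diff.mean() / diff.std(ddof=1)  # within-subject effect size
print(f"authentic vs posed: t({n - 1})={t:.2f}, p={p:.4f}, dz={dz:.2f}")
```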
Affiliation(s)
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France; Division of Cognitive Science, Lund University, Lund, Sweden
- Ana Rita Pires
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sofia Frade
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Leonor Neves
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
10
Cortes DS, Tornberg C, Bänziger T, Elfenbein HA, Fischer H, Laukka P. Effects of aging on emotion recognition from dynamic multimodal expressions and vocalizations. Sci Rep 2021; 11:2647. [PMID: 33514829] [PMCID: PMC7846600] [DOI: 10.1038/s41598-021-82135-1]
Abstract
Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive, and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions show age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.
Affiliation(s)
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Tanja Bänziger
- Department of Psychology, Mid Sweden University, Östersund, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
11
Nieuwburg EGI, Ploeger A, Kret ME. Emotion recognition in nonhuman primates: How experimental research can contribute to a better understanding of underlying mechanisms. Neurosci Biobehav Rev 2021; 123:24-47. [PMID: 33453306] [DOI: 10.1016/j.neubiorev.2020.11.029]
Abstract
Recognising conspecifics' emotional expressions is important for nonhuman primates to navigate their physical and social environment. We address two possible mechanisms underlying emotion recognition: emotional contagion, the automatic matching of the observer's emotions to the emotional state of the observed individual, and cognitive empathy, the ability to understand the meaning and cause of emotional expressions while maintaining a distinction between own and others' emotions. We review experimental research in nonhuman primates to gain insight into the evolution of emotion recognition. Importantly, we focus on how emotional contagion and cognitive empathy can be studied experimentally. Evidence for aspects of cognitive empathy in different nonhuman primate lineages suggests that a wider range of primates than commonly assumed can infer emotional meaning from emotional expressions. Possibly, analogous rather than homologous evolution underlies emotion recognition. However, conclusions regarding its exact evolutionary course require more research in different modalities and species.
Affiliation(s)
- Elisabeth G I Nieuwburg
- University of Amsterdam, Institute of Interdisciplinary Studies (IIS), Amsterdam, The Netherlands
- Annemie Ploeger
- University of Amsterdam, Faculty of Social and Behavioural Sciences, Programme Group Developmental Psychology, Amsterdam, The Netherlands
- Mariska E Kret
- Leiden University, Institute of Psychology, Cognitive Psychology Unit, Leiden, The Netherlands; Leiden University, Leiden Institute for Brain and Cognition (LIBC), Leiden, The Netherlands
12
Seinfeld S, Zhan M, Poyo-Solanas M, Barsuola G, Vaessen M, Slater M, Sanchez-Vives MV, de Gelder B. Being the victim of virtual abuse changes default mode network responses to emotional expressions. Cortex 2020; 135:268-284. [PMID: 33418321] [DOI: 10.1016/j.cortex.2020.11.018]
Abstract
Recent behavioural studies have provided evidence that virtual reality (VR) experiences have an impact on socio-affective processes, and a number of findings now underscore the potential of VR for therapeutic interventions. An interesting recent result is that when male offenders experience a violent situation as a female victim of domestic violence in VR, their sensitivity for recognition of fearful facial expressions improves. A timely question now concerns the underlying brain mechanisms of these behavioural effects as these are still largely unknown. The current study used fMRI to measure the impact of a VR intervention in which participants experienced a violent aggression from the specific vantage point of the victim. We compared brain processes related to facial and bodily emotion perception before and after the VR experience. Our results show that the virtual abuse experience led to an enhancement of Default Mode Network (DMN) activity, specifically associated with changes in the processing of ambiguous emotional stimuli. In contrast, DMN activity was decreased when observing fully fearful expressions. Finally, we observed increased variability in brain activity for male versus female facial expressions. Taken together, these results suggest that the first-person perspective of a virtual violent situation impacts emotion recognition through modifications in DMN activity. Our study contributes to a better understanding of the brain mechanisms associated with the behavioural effects of VR interventions in the context of a violent confrontation with the male participant embodied as a female victim. Furthermore, this research also consolidates the use of VR embodied perspective-taking interventions for addressing socio-affective impairments.
Affiliation(s)
- Sofia Seinfeld
- Systems Neuroscience, Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain
- Minye Zhan
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Marta Poyo-Solanas
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Giulia Barsuola
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Maarten Vaessen
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Mel Slater
- Event Lab, Department of Clinical Psychology and Psychobiology, Faculty of Psychology, University of Barcelona, Barcelona, Spain; Institute of Neurosciences of the University of Barcelona, Barcelona, Spain
- Maria V Sanchez-Vives
- Systems Neuroscience, Institut d'Investigacions Biomèdiques August Pi i Sunyer, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, Faculty of Psychology, University of Barcelona, Barcelona, Spain
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Computer Science, University College London, London, UK
13
Vilaverde RF, Correia AI, Lima CF. Higher trait mindfulness is associated with empathy but not with emotion recognition abilities. R Soc Open Sci 2020; 7:192077. [PMID: 32968498] [PMCID: PMC7481693] [DOI: 10.1098/rsos.192077]
Abstract
Mindfulness involves an intentional and non-judgemental attention or awareness of present-moment experiences. It can be cultivated by meditation practice or present as an inherent disposition or trait. Higher trait mindfulness has been associated with improved emotional skills, but evidence comes primarily from studies on emotion regulation. It remains unclear whether improvements extend to other aspects of emotional processing, namely the ability to recognize emotions in others. In the current study, 107 participants (M age = 25.48 years) completed a measure of trait mindfulness, the Five Facet Mindfulness Questionnaire, and two emotion recognition tasks. These tasks required participants to categorize emotions in facial expressions and in speech prosody (modulations of the tone of voice). They also completed an empathy questionnaire and attention tasks. We found that higher trait mindfulness was associated positively with cognitive empathy, but not with the ability to recognize emotions. In fact, Bayesian analyses provided substantial evidence for the null hypothesis, both for emotion recognition in faces and in speech. Moreover, no associations were observed between mindfulness and attention performance. These findings suggest that the positive effects of trait mindfulness on emotional processing do not extend to emotion recognition abilities.
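To convey what "substantial evidence for the null hypothesis" means in practice, the sketch below approximates a Bayes factor in favour of the null from the BIC difference between a regression with and without the predictor (Wagenmakers' BIC approximation). The data are simulated and this is not the authors' exact Bayesian analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 107
mindfulness = rng.normal(0, 1, n)
emotion_recognition = rng.normal(0, 1, n)   # simulated as unrelated to mindfulness

x0 = np.ones((n, 1))                        # intercept-only (null) model
x1 = sm.add_constant(mindfulness)           # model with mindfulness as predictor

bic_null = sm.OLS(emotion_recognition, x0).fit().bic
bic_alt = sm.OLS(emotion_recognition, x1).fit().bic

# BIC approximation to the Bayes factor in favour of the null (Wagenmakers, 2007).
bf01 = np.exp((bic_alt - bic_null) / 2.0)
print(f"BF01 = {bf01:.2f}  (values above 3 are often read as substantial evidence for the null)")
```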
Affiliation(s)
- César F. Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, 1649-026 Lisboa, Portugal
14
Where Sounds Occur Matters: Context Effects Influence Processing of Salient Vocalisations. Brain Sci 2020; 10:429. [PMID: 32640750] [PMCID: PMC7407900] [DOI: 10.3390/brainsci10070429]
Abstract
The social context in which a salient human vocalisation is heard shapes the affective information it conveys. However, few studies have investigated how visual contextual cues lead to differential processing of such vocalisations. The prefrontal cortex (PFC) is implicated in the processing of contextual information and the evaluation of the saliency of vocalisations. Using functional near-infrared spectroscopy (fNIRS), we investigated PFC responses of young adults (N = 18) to emotive infant and adult vocalisations while they passively viewed scenes from two categories of environmental context: a domestic environment (DE) and an outdoors environment (OE). Compared to a home setting (DE), which is associated with a fixed mental representation (e.g., expecting to see a living room in a typical house), the outdoor setting (OE) is more variable and less predictable, and thus might demand greater processing effort. In our previous study (Azhari et al., 2018), which employed the same experimental paradigm, the OE context elicited greater physiological arousal than the DE context. Similarly, we hypothesised that greater PFC activation would be observed when salient vocalisations are paired with the OE rather than the DE condition. Our finding supported this hypothesis: the left rostrolateral PFC, an area of the brain that facilitates relational integration, exhibited greater activation in the OE than in the DE condition, which suggests that greater cognitive resources are required to process outdoor situational information together with salient vocalisations. These results are relevant for deepening our understanding of how contextual information differentially modulates the processing of salient vocalisations.
15
Mitrenga KJ, Alderson-Day B, May L, Moffatt J, Moseley P, Fernyhough C. Reading characters in voices: Ratings of personality characteristics from voices predict proneness to auditory verbal hallucinations. PLoS One 2019; 14:e0221127. [PMID: 31404114] [PMCID: PMC6690516] [DOI: 10.1371/journal.pone.0221127]
Abstract
People rapidly form first impressions of others, often based on very little information: minimal exposure to faces or voices is sufficient for humans to make up their minds about the personality of others. While there has been considerable research on voice personality perception, much less is known about its relevance to hallucination-proneness, despite auditory hallucinations being frequently perceived as personified social agents. The present paper reports two studies investigating the relation between voice personality perception and hallucination-proneness in non-clinical samples. A voice personality perception task was created, in which participants rated short voice recordings on four personality characteristics relating to the dimensions of the voice's perceived Valence and Dominance. Hierarchical regression was used to assess the contributions of Valence and Dominance voice personality ratings to hallucination-proneness scores, controlling for paranoia-proneness and vividness of mental imagery. Results from Study 1 suggested that high ratings of voices as dominant might be related to high hallucination-proneness; however, this relation seemed to depend on reported levels of paranoid thinking. In Study 2, we show that hallucination-proneness was associated with high ratings of voice dominance, and this was independent of paranoia and imagery ability scores, both of which were significant predictors of hallucination-proneness. Results from Study 2 suggest an interaction between the gender of participants and the gender of the voice actor, whereby only ratings of own-gender voices on Dominance characteristics are related to hallucination-proneness scores. These results are important for understanding the perception of characterful features of voices and its significance for psychopathology.
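The hierarchical regression described (Dominance ratings predicting hallucination-proneness after controlling for paranoia and imagery vividness) can be sketched as two nested OLS models and an R-squared change test. The variable names, simulated data, and coefficients below are placeholders, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(5)
n = 200
df = pd.DataFrame({
    "paranoia": rng.normal(0, 1, n),
    "imagery": rng.normal(0, 1, n),
    "dominance": rng.normal(0, 1, n),
})
df["hallucination_proneness"] = (0.4 * df["paranoia"] + 0.2 * df["imagery"]
                                 + 0.15 * df["dominance"] + rng.normal(0, 1, n))

# Step 1: control variables only; Step 2: add the dominance ratings.
step1 = smf.ols("hallucination_proneness ~ paranoia + imagery", data=df).fit()
step2 = smf.ols("hallucination_proneness ~ paranoia + imagery + dominance", data=df).fit()

# R-squared change and its F-test (one predictor added).
delta_r2 = step2.rsquared - step1.rsquared
f_change = (delta_r2 / 1) / ((1 - step2.rsquared) / step2.df_resid)
p_change = stats.f.sf(f_change, 1, step2.df_resid)
print(f"delta R2 = {delta_r2:.3f}, F(1, {int(step2.df_resid)}) = {f_change:.2f}, p = {p_change:.4f}")
```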
Affiliation(s)
- Kaja Julia Mitrenga
- Department of Psychology, Durham University, Durham, England, United Kingdom
- Ben Alderson-Day
- Department of Psychology, Durham University, Durham, England, United Kingdom
- Lucy May
- School of Psychology and Clinical Language Science, University of Reading, Reading, England, United Kingdom
- Jamie Moffatt
- Department of Psychology, Durham University, Durham, England, United Kingdom
- School of Psychology, University of Sussex, Falmer, England, United Kingdom
- Peter Moseley
- Department of Psychology, Durham University, Durham, England, United Kingdom
- Department of Psychology, University of Central Lancashire, Preston, England, United Kingdom
- Charles Fernyhough
- Department of Psychology, Durham University, Durham, England, United Kingdom
16
Castiajo P, Pinheiro AP. Decoding emotions from nonverbal vocalizations: How much voice signal is enough? Motiv Emot 2019. [DOI: 10.1007/s11031-019-09783-9]
17
Nordström H, Laukka P. The time course of emotion recognition in speech and music. J Acoust Soc Am 2019; 145:3058. [PMID: 31153307] [DOI: 10.1121/1.5108601]
Abstract
The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 instead used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music vs speech. This, in turn, suggests that acoustic cues that develop over time also play a role for emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
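The gating logic (how much of a stimulus is needed before recognition stabilizes) can be sketched as follows: test accuracy against chance at each gate duration with a binomial test and take the first gate from which accuracy stays above chance. The gate durations, trial counts, chance level, and response counts below are assumptions for illustration, not the study's parameters.

```python
import numpy as np
from scipy.stats import binomtest

gate_ms = [50, 100, 250, 500, 1000, 2000]        # assumed gate durations
n_trials = 40                                     # trials per gate for one emotion
chance = 0.10                                     # e.g., ten response alternatives
correct = [3, 7, 14, 22, 27, 28]                  # simulated correct responses per gate

above_chance = [binomtest(k, n_trials, chance, alternative="greater").pvalue < 0.05
                for k in correct]

# Identification point: first gate from which all later gates are above chance.
id_point = next(ms for i, ms in enumerate(gate_ms) if all(above_chance[i:]))
print("emotion identification point:", id_point, "ms")
```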
Affiliation(s)
- Henrik Nordström
- Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
18
Pinheiro AP, Lima D, Albuquerque PB, Anikin A, Lima CF. Spatial location and emotion modulate voice perception. Cogn Emot 2019; 33:1577-1586. [PMID: 30870109] [DOI: 10.1080/02699931.2019.1586647]
Abstract
How do we perceive voices coming from different spatial locations, and how is this affected by emotion? The current study probed the interplay between space and emotion during voice perception. Thirty participants listened to nonverbal vocalizations coming from different locations around the head (left vs. right; front vs. back), and differing in valence (neutral, positive [amusement] or negative [anger]). They were instructed to identify the location of the vocalizations (Experiment 1) and to evaluate their emotional qualities (Experiment 2). Emotion-space interactions were observed, but only in Experiment 1: emotional vocalizations were better localised than neutral ones when they were presented from the back and the right side. In Experiment 2, emotion recognition accuracy was increased for positive vs. negative and neutral vocalizations, and perceived arousal was increased for emotional vs. neutral vocalizations, but this was independent of spatial location. These findings indicate that emotional salience affects how we perceive the spatial location of voices. They additionally suggest that the interaction between spatial ("where") and emotional ("what") properties of the voice differs as a function of task.
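One way to formalize the emotion-by-location interaction reported for the localization task is a repeated-measures ANOVA over per-participant accuracy. The sketch below uses statsmodels' AnovaRM on simulated long-format data and may differ from the authors' exact analysis; the factor levels and accuracy values are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(6)
subjects = range(30)
valences = ["neutral", "positive", "negative"]
locations = ["front", "back"]

rows = []
for s in subjects:
    for v in valences:
        for loc in locations:
            # Simulated localization accuracy; emotional sounds slightly better from the back.
            boost = 0.08 if (v != "neutral" and loc == "back") else 0.0
            rows.append({"subject": s, "valence": v, "location": loc,
                         "accuracy": float(np.clip(rng.normal(0.75 + boost, 0.08), 0, 1))})
df = pd.DataFrame(rows)

anova = AnovaRM(df, depvar="accuracy", subject="subject",
                within=["valence", "location"]).fit()
print(anova.anova_table)  # main effects of valence and location, plus their interaction
```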
Affiliation(s)
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Diogo Lima
- School of Psychology, University of Minho, Braga, Portugal
- Andrey Anikin
- Division of Cognitive Science, Department of Philosophy, Lund University, Lund, Sweden
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
19
Picou EM, Singh G, Goy H, Russo F, Hickson L, Oxenham AJ, Buono GH, Ricketts TA, Launer S. Hearing, Emotion, Amplification, Research, and Training Workshop: Current Understanding of Hearing Loss and Emotion Perception and Priorities for Future Research. Trends Hear 2018; 22:2331216518803215. [PMID: 30270810] [PMCID: PMC6168729] [DOI: 10.1177/2331216518803215]
Abstract
The question of how hearing loss and hearing rehabilitation affect patients' momentary emotional experiences is one that has received little attention but has considerable potential to affect patients' psychosocial function. This article is a product from the Hearing, Emotion, Amplification, Research, and Training workshop, which was convened to develop a consensus document describing research on emotion perception relevant for hearing research. This article outlines conceptual frameworks for the investigation of emotion in hearing research; available subjective, objective, neurophysiologic, and peripheral physiologic data acquisition research methods; the effects of age and hearing loss on emotion perception; potential rehabilitation strategies; priorities for future research; and implications for clinical audiologic rehabilitation. More broadly, this article aims to increase awareness about emotion perception research in audiology and to stimulate additional research on the topic.
Affiliation(s)
- Erin M. Picou
- Vanderbilt University School of Medicine, Nashville, TN, USA
- Gurjit Singh
- Phonak Canada, Mississauga, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, ON, Canada
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Frank Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
- Louise Hickson
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia