1
Wang L, Zhu T, Wang A, Wang Y. Transcranial Direct Current Stimulation (tDCS) over the left dorsolateral prefrontal cortex reduced attentional bias toward natural emotional sounds. Cogn Affect Behav Neurosci 2024; 24:881-893. PMID: 38955871. DOI: 10.3758/s13415-024-01202-y.
Abstract
Previous research has indicated that the left dorsolateral prefrontal cortex (DLPFC) influences attentional bias toward visual emotional information. However, it remains unclear whether the left DLPFC also plays an important role in attentional bias toward natural emotional sounds. The current study employed the emotional spatial cueing paradigm, using natural emotional sounds of high ecological validity as auditory cues. High-definition transcranial direct current stimulation (HD-tDCS) was used to examine the impact of the left DLPFC on attentional bias and its subcomponents, attentional engagement and attentional disengagement. The results showed that (1) compared with the sham condition, anodal HD-tDCS over the left DLPFC reduced attentional bias toward positive and negative sounds; and (2) anodal HD-tDCS over the left DLPFC reduced attentional engagement with positive and negative sounds, but did not affect attentional disengagement from natural emotional sounds. Taken together, the present study shows that the left DLPFC, which is closely associated with top-down attentional regulation, plays an important role in auditory emotional attentional bias.
Affiliation(s)
- Linzi Wang
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, No. 3663, North Zhong Shan Road, Shanghai, 200062, China
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Tongtong Zhu
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, No. 3663, North Zhong Shan Road, Shanghai, 200062, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Yanmei Wang
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, No. 3663, North Zhong Shan Road, Shanghai, 200062, China
- Shanghai Changning Mental Health Center, Shanghai, China
- Key Laboratory of Philosophy and Social Science of Anhui Province on Adolescent Mental Health and Crisis Intelligence Intervention, Hefei Normal University, Hefei, China
2
Morningstar M, Hughes C, French RC, Grannis C, Mattson WI, Nelson EE. Functional connectivity during facial and vocal emotion recognition: Preliminary evidence for dissociations in developmental change by nonverbal modality. Neuropsychologia 2024; 202:108946. PMID: 38945440. DOI: 10.1016/j.neuropsychologia.2024.108946.
Abstract
The developmental trajectory of emotion recognition (ER) skills is thought to vary by nonverbal modality, with vocal ER maturing later than facial ER. To investigate potential neural mechanisms contributing to this behavioural dissociation, the current study examined whether youths' functional connectivity during vocal and facial ER tasks showed differential developmental change over time. Youth ages 8-19 (n = 41) completed facial and vocal ER tasks while undergoing functional magnetic resonance imaging at two timepoints, 1 year apart (n = 36 for behavioural data, n = 28 for neural data). Partial least squares analyses revealed that functional connectivity during ER is distinguishable both by modality (with different patterns of connectivity for facial vs. vocal ER) and across time, with changes in connectivity being particularly pronounced for vocal ER. ER accuracy was greater for faces than voices and positively associated with age; although task performance did not change appreciably across the 1-year period, changes in latent functional connectivity patterns across time predicted participants' ER accuracy at Time 2. Taken together, these results suggest that vocal and facial ER are supported by distinguishable neural correlates that may undergo different developmental trajectories. Our findings also provide preliminary evidence that changes in network integration may support the development of ER skills in childhood and adolescence.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Canada; Centre for Neuroscience Studies, Queen's University, Canada
- C Hughes
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Canada
- R C French
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
- C Grannis
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- W I Mattson
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- E E Nelson
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, Ohio State University Wexner College of Medicine, Columbus, OH, USA
3
Maallo AMS, Novembre G, Kusztor A, McIntyre S, Israr A, Gerling G, Björnsdotter M, Olausson H, Boehme R. Primary somatosensory cortical processing in tactile communication. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230249. PMID: 39005043. DOI: 10.1098/rstb.2023.0249.
Abstract
Touch is an essential form of non-verbal communication. While language and its neural basis are widely studied, tactile communication is less well understood. We used fMRI and multivariate pattern analyses in pairs of emotionally close adults to examine the neural basis of human-to-human tactile communication. In each pair, one participant was designated the sender and the other the receiver. The sender was instructed to communicate specific messages by touching only the arm of the receiver, who was inside the scanner. The receiver then identified the message based on the touch expression alone. We designed two multivariate decoder algorithms: one based on the sender's intent (sender-decoder) and another based on the receiver's response (receiver-decoder). We identified several brain areas that significantly predicted the receiver's behavioural accuracy. In our a priori region of interest, the receiver's primary somatosensory cortex (S1), both decoders accurately differentiated the messages based on neural activity patterns. The receiver-decoder, which relied on the receivers' interpretations of the touch expressions, outperformed the sender-decoder, which relied on the sender's intent. Our results identify a network of brain areas involved in human-to-human tactile communication and support the notion that non-sensory factors are represented in S1. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
Affiliation(s)
- Anne Margarette S Maallo
- Center for Social and Affective Neuroscience, Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Giovanni Novembre
- Division of Cell and Neurobiology, Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Anikó Kusztor
- School of Psychological Science, Monash University, Melbourne, Victoria 3168, Australia
- Sarah McIntyre
- Center for Social and Affective Neuroscience, Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Ali Israr
- Reality Labs Research, Meta Platforms Inc., Redmond, WA 98052, USA
- Gregory Gerling
- Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA 22904, USA
- Malin Björnsdotter
- Department of Affective Psychiatry, Sahlgrenska University Hospital, 41345 Gothenburg, Sweden
- Center for Cognitive and Computational Neuropsychiatry (CCNP), Karolinska Institute, 17177 Solna, Sweden
- Håkan Olausson
- Center for Social and Affective Neuroscience, Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Center for Medical Imaging and Visualization, Linköping University, 58183 Linköping, Sweden
- Rebecca Boehme
- Center for Social and Affective Neuroscience, Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Center for Medical Imaging and Visualization, Linköping University, 58183 Linköping, Sweden
4
Xu HZ, Peng XR, Huan SY, Xu JJ, Yu J, Ma QG. Are older adults less generous? Age differences in emotion-related social decision making. Neuroimage 2024; 297:120756. PMID: 39074759. DOI: 10.1016/j.neuroimage.2024.120756.
Abstract
In social interaction, age-related differences in emotional processing may lead to different social decision making in young and older adults. However, previous studies of social decision making have paid little attention to the interactants' emotions, leaving age differences and the underlying neural mechanisms unexplored. To address this gap, the present study combined functional and structural magnetic resonance imaging with a modified dictator game task in which recipients displayed either neutral or sad facial expressions. Behavioral results indicated that although older adults' overall allocations did not differ significantly from those of young adults, older adults' allocations showed a decrease in emotion-related generosity compared with young adults. Using representational similarity analysis, we found that, compared with young adults, older adults showed reduced neural representations of recipients' emotions and reduced gray matter volume in the right anterior cingulate gyrus (ACC), right insula, and left dorsomedial prefrontal cortex (DMPFC). More importantly, mediation analyses indicated that age influenced allocations not only through serial mediation of the neural representations of the right insula and left DMPFC, but also through serial mediation of the mean gray matter volume of the right ACC and left DMPFC. This study identifies potential neural pathways through which age affects emotion-related social decision making, advancing our understanding of older adults' social behavior: they may not be less generous unless confronted with individuals displaying specific emotions.
Affiliation(s)
- Hong-Zhou Xu
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Xue-Rui Peng
- Faculty of Psychology, Technische Universität Dresden, Dresden 01062, Germany; Centre for Tactile Internet with Human-in-the-Loop, Technische Universität Dresden, Dresden 01062, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Shen-Yin Huan
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Jia-Jie Xu
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Jing Yu
- Faculty of Psychology, Southwest University, Chongqing 400715, China
- Qing-Guo Ma
- Neuromanagement Laboratory, School of Management, Zhejiang University, Hangzhou 310058, China; Institute of Neural Management Sciences, Zhejiang University of Technology, Hangzhou 310014, China
5
Suslow T, Kersting A, Bodenschatz CM. Dimensions of Alexithymia and Identification of Emotions in Masked and Unmasked Faces. Behav Sci (Basel) 2024; 14:692. PMID: 39199088. PMCID: PMC11351596. DOI: 10.3390/bs14080692.
Abstract
Alexithymia, a multifaceted personality construct, is known to be related to difficulties in decoding emotional facial expressions, especially in the case of suboptimal stimuli. The present study investigated whether, and which, facets of alexithymia are related to impairments in the recognition of emotions in faces wearing face masks. Accuracy and speed of emotion recognition were examined in a block of masked faces and a block of unmasked faces in a sample of 102 healthy individuals; the order of blocks varied between participants. Emotions were recognized better and faster in unmasked than in masked faces. Recognition performance was poorest and slowest for participants who started the task with masked faces. In the whole sample, there were no correlations between alexithymia facets and the accuracy or speed of emotion recognition for masked and unmasked faces. Among participants who started the task with masked faces, the facet externally oriented thinking was positively correlated with reaction latencies of correct responses for masked faces. Our findings indicate that an externally oriented thinking style could be linked to less efficient identification of emotions from faces wearing masks when task difficulty is high, and support the utility of a facet approach in alexithymia research.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, 04103 Leipzig, Germany; (A.K.); (C.M.B.)
6
Scarpazza C, Gramegna C, Costa C, Pezzetta R, Saetti MC, Preti AN, Difonzo T, Zago S, Bolognini N. The Emotion Authenticity Recognition (EAR) test: normative data of an innovative test using dynamic emotional stimuli to evaluate the ability to recognize the authenticity of emotions expressed by faces. Neurol Sci 2024. PMID: 39023709. DOI: 10.1007/s10072-024-07689-0.
Abstract
Although research has focused extensively on how emotions conveyed by faces are perceived, the perception of the authenticity of those emotions has been surprisingly overlooked. Here, we present the Emotion Authenticity Recognition (EAR) test, a test specifically developed using dynamic stimuli depicting authentic and posed emotions to evaluate individuals' ability to correctly identify an emotion (emotion recognition index, ER Index) and classify its authenticity (authenticity recognition index, EA Index). The EAR test was validated on 522 healthy participants, and normative values are provided. Correlations with demographic characteristics, empathy, and general cognitive status revealed that both indices are negatively correlated with age and positively correlated with education, cognitive status, and different facets of empathy. The EAR test offers a new ecological instrument for assessing the ability to detect emotion authenticity, allowing exploration of possible social-cognitive deficits even in patients who are otherwise cognitively intact.
Affiliation(s)
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Via Venezia 8, Padova, PD, Italy
- IRCCS S Camillo Hospital, Venezia, Italy
- Chiara Gramegna
- Ph.D. Program in Neuroscience, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Cristiano Costa
- Padova Neuroscience Center, University of Padova, Padova, Italy
- Maria Cristina Saetti
- Neurology Unit, IRCCS Fondazione Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alice Naomi Preti
- Ph.D. Program in Neuroscience, School of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Teresa Difonzo
- Neurology Unit, Foundation IRCCS Ca' Granda Hospital Maggiore Policlinico, Milano, Italy
- Stefano Zago
- Neurology Unit, Foundation IRCCS Ca' Granda Hospital Maggiore Policlinico, Milano, Italy
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Laboratory of Neuropsychology, Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, Milano, Italy
7
Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024. PMID: 39004788. DOI: 10.1097/aud.0000000000001550.
Abstract
OBJECTIVES Cochlear implants (CI) are remarkably effective but have limitations in transmitting the spectro-temporal fine structure of speech. This may impair the processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found differences in spoken-emotion processing between CI users with postlingual deafness (postlingual CI) and normal-hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (where prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken-emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to those of our previous study (Taitelbaum-Swead et al. 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (on a six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception. Performance on the Test for Rating of Emotions in Speech was compared with that in the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in the identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions in both CI user groups. This distortion appears to lead CI users to over-rely on semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as for the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
8
Xiao J, Adkinson JA, Allawala AB, Banks G, Bartoli E, Fan X, Mocchi M, Pascuzzi B, Pulapaka S, Franch MC, Mathew SJ, Mathura RK, Myers J, Pirtle V, Provenza NR, Shofty B, Watrous AJ, Pitkow X, Goodman WK, Pouratian N, Sheth S, Bijanki KR, Hayden BY. Insula uses overlapping codes for emotion in self and others. bioRxiv 2024:2024.06.04.596966. PMID: 38895233. PMCID: PMC11185604. DOI: 10.1101/2024.06.04.596966.
Abstract
In daily life, we must recognize others' emotions so that we can respond appropriately. This ability may rely, at least in part, on neural responses similar to those associated with our own emotions. We hypothesized that the insula, a cortical region near the junction of the temporal, parietal, and frontal lobes, may play a key role in this process. We recorded local field potential (LFP) activity in human neurosurgical patients performing two tasks: one focused on identifying their own emotional response, and one on identifying facial emotional responses in others. We found matching patterns of gamma- and high-gamma-band activity for the two tasks in the insula. Three other regions (the medial temporal lobe, anterior cingulate cortex, and orbitofrontal cortex) clearly encoded both self- and other-emotions, but used orthogonal activity patterns to do so. These results support the hypothesis that the insula plays a particularly important role in mediating between experienced and observed emotions.
Affiliation(s)
- Jiayang Xiao
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Joshua A. Adkinson
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Garrett Banks
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Eleonora Bartoli
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Xiaoxu Fan
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Madaline Mocchi
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Bailey Pascuzzi
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Suhruthaa Pulapaka
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Melissa C. Franch
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Sanjay J. Mathew
- Department of Psychiatry, Baylor College of Medicine, Houston, TX, 77030
- Raissa K. Mathura
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- John Myers
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Victoria Pirtle
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Nicole R Provenza
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Ben Shofty
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Andrew J. Watrous
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Xaq Pitkow
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Wayne K. Goodman
- Department of Psychiatry, Baylor College of Medicine, Houston, TX, 77030
- Nader Pouratian
- Department of Neurosurgery, University of Texas Southwestern, Dallas, TX, 75390
- Sameer Sheth
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Kelly R. Bijanki
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
- Benjamin Y. Hayden
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, 77030
9
Laukka P, Månsson KNT, Cortes DS, Manzouri A, Frick A, Fredborg W, Fischer H. Neural correlates of individual differences in multimodal emotion recognition ability. Cortex 2024; 175:1-11. PMID: 38691922. DOI: 10.1016/j.cortex.2024.03.009.
Abstract
Studies have reported substantial variability in emotion recognition ability (ERA), an important social skill, but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) in previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory, and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs. low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional stimuli contrasted with neutral stimuli, individuals with high (vs. low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, the results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including the extraction (mSTG), integration (PSTS), and evaluation (IFC) of emotional information.
Affiliation(s)
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden
- Kristoffer N T Månsson
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Amirhossein Manzouri
- Department of Psychology, Stockholm University, Stockholm, Sweden; Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Andreas Frick
- Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
- William Fredborg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden; Stockholm University Brain Imaging Centre (SUBIC), Stockholm University, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden
10
Mota-Rojas D, Whittaker AL, Domínguez-Oliva A, Strappini AC, Álvarez-Macías A, Mora-Medina P, Ghezzi M, Lendez P, Lezama-García K, Grandin T. Tactile, Auditory, and Visual Stimulation as Sensory Enrichment for Dairy Cattle. Animals (Basel) 2024; 14:1265. PMID: 38731269. PMCID: PMC11083412. DOI: 10.3390/ani14091265.
Abstract
Several types of enrichment can be used to improve animal welfare. This review summarizes the literature on the use of mechanical brushes, tactile udder stimulation, music, and visual stimuli as enrichment methods for dairy cows. Mechanical brushes and tactile stimulation of the udder have been shown to have a positive effect on milk yield and on the overall behavioral repertoire, enhancing natural behavior. Classical music reduces stress levels and has similarly been associated with increased milk yield; a slow or moderate tempo (70 to 100 bpm) at a sound level below 70 dB is recommended to obtain this positive effect. Evidence on the impacts of other types of enrichment, such as visual stimulation through mirrors, pictures, and colored lights, or the use of olfactory stimuli, is equivocal and requires further study.
Affiliation(s)
- Daniel Mota-Rojas
- Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City 04960, Mexico
- Alexandra L. Whittaker
- School of Animal and Veterinary Sciences, University of Adelaide, Roseworthy Campus, Adelaide, SA 5116, Australia
- Adriana Domínguez-Oliva
- Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City 04960, Mexico
- Ana C. Strappini
- Animal Health and Welfare Department, Wageningen Livestock Research, Wageningen University and Research, 6708 WD Wageningen, The Netherlands
- Adolfo Álvarez-Macías
- Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City 04960, Mexico
- Patricia Mora-Medina
- Facultad de Estudios Superiores Cuautitlán, Universidad Nacional Autónoma de México (UNAM), Cuautitlán 54714, Mexico
- Marcelo Ghezzi
- Anatomy Area, Faculty of Veterinary Sciences (FCV), Universidad Nacional del Centro de la Provincia de Buenos Aires (UNCPBA), University Campus, Tandil 7000, Argentina
- Centro de Investigación Veterinaria de Tandil CIVETAN, UNCPBA-CICPBA-CONICET (UNCPBA), University Campus, Tandil 7000, Argentina
- Pamela Lendez
- Anatomy Area, Faculty of Veterinary Sciences (FCV), Universidad Nacional del Centro de la Provincia de Buenos Aires (UNCPBA), University Campus, Tandil 7000, Argentina
- Centro de Investigación Veterinaria de Tandil CIVETAN, UNCPBA-CICPBA-CONICET (UNCPBA), University Campus, Tandil 7000, Argentina
- Karina Lezama-García
- Neurophysiology, Behavior and Animal Welfare Assessment, DPAA, Universidad Autónoma Metropolitana (UAM), Mexico City 04960, Mexico
- Temple Grandin
- Department of Animal Science, Colorado State University, Fort Collins, CO 80526, USA
Collapse
|
11
|
Lettieri G, Handjaras G, Cappello EM, Setti F, Bottari D, Bruno V, Diano M, Leo A, Tinti C, Garbarini F, Pietrini P, Ricciardi E, Cecchetti L. Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain. SCIENCE ADVANCES 2024; 10:eadk6840. [PMID: 38457501 PMCID: PMC10923499 DOI: 10.1126/sciadv.adk6840] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Accepted: 02/06/2024] [Indexed: 03/10/2024]
Abstract
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically developed, congenitally blind, and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience, more than modality, affects how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states whose functioning is shaped by sensory input during development.
Affiliation(s)
- Giada Lettieri
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giacomo Handjaras
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Elisa M. Cappello
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Francesca Setti
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
- Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Pietro Pietrini
- Forensic Neuroscience and Psychiatry Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Emiliano Ricciardi
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Luca Cecchetti
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy

12
Sarzedas J, Lima CF, Roberto MS, Scott SK, Pinheiro AP, Conde T. Blindness influences emotional authenticity perception in voices: Behavioral and ERP evidence. Cortex 2024; 172:254-270. [PMID: 38123404 DOI: 10.1016/j.cortex.2023.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 10/31/2023] [Accepted: 11/10/2023] [Indexed: 12/23/2023]
Abstract
The ability to distinguish spontaneous from volitional emotional expressions is an important social skill. How do blind individuals perceive emotional authenticity? Unlike sighted individuals, they cannot rely on facial and body language cues, relying instead on vocal cues alone. Here, we combined behavioral and ERP measures to investigate authenticity perception in laughter and crying in individuals with early- or late-blindness onset. Early-blind, late-blind, and sighted control participants (n = 17 per group, N = 51) completed authenticity and emotion discrimination tasks while EEG data were recorded. The stimuli consisted of laughs and cries that were either spontaneous or volitional. The ERP analysis focused on the N1, P2, and late positive potential (LPP). Behaviorally, early-blind participants showed intact authenticity perception, but late-blind participants performed worse than controls. There were no group differences in the emotion discrimination task. In brain responses, all groups were sensitive to laughter authenticity at the P2 stage, and to crying authenticity at the early LPP stage. Nevertheless, only early-blind participants were sensitive to crying authenticity at the N1 and middle LPP stages, and to laughter authenticity at the early LPP stage. Furthermore, early-blind and sighted participants were more sensitive than late-blind ones to crying authenticity at the P2 and late LPP stages. Altogether, these findings suggest that early blindness relates to facilitated brain processing of authenticity in voices, both at early sensory and late cognitive-evaluative stages. Late-onset blindness, in contrast, relates to decreased sensitivity to authenticity at behavioral and brain levels.
Affiliation(s)
- João Sarzedas
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- César F Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal; Institute of Cognitive Neuroscience, University College London, London, UK
- Magda S Roberto
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal
- Tatiana Conde
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisboa, Portugal

13
Huerta-Chavez V, Ramos-Loyo J. Emotional congruency between faces and words benefits emotional judgments in women: An event-related potential study. Neurosci Lett 2024; 822:137644. [PMID: 38242346 DOI: 10.1016/j.neulet.2024.137644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Revised: 12/21/2023] [Accepted: 01/15/2024] [Indexed: 01/21/2024]
Abstract
The present study aimed to investigate the effects of emotional congruency between faces and words on word evaluation through event-related brain potentials (ERPs). To this end, 20 women performed a face-word congruency task in which an emotional face was presented simultaneously with an affective word in a non-superimposed format. Participants had to evaluate the emotional valence of the word in three different conditions: congruent, incongruent, and control. The emotionally congruent words were categorized faster and more accurately than the incongruent ones. In addition, the emotionally congruent words elicited higher P3/LPP amplitudes than the incongruent ones. These results indicate a beneficial effect of emotional face-word congruency on emotional judgments of words.
14
Sells RC, Liversedge SP, Chronaki G. Vocal emotion recognition in attention-deficit hyperactivity disorder: a meta-analysis. Cogn Emot 2024; 38:23-43. [PMID: 37715528 DOI: 10.1080/02699931.2023.2258590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 06/06/2023] [Accepted: 06/28/2023] [Indexed: 09/17/2023]
Abstract
There is debate within the literature as to whether emotion dysregulation (ED) in Attention-Deficit Hyperactivity Disorder (ADHD) reflects deviant attentional mechanisms or atypical perceptual emotion processing. Previous reviews have reliably examined the nature of facial, but not vocal, emotion recognition accuracy in ADHD. The present meta-analysis quantified vocal emotion recognition (VER) accuracy scores in ADHD and controls using robust variance estimation, drawing on 21 published and unpublished papers. Additional moderator analyses were carried out to determine whether VER accuracy in ADHD varied with emotion type. Findings revealed a medium effect size for the presence of VER deficits in ADHD, and moderator analyses showed that VER accuracy in ADHD did not differ by emotion type. These results support theories that implicate attentional mechanisms in driving VER deficits in ADHD. However, there are insufficient data within the behavioural VER literature to support the presence of emotion-processing atypicalities in ADHD. Future neuroimaging research could explore the interaction between attention and emotion processing in ADHD, taking into consideration ADHD subtypes and comorbidities.
Affiliation(s)
- Rohanna C Sells
- School of Psychology and Computer Science, University of Central Lancashire, UK
- Simon P Liversedge
- School of Psychology and Computer Science, University of Central Lancashire, UK
- Georgia Chronaki
- School of Psychology and Computer Science, University of Central Lancashire, UK

15
Lee JP, Jang H, Jang Y, Song H, Lee S, Lee PS, Kim J. Encoding of multi-modal emotional information via personalized skin-integrated wireless facial interface. Nat Commun 2024; 15:530. [PMID: 38225246 PMCID: PMC10789773 DOI: 10.1038/s41467-023-44673-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 12/28/2023] [Indexed: 01/17/2024] Open
Abstract
Human affects such as emotions, moods, and feelings are increasingly considered key parameters for enhancing the interaction of humans with diverse machines and systems. However, their intrinsically abstract and ambiguous nature makes it challenging to accurately extract and exploit emotional information. Here, we develop a multi-modal human emotion recognition system that can efficiently utilize comprehensive emotional information by combining verbal and non-verbal expression data. The system is built around a personalized skin-integrated facial interface (PSiFI) that is self-powered, facile, stretchable, and transparent, featuring a first-of-its-kind bidirectional triboelectric strain and vibration sensor that allows verbal and non-verbal expression data to be sensed and combined for the first time. It is fully integrated with a data-processing circuit for wireless data transfer, allowing real-time emotion recognition. With the help of machine learning, various human emotion recognition tasks are performed accurately in real time, even while the wearer is masked, and a digital concierge application is demonstrated in a VR environment.
Affiliation(s)
- Jin Pyo Lee
- School of Materials Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea
- School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Hanhyeok Jang
- School of Materials Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea
- Yeonwoo Jang
- School of Materials Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea
- Hyeonseo Song
- School of Materials Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea
- Suwoo Lee
- School of Materials Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea
- Pooi See Lee
- School of Materials Science and Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
- Jiyun Kim
- School of Materials Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea
- Center for Multidimensional Programmable Matter, Ulsan National Institute of Science and Technology, Ulsan, 44919, South Korea

16
Landmann E, Krahmer A, Böckler A. Social Understanding beyond the Familiar: Disparity in Visual Abilities Does Not Impede Empathy and Theory of Mind. J Intell 2023; 12:2. [PMID: 38248900 PMCID: PMC10816830 DOI: 10.3390/jintelligence12010002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 12/08/2023] [Accepted: 12/19/2023] [Indexed: 01/23/2024] Open
Abstract
Feeling with our conspecifics and understanding their sentiments and intentions is a crucial part of our lives. What is the basis for these forms of social understanding? If individuals ground their understanding of others' thoughts and feelings in their own perceptual and factual experiences, it could present a challenge to empathize and mentalize with those whose reality of life is significantly different. This preregistered study compared two groups of participants who differed in a central perceptual feature, their visual abilities (visually impaired vs. unimpaired; total N = 56), concerning their social understanding of others who were themselves either visually impaired or unimpaired. Employing an adjusted version of the EmpaToM task, participants heard short, autobiographic narrations by visually impaired or unimpaired individuals, and we assessed their empathic responding and mentalizing performance. Our findings did not reveal heightened empathy and mentalizing proclivities when the narrator's visual abilities aligned with those of the participant. However, in some circumstances, cognitive understanding of others' narrations benefitted from familiarity with the situation. Overall, our findings suggest that social understanding does not mainly rely on perceptual familiarity with concrete situations but is likely grounded in sharing emotions and experiences on a more fundamental level.
Affiliation(s)
- Eva Landmann
- Department of Psychology, University of Würzburg, 97070 Würzburg, Germany

17
Mahayossanunt Y, Nupairoj N, Hemrungrojn S, Vateekul P. Explainable Depression Detection Based on Facial Expression Using LSTM on Attentional Intermediate Feature Fusion with Label Smoothing. SENSORS (BASEL, SWITZERLAND) 2023; 23:9402. [PMID: 38067773 PMCID: PMC10708765 DOI: 10.3390/s23239402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 10/26/2023] [Accepted: 11/23/2023] [Indexed: 12/18/2023]
Abstract
Machine learning offers a fast pre-diagnosis approach to mitigate the effects of Major Depressive Disorder (MDD). The objective of this research is to detect depression using a set of important facial features extracted from interview video, e.g., head-pose angles (in radians), gaze angles, and action-unit intensities. The model is based on an LSTM with an attention mechanism and combines these features using an intermediate-fusion approach. Label smoothing was applied to further improve the model's performance. Unlike other black-box models, integrated gradients were used as the model explanation, showing the important features for each patient. The experiment was conducted on 474 video samples collected at Chulalongkorn University, divided into 134 depressed and 340 non-depressed cases. Our model achieved the best results, with an 88.89% F1-score, 87.03% recall, 91.67% accuracy, and 91.40% precision. Moreover, the model can capture important features of depression, including head turning, no specific gaze, slow eye movement, no smiles, frowning, grumbling, and scowling, which express a lack of concentration, social disinterest, and negative feelings consistent with assumptions in depressive theories.
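Of the components listed above, label smoothing is the easiest to make concrete: hard one-hot targets are mixed with a uniform distribution before computing cross-entropy, which discourages overconfident predictions. The following is a minimal framework-agnostic numpy sketch, not the authors' implementation; the smoothing factor, class count, and example values are illustrative.

```python
import numpy as np

def smooth_labels(y_onehot, eps=0.1):
    """Mix one-hot targets with a uniform distribution over K classes."""
    k = y_onehot.shape[-1]
    return (1.0 - eps) * y_onehot + eps / k

def cross_entropy(p_pred, p_target):
    """Mean cross-entropy between predicted and (smoothed) target distributions."""
    return float(-(p_target * np.log(p_pred + 1e-12)).sum(axis=-1).mean())

# Hypothetical two-class example: depressed vs. non-depressed
y = np.array([[1.0, 0.0], [0.0, 1.0]])
y_smooth = smooth_labels(y, eps=0.1)      # [[0.95, 0.05], [0.05, 0.95]]
pred = np.array([[0.9, 0.1], [0.2, 0.8]])
loss = cross_entropy(pred, y_smooth)      # smaller than with eps = 0 only if pred is overconfident
assert loss > 0.0
```

Smoothed targets still sum to 1 per sample, so the quantity above remains a valid cross-entropy between distributions.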
Affiliation(s)
- Yanisa Mahayossanunt
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Natawut Nupairoj
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Center of Excellence in Digital and AI Innovation for Mental Health (AIMET), Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Solaphat Hemrungrojn
- Center of Excellence in Digital and AI Innovation for Mental Health (AIMET), Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Department of Psychiatry, Faculty of Medicine, Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Cognitive Fitness and Biopsychiatry Technology Research Unit, Faculty of Medicine, Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Peerapon Vateekul
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Center of Excellence in Digital and AI Innovation for Mental Health (AIMET), Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand
- Cognitive Fitness and Biopsychiatry Technology Research Unit, Faculty of Medicine, Chulalongkorn University, Phayathai Rd, Pathumwan, Bangkok 10330, Thailand

18
Portnova GV, Podlepich VV, Skorokhodov IV. Patients With Better Outcome Have Higher ERP Response to Emotional Auditory Stimuli. J Clin Neurophysiol 2023; 40:634-640. [PMID: 37931164 DOI: 10.1097/wnp.0000000000000938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
PURPOSE Accuracy of outcome prognosis is one of the most important tasks of coma arousal therapy. Reactions to sensory stimuli are the most significant predictor of the restoration of consciousness and cognitive functions after a brain injury. A paradigm that includes ERP registration has the advantage of visualizing stimulus processing in detail. The authors aimed to investigate the perception and discrimination of emotionally significant sounds (crying and laughter) in coma patients with different consciousness-restoration prognoses. METHODS EEG was recorded in 24 comatose patients with different outcomes (scored with the Glasgow Outcome Scale-Extended) and 32 healthy volunteers. The authors presented sounds of crying and laughter, and ERPs to the sound stimulation were calculated. RESULTS Correlations between ERP components and the Glasgow Outcome Scale-Extended score were analyzed. P200 (r = 0.6, P = 0.0014) and N200 amplitudes (r = -0.56, P = 0.0037) for emotional sounds correlated with the Glasgow Outcome Scale-Extended score. In the control group, significant differences in P300 and N400 amplitudes reflected differential responses to sounds of crying versus laughter. Unlike the control group, comatose participants with good outcomes produced similar electrical activity to pleasant and unpleasant emotional stimuli. CONCLUSIONS Comatose patients with good outcomes produced more prominent ERPs to emotional sounds. Even the good-outcome participants were unable to distinguish emotional sounds of different moods, which indicates the preservation of only robust mechanisms of sound processing. N200 and P200 amplitudes for emotional stimuli correlated significantly with outcome prognosis in coma patients.
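The outcome analysis reported above is, at its core, a Pearson correlation between a single ERP component amplitude per patient and that patient's Glasgow Outcome Scale-Extended score. A minimal numpy sketch of that computation follows; the amplitude and score values are made up for illustration and are not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical per-patient values: P200 amplitude (uV) and GOS-E score
p200 = [2.1, 3.4, 1.0, 4.2, 2.8, 3.9]
gose = [3, 5, 2, 7, 4, 6]
r = pearson_r(p200, gose)
# A positive r, as in the reported P200 result (r = 0.6); here the toy data
# are nearly linear, so r is close to 1.
assert 0.9 < r <= 1.0
```

In practice one would also compute a p-value (e.g., via `scipy.stats.pearsonr`) before interpreting the coefficient, as the study does.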
Affiliation(s)
- Galina V Portnova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Vitaliy V Podlepich
- Federal State Autonomous Institution N. N. Burdenko National Medical Research Center of Neurosurgery of the Ministry of Health of the Russian Federation, Moscow, Russian Federation
- Ivan V Skorokhodov
- Rehabilitation Center for Children with Autistic Spectrum Disorders "Our Sunny World", Moscow, Russia
- Pushkin State Russian Language Institute, Moscow, Russia

19
Vaessen M, Van der Heijden K, de Gelder B. Modality-specific brain representations during automatic processing of face, voice and body expressions. Front Neurosci 2023; 17:1132088. [PMID: 37869514 PMCID: PMC10587395 DOI: 10.3389/fnins.2023.1132088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2022] [Accepted: 09/05/2023] [Indexed: 10/24/2023] Open
Abstract
A central question in affective science, and one that is relevant for its clinical applications, is how emotions provided by different stimuli are experienced and represented in the brain. On the traditional view, emotional signals are recognized with the help of emotion concepts that are typically used in descriptions of mental states and emotional experiences, irrespective of the sensory modality. This perspective motivated the search for abstract representations of emotions in the brain, shared across variations in stimulus type (face, body, voice) and sensory origin (visual, auditory). On the other hand, emotion signals such as an aggressive gesture trigger rapid automatic behavioral responses, and this may take place before, or independently of, a full abstract representation of the emotion. This argues in favor of specific emotion signals that may trigger rapid adaptive behavior solely by mobilizing modality- and stimulus-specific brain representations, without relying on higher-order abstract emotion categories. To test this hypothesis, we presented participants with naturalistic dynamic emotion expressions of the face, the whole body, or the voice in a functional magnetic resonance imaging (fMRI) study. To focus on automatic emotion processing and sidestep explicit concept-based emotion recognition, participants performed an unrelated target-detection task presented in a different sensory modality than the stimulus. Using multivariate analyses to assess neural activity patterns in response to the different stimulus types, we reveal a stimulus-category- and modality-specific brain organization of affective signals. Our findings are consistent with the notion that under ecological conditions, emotion expressions of the face, body, and voice may have different functional roles in triggering rapid adaptive behavior, even if, when viewed from an abstract conceptual vantage point, they may all exemplify the same emotion. This has implications for a neuroethologically grounded emotion research program that should start from detailed behavioral observations of how face, body, and voice expressions function in naturalistic contexts.
20
Mayorova L, Portnova G, Skorokhodov I. Cortical Response Variation with Social and Non-Social Affective Touch Processing in the Glabrous and Hairy Skin of the Leg: A Pilot fMRI Study. SENSORS (BASEL, SWITZERLAND) 2023; 23:7881. [PMID: 37765936 PMCID: PMC10538157 DOI: 10.3390/s23187881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Revised: 08/12/2023] [Accepted: 08/21/2023] [Indexed: 09/29/2023]
Abstract
Despite the crucial role of touch in social development and its importance for social interactions, there has been very little functional magnetic resonance imaging (fMRI) research on the brain mechanisms underlying social touch processing. Moreover, there has been very little research on the perception of social touch in the lower extremities in humans, even though this information could expand our understanding of the mechanisms of the C-tactile system. Here, variations in the neural response to stimulation by social and non-social affective leg touch were investigated using fMRI. Participants received a slow (3-5 cm/s) stroking social touch (hand, skin-to-skin) and a non-social touch (peacock feather) to the hairy skin of the shin and to the glabrous skin of the foot sole. Stimulation of the glabrous skin of the foot sole, regardless of the type of stimulus, elicited a much more widespread cortical response, including structures such as the medial segment of the precentral gyri, left precentral gyrus, bilateral putamen, anterior insula, left postcentral gyrus, right thalamus, and pallidum. Stimulation of the hairy skin of the shin elicited a relatively greater response in the left middle cingulate gyrus, left angular gyrus, left frontal eye field, bilateral anterior prefrontal cortex, and left frontal pole. Activation of brain structures, some of which belong to the "social brain" (the pre- and postcentral gyri bilaterally, superior and middle occipital gyri bilaterally, left middle and superior temporal gyri, right anterior cingulate gyrus and caudate, left middle and inferior frontal gyri, and the left lateral ventricle area), was associated with the perception of non-social stimuli on the leg. The left medial segment of the pre- and postcentral gyri, left postcentral gyrus and precuneus, bilateral parietal operculum, right planum temporale, left central operculum, and left thalamus proper showed greater activation for social tactile touch. Some cortical regions responded specifically to hand and feather touch in the foot sole region: the posterior insula and precentral gyrus; putamen, pallidum, and anterior insula; superior parietal cortex; transverse temporal gyrus and parietal operculum; and supramarginal gyrus and planum temporale. Subjective assessment of stimulus ticklishness was related to activation of the left cuneal region. Our results contribute to understanding the physiology of the perception of social and non-social tactile stimuli and the C-tactile system, including its evolution, and they have clinical relevance in terms of environmental enrichment.
Affiliation(s)
- Larisa Mayorova
- Laboratory of Physiology of Sensory Systems, Institute of Higher Nervous Activity and Neurophysiology of Russian Academy of Science, 117485 Moscow, Russia
- Laboratory for the Study of Tactile Communication, Pushkin State Russian Language Institute, 117485 Moscow, Russia
- Galina Portnova
- Laboratory for the Study of Tactile Communication, Pushkin State Russian Language Institute, 117485 Moscow, Russia
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of Russian Academy of Science, 117485 Moscow, Russia
- Ivan Skorokhodov
- Laboratory for the Study of Tactile Communication, Pushkin State Russian Language Institute, 117485 Moscow, Russia

21
K A, Prasad S, Chakrabarty M. Trait anxiety modulates the detection sensitivity of negative affect in speech: an online pilot study. Front Behav Neurosci 2023; 17:1240043. [PMID: 37744950 PMCID: PMC10512416 DOI: 10.3389/fnbeh.2023.1240043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 08/21/2023] [Indexed: 09/26/2023] Open
Abstract
Acoustic perception of emotions in speech is relevant for humans to navigate the social environment optimally. While sensory perception is known to be influenced by ambient noise and by internal bodily states (e.g., emotional arousal and anxiety), their relationship to human auditory perception is relatively less understood. In a supervised, online pilot experiment outside the artificially controlled laboratory environment, we asked whether the detection sensitivity of emotions conveyed by human speech-in-noise (acoustic signals) varies between individuals with relatively lower and higher levels of subclinical trait anxiety. In the task, participants (n = 28) discriminated the target emotion conveyed by temporally unpredictable acoustic signals (signal-to-noise ratio = 10 dB), which were manipulated at four levels (Happy, Neutral, Fear, and Disgust). We calculated the empirical area under the curve (a measure of acoustic signal detection sensitivity) based on signal detection theory to answer our questions. Individuals with high trait anxiety, relative to those with low trait anxiety, showed significantly lower detection sensitivities to acoustic signals of the negative emotions (Disgust and Fear) and significantly lower detection sensitivities when averaged across all emotions. The results from this pilot study, with a small but statistically relevant sample size, suggest that trait-anxiety levels influence the overall acoustic detection of speech-in-noise, especially signals conveying threatening/negative affect. The findings are relevant for future research on acoustic perception anomalies underlying affective traits and disorders.
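The "empirical area under the curve" used above is the nonparametric signal-detection sensitivity measure: the probability that a randomly chosen signal-present response exceeds a signal-absent one, which equals the normalized Mann-Whitney U statistic computed directly from the ratings. A minimal numpy sketch, with illustrative ratings rather than the study's data:

```python
import numpy as np

def empirical_auc(signal_ratings, noise_ratings):
    """Empirical AUC: P(signal rating > noise rating), ties counted as 0.5.
    Equivalent to the Mann-Whitney U statistic divided by n_signal * n_noise."""
    s = np.asarray(signal_ratings, float)[:, None]
    n = np.asarray(noise_ratings, float)[None, :]
    wins = (s > n).sum() + 0.5 * (s == n).sum()  # pairwise comparisons
    return float(wins / (s.size * n.size))

# Hypothetical confidence ratings for emotion-present vs. noise-only trials
auc = empirical_auc([4, 5, 3, 5], [2, 3, 1, 4])
assert 0.5 < auc <= 1.0  # above chance (0.5) means detectable signal
```

An AUC of 0.5 is chance performance and 1.0 is perfect separation, which is what makes it a convenient sensitivity index for comparing the high- and low-anxiety groups.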
Affiliation(s)
- Achyuthanand K
- Department of Computational Biology, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Saurabh Prasad
- Department of Computer Science and Engineering, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Mrinmoy Chakrabarty
- Department of Social Sciences and Humanities, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Centre for Design and New Media, Indraprastha Institute of Information Technology Delhi, New Delhi, India

22
Jeong DK, Kim HG, Kim JY. Emotion Recognition Using Hierarchical Spatiotemporal Electroencephalogram Information from Local to Global Brain Regions. Bioengineering (Basel) 2023; 10:1040. [PMID: 37760143 PMCID: PMC10525488 DOI: 10.3390/bioengineering10091040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/26/2023] [Accepted: 08/28/2023] [Indexed: 09/29/2023] Open
Abstract
To understand human emotional states, local activities in various regions of the cerebral cortex and the interactions among different brain regions must be considered. This paper proposes a hierarchical emotional-context feature learning model that improves multichannel electroencephalography (EEG)-based emotion recognition by learning spatiotemporal EEG features from local brain regions up to the global brain level. The proposed method comprises a regional brain-level encoding module, a global brain-level encoding module, and a classifier. First, multichannel EEG signals, grouped into nine regions based on the functional roles of the brain, are input into the regional brain-level encoding module to learn local spatiotemporal information. Subsequently, the global brain-level encoding module improves emotion classification performance by integrating the local spatiotemporal information from the various brain regions to learn global context features of brain regions related to emotions. We applied a two-layer bidirectional gated recurrent unit (BGRU) with self-attention in the regional brain-level module and a one-layer BGRU with self-attention in the global brain-level module. Experiments were conducted on three datasets to evaluate the EEG-based emotion recognition performance of the proposed method. The results show that the proposed method achieves superior performance by better reflecting the characteristics of multichannel EEG signals than state-of-the-art methods.
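The hierarchical idea above — pool each region's feature sequence with attention, then attend over the pooled regional features to form a global representation — can be sketched without a deep-learning framework. This is a simplified reading, not the authors' BGRU implementation: the scoring vector, feature sizes, and random inputs are all illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(h, w):
    """Self-attention pooling: score each row of h (T x D) with vector w (D,),
    softmax the scores, and return the attention-weighted sum (D,)."""
    scores = softmax(h @ w)   # (T,) attention weights, summing to 1
    return scores @ h         # (D,) pooled feature vector

rng = np.random.default_rng(0)
# Nine brain regions, each yielding a (timesteps x features) hidden sequence
regional = [rng.standard_normal((20, 8)) for _ in range(9)]
w = rng.standard_normal(8)  # shared scoring vector (illustrative)

# Regional level: pool each region; global level: stack and pool again
region_feats = np.stack([attention_pool(h, w) for h in regional])  # (9, 8)
global_feat = attention_pool(region_feats, w)                      # (8,)
assert global_feat.shape == (8,)
```

In the paper the per-region sequences would come from the two-layer BGRU and the scoring would be learned; the two-stage pooling structure is the point of the sketch.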
Affiliation(s)
- Dong-Ki Jeong
- Department of Electronic Convergence Engineering, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea;
- Hyoung-Gook Kim
- Department of Electronic Convergence Engineering, Kwangwoon University, 20 Gwangun-ro, Nowon-gu, Seoul 01897, Republic of Korea;
- Jin-Young Kim
- Department of ICT Convergence System Engineering, Chonnam National University, 77 Yongbong-ro, Buk-gu, Gwangju 61186, Republic of Korea;
23
Landsiedel J, Koldewyn K. Auditory dyadic interactions through the "eye" of the social brain: How visual is the posterior STS interaction region? IMAGING NEUROSCIENCE (CAMBRIDGE, MASS.) 2023; 1:1-20. [PMID: 37719835 PMCID: PMC10503480 DOI: 10.1162/imag_a_00003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Accepted: 05/17/2023] [Indexed: 09/19/2023]
Abstract
Human interactions contain potent social cues that meet not only the eye but also the ear. Although research has identified a region in the posterior superior temporal sulcus as being particularly sensitive to visually presented social interactions (SI-pSTS), its response to auditory interactions has not been tested. Here, we used fMRI to explore brain response to auditory interactions, with a focus on temporal regions known to be important in auditory processing and social interaction perception. In Experiment 1, monolingual participants listened to two-speaker conversations (intact or sentence-scrambled) and one-speaker narrations in both a known and an unknown language. Speaker number and conversational coherence were explored in separately localised regions of interest (ROIs). In Experiment 2, bilingual participants were scanned to explore the role of language comprehension. Combining univariate and multivariate analyses, we found initial evidence for a heteromodal response to social interactions in SI-pSTS. Specifically, right SI-pSTS preferred auditory interactions over control stimuli and represented information about both speaker number and interactive coherence. Bilateral temporal voice areas (TVA) showed a similar, but less specific, profile. Exploratory analyses identified another auditory-interaction sensitive area in anterior STS. Indeed, direct comparison suggests modality-specific tuning, with SI-pSTS preferring visual information while aSTS prefers auditory information. Altogether, these results suggest that right SI-pSTS is a heteromodal region that represents information about social interactions in both visual and auditory domains. Future work is needed to clarify the roles of TVA and aSTS in auditory interaction perception and further probe right SI-pSTS interaction-selectivity using non-semantic prosodic cues.
Affiliation(s)
- Julia Landsiedel
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Kami Koldewyn
- Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
24
Della Longa L, Carnevali L, Farroni T. The role of affective touch in modulating emotion processing among preschool children. J Exp Child Psychol 2023; 235:105726. [PMID: 37336064 DOI: 10.1016/j.jecp.2023.105726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 05/26/2023] [Accepted: 05/29/2023] [Indexed: 06/21/2023]
Abstract
Recognizing emotional expressions is a prerequisite for understanding others' feelings and intentions, a key component of social interactions that develops throughout childhood. In multisensory social environments, touch may be crucial for emotion processing, linking external sensory information with internal affective states. The current study investigated whether affective touch facilitates recognition of emotional expressions throughout childhood. Preschool children (N = 121 3- to 6-year-olds) were presented with different tactile stimulations followed by an emotion-matching task. Results revealed that affective touch fosters the recognition of negative emotions and increases the speed of association of positive emotions, highlighting the centrality of tactile experiences for socioemotional understanding. The current research opens new perspectives on how to support emotional recognition with potential consequences for the development of social functioning.
Affiliation(s)
- Letizia Della Longa
- Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy.
- Laura Carnevali
- Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy
- Teresa Farroni
- Department of Developmental Psychology and Socialization, University of Padova, 35131 Padova, Italy
25
Gao C, Uchitomi H, Miyake Y. Influence of Multimodal Emotional Stimulations on Brain Activity: An Electroencephalographic Study. SENSORS (BASEL, SWITZERLAND) 2023; 23:4801. [PMID: 37430714 PMCID: PMC10221168 DOI: 10.3390/s23104801] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 05/05/2023] [Accepted: 05/12/2023] [Indexed: 07/12/2023]
Abstract
This study aimed to reveal the influence of emotional valence and sensory modality on neural activity in response to multimodal emotional stimuli using scalp EEG. Twenty healthy participants completed an emotional multimodal stimulation experiment with three stimulus modalities (audio, visual, and audio-visual), all drawn from the same video source with two emotional valences (pleasant or unpleasant), and EEG data were collected under six experimental conditions and one resting state. We analyzed power spectral density (PSD) and event-related potential (ERP) components in response to the multimodal emotional stimuli, for spectral and temporal analysis. The PSD results showed that single-modality (audio-only/visual-only) emotional stimulation differed from multimodal (audio-visual) stimulation across a wide range of brain regions and frequency bands, and that these differences arose from the change in modality rather than from the change in emotional degree. The most pronounced N200-to-P300 potential shifts occurred under monomodal rather than multimodal emotional stimulation. This study suggests that emotional saliency and sensory processing efficiency play a significant role in shaping neural activity during multimodal emotional stimulation, with sensory modality being the more influential factor in PSD. These findings contribute to our understanding of the neural mechanisms involved in multimodal emotional stimulation.
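The PSD analysis this abstract refers to can be illustrated with a bare-bones one-sided periodogram; a minimal sketch, assuming a single channel and a signal exactly periodic in the window (real EEG pipelines use Welch averaging over epochs and many channels):

```python
import math

def periodogram(x, fs):
    """One-sided power spectral density of a real signal via a direct DFT.

    Minimal stand-in for the PSD analysis described above; a real EEG
    pipeline would average over epochs (Welch's method) and channels.
    """
    n = len(x)
    freqs, psd = [], []
    for k in range(n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = (re * re + im * im) / (fs * n)
        if 0 < k < n / 2:          # fold in the negative frequencies
            power *= 2
        freqs.append(k * fs / n)
        psd.append(power)
    return freqs, psd
```

Band power per condition (e.g., alpha, beta) would then be the sum of `psd` over the corresponding `freqs` range.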
Affiliation(s)
- Chenguang Gao
- Department of Computer Science, Tokyo Institute of Technology, Yokohama 226-8502, Japan; (H.U.); (Y.M.)
26
Heffer N, Dennie E, Ashwin C, Petrini K, Karl A. Multisensory processing of emotional cues predicts intrusive memories after virtual reality trauma. VIRTUAL REALITY 2023; 27:2043-2057. [PMID: 37614716 PMCID: PMC10442266 DOI: 10.1007/s10055-023-00784-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 03/03/2023] [Indexed: 08/25/2023]
Abstract
Research has shown that high trait anxiety can alter multisensory processing of threat cues (by amplifying integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm, where scenes showing a car accident trauma were presented using virtual reality, rather than a conventional 2D film. Multisensory processing was assessed prior to the trauma simulation using a forced choice emotion recognition paradigm with happy, sad and angry voice-only, face-only, audiovisual congruent (face and voice expressed matching emotions) and audiovisual incongruent expressions (face and voice expressed different emotions). We found that increased accuracy in recognising anger (but not happiness and sadness) in the audiovisual condition relative to the voice- and face-only conditions was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e. angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information The online version contains supplementary material available at 10.1007/s10055-023-00784-1.
Affiliation(s)
- Naomi Heffer
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- School of Sciences, Bath Spa University, Bath, UK
- Emma Dennie
- Mood Disorders Centre, University of Exeter, Exeter, UK
- Chris Ashwin
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- Centre for Applied Autism Research (CAAR), Bath, UK
- Karin Petrini
- Department of Psychology, University of Bath, Claverton Down, Bath, BA2 7AY UK
- The Centre for the Analysis of Motion, Entertainment Research and Applications (CAMERA), Bath, UK
- Anke Karl
- Mood Disorders Centre, University of Exeter, Exeter, UK
27
Oya R, Tanaka A. Touch and voice have different advantages in perceiving positive and negative emotions. Iperception 2023; 14:20416695231160420. [PMID: 36968320 PMCID: PMC10031610 DOI: 10.1177/20416695231160420] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/13/2023] [Indexed: 03/24/2023] Open
Abstract
Previous research has revealed that several emotions can be perceived via touch. What advantages does touch have over other nonverbal communication channels? In our study, we compared the perception of emotions from touch with that from voice to examine the advantages of each channel at the emotional valence level. In our experiment, the encoder expressed 12 different emotions by touching the decoder's arm or uttering a syllable /e/, and the decoder judged the emotion. The results showed that the categorical average accuracy of negative emotions was higher for voice than for touch, whereas that of positive emotions was marginally higher for touch than for voice. These results suggest that different channels (touch and voice) have different advantages for the perception of positive and negative emotions.
Affiliation(s)
- Rika Oya
- Graduate School of Humanities and Sciences, Tokyo Woman's Christian University, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Akihiro Tanaka
- Department of Psychology, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan
28
Owners' Beliefs regarding the Emotional Capabilities of Their Dogs and Cats. Animals (Basel) 2023; 13:ani13050820. [PMID: 36899676 PMCID: PMC10000035 DOI: 10.3390/ani13050820] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Revised: 02/15/2023] [Accepted: 02/22/2023] [Indexed: 03/12/2023] Open
Abstract
The correct interpretation of an animal's emotional state is crucial for successful human-animal interaction. When studying dog and cat emotional expressions, a key source of information is the pet owner, given the extensive interactions they have had with their pets. In this online survey we asked 438 owners whether their dogs and/or cats could express 22 different primary and secondary emotions, and to indicate the behavioral cues they relied upon to identify those expressed emotions. Overall, more emotions were reported in dogs compared to cats, both from owners that owned just one species and those that owned both. Although owners reported a comparable set of sources of behavioral cues (e.g., body posture, facial expression, and head posture) for dogs and cats in expressing the same emotion, distinct combinations tended to be associated with specific emotions in both cats and dogs. Furthermore, the number of emotions reported by dog owners was positively correlated with their personal experience with dogs but negatively correlated with their professional experience. The number of emotions reported in cats was higher in cat-only households compared to those that also owned dogs. These results provide a fertile ground for further empirical investigation of the emotional expressions of dogs and cats, aimed at validating specific emotions in these species.
29
Leung FYN, Stojanovik V, Micai M, Jiang C, Liu F. Emotion recognition in autism spectrum disorder across age groups: A cross-sectional investigation of various visual and auditory communicative domains. Autism Res 2023; 16:783-801. [PMID: 36727629 DOI: 10.1002/aur.2896] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Accepted: 01/19/2023] [Indexed: 02/03/2023]
Abstract
Previous research on emotion processing in autism spectrum disorder (ASD) has predominantly focused on human faces and speech prosody, with little attention paid to other domains such as nonhuman faces and music. In addition, emotion processing in different domains was often examined in separate studies, making it challenging to evaluate whether emotion recognition difficulties in ASD generalize across domains and age cohorts. The present study investigated: (i) the recognition of basic emotions (angry, scared, happy, and sad) across four domains (human faces, face-like objects, speech prosody, and song) in 38 autistic and 38 neurotypical (NT) children, adolescents, and adults in a forced-choice labeling task, and (ii) the impact of pitch and visual processing profiles on this ability. Results showed similar recognition accuracy between the ASD and NT groups across age groups for all domains and emotion types, although processing speed was slower in the ASD compared to the NT group. Age-related differences were seen in both groups, which varied by emotion, domain, and performance index. Visual processing style was associated with facial emotion recognition speed and pitch perception ability with auditory emotion recognition in the NT group but not in the ASD group. These findings suggest that autistic individuals may employ different emotion processing strategies compared to NT individuals, and that emotion recognition difficulties as manifested by slower response times may result from a generalized, rather than a domain-specific underlying mechanism that governs emotion recognition processes across domains in ASD.
Affiliation(s)
- Florence Y N Leung
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Department of Psychology, University of Bath, Bath, UK
- Vesna Stojanovik
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Martina Micai
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
30
Disentangling emotional signals in the brain: an ALE meta-analysis of vocal affect perception. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2023; 23:17-29. [PMID: 35945478 DOI: 10.3758/s13415-022-01030-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 07/24/2022] [Indexed: 11/08/2022]
Abstract
Recent advances in neuroimaging research on vocal emotion perception have revealed voice-sensitive areas specialized in processing affect. Experimental data on this subject are varied, investigating a wide range of emotions through different vocal signals and task demands. The present meta-analysis was designed to disentangle this diversity of results by summarizing neuroimaging data in the vocal emotion perception literature. Data from 44 experiments contrasting emotional and neutral voices were analyzed to assess brain areas involved in vocal affect perception in general, as well as depending on the type of voice signal (speech prosody or vocalizations), the task demands (implicit or explicit attention to emotions), and the specific emotion perceived. The results confirmed a consistent bilateral network of Emotional Voice Areas consisting of the superior temporal cortex and primary auditory regions. Specific activations and lateralization of these regions, as well as additional areas (insula, middle temporal gyrus), were further modulated by signal type and task demands. Exploring the sparser data on single emotions also suggested the recruitment of other regions (insula, inferior frontal gyrus, frontal operculum) for specific aspects of each emotion. These novel meta-analytic results suggest that while the bulk of vocal affect processing is localized in the STC, the complexity and variety of such vocal signals entails functional specificities in complex and varied cortical (and potentially subcortical) response pathways.
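ALE, the method named in the title, blurs each reported activation focus with a Gaussian and combines the resulting modeled-activation maps as a probabilistic union. A toy 1-D sketch of that core step (real ALE works on 3-D MNI/Talairach coordinates with sample-size-dependent kernels and permutation-based thresholding; the grid and sigma here are illustrative):

```python
import math

def ale_map(foci, grid, sigma=2.0):
    """Toy activation likelihood estimation (ALE) on a 1-D grid.

    Each reported focus is blurred with a Gaussian into a modeled
    activation (MA) map; maps are merged as a probabilistic union,
    ALE = 1 - prod(1 - MA_i), so converging foci reinforce each other.
    """
    result = []
    for x in grid:
        p_no_activation = 1.0
        for focus in foci:
            ma = math.exp(-((x - focus) ** 2) / (2 * sigma ** 2))
            p_no_activation *= 1.0 - ma
        result.append(1.0 - p_no_activation)
    return result
```

Two experiments reporting the same coordinate drive the ALE value at that location toward 1, which is what makes spatial convergence across studies detectable.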
31
Song Y, Zhao T. Inferring influence of people's emotions at court on defendant's emotions using a prediction model. Front Psychol 2023; 14:1131724. [PMID: 36949927 PMCID: PMC10025348 DOI: 10.3389/fpsyg.2023.1131724] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2022] [Accepted: 02/10/2023] [Indexed: 03/08/2023] Open
Abstract
People's emotions may be affected by the sound environment in court. A courtroom's sound environment usually consists of people's voices, such as those of the judge, the plaintiff, and the defendant, who typically express their emotions through their voices. Human communication relies heavily on emotion, and emotions may also reflect a person's condition. Therefore, people's emotions in court must be recognized, especially for vulnerable groups, and the impact of sound on the defendant's emotions and judgment must be inferred. However, emotions are difficult to recognize in a courtroom, and, to our knowledge, no existing study addresses the impact of sound on people in court. In our previous work, we developed a deep neural network model that infers people's emotions based on sound perception. The model uses a convolutional neural network and a long short-term memory network to extract features from speech signals and applies a dense neural network to infer emotions. Applying this model for emotion prediction based on courtroom sound, we explored the impact of sound on the defendant. Using voice data collected from fifty trial records, we demonstrate that the judge's voice can affect the defendant's emotions. Anger, neutrality, and fear are the three most common emotions of the defendant in court. In particular, a judge's voice expressing anger usually induces fear in the defendant, whereas the plaintiff's angry voice may not have a substantial impact on the defendant's emotions.
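The CNN → LSTM → dense pipeline described in this abstract can be sketched structurally. This is a hedged stand-in, not the authors' model: the smoothing kernel, the leaky-integrator recurrence (in place of a real LSTM), and the per-class weights are illustrative assumptions.

```python
import math

def conv1d(x, kernel):
    """'CNN' stage: valid-mode sliding dot product (no kernel flip)."""
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def recurrent_summary(seq, decay=0.5):
    """'LSTM' stage stand-in: a leaky integrator over the feature sequence."""
    h = 0.0
    for v in seq:
        h = decay * h + (1 - decay) * v
    return h

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def classify(signal, class_weights):
    """Dense stage: map the temporal summary to emotion probabilities."""
    features = conv1d(signal, [0.25, 0.5, 0.25])   # smoothing kernel, illustrative
    summary = recurrent_summary(features)
    return softmax([w * summary for w in class_weights])
```

The design point is the division of labor: convolution extracts local spectral/temporal features, the recurrence summarizes them over time, and the dense layer maps the summary to emotion classes.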
Affiliation(s)
- Yun Song
- Rule of Law Institute, Northwest University of Political Science and Law, Xi'an, China
- *Correspondence: Yun Song
- Tianyi Zhao
- School of Health and Medicine, Harbin Institute of Technology, Harbin, China
32
Moret-Tatay C, Mundi-Ricós P, Irigaray TQ. The Relationship between Face Processing, Cognitive and Affective Empathy. Behav Sci (Basel) 2022; 13:bs13010021. [PMID: 36661593 PMCID: PMC9854795 DOI: 10.3390/bs13010021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 12/28/2022] Open
Abstract
This study examined the relationship between affective and cognitive empathy scores and perceptual face-recognition skills. A total of 18 young adults participated. The Cognitive and Affective Empathy Test (TECA), the Eyes Test, and an experimental task were administered. The experimental task comprised two blocks, a presentation phase and a recognition phase, using the Karolinska battery of images expressing different emotions. Cognitive empathy sub-factors were related to the hit rate for recognizing surprised faces and for discarding faces of disgust; the latter was also related to perspective taking. Reaction time and cognitive empathy sub-factors were positively correlated with the recognition of disgust, surprise, and sadness, and perspective taking was directly related to the reaction time for discarding disgust. The relationships between affective empathy and the emotional face-recognition measures were not statistically significant. Knowledge of individual differences in cognitive and affective empathy, and of their relationship with behavioral responses such as the recognition or dismissal of emotional faces, is of interest for social interaction and psychotherapy.
Affiliation(s)
- Carmen Moret-Tatay
- MEB Lab, Universidad Católica de Valencia San Vicente Mártir, Avenida de la Ilustración 2, Burjassot, 46100 Valencia, Spain
- Paloma Mundi-Ricós
- MEB Lab, Universidad Católica de Valencia San Vicente Mártir, Avenida de la Ilustración 2, Burjassot, 46100 Valencia, Spain
- Tatiana Quarti Irigaray
- ARIHA, Pós-Graduate Program in Psychology, Pontifícia Universidade Católica do Rio Grande do Sul, Porto Alegre 91215-330, Brazil
33
Learning coordinated emotion representation between voice and face. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04216-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
34
Cultural differences in vocal expression analysis: Effects of task, language, and stimulus-related factors. PLoS One 2022; 17:e0275915. [PMID: 36215311 PMCID: PMC9550067 DOI: 10.1371/journal.pone.0275915] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 09/26/2022] [Indexed: 11/20/2022] Open
Abstract
Cultural context shapes the way that emotions are expressed and socially interpreted. Building on previous research looking at cultural differences in judgements of facial expressions, we examined how listeners recognize speech-embedded emotional expressions and make inferences about a speaker's feelings in relation to their vocal display. Canadian and Chinese participants categorized vocal expressions of emotions (anger, fear, happiness, sadness) expressed at different intensity levels in three languages (English, Mandarin, Hindi). In two additional tasks, participants rated the intensity of each emotional expression and the intensity of the speaker's feelings from the same stimuli. Each group was more accurate at recognizing emotions produced in their native language (in-group advantage). However, Canadian and Chinese participants both judged the speaker's feelings to be equivalent or more intense than their actual display (especially for highly aroused, negative emotions), suggesting that similar inference rules were applied to vocal expressions by the two cultures in this task. Our results provide new insights on how people categorize and interpret speech-embedded vocal expressions versus facial expressions and what cultural factors are at play.
35
Behavioral correlates of temporal attention biases during emotional prosody perception. Sci Rep 2022; 12:16754. [PMID: 36202849 PMCID: PMC9537340 DOI: 10.1038/s41598-022-20806-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 09/19/2022] [Indexed: 11/16/2022] Open
Abstract
Emotional prosody perception (EPP) unfolds over time given the intrinsically temporal nature of auditory stimuli, and has been shown to be modulated by spatial attention. Yet the influence of temporal attention (TA) on EPP remains largely unexplored. Studies of TA manipulate subjects' motor preparedness for an upcoming event: targets to be discriminated in short, attended trials arrive quickly, whereas targets in long, unattended trials arrive at a later time point. Here we used a classic paradigm manipulating TA to investigate its influence on behavioral responses during EPP (n = 100) and found that the TA bias was associated with slower reaction times (RTs) for angry but not neutral prosody, and only during short trials. Importantly, TA biases in accuracy were observed only for angry voices, especially during short trials, suggesting that neutral stimuli are less subject to TA biases. Moreover, emotional facilitation, with faster RTs for angry than for neutral voices, was observed when the stimuli were temporally attended and during short trials, suggesting an influential role of TA during EPP. Together, these results demonstrate for the first time the major influence of TA on RTs and behavioral performance while discriminating emotional prosody.
36
Goldenberg A, Schöne J, Huang Z, Sweeny TD, Ong DC, Brady TF, Robinson MM, Levari D, Zaki J, Gross JJ. Amplification in the evaluation of multiple emotional expressions over time. Nat Hum Behav 2022; 6:1408-1416. [PMID: 35760844 PMCID: PMC10263387 DOI: 10.1038/s41562-022-01390-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 05/16/2022] [Indexed: 11/09/2022]
Abstract
Social interactions are dynamic and unfold over time. To make sense of social interactions, people must aggregate sequential information into summary, global evaluations. But how do people do this? Here, to address this question, we conducted nine studies (N = 1,583) using a diverse set of stimuli. Our focus was a central aspect of social interaction-namely, the evaluation of others' emotional responses. The results suggest that when aggregating sequences of images and videos expressing varying degrees of emotion, perceivers overestimate the sequence's average emotional intensity. This tendency for overestimation is driven by stronger memory of more emotional expressions. A computational model supports this account and shows that amplification cannot be explained only by nonlinear perception of individual exemplars. Our results demonstrate an amplification effect in the perception of sequential emotional information, which may have implications for the many types of social interactions that involve repeated emotion estimation.
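The memory-weighting account in this abstract — amplification arises because more emotional expressions are remembered more strongly — can be captured in a few lines; `memory_bias` is an illustrative parameter, not a quantity from the paper.

```python
def estimated_mean(intensities, memory_bias=1.0):
    """Recalled average of a sequence of emotion intensities (0..1 scale).

    Implements the account above: weight each expression by how strongly
    it is remembered, with memory strength growing with intensity.
    memory_bias controls the steepness of that growth (0 = unbiased
    arithmetic mean) and is an illustrative assumption.
    """
    weights = [1.0 + memory_bias * i for i in intensities]
    weighted = sum(w * i for w, i in zip(weights, intensities))
    return weighted / sum(weights)
```

With any positive bias, the intensity-weighted average exceeds the true average whenever the sequence mixes weak and strong expressions, reproducing the overestimation effect in miniature.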
Affiliation(s)
- Amit Goldenberg
- Harvard Business School, Harvard University, Boston, MA, USA.
- Jonas Schöne
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Zi Huang
- Harvard Business School, Harvard University, Boston, MA, USA
- Desmond C Ong
- Department of Information Systems and Analytics, National University of Singapore, Singapore, Singapore
- Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore, Singapore
- David Levari
- Harvard Business School, Harvard University, Boston, MA, USA
- Jamil Zaki
- Department of Psychology, Stanford University, Stanford, CA, USA
- James J Gross
- Department of Psychology, Stanford University, Stanford, CA, USA
37
Wu YE, Hong W. Neural basis of prosocial behavior. Trends Neurosci 2022; 45:749-762. [PMID: 35853793 PMCID: PMC10039809 DOI: 10.1016/j.tins.2022.06.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2022] [Revised: 06/22/2022] [Accepted: 06/27/2022] [Indexed: 01/10/2023]
Abstract
The ability to behave in ways that benefit other individuals' well-being is among the most celebrated human characteristics crucial for social cohesiveness. Across mammalian species, animals display various forms of prosocial behaviors - comforting, helping, and resource sharing - to support others' emotions, goals, and/or material needs. In this review, we provide a cross-species view of the behavioral manifestations, proximate and ultimate drives, and neural mechanisms of prosocial behaviors. We summarize key findings from recent studies in humans and rodents that have shed light on the neural mechanisms underlying different processes essential for prosocial interactions, from perception and empathic sharing of others' states to prosocial decisions and actions.
Affiliation(s)
- Ye Emily Wu
- Department of Neurobiology and Department of Biological Chemistry, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
- Weizhe Hong
- Department of Neurobiology and Department of Biological Chemistry, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
38
Zhao Y, Wu C. Childhood maltreatment experiences and emotion perception in young Chinese adults: Sex as a moderator. Stress Health 2022; 38:666-678. [PMID: 34921491 DOI: 10.1002/smi.3122] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Revised: 12/07/2021] [Accepted: 12/14/2021] [Indexed: 12/14/2022]
Abstract
Men and women seem to perceive and react differently to emotional stimuli and have different susceptibilities to childhood trauma. With a cross-sectional design, this study aimed to investigate whether specific patterns of childhood-maltreatment experiences are associated with specific patterns of emotion perception and the sex differences in this relationship. A total of 173 adults rated valence, arousal, and dominance for 60 pictures (varying in pleasantness, unpleasantness, and neutral emotion) from the International Affective Picture System and completed the Childhood Trauma Questionnaire-Short Form. Using a partial least squares (PLS) regression analysis, after controlling for depressive and anxious states, recent stressful life events, personality, and cognitive reappraisal strategy, we identified a profile (linear combination) of childhood-maltreatment experiences (emotional neglect, physical neglect, and physical abuse) that was associated with a profile of emotion-perception dimensions (perceiving negative visual stimuli as more unpleasant and subservient, positive stimuli as more pleasant and dominant, and neutral stimuli as more arousing). This association pattern was significant only for the male participants. Hence, our findings suggest that childhood maltreatment might make men more "emotional" in their adulthood. The impact of this childhood-maltreatment-associated alteration in emotion perception on male mental health needs further investigation.
Affiliation(s)
- Yiran Zhao
- School of Nursing, Peking University Health Science Center, Beijing, China
- Chao Wu
- School of Nursing, Peking University Health Science Center, Beijing, China
39
Zimmer U, Wendt M, Pacharra M. Enhancing allocation of visual attention with emotional cues presented in two sensory modalities. Behavioral and Brain Functions 2022; 18:10. [PMID: 36138461] [PMCID: PMC9494825] [DOI: 10.1186/s12993-022-00195-3]
Abstract
Background
Responses to a visual target stimulus in an exogenous spatial cueing paradigm are usually faster if cue and target occur in the same rather than in different locations (i.e., valid vs. invalid), although perceptual conditions for cue and target processing are otherwise equivalent. This cueing validity effect can be increased by adding emotional (task-unrelated) content to the cue. In contrast, adding a secondary non-emotional sensory modality to the cue (bimodal cueing) has not consistently yielded increased cueing effects in previous studies. Here, we examined the interplay of bimodally presented cue content (i.e., emotional vs. neutral) by using combined visual-auditory cues. Specifically, the current ERP study investigated whether bimodal presentation of fear-related content amplifies deployment of spatial attention to the cued location.
Results
A behavioral cueing validity effect occurred selectively in trials in which both aspects of the cue (i.e., face and voice) were related to fear. Likewise, the posterior contra-ipsilateral P1 activity in valid trials was significantly larger when both cues were fear-related than in all other cue conditions. Although the P3a component appeared uniformly increased in invalidly cued trials, regardless of cue content, a positive LPC deflection, starting about 450 ms after target onset, was again maximal for the validity contrast in trials with bimodal presentation of fear-related cues.
Conclusions
Simultaneous presentation of fear-related stimulus information in the visual and auditory modalities appears to increase sustained visual attention (impairing disengagement of attention from the cued location) and to affect relatively late stages of target processing.
Affiliation(s)
- Ulrike Zimmer
- Faculty of Human Sciences, Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany
- ICAN Institute of Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany
- Mike Wendt
- Faculty of Human Sciences, Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany
- ICAN Institute of Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany
- Marlene Pacharra
- Faculty of Psychology, Department of Biopsychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
40
Plate RC, Schapiro AC, Waller R. Emotional Faces Facilitate Statistical Learning. Affective Science 2022; 3:662-672. [PMID: 36385906] [PMCID: PMC9537398] [DOI: 10.1007/s42761-022-00130-9]
Abstract
Detecting regularities and extracting patterns is a vital skill for organizing complex information in our environments. Statistical learning, a process by which we detect regularities by attending to relationships between cues in our environment, contributes to knowledge acquisition across myriad domains. However, less is known about how emotional cues, specifically facial configurations of emotion, influence statistical learning. Here, we tested two pre-registered aims to advance knowledge about emotional signals and statistical learning: (1) we examined statistical learning in the context of emotional compared to non-emotional information, and (2) we assessed how emotional congruency (i.e., whether facial stimuli conveyed the same or different emotions) influenced regularity extraction. We demonstrated statistical learning in the context of emotional signals. Further, we showed that statistical learning occurs more efficiently in the context of emotional faces. We also established that congruent cues benefited an online measure of statistical learning but had varied effects when statistical learning was assessed via a post-exposure recognition test. The results shed light on how affective signals influence well-studied cognitive skills and address a knowledge gap about how cue congruency impacts statistical learning, including how emotional cues might guide predictions in our social world.
Affiliation(s)
- Rista C. Plate
- Department of Psychology, University of Pennsylvania, Levin Building, 425 S. University Ave, Philadelphia, PA 19104, USA
- Anna C. Schapiro
- Department of Psychology, University of Pennsylvania, Levin Building, 425 S. University Ave, Philadelphia, PA 19104, USA
- Rebecca Waller
- Department of Psychology, University of Pennsylvania, Levin Building, 425 S. University Ave, Philadelphia, PA 19104, USA
41
Suslow T, Kersting A. The Relations of Attention to and Clarity of Feelings With Facial Affect Perception. Front Psychol 2022; 13:819902. [PMID: 35874362] [PMCID: PMC9298753] [DOI: 10.3389/fpsyg.2022.819902]
Abstract
Attention to emotions and emotional clarity are core dimensions of individual differences in emotion awareness. Findings from prior research based on self-report indicate that attention to and recognition of one's own emotions are related to attention to and recognition of other people's emotions. In the present experimental study, we examined the relations of attention to and clarity of emotions with the efficiency of facial affect perception. Moreover, it was explored whether attention to and clarity of emotions are linked to negative interpretations of facial expressions. A perception of facial expressions (PFE) task based on schematic faces with neutral, ambiguous, or unambiguous emotional expressions and a gender decision task were administered to healthy individuals along with measures of emotion awareness, state and trait anxiety, depression, and verbal intelligence. Participants had to decide how much the faces express six basic affects. Evaluative ratings and decision latencies were analyzed. Attention to feelings was negatively correlated with evaluative decision latency, whereas clarity of feelings was not related to decision latency in the PFE task. Attention to feelings was positively correlated with the perception of negative affects in ambiguous faces. Attention to feelings and emotional clarity were not related to gender decision latency. According to our results, dispositional attention to feelings goes along with an enhanced efficiency of facial affect perception. Habitually paying attention to one's own emotions may facilitate processing of external emotional information. Preliminary evidence was obtained suggesting a relationship of dispositional attention to feelings with negative interpretations of facial expressions.
Affiliation(s)
- Thomas Suslow
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
- Anette Kersting
- Department of Psychosomatic Medicine and Psychotherapy, University of Leipzig Medical Center, Leipzig, Germany
42
Maltezou-Papastylianou C, Russo R, Wallace D, Harmsworth C, Paulmann S. Different stages of emotional prosody processing in healthy ageing – evidence from behavioural responses, ERPs, tDCS, and tRNS. PLoS One 2022; 17:e0270934. [PMID: 35862317] [PMCID: PMC9302842] [DOI: 10.1371/journal.pone.0270934]
Abstract
Past research suggests that the ability to recognise the emotional intent of a speaker decreases as a function of age. Yet few studies have looked at the underlying cause of this effect in a systematic way. This paper builds on the view that emotional prosody perception is a multi-stage process and explores which step of the recognition processing chain is impaired in healthy ageing, using time-sensitive event-related brain potentials (ERPs). Results suggest that early processes linked to salience detection, as reflected in the P200 component, and the initial build-up of emotional representation, linked to a subsequent negative ERP component, are largely unaffected in healthy ageing. The two groups do, however, differ in emotional prosody recognition: older participants recognise the emotional intentions of speakers less well than younger participants do. These findings were followed up by two neuro-stimulation studies specifically targeting the inferior frontal cortex to test whether recognition improves during active stimulation relative to sham. Overall, results suggest that neither tDCS nor high-frequency tRNS stimulation at 2 mA for 30 minutes facilitates emotional prosody recognition rates in healthy older adults.
Affiliation(s)
- Riccardo Russo
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Department of Brain and Behavioural Sciences, Università di Pavia, Pavia, Italy
- Denise Wallace
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Chelsea Harmsworth
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
- Silke Paulmann
- Department of Psychology and Centre for Brain Science, University of Essex, Colchester, United Kingdom
43
How Live Streaming Interactions and Their Visual Stimuli Affect Users' Sustained Engagement Behaviour – A Comparative Experiment Using Live and Virtual Live Streaming. Sustainability 2022. [DOI: 10.3390/su14148907]
Abstract
With the massive expansion of live streaming, enhancing the sustained engagement of users has become a key issue in ensuring its success. This study examines the relationships among real-time interaction, user perceptions, and users' intention to keep using live streaming, and whether these relationships differ between a live and a virtual live streaming environment. Using partial least squares (PLS) structural equation modelling (SEM), this paper analyses 240 valid questionnaire responses and finds a link between real-time interactions, visual stimuli, and users' sustained engagement. Users' active interactions while watching live streaming videos significantly affect their perceptions of social presence and trust, which in turn affect their sustained engagement behaviour. These effects were found to vary with differences in the live streaming environment. The findings of this paper will play a positive role in understanding the differences between various live streaming environments, in optimizing the design of live streaming content, and in improving live streaming users' perceptions of emotional warmth.
44
Kuttenreich AM, von Piekartz H, Heim S. Is There a Difference in Facial Emotion Recognition after Stroke with vs. without Central Facial Paresis? Diagnostics (Basel) 2022; 12:1721. [PMID: 35885625] [PMCID: PMC9325259] [DOI: 10.3390/diagnostics12071721]
Abstract
The Facial Feedback Hypothesis (FFH) states that facial emotion recognition is based on the imitation of facial emotional expressions and the processing of physiological feedback. In the light of limited and contradictory evidence, this hypothesis is still being debated. Therefore, in the present study, emotion recognition was tested in patients with central facial paresis after stroke. Performance in facial vs. auditory emotion recognition was assessed in patients with vs. without facial paresis. The accuracy of objective facial emotion recognition was significantly lower in patients with vs. without facial paresis and also in comparison to healthy controls. Moreover, for patients with facial paresis, the accuracy measure for facial emotion recognition was significantly worse than that for auditory emotion recognition. Finally, in patients with facial paresis, the subjective judgements of their own facial emotion recognition abilities differed strongly from their objective performances. This pattern of results demonstrates a specific deficit in facial emotion recognition in central facial paresis and thus provides support for the FFH and points out certain effects of stroke.
Affiliation(s)
- Anna-Maria Kuttenreich
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Center of Rare Diseases Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Correspondence: ; Tel.: +49-3641-9329398
- Harry von Piekartz
- Department of Physical Therapy and Rehabilitation Science, Osnabrück University of Applied Sciences, Albrechtstr. 30, 49076 Osnabrück, Germany
- Stefan Heim
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Leo-Brand-Str. 5, 52428 Jülich, Germany
45
MacLean KE. Designing affective haptic experience for wellness and social communication: where designers need affective neuroscience and psychology. Curr Opin Behav Sci 2022. [DOI: 10.1016/j.cobeha.2022.101113]
46
Plank IS, Hindi Attar C, Kunas SL, Dziobek I, Bermpohl F. Motherhood and theory of mind: increased activation in the posterior cingulate cortex and insulae. Soc Cogn Affect Neurosci 2022; 17:470-481. [PMID: 34592763] [PMCID: PMC9071419] [DOI: 10.1093/scan/nsab109]
Abstract
Despite growing evidence on the effects of parenthood on social understanding, little is known about the influence of parenthood on theory of mind (ToM), the capacity to infer the mental and affective states of others. It is also unclear whether any possible effects of parenthood on ToM would generalise to inferring the states of adults or are specific to children. We investigated neural activation in mothers and women without children while they predicted action intentions from child and adult faces. Region-of-interest analyses showed stronger activation in mothers in the bilateral posterior cingulate cortex, precuneus (ToM-related areas), and insulae (emotion-related areas). Whole-brain analyses revealed that mothers, compared to non-mothers, more strongly activated areas including the left angular gyrus and the ventral prefrontal cortex but less strongly activated the right supramarginal gyrus and the dorsal prefrontal cortex. These differences were not specific to child stimuli but occurred in response to both adult and child stimuli and might indicate that mothers and non-mothers employ different strategies to infer action intentions from affective faces. Whether these general differences in affective ToM between mothers and non-mothers are due to biological or experience-related changes should be the subject of further investigation.
Affiliation(s)
- Irene Sophia Plank
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Department of Psychiatry and Neurosciences | CCM, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
- Einstein Center for Neurosciences, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
- Catherine Hindi Attar
- Department of Psychiatry and Neurosciences | CCM, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
- Stefanie Lydia Kunas
- Department of Psychiatry and Neurosciences | CCM, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
- Isabel Dziobek
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Einstein Center for Neurosciences, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
- Felix Bermpohl
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Department of Psychiatry and Neurosciences | CCM, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
- Einstein Center for Neurosciences, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin 10117, Germany
47
Kuttenreich AM, Volk GF, Guntinas-Lichius O, von Piekartz H, Heim S. Facial Emotion Recognition in Patients with Post-Paralytic Facial Synkinesis – A Present Competence. Diagnostics (Basel) 2022; 12:1138. [PMID: 35626294] [PMCID: PMC9139660] [DOI: 10.3390/diagnostics12051138]
Abstract
Facial palsy is a movement disorder with impacts on verbal and nonverbal communication. The aim of this study was to investigate the effects of post-paralytic facial synkinesis on facial emotion recognition. In a prospective cross-sectional study, we compared facial emotion recognition between n = 30 patients with post-paralytic facial synkinesis (mean disease duration: 1581 ± 1237 days) and n = 30 healthy controls matched in sex, age, and education level. Facial emotion recognition was measured by the Myfacetraining Program. As an intra-individual control condition, auditory emotion recognition was assessed via the Montreal Affective Voices. Moreover, self-assessed emotion recognition was studied with questionnaires. On average, there was no significant difference between patients and healthy controls in either facial or auditory emotion recognition. The outcomes of the measurements as well as the self-reports were comparable between patients and healthy controls. In contrast to previous studies in patients with peripheral and central facial palsy, these results indicate an unimpaired ability for facial emotion recognition. Impaired facial and auditory emotion recognition was detected only in individual patients with pronounced facial asymmetry and severe facial synkinesis. Further studies should compare emotion recognition in patients with pronounced facial asymmetry in acute and chronic peripheral paralysis and in central and peripheral facial palsy.
Affiliation(s)
- Anna-Maria Kuttenreich
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Center of Rare Diseases Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Gerd Fabian Volk
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Center of Rare Diseases Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Center of Rare Diseases Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Harry von Piekartz
- Department of Physical Therapy and Rehabilitation Science, Osnabrück University of Applied Sciences, Albrechtstr. 30, 49076 Osnabrück, Germany
- Stefan Heim
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Leo-Brand-Strasse 5, 52428 Jülich, Germany
48
Ocklenburg S, Peterburs J, Mundorf A. Hemispheric asymmetries in the amygdala: a comparative primer. Prog Neurobiol 2022; 214:102283. [DOI: 10.1016/j.pneurobio.2022.102283]
49
Brain functional connectivities that mediate the association between childhood traumatic events, and adult mental health and cognition. EBioMedicine 2022; 79:104002. [PMID: 35472671] [PMCID: PMC9058958] [DOI: 10.1016/j.ebiom.2022.104002]
Abstract
BACKGROUND
Childhood traumatic events are risk factors for psychopathology, but large-scale studies of how childhood traumatic events relate to mental health and cognition in adulthood, and how the brain is involved, are needed.
METHODS
The associations between childhood traumatic events (such as abuse and neglect, as defined by the 'Childhood Trauma' questions in the UK Biobank database) and brain functional connectivity, mental health problems, and cognitive performance were investigated by a univariate correlation analysis with 19,535 participants aged 45-79 from the UK Biobank dataset. The results were replicated with 17,747 independent participants in the second release of the same dataset.
FINDINGS
Childhood traumatic events were significantly associated with mental health problems in adulthood, including anxiety (r = 0.19, p < 1.0 × 10⁻³²³), depression (r = 0.21, p < 1.0 × 10⁻³²³), and self-harm (r = 0.24, p < 1.0 × 10⁻³²³), and with adult cognitive performance, including fluid intelligence (r = -0.05, p = 2.8 × 10⁻¹⁰) and prospective memory (r = -0.04, p = 6.8 × 10⁻⁸). Functional connectivities of the medial and lateral temporal cortex, the precuneus, the medial orbitofrontal cortex, and the superior, middle, and inferior prefrontal cortex extending back to precentral regions were negatively correlated with childhood traumatic events (FDR corrected, p < 0.01). These lower functional connectivities significantly mediated the associations between childhood traumatic events and addiction, anxiety, depression, and well-being (all p < 1.0 × 10⁻³), and cognitive performance. The associations between childhood traumatic events and the behavioural measures and functional connectivity were confirmed in a replication with different participants in the second release of the UK Biobank dataset.
INTERPRETATION
Childhood traumatic events are strongly associated with adult mental health problems, mediated by functional connectivities in brain areas involved in executive function, emotion, face processing, and memory. This understanding may help with prevention and treatment.
FUNDING
Funding was provided by the National Key R&D Program of China (No. 2018YFC1312900 and No. 2019YFA0709502).
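The mass-univariate, FDR-corrected analysis described above can be sketched roughly as follows, on synthetic data: correlate one behavioural score against many connectivity edges, then apply Benjamini-Hochberg correction. The variable names, sample size, and planted effect are assumptions for illustration, not the study's data.

```python
# Sketch on synthetic data: correlate one score against many connectivity
# edges and apply Benjamini-Hochberg FDR correction, mirroring the
# FDR-corrected univariate analysis described above. Names/effects invented.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_subj, n_edges = 500, 200
trauma = rng.normal(size=n_subj)           # e.g. a childhood-trauma score
fc = rng.normal(size=(n_subj, n_edges))    # functional-connectivity edges
fc[:, :10] -= 0.3 * trauma[:, None]        # plant a negative effect in 10 edges

# One Pearson correlation per edge, then FDR control across all edges.
pvals = np.array([stats.pearsonr(trauma, fc[:, j])[1] for j in range(n_edges)])
reject, p_fdr, _, _ = multipletests(pvals, alpha=0.01, method="fdr_bh")
print(f"{int(reject.sum())} of {n_edges} edges survive FDR-corrected p < 0.01")
```

The study additionally ran mediation models on the surviving edges, which this sketch does not attempt to reproduce.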
50
Janse van Rensburg EO, Botha RA, von Solms R. Utility indicator for emotion detection in a speaker authentication system. Information and Computer Security 2022. [DOI: 10.1108/ics-07-2021-0097]
Abstract
Purpose
Authenticating an individual through voice can prove convenient, as no credential needs to be stored and a voice cannot easily be stolen. However, if an individual is authenticating under duress, the coerced attempt must be recognised and appropriate warnings issued. Furthermore, as duress may entail multiple combinations of emotions, the current f-score evaluation does not accommodate the fact that multiple selected classes may possess similar levels of importance. Thus, this study aims to demonstrate an approach to identifying duress within a voice-based authentication system.
Design/methodology/approach
Measuring the value that a classifier presents is often done using an f-score. However, the f-score does not effectively portray the proposed value when multiple classes could be grouped as one. The f-score also provides no information when numerous classes are often incorrectly identified as one another. Therefore, the proposed approach uses the confusion matrix, aggregates the selected classes into another matrix, and calculates a more precise representation of the selected classifier's value. The utility of the proposed approach is demonstrated through multiple tests, conducted as follows. The value of the initial tests is presented by an f-score, which does not value the individual emotions. This shortcoming is then remedied with further tests, which include a confusion matrix. Final tests are then conducted that aggregate selected emotions within the confusion matrix to present a more precise utility value.
Findings
Two tests within the set of experiments achieved an f-score difference of 1%, indicating that the two tests provided similar value. The confusion matrix used to calculate the f-score indicated that some emotions are often confused with one another, and these could all be considered closely related. Although the f-score can represent an accuracy value, the tests' value is not accurately portrayed when often-confused emotions are not considered. Deciding which approach to take based on the f-score alone did not prove beneficial, as it did not address the confused emotions. When aggregating the confusion matrices of these two tests based on selected emotions, the newly calculated utility value demonstrated a difference of 4%, indicating that the two tests may not provide similar value as previously indicated.
Research limitations/implications
This approach’s performance is dependent on the data presented to it. If the classifier is presented with incomplete or degraded data, the results obtained from the classifier will reflect that. Additionally, the grouping of emotions is not based on psychological evidence, and this was purely done to demonstrate the implementation of an aggregated confusion matrix.
Originality/value
The f-score offers a value that represents the classifiers’ ability to classify a class correctly. This paper demonstrates that aggregating a confusion matrix could provide more value than a single f-score in the context of classifying an emotion that could consist of a combination of emotions. This approach can similarly be applied to different combinations of classifiers for the desired effect of extracting a more accurate performance value that a selected classifier presents.
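The aggregation idea can be sketched as follows: sum the rows and columns of the confusion matrix for the classes treated as one group, then recompute the per-class statistic on the aggregated matrix. All labels, counts, and the grouping below are invented for illustration and are not the paper's data or its psychologically motivated grouping.

```python
# Minimal sketch: aggregate selected classes of a confusion matrix and
# recompute a per-class F1, as in the approach described above.
# Labels, counts, and the fear+angry grouping are hypothetical.
import numpy as np

labels = ["neutral", "happy", "fear", "angry"]
cm = np.array([
    [50,  5,  3,  2],
    [ 4, 48,  4,  4],
    [ 2,  3, 30, 25],   # "fear" is often confused with "angry"...
    [ 1,  2, 27, 30],   # ...and vice versa
])

def aggregate(cm, groups):
    """Sum rows and columns of cm according to the given index groups."""
    return np.array([[cm[np.ix_(r, c)].sum() for c in groups] for r in groups])

groups = [[0], [1], [2, 3]]   # treat fear+angry as one (e.g. duress-like) class
agg = aggregate(cm, groups)

def f1(cm, k):
    """Per-class F1 from a confusion matrix (rows = true, cols = predicted)."""
    tp = cm[k, k]
    precision = tp / cm[:, k].sum()
    recall = tp / cm[k, :].sum()
    return 2 * precision * recall / (precision + recall)

print(f"F1(fear) alone: {f1(cm, 2):.2f}; F1(fear+angry grouped): {f1(agg, 2):.2f}")
```

Because the two grouped classes absorb each other's mutual confusions, the grouped F1 is higher, which is the kind of value the plain per-class f-score hides.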