1
Morningstar M, Billetdeaux KA, Mattson WI, Gilbert AC, Nelson EE, Hoskinson KR. Neural response to vocal emotional intensity in youth. Cogn Affect Behav Neurosci 2024. PMID: 39300012. DOI: 10.3758/s13415-024-01224-6.
Abstract
Previous research has identified regions of the brain that are sensitive to emotional intensity in faces, with some evidence for developmental differences in this pattern of response. However, comparable understanding of how the brain tracks linear variations in emotional prosody is limited, especially in youth samples. The current study used novel stimuli (morphing emotional prosody from neutral to anger/happiness in linear increments) to investigate whether neural response to vocal emotion was parametrically modulated by emotional intensity and whether there were age-related changes in this effect. Participants aged 8-21 years (n = 56, 52% female) completed a vocal emotion recognition task, in which they identified the intended emotion in morphed recordings of vocal prosody, while undergoing functional magnetic resonance imaging. Parametric analyses of whole-brain response to morphed stimuli found that activation in the bilateral superior temporal gyrus (STG) scaled to emotional intensity in angry (but not happy) voices. Multivariate region-of-interest analyses revealed the same pattern in the right amygdala. Sensitivity to emotional intensity did not vary by participants' age. These findings provide evidence for the linear parameterization of emotional intensity in angry vocal prosody within the bilateral STG and right amygdala. Although findings should be replicated, the current results also suggest that this pattern of neural sensitivity may not be subject to strong developmental influences.
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON, K7L 3L3, Canada.
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada.
- K A Billetdeaux
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- W I Mattson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- A C Gilbert
- School of Communication Sciences and Disorders, McGill University, Montreal, Canada
- Centre for Research on Brain, Language, and Music, Montreal, Canada
- E E Nelson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
- K R Hoskinson
- Center for Biobehavioral Health, Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
2
Hashimoto RI, Okada R, Aoki R, Nakamura M, Ohta H, Itahashi T. Functional alterations of lateral temporal cortex for processing voice prosody in adults with autism spectrum disorder. Cereb Cortex 2024; 34:bhae363. PMID: 39270675. DOI: 10.1093/cercor/bhae363.
Abstract
The human auditory system includes discrete cortical patches and selective regions for processing voice information, including emotional prosody. Although behavioral evidence indicates that individuals with autism spectrum disorder (ASD) have difficulties in recognizing emotional prosody, it remains unclear whether and how localized voice patches (VPs) and other voice-sensitive regions are functionally altered during prosody processing. This fMRI study investigated neural responses to prosodic voices in 25 adult males with ASD and 33 controls, using voices of anger, sadness, and happiness with varying degrees of emotion. We used a functional region-of-interest analysis with an independent voice localizer to identify multiple VPs from combined ASD and control data. We observed a general reduction in response to prosodic voices in two specific VPs: the left posterior temporal voice patch (TVP) and the right middle TVP. Reduced cortical responses in the right middle TVP were consistently correlated with the severity of autistic symptoms for all examined emotional prosodies. Moreover, representational similarity analysis revealed a reduced effect of emotional intensity on multivoxel activation patterns in the left anterior superior temporal cortex, but only for sad prosody. These results indicate reduced response magnitudes to voice prosodies in specific TVPs and altered emotion intensity-dependent multivoxel activation patterns in adults with ASD, potentially underlying their socio-communicative difficulties.
Affiliation(s)
- Ryu-Ichiro Hashimoto
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Rieko Okada
- Faculty of Intercultural Japanese Studies, Otemae University, 6-42 Ochayasho-cho, Nishinomiya-shi, Hyogo 662-8552, Japan
- Ryuta Aoki
- Department of Language Sciences, Graduate School of Humanities, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397, Japan
- Human Brain Research Center, Graduate School of Medicine, Kyoto University, 54 Shogoin-Kawahara-cho, Sakyo-ku, Kyoto 606-8507, Japan
- Motoaki Nakamura
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Haruhisa Ohta
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
- Takashi Itahashi
- Medical Institute of Developmental Disabilities Research, Showa University, 6-11-11 Kita-Karasuyama, Setagaya-ku, Tokyo 157-8577, Japan
3
de Jong TJ, van der Schroeff MP, Hakkesteegt M, Vroegop JL. Emotional prosodic expression of children with hearing aids or cochlear implants, rated by adults and peers. Int J Audiol 2024:1-8. PMID: 39126382. DOI: 10.1080/14992027.2024.2380098.
Abstract
OBJECTIVE The emotional prosodic expression potential of children with cochlear implants is poorer than that of normal-hearing peers; however, little is known about children with hearing aids. DESIGN This study was set up to generate a better understanding of hearing aid users' prosodic identifiability compared with cochlear implant users and peers without hearing loss. STUDY SAMPLE Emotional utterances of 75 Dutch-speaking children (7-12 years; 26 with hearing aids [CHA], 23 with cochlear implants [CCI], 26 with normal hearing [CNH]) were gathered. Utterances were evaluated blindly for resemblance to three emotions (happiness, sadness, anger) by normal-hearing Dutch listeners: 22 children and 9 adults (17-24 years). RESULTS Emotions were more accurately recognised by adults than by children. Both children and adults correctly judged happiness significantly less often in CCI than in CNH. Also, adult listeners confused happiness with sadness more often in both CHA and CCI than in CNH. CONCLUSIONS Children and adults are able to evaluate the emotions expressed through speech by children with varying degrees of hearing loss, ranging from mild to profound, nearly as well as they can with typically hearing children. These favourable outcomes emphasise the resilience of children with hearing loss in developing effective emotional communication skills.
Affiliation(s)
- Tjeerd J de Jong
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Marc P van der Schroeff
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Marieke Hakkesteegt
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Jantien L Vroegop
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
4
Yuan Y, Shang J, Gao C, Sommer W, Li W. A premium for positive social interest and attractive voices in the acceptability of unfair offers? An ERP study. Eur J Neurosci 2024; 60:4078-4094. PMID: 38777332. DOI: 10.1111/ejn.16422.
Abstract
Although the attractiveness of voices plays an important role in social interactions, it is unclear how voice attractiveness and social interest influence social decision-making. Here, we combined the ultimatum game with event-related brain potential (ERP) recordings and examined the effect of attractive versus unattractive proposer voices, expressing positive versus negative social interest ("I like you" vs. "I don't like you"), on the acceptance of the proposal. Overall, fair offers were accepted at significantly higher rates than unfair offers, and high voice attractiveness increased acceptance rates for all proposals. In the ERPs elicited by the voices, attractiveness and expressed social interest yielded early additive effects on the N1 component, followed by interactions in the subsequent P2, P3, and N400 components. More importantly, unfair offers elicited a larger medial frontal negativity (MFN) than fair offers, but only when the proposer's voice was unattractive or when the voice carried positive social interest. These results suggest that both voice attractiveness and social interest moderate social decision-making, and that there is a "beauty premium" for voices similar to that for faces.
Affiliation(s)
- Yan Yuan
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province, China
- Junchen Shang
- School of Humanities, Southeast University, Jiangsu, China
- Chunhai Gao
- College of Education, Shenzhen University, Shenzhen, China
- Werner Sommer
- Institut für Psychologie, Humboldt-Universität zu Berlin, Berlin, Germany
- Department of Psychology, Zhejiang Normal University, Jin Hua, China
- Department of Physics and Life Sciences Imaging Center, Hong Kong Baptist University, Hong Kong
- Faculty of Education, National University of Malaysia, Kuala Lumpur, Malaysia
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
- Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province, China
5
Li C, Otgaar H, Battista F, Muris P, Zhang Y. The effect of mood on shaping belief and recollection following false feedback. Psychol Res 2024; 88:1638-1652. PMID: 38581439. DOI: 10.1007/s00426-024-01957-6.
Abstract
The current study examined how mood affects the impact of false feedback on belief and recollection. In a three-session experiment, participants first watched 40 neutral mini videos, accompanied by music to induce either a positive or a negative mood, or by no music. Following a recognition test, they received false feedback intended to reduce belief in the occurrence of the events displayed in some of the videos (Session 2). This was followed by an immediate memory test and a delayed memory assessment one week later (Session 3). The results revealed that participants in a negative mood reported higher belief scores than those in a positive mood, despite an overall decline in belief scores across all groups following the false feedback. Notably, individuals in a negative mood showed a smaller reduction in belief scores after encountering challenges, thereby maintaining higher accuracy in their testimonies. Over time, the clarity of participants' memory recall declined, which correspondingly reduced their testimony accuracy. This study thus indicates that mood states play a role in shaping belief and memory recall under the influence of false feedback.
Affiliation(s)
- Chunlin Li
- Faculty of Law and Criminology, Catholic University of Leuven, Leuven, 3000, Belgium.
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands.
- Henry Otgaar
- Faculty of Law and Criminology, Catholic University of Leuven, Leuven, 3000, Belgium
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Fabiana Battista
- Department of Education, Psychology, Communication, University of Bari, Bari, Italy
- Peter Muris
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Department of Psychology, Stellenbosch University, Stellenbosch, South Africa
- Yikang Zhang
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
6
Laukka P, Månsson KNT, Cortes DS, Manzouri A, Frick A, Fredborg W, Fischer H. Neural correlates of individual differences in multimodal emotion recognition ability. Cortex 2024; 175:1-11. PMID: 38691922. DOI: 10.1016/j.cortex.2024.03.009.
Abstract
Studies have reported substantial variability in emotion recognition ability (ERA) - an important social skill - but possible neural underpinnings for such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) during previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional versus neutral stimuli, individuals with high (vs low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.
Affiliation(s)
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden.
- Kristoffer N T Månsson
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Amirhossein Manzouri
- Department of Psychology, Stockholm University, Stockholm, Sweden; Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Andreas Frick
- Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
- William Fredborg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden; Stockholm University Brain Imaging Centre (SUBIC), Stockholm University, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden
7
Vignal L, Vielle C, Williams M, Maurice N, Degoulet M, Baunez C. Subthalamic high-frequency deep brain stimulation reduces addiction-like alcohol use and the possible negative influence of a peer presence. Psychopharmacology (Berl) 2024. PMID: 38307944. DOI: 10.1007/s00213-024-06532-w.
Abstract
RATIONALE The immediate social context significantly influences alcohol consumption in humans. Recent studies have revealed that peer presence can modulate drug use in rats; the most efficient condition for reducing cocaine intake is the presence of a stranger peer naive to drugs. Deep brain stimulation (DBS) of the subthalamic nucleus (STN), which has shown beneficial effects on addiction to cocaine and alcohol, also modulates the protective influence of a peer's presence on cocaine use. OBJECTIVES This study aimed to: 1) explore how the presence of an alcohol-naive stranger peer affects recreational and escalated alcohol intake, and 2) assess the involvement of the STN in alcohol use and in the modulation induced by the presence of an alcohol-naive stranger peer. METHODS Rats with STN DBS and control animals self-administered 10% (v/v) ethanol in the presence or absence of an alcohol-naive stranger peer, before and after escalation of ethanol intake (observed after intermittent access to 20% (v/v) ethanol). RESULTS Neither STN DBS nor the presence of an alcohol-naive stranger peer significantly modulated recreational alcohol intake. After the escalation procedure, STN DBS reduced ethanol consumption. The presence of an alcohol-naive stranger peer increased consumption only in low drinkers, an effect that was suppressed by STN DBS. CONCLUSIONS These results highlight the influence of a peer's presence on escalated alcohol intake and confirm the role of the STN in addiction-like alcohol intake and in the social influence on drug consumption.
Affiliation(s)
- Lucie Vignal
- Institut de Neurosciences de La Timone, UMR 7289 CNRS & Aix-Marseille Université, 13005, Marseille, France
- Cassandre Vielle
- Institut de Neurosciences de La Timone, UMR 7289 CNRS & Aix-Marseille Université, 13005, Marseille, France
- Maya Williams
- Institut de Neurosciences de La Timone, UMR 7289 CNRS & Aix-Marseille Université, 13005, Marseille, France
- Nicolas Maurice
- Institut de Neurosciences de La Timone, UMR 7289 CNRS & Aix-Marseille Université, 13005, Marseille, France
- Mickael Degoulet
- Institut de Neurosciences de La Timone, UMR 7289 CNRS & Aix-Marseille Université, 13005, Marseille, France
- Christelle Baunez
- Institut de Neurosciences de La Timone, UMR 7289 CNRS & Aix-Marseille Université, 13005, Marseille, France
8
Chaudhary S, Zhang S, Zhornitsky S, Chen Y, Chao HH, Li CSR. Age-related reduction in trait anxiety: Behavioral and neural evidence of automaticity in negative facial emotion processing. Neuroimage 2023; 276:120207. PMID: 37263454. PMCID: PMC10330646. DOI: 10.1016/j.neuroimage.2023.120207.
Abstract
Trait anxiety diminishes with age, which may result from age-related decline in registering salient emotional stimuli and/or enhancement in emotion regulation. We tested these hypotheses in 88 adults aged 21 to 85 years using fMRI during the Hariri face-matching task. Age-related decline in stimulus registration would manifest as delayed reaction time (RT) and diminished saliency-circuit activity in response to emotional vs. neutral stimuli. Enhanced control of negative emotions would manifest as diminished limbic/emotional-circuit activity and higher prefrontal cortical (PFC) responses to negative emotion. The results showed that anxiety was negatively correlated with age. Age was associated with faster RT and diminished activation of the medial PFC, in the area of the dorsal and rostral anterior cingulate cortex (dACC/rACC), a hub of the saliency circuit, during matching of negative but not positive vs. neutral emotional faces. A slope test confirmed the differences between the regressions. Further, age was not associated with PFC activation in whole-brain regression or in region-of-interest analysis of the dorsolateral PFC, an area identified from meta-analyses of the emotion regulation literature. Together, the findings fail to support either hypothesis; rather, they suggest age-related automaticity in processing negative emotions as a potential mechanism of diminished anxiety. Automaticity results in faster RT and diminished anterior cingulate activity in response to negative but not positive emotional stimuli. In support, psychophysiological interaction analyses demonstrated higher dACC/rACC connectivity with the default mode network, which has been implicated in automaticity of information processing. As age increased, individuals demonstrated faster RT with higher connectivity during matching of negative vs. neutral images. Automaticity in negative emotion processing needs to be investigated as a mechanism of age-related reduction in anxiety.
Affiliation(s)
- Shefali Chaudhary
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06519, United States.
- Sheng Zhang
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06519, United States
- Simon Zhornitsky
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06519, United States
- Yu Chen
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06519, United States
- Herta H Chao
- VA Connecticut Healthcare System, West Haven, CT 06516, United States; Department of Medicine, Yale University School of Medicine, New Haven, CT 06519, United States
- Chiang-Shan R Li
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT 06519, United States; Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06520, United States; Wu Tsai Institute, Yale University, New Haven, CT 06520, United States
9
Leipold S, Abrams DA, Karraker S, Phillips JM, Menon V. Aberrant emotional prosody circuitry predicts social communication impairments in children with autism. Biol Psychiatry Cogn Neurosci Neuroimaging 2023; 8:531-541. PMID: 36635147. PMCID: PMC10973204. DOI: 10.1016/j.bpsc.2022.09.016.
Abstract
BACKGROUND Emotional prosody provides acoustical cues that reflect a communication partner's emotional state and is crucial for successful social interactions. Many children with autism have deficits in recognizing emotions from voices; however, the neural basis for these impairments is unknown. We examined brain circuit features underlying emotional prosody processing deficits and their relationship to clinical symptoms of autism. METHODS We used an event-related functional magnetic resonance imaging task to measure neural activity and connectivity during processing of sad and happy emotional prosody and neutral speech in 22 children with autism and 21 matched control children (7-12 years old). We employed functional connectivity analyses to test competing theoretical accounts that attribute emotional prosody impairments to either sensory processing deficits in auditory cortex or theory of mind deficits instantiated in the temporoparietal junction (TPJ). RESULTS Children with autism showed specific behavioral impairments for recognizing emotions from voices. They also showed aberrant functional connectivity between voice-sensitive auditory cortex and the bilateral TPJ during emotional prosody processing. Neural activity in the bilateral TPJ during processing of both sad and happy emotional prosody stimuli was associated with social communication impairments in children with autism. In contrast, activity and decoding of emotional prosody in auditory cortex were comparable between autism and control groups and did not predict social communication impairments. CONCLUSIONS Our findings support a social-cognitive deficit model of autism by identifying a role for TPJ dysfunction during emotional prosody processing. Our study underscores the importance of tuning in to vocal-emotional cues for building social connections in children with autism.
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, California.
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, California
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, California
- Jennifer M Phillips
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, California
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, California; Department of Neurology and Neurological Sciences, Stanford University, Stanford, California; Stanford Neurosciences Institute, Stanford University, Stanford, California
10
Disentangling emotional signals in the brain: an ALE meta-analysis of vocal affect perception. Cogn Affect Behav Neurosci 2023; 23:17-29. PMID: 35945478. DOI: 10.3758/s13415-022-01030-y.
Abstract
Recent advances in neuroimaging research on vocal emotion perception have revealed voice-sensitive areas specialized in processing affect. Experimental data on this subject are varied, investigating a wide range of emotions through different vocal signals and task demands. The present meta-analysis was designed to disentangle this diversity of results by summarizing neuroimaging data from the vocal emotion perception literature. Data from 44 experiments contrasting emotional and neutral voices were analyzed to assess the brain areas involved in vocal affect perception in general, as well as depending on the type of voice signal (speech prosody or vocalizations), the task demands (implicit or explicit attention to emotions), and the specific emotion perceived. Results confirmed a consistent bilateral network of emotional voice areas consisting of the superior temporal cortex (STC) and primary auditory regions. Specific activations and lateralization of these regions, as well as additional areas (insula, middle temporal gyrus), were further modulated by signal type and task demands. Exploring the sparser data on single emotions also suggested the recruitment of other regions (insula, inferior frontal gyrus, frontal operculum) for specific aspects of each emotion. These novel meta-analytic results suggest that while the bulk of vocal affect processing is localized in the STC, the complexity and variety of such vocal signals entail functional specificities in complex and varied cortical (and potentially subcortical) response pathways.
11
Leipold S, Abrams DA, Karraker S, Menon V. Neural decoding of emotional prosody in voice-sensitive auditory cortex predicts social communication abilities in children. Cereb Cortex 2023; 33:709-728. PMID: 35296892. PMCID: PMC9890475. DOI: 10.1093/cercor/bhac095.
Abstract
During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
Affiliation(s)
- Simon Leipold
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Daniel A Abrams
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Shelby Karraker
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, CA, USA
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, USA
12
Johne M, Helgers SOA, Alam M, Jelinek J, Hubka P, Krauss JK, Scheper V, Kral A, Schwabe K. Processing of auditory information in forebrain regions after hearing loss in adulthood: Behavioral and electrophysiological studies in a rat model. Front Neurosci 2022; 16:966568. DOI: 10.3389/fnins.2022.966568.
Abstract
Background: Hearing loss has been proposed as a factor in the development of cognitive impairment in the elderly. Such deficits cannot be explained primarily by dysfunctional neuronal networks within the central auditory system. We here tested the impact of hearing loss in adult rats on motor, social, and cognitive function. Furthermore, potential changes in neuronal activity in the medial prefrontal cortex (mPFC) and the inferior colliculus (IC) were evaluated.
Materials and methods: In adult male Sprague Dawley rats, hearing loss was induced under general anesthesia by intracochlear injection of neomycin. Sham-operated and naive rats served as controls. Postsurgical acoustically evoked auditory brainstem response (ABR) measurements verified hearing loss after intracochlear neomycin injection and intact hearing in sham-operated and naive controls. At 8-week intervals for up to 12 months after surgery, rats were tested for locomotor activity (open field) and coordination (Rotarod), for social interaction and preference, and for learning and memory (4-arm-baited 8-arm radial maze test). In a final setting, electrophysiological recordings were performed in the mPFC and the IC.
Results: Locomotor activity did not differ between deaf and control rats, whereas motor coordination on the Rotarod was disturbed in deaf rats (P < 0.05). Learning the concept of the radial maze test was initially disturbed in deaf rats (P < 0.05), whereas retesting every 8 weeks did not reveal long-term memory deficits. Social interaction and preference were likewise unaffected by hearing loss. Final electrophysiological recordings in anesthetized rats revealed reduced firing rates, enhanced irregular firing, and reduced oscillatory theta band activity (4–8 Hz) in the mPFC of deaf rats compared with controls (P < 0.05). In the IC, reduced oscillatory theta (4–8 Hz) and gamma (30–100 Hz) band activity was found in deaf rats (P < 0.05).
Conclusion: These minor and transient behavioral deficits do not confirm a direct impact of long-term hearing loss on cognitive function in rats. However, the altered neuronal activity in the mPFC and IC after hearing loss indicates effects on neuronal networks within and outside the central auditory system, with potential consequences for cognitive function.
13
Nussbaum C, Schirmer A, Schweinberger SR. Contributions of fundamental frequency and timbre to vocal emotion perception and their electrophysiological correlates. Soc Cogn Affect Neurosci 2022;17:1145-1154. PMID: 35522247; PMCID: PMC9714422; DOI: 10.1093/scan/nsac033.
Abstract
Our ability to infer a speaker's emotional state depends on the processing of acoustic parameters such as fundamental frequency (F0) and timbre. Yet, how these parameters are processed and integrated to inform emotion perception remains largely unknown. Here we pursued this issue using a novel parameter-specific voice morphing technique to create stimuli with emotion modulations in only F0 or only timbre. We used these stimuli together with fully modulated vocal stimuli in an event-related potential (ERP) study in which participants listened to and identified stimulus emotion. ERPs (P200 and N400) and behavioral data converged in showing that both F0 and timbre support emotion processing but do so differently for different emotions: Whereas F0 was most relevant for responses to happy, fearful and sad voices, timbre was most relevant for responses to voices expressing pleasure. Together, these findings offer original insights into the relative significance of different acoustic parameters for early neuronal representations of speaker emotion and show that such representations are predictive of subsequent evaluative judgments.
Affiliation(s)
- Christine Nussbaum (corresponding author): Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Leutragraben 1, Jena 07743, Germany
- Annett Schirmer: Department of Psychology; Brain and Mind Institute; Center for Cognition and Brain Studies, The Chinese University of Hong Kong, Shatin 999077, Hong Kong SAR
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience and Voice Research Unit, Friedrich Schiller University, Jena 07743, Germany; Swiss Center for Affective Sciences, University of Geneva, Geneva 1202, Switzerland
14
Abstract
OBJECTIVE The ability to recognize others' emotions is a central aspect of socioemotional functioning. Emotion recognition impairments are well documented in Alzheimer's disease and other dementias, but it is less understood whether they are also present in mild cognitive impairment (MCI). Results on facial emotion recognition are mixed, and crucially, it remains unclear whether the potential impairments are specific to faces or extend across sensory modalities. METHOD In the current study, 32 MCI patients and 33 cognitively intact controls completed a comprehensive neuropsychological assessment and two forced-choice emotion recognition tasks, including visual and auditory stimuli. The emotion recognition tasks required participants to categorize emotions in facial expressions and in nonverbal vocalizations (e.g., laughter, crying) expressing neutrality, anger, disgust, fear, happiness, pleasure, surprise, or sadness. RESULTS MCI patients performed worse than controls for both facial expressions and vocalizations. The effect was large, similar across tasks and individual emotions, and it was not explained by sensory losses or affective symptomatology. Emotion recognition impairments were more pronounced among patients with lower global cognitive performance, but they did not correlate with the ability to perform activities of daily living. CONCLUSIONS These findings indicate that MCI is associated with emotion recognition difficulties and that such difficulties extend beyond vision, plausibly reflecting a failure at supramodal levels of emotional processing. This highlights the importance of considering emotion recognition abilities as part of standard neuropsychological testing in MCI, and as a target of interventions aimed at improving social cognition in these patients.
15
Domínguez-Borràs J, Vuilleumier P. Amygdala function in emotion, cognition, and behavior. Handb Clin Neurol 2022;187:359-380. PMID: 35964983; DOI: 10.1016/b978-0-12-823493-8.00015-8.
Abstract
The amygdala is a core structure in the anterior medial temporal lobe, with an important role in several brain functions involving memory, emotion, perception, social cognition, and even awareness. As a key brain structure for saliency detection, it triggers and controls widespread modulatory signals onto multiple areas of the brain, with a great impact on numerous aspects of adaptive behavior. Here we discuss the neural mechanisms underlying these functions, as established by animal and human research, including insights provided in both healthy and pathological conditions.
Affiliation(s)
- Judith Domínguez-Borràs: Department of Clinical Psychology and Psychobiology & Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Patrik Vuilleumier: Department of Neuroscience and Center for Affective Sciences, University of Geneva, Geneva, Switzerland
16
Tomasello R, Grisoni L, Boux I, Sammler D, Pulvermüller F. OUP accepted manuscript. Cereb Cortex 2022;32:4885-4901. PMID: 35136980; PMCID: PMC9626830; DOI: 10.1093/cercor/bhab522.
Abstract
During conversations, speech prosody provides important clues about the speaker’s communicative intentions. In many languages, a rising vocal pitch at the end of a sentence typically expresses a question, whereas a falling pitch suggests a statement. Here, the neurophysiological basis of intonation and speech act understanding was investigated with high-density electroencephalography (EEG) to determine whether prosodic features are reflected at the neurophysiological level. Already approximately 100 ms after the sentence-final word, questions and statements expressed with the same sentences but differing in prosody led to different neurophysiological activity in the event-related potential. Interestingly, low-pass filtered sentences and acoustically matched nonvocal musical signals failed to show any neurophysiological dissociations, suggesting that the physical intonation alone cannot explain this modulation. Our results show rapid neurophysiological indexes of prosodic communicative information processing that emerge only when pragmatic and lexico-semantic information are fully expressed. The early enhancement of question-related activity compared with statements was due to sources in the articulatory-motor region, which may reflect the richer action knowledge immanent to questions, namely the expectation of the partner action of answering the question. The present findings demonstrate a neurophysiological correlate of prosodic communicative information processing, which enables humans to rapidly detect and understand speaker intentions in linguistic interactions.
Affiliation(s)
- Rosario Tomasello (corresponding author): Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany
- Luigi Grisoni: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany
- Isabella Boux: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany; Einstein Center for Neurosciences, 10117 Berlin, Germany
- Daniela Sammler: Research Group ‘Neurocognition of Music and Language’, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Cluster of Excellence ‘Matters of Activity. Image Space Material’, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117 Berlin, Germany; Einstein Center for Neurosciences, 10117 Berlin, Germany
17
Liang J, Li Y, Zhang Z, Luo W. Sound gaps boost emotional audiovisual integration independent of attention: Evidence from an ERP study. Biol Psychol 2021;168:108246. PMID: 34968556; DOI: 10.1016/j.biopsycho.2021.108246.
Abstract
The emotion discrimination paradigm was adopted to study the effect of interrupted sound on visual emotional processing under different attentional states in two experiments: in Experiment 1, participants judged facial expressions (explicit task); in Experiment 2, they judged the position of a bar (implicit task). In Experiment 1, ERP results showed that the accelerating effect of sound gaps on the P1 was present only for neutral faces. In Experiment 2, the accelerating effect on the P1 existed regardless of emotional condition. Taken together, the P1 findings suggest that sound gaps enhance bottom-up attention. The N170 and late positive component (LPC) were modulated by facial emotion in both experiments, with larger responses to fearful than to neutral faces. Comparing the two experiments, the explicit task induced a larger LPC than the implicit task. Overall, sound gaps boosted audiovisual integration through bottom-up attention at early integration stages, whereas cognitive expectations drove top-down attention at late stages.
Affiliation(s)
- Junyu Liang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Yuchen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Zhao Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Institute of Psychology, Weifang Medical University, Weifang 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian 116029, Liaoning Province, China
18
Nussbaum C, von Eiff CI, Skuk VG, Schweinberger SR. Vocal emotion adaptation aftereffects within and across speaker genders: Roles of timbre and fundamental frequency. Cognition 2021;219:104967. PMID: 34875400; DOI: 10.1016/j.cognition.2021.104967.
Abstract
While the human perceptual system constantly adapts to the environment, some of the underlying mechanisms are still poorly understood. For instance, although previous research demonstrated perceptual aftereffects in emotional voice adaptation, the contribution of different vocal cues to these effects is unclear. In two experiments, we used parameter-specific morphing of adaptor voices to investigate the relative roles of fundamental frequency (F0) and timbre in vocal emotion adaptation, using angry and fearful utterances. Participants adapted to voices containing emotion-specific information in either F0 or timbre, with all other parameters kept constant at an intermediate 50% morph level. Full emotional voices and ambiguous voices were used as reference conditions. All adaptor stimuli were either of the same (Experiment 1) or opposite speaker gender (Experiment 2) as subsequently presented target voices. In Experiment 1, we found consistent aftereffects in all adaptation conditions. Crucially, aftereffects following timbre adaptation were much larger than those following F0 adaptation and were only marginally smaller than those following full adaptation. In Experiment 2, adaptation aftereffects were markedly and proportionally reduced, with differences between morph types no longer significant. These results suggest that timbre plays a larger role than F0 in vocal emotion adaptation, and that vocal emotion adaptation is compromised by eliminating gender correspondence between adaptor and target stimuli. Our findings also add to mounting evidence suggesting a major role of timbre in auditory adaptation.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Celina I von Eiff: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Verena G Skuk: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
19
Wang Y, Liu L, Zhang Y, Wei C, Xin T, He Q, Hou X, Liu Y. The Neural Processing of Vocal Emotion After Hearing Reconstruction in Prelingual Deaf Children: A Functional Near-Infrared Spectroscopy Brain Imaging Study. Front Neurosci 2021;15:705741. PMID: 34393716; PMCID: PMC8355545; DOI: 10.3389/fnins.2021.705741.
Abstract
As elucidated by prior research, children with hearing loss have impaired vocal emotion recognition compared with their normal-hearing peers. Cochlear implants (CIs) have achieved significant success in facilitating hearing and speech abilities for people with severe-to-profound sensorineural hearing loss. However, due to current limitations in neuroimaging tools, existing research has been unable to detail the neural processing of the perception and recognition of vocal emotions during early-stage CI use in infant and toddler CI users (ITCIs). In the present study, functional near-infrared spectroscopy (fNIRS) imaging was employed during preoperative and postoperative tests to describe the early neural processing of perception in prelingually deaf ITCIs and their recognition of four vocal emotions (fear, anger, happiness, and neutral). The results revealed that the cortical responses elicited by vocal emotional stimulation in the left pre-motor and supplementary motor area (pre-SMA), right middle temporal gyrus (MTG), and right superior temporal gyrus (STG) differed significantly between preoperative and postoperative tests, indicating differences in the neural processing associated with vocal emotional stimulation. Further results revealed that recognition of vocal emotional stimuli appeared in the right supramarginal gyrus (SMG) after CI implantation, and that the response elicited by fear was significantly greater than the response elicited by anger, indicating a negative bias. These findings suggest that the development of emotional bias and of emotional perception and recognition capabilities in ITCIs occurs on a different timeline, and involves different neural processing, from that of normal-hearing peers. To assess speech perception and production abilities, the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and Speech Intelligibility Rating (SIR) were used; these revealed no significant differences between preoperative and postoperative tests. Finally, the neurobehavioral correlates were investigated: the preoperative response of the right SMG to anger stimuli was significantly and positively correlated with postoperative behavioral outcomes, whereas the postoperative response of the right SMG to anger stimuli was significantly and negatively correlated with those outcomes.
Affiliation(s)
- Yuyang Wang: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Lili Liu: Department of Pediatrics, Peking University First Hospital, Beijing, China
- Ying Zhang: Department of Otolaryngology, Head and Neck Surgery, The Second Hospital of Hebei Medical University, Shijiazhuang, China
- Chaogang Wei: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Tianyu Xin: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Qiang He: Department of Otolaryngology, Head and Neck Surgery, The Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xinlin Hou: Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yuhe Liu: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
20
Sheppard SM, Meier EL, Zezinka Durfee A, Walker A, Shea J, Hillis AE. Characterizing subtypes and neural correlates of receptive aprosodia in acute right hemisphere stroke. Cortex 2021;141:36-54. DOI: 10.1016/j.cortex.2021.04.003.
Abstract
INTRODUCTION Speakers naturally produce prosodic variations depending on their emotional state. Receptive prosody has several processing stages. We aimed to conduct lesion-symptom mapping to determine whether damage (core infarct or hypoperfusion) to specific brain areas was associated with receptive aprosodia or with impairment at different processing stages in individuals with acute right hemisphere stroke. We also aimed to determine whether different subtypes of receptive aprosodia exist that are characterized by distinctive behavioral performance patterns. METHODS Twenty patients with receptive aprosodia following right hemisphere ischemic stroke were enrolled within five days of stroke; clinical imaging was acquired. Participants completed tests of receptive emotional prosody, and tests of each stage of prosodic processing (Stage 1: acoustic analysis; Stage 2: analyzing abstract representations of acoustic characteristics that convey emotion; Stage 3: semantic processing). Emotional facial recognition was also assessed. LASSO regression was used to identify predictors of performance on each behavioral task. Predictors entered into each model included 14 right hemisphere regions, hypoperfusion in four vascular territories as measured using FLAIR hyperintense vessel ratings, lesion volume, age, and education. A k-medoid cluster analysis was used to identify different subtypes of receptive aprosodia based on performance on the behavioral tasks. RESULTS Impaired receptive emotional prosody and impaired emotional facial expression recognition were both predicted by greater percent damage to the caudate. The k-medoid cluster analysis identified three different subtypes of aprosodia. One group was primarily impaired on Stage 1 processing and primarily had frontotemporal lesions. The second group had a domain-general emotion recognition impairment and maximal lesion overlap in subcortical areas. 
Finally, the third group was characterized by a Stage 2 processing deficit and had lesion overlap in posterior regions. CONCLUSIONS Subcortical structures, particularly the caudate, play an important role in emotional prosody comprehension. Receptive aprosodia can result from impairments at different processing stages.
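The two analysis steps named in this abstract, LASSO for sparse predictor selection and k-medoid clustering for identifying behavioral subtypes, can be illustrated on synthetic data. The sketch below is not the authors' code: scikit-learn's `LassoCV` stands in for the lesion-predictor model, a minimal PAM-style loop stands in for the k-medoid analysis, and all sample sizes, effect locations, and cluster separations are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)

# --- Sparse predictor selection (LASSO) on synthetic stand-in data ---
# 20 "patients", 16 predictors (e.g., % damage per region plus covariates);
# only predictor 0 truly drives the behavioral score in this toy setup.
n, p = 20, 16
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + rng.normal(scale=0.3, size=n)
lasso = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(lasso.coef_)  # indices of predictive features

# --- Minimal k-medoids (PAM-style) to split behavioral profiles into subtypes ---
def k_medoids(data, k, n_iter=50, seed=0):
    r = np.random.default_rng(seed)
    # Pairwise Euclidean distances between all profiles.
    D = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    medoids = r.choice(len(data), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:  # member minimizing within-cluster distance
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(within)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(D[:, medoids], axis=1)

# Three well-separated synthetic behavioral profiles -> three recovered subtypes.
profiles = np.vstack([rng.normal(m, 0.2, size=(7, 3)) for m in (0.0, 2.0, 4.0)])
labels = k_medoids(profiles, k=3)
```

Unlike k-means, k-medoids constrains each cluster center to be an actual participant's profile, which is why the subtype centers remain directly interpretable as representative patients.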
Affiliation(s)
- Shannon M Sheppard: Department of Communication Sciences & Disorders, Chapman University, Irvine, CA, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Erin L Meier: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Alex Walker: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jennifer Shea: Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Argye E Hillis: Department of Neurology and Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
21
Kogan VV, Reiterer SM. Eros, Beauty, and Phon-Aesthetic Judgements of Language Sound. We Like It Flat and Fast, but Not Melodious. Comparing Phonetic and Acoustic Features of 16 European Languages. Front Hum Neurosci 2021;15:578594. PMID: 33708080; PMCID: PMC7940689; DOI: 10.3389/fnhum.2021.578594.
Abstract
This article concerns sound-aesthetic preferences for European foreign languages. We investigated the phonetic-acoustic dimension of linguistic aesthetic pleasure to describe the "music" found in European languages. The Romance languages French, Italian, and Spanish take the lead when people talk about melodious language, i.e., music-like effects in language (a.k.a. phonetic chill). On the other end of the melodiousness spectrum are German and Arabic, which are often considered harsh and unattractive sounding. Despite the public interest, limited research has been conducted on phonaesthetics, i.e., the subfield of phonetics concerned with the aesthetic properties of speech sounds (Crystal, 2008). Our goal is to fill this research gap by identifying the acoustic features that drive the auditory perception of language sound beauty. What is so music-like in a language that makes people say "it is music in my ears"? We had 45 central European participants listen to 16 auditorily presented European languages and rate each language on 22 binary characteristics (e.g., beautiful - ugly and funny - boring), additionally indicating their language familiarity, L2 background, speaker voice liking, demographics, and musicality levels. Findings revealed that all factors, in complex interplay, explain a certain percentage of the variance: familiarity and expertise in foreign languages, speaker voice characteristics, phonetic complexity, musical acoustic properties, and finally the musical expertise of the listener. The most important discovery was the trade-off between speech tempo and so-called linguistic melody (pitch variance): the faster the language, the flatter/more atonal it is in pitch (speech melody), making it highly appealing acoustically (sounding beautiful and sexy) but not so melodious in a "musical" sense.
Affiliation(s)
- Vita V Kogan: School of European Culture and Languages, University of Kent, Kent, United Kingdom
- Susanne M Reiterer: Department of Linguistics, University of Vienna, Vienna, Austria; Teacher Education Centre, University of Vienna, Vienna, Austria
22
The brain mechanism of explicit and implicit processing of emotional prosodies: An fNIRS study. Acta Psychol Sin 2021. DOI: 10.3724/sp.j.1041.2021.00015.
23
Nof A, Amir O, Goldstein P, Zilcha-Mano S. What do these sounds tell us about the therapeutic alliance: Acoustic markers as predictors of alliance. Clin Psychol Psychother 2020;28:807-817. PMID: 33270316; DOI: 10.1002/cpp.2534.
Abstract
Predicting the trajectories of alliance formation that the patient is likely to establish with the therapist during treatment, even before their first meeting, can help prevent the potentially harmful consequences of deterioration in alliance, such as poor outcome and premature dropout. The present study aimed to examine the ability of four pretreatment acoustic markers to predict the alliance that is likely to be formed in the course of treatment: F0 span, speech rate, pause proportion and jitter. Data from 560 observations of 38 patients were collected as part of an ongoing randomized clinical trial of short-term psychotherapy for major depressive disorder. The acoustic markers were measured using high-quality recordings at baseline, before the patient and therapist ever met or had any type of communication. A multilevel model was used to examine the ability of the four acoustic markers to predict the slopes of alliance formation in the course of treatment, all markers being introduced in the same model. The clinical utility of the acoustic markers was explored in two case studies. The model explained 22% of the variance in alliance formation. Higher levels of both jitter and pause proportion at baseline predicted less strengthening of the alliance in the course of treatment. The findings, which should be replicated in larger samples, suggest that much of the therapeutic alliance can be predicted based on the acoustic characteristics of the patient's voice in the first 3 min of their intake, before they even meet their therapist.
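The study's prediction model was a multilevel model over 560 observations nested within patients. As a rough illustration of the core idea, baseline acoustic markers predicting alliance trajectories, here is a deliberately simplified single-level ordinary-least-squares sketch on synthetic data; the patient count, effect directions, and effect sizes below are invented for the example, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in: 38 "patients" with four baseline acoustic markers.
# Columns: F0 span, speech rate, pause proportion, jitter (z-scored).
n = 38
markers = rng.normal(size=(n, 4))

# Simulate alliance slopes in which (echoing the direction of the study's
# findings) higher jitter and pause proportion go with weaker alliance growth.
slopes = -0.5 * markers[:, 2] - 0.5 * markers[:, 3] + rng.normal(scale=0.4, size=n)

# Ordinary least squares with an intercept column: a simplified, single-level
# stand-in for the multilevel model used in the study.
X = np.column_stack([np.ones(n), markers])
coef, *_ = np.linalg.lstsq(X, slopes, rcond=None)

pred = X @ coef
r2 = 1 - ((slopes - pred) ** 2).sum() / ((slopes - slopes.mean()) ** 2).sum()
print(f"coefficients (intercept, F0 span, rate, pauses, jitter): {np.round(coef, 2)}")
print(f"variance explained: {r2:.2f}")
```

A proper replication would instead fit a mixed-effects model with per-patient random effects, since each patient contributes many alliance observations; the OLS version above only conveys the predictor-to-trajectory logic.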
Affiliation(s)
- Aviv Nof: Department of Psychology, University of Haifa, Haifa, Israel
- Ofer Amir: Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
24
Zhao Z, Lei S, Weiqi H, Suyong Y, Wenbo L. The influence of the cross-modal emotional pre-preparation effect on audiovisual integration. Neuroreport 2020;31:1161-1166. PMID: 32991523; DOI: 10.1097/wnr.0000000000001530.
Abstract
Previous studies have shown that the cross-modal pre-preparation effect is an important factor in audiovisual integration. However, its facilitating influence on the integration of emotional cues remains unclear. Therefore, this study examined the emotional pre-preparation effect during the multistage process of audiovisual integration. Event-related potentials (ERPs) were recorded while participants performed a synchronous or asynchronous integration task with fearful or neutral stimuli. The results indicated that, compared with the sum of the unisensory presentations of visual (V) and auditory (A) stimuli (A+V), only fearful audiovisual stimuli induced a decreased N1 and an enhanced P2; this was not found for neutral stimuli. Moreover, fearful stimuli triggered a larger P2 than neutral stimuli in the audiovisual condition, but not in the summed (A+V) waveforms. Our findings imply that, in the early perceptual processing stage and the perceptual fine-processing stage, fear improves the processing efficiency of emotional audiovisual integration. In the final cognitive-assessment stage, fearful audiovisual stimuli induced a larger late positive component (LPC) than neutral ones, and asynchronous audiovisual stimuli induced a greater LPC than synchronous ones during the 400-550 ms period. The different integration effects for fearful versus neutral stimuli may reflect distinct pre-preparation mechanisms along the emotional dimension. In light of these results, we propose a cross-modal emotional pre-preparation effect involving a three-phase emotional audiovisual integration.
Affiliation(s)
- Zhang Zhao
- Institute of Psychology, Weifang Medical University, Weifang; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
- Sun Lei
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
- He Weiqi
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
- Yang Suyong
- School of Psychology, Shanghai University of Sport, Shanghai, China
- Luo Wenbo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province
25
Sonderfeld M, Mathiak K, Häring GS, Schmidt S, Habel U, Gur R, Klasen M. Supramodal neural networks support top-down processing of social signals. Hum Brain Mapp 2020;42:676-689. [PMID: 33073911] [PMCID: PMC7814753] [DOI: 10.1002/hbm.25252]
Abstract
The perception of facial and vocal stimuli is driven by sensory input and cognitive top-down influences. Important top-down influences are attentional focus and supramodal social memory representations. The present study investigated the neural networks underlying these top-down processes and their role in social stimulus classification. In a neuroimaging study with 45 healthy participants, we employed a social adaptation of the Implicit Association Test. Attentional focus was modified via the classification task, which compared two domains of social perception (emotion and gender) using the exact same stimulus set. Supramodal memory representations were addressed via the congruency of the target categories for the classification of auditory and visual social stimuli (voices and faces). Functional magnetic resonance imaging identified attention-specific and supramodal networks. Emotion classification networks included the bilateral anterior insula, pre-supplementary motor area, and right inferior frontal gyrus. They were purely attention-driven and independent of stimulus modality or congruency of the target concepts. No neural contribution of supramodal memory representations could be revealed for emotion classification. In contrast, gender classification relied on supramodal memory representations in the rostral anterior cingulate and ventromedial prefrontal cortices. In summary, different domains of social perception involve different top-down processes that take place in clearly distinguishable neural networks.
Affiliation(s)
- Melina Sonderfeld
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Gianna S Häring
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Sarah Schmidt
- Life & Brain - Institute for Experimental Epileptology and Cognition Research, Bonn, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany
- Raquel Gur
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Martin Klasen
- Department of Psychiatry, Psychotherapy, and Psychosomatics, Medical School, RWTH Aachen, Aachen, Germany; JARA-Translational Brain Medicine, RWTH Aachen University, Aachen, Germany; Interdisciplinary Training Centre for Medical Education and Patient Safety - AIXTRA, Medical Faculty, RWTH Aachen University, Aachen, Germany
26
Morningstar M, Mattson WI, Singer S, Venticinque JS, Nelson EE. Children and adolescents' neural response to emotional faces and voices: Age-related changes in common regions of activation. Soc Neurosci 2020;15:613-629. [PMID: 33017278] [DOI: 10.1080/17470919.2020.1832572]
Abstract
The perception of facial and vocal emotional expressions engages overlapping regions of the brain. However, at a behavioral level, the ability to recognize the intended emotion in both types of nonverbal cues follows a divergent developmental trajectory throughout childhood and adolescence. The current study a) identified regions of common neural activation to facial and vocal stimuli in 8- to 19-year-old typically-developing adolescents, and b) examined age-related changes in blood-oxygen-level dependent (BOLD) response within these areas. Both modalities elicited activation in an overlapping network of subcortical regions (insula, thalamus, dorsal striatum), visual-motor association areas, prefrontal regions (inferior frontal cortex, dorsomedial prefrontal cortex), and the right superior temporal gyrus. Within these regions, increased age was associated with greater frontal activation to voices, but not faces. Results suggest that processing facial and vocal stimuli elicits activation in common areas of the brain in adolescents, but that age-related changes in response within these regions may vary by modality.
Affiliation(s)
- M Morningstar
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, USA; Department of Psychology, Queen's University, Kingston, ON, Canada
- W I Mattson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- S Singer
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- J S Venticinque
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- E E Nelson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, The Ohio State University, Columbus, OH, USA
27
Where Sounds Occur Matters: Context Effects Influence Processing of Salient Vocalisations. Brain Sci 2020;10:429. [PMID: 32640750] [PMCID: PMC7407900] [DOI: 10.3390/brainsci10070429]
Abstract
The social context in which a salient human vocalisation is heard shapes the affective information it conveys. However, few studies have investigated how visual contextual cues lead to differential processing of such vocalisations. The prefrontal cortex (PFC) is implicated in the processing of contextual information and the evaluation of the saliency of vocalisations. Using functional near-infrared spectroscopy (fNIRS), we investigated the PFC responses of young adults (N = 18) to emotive infant and adult vocalisations while they passively viewed scenes from two categories of environmental context: a domestic environment (DE) and an outdoor environment (OE). Compared to a home setting (DE), which is associated with a fixed mental representation (e.g., expecting to see a living room in a typical house), the outdoor setting (OE) is more variable and less predictable, and thus might demand greater processing effort. In our previous study (Azhari et al., 2018), which employed the same experimental paradigm, the OE context elicited greater physiological arousal than the DE context. Accordingly, we hypothesised that greater PFC activation would be observed when salient vocalisations are paired with the OE rather than the DE condition. Our findings supported this hypothesis: the left rostrolateral PFC, an area of the brain that facilitates relational integration, exhibited greater activation in the OE than the DE condition, suggesting that greater cognitive resources are required to process outdoor situational information together with salient vocalisations. These results deepen our understanding of how contextual information differentially modulates the processing of salient vocalisations.
28
Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020;41:1532-1556. [PMID: 31868310] [PMCID: PMC7267943] [DOI: 10.1002/hbm.24893]
Abstract
Humans make various kinds of decisions about which emotions they perceive in others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on the explicit evaluation of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of, and decisions on, other individuals' emotions. HIGHLIGHTS: Emotion classification involves heterogeneous perception and decision-making tasks. Decision-making processes on emotions are rarely covered by existing emotion theories. We propose an evidence-based neurocognitive model of decision-making on emotions. Bilateral brain processes underlie nonverbal decisions, left-hemisphere processes underlie verbal decisions. The left amygdala is involved in any kind of decision on emotions.
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
29
Proverbio AM, Santoni S, Adorni R. ERP Markers of Valence Coding in Emotional Speech Processing. iScience 2020;23:100933. [PMID: 32151976] [PMCID: PMC7063241] [DOI: 10.1016/j.isci.2020.100933]
Abstract
How is auditory emotional information processed? The study's aim was to compare cerebral responses to emotionally positive and negative spoken phrases matched for structure and content. Twenty participants listened to 198 vocal stimuli while detecting filler phrases containing first names. EEG was recorded from 128 sites. Three event-related potential (ERP) components were quantified and found to be sensitive to emotional valence from 350 ms of latency onward. The P450 and late positivity were enhanced by positive content, whereas an anterior negativity was larger to negative content. A similar set of markers (P300, N400, LP) was found previously for the processing of positive versus negative affective vocalizations, prosody, and music, which suggests a common neural mechanism for extracting the emotional content of auditory information. SwLORETA applied to potentials recorded between 350 and 550 ms showed that negative speech activated right temporo-parietal areas (BA40, BA20/21), whereas positive speech activated the left homologous and inferior frontal areas.
Affiliation(s)
- Alice Mado Proverbio
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
- Sacha Santoni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
- Roberta Adorni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
30
Zhang Z, He W, Li Y, Zhang M, Luo W. Facilitation of Crossmodal Integration During Emotional Prediction in Methamphetamine Dependents. Front Neural Circuits 2020;13:80. [PMID: 32038178] [PMCID: PMC6989411] [DOI: 10.3389/fncir.2019.00080]
Abstract
Methamphetamine (meth) can severely damage the prefrontal cortex and disrupt the cognitive control loop, leading not only to drug dependence but also to emotional disorders. The resulting imbalance between the cognitive and emotional systems can produce crossmodal emotional deficits. Until now, the negative impact of meth dependence on crossmodal emotional processing has received little attention. Therefore, the present study first examined the differences in crossmodal emotional processing between healthy controls and meth dependents (MADs), and then investigated the role of visual- or auditory-leading cues in promoting crossmodal emotional processing. Experiment 1 found that MADs showed a visual-auditory integration deficit for fearful emotion, which may be related to impaired information transmission between the visual and auditory cortices. Experiment 2 found that MADs showed a crossmodal deficit for fear under visual-leading cues, but that fearful sounds improved their detection of facial emotions. Experiment 3 confirmed that, for MADs, auditory-leading (A-leading) cues induced immediate crossmodal integration more readily than visual-leading (V-leading) ones. These findings provide quantitative evidence that meth dependence is associated with crossmodal integration deficits, and that auditory-leading cues enhance MADs' ability to recognize complex emotions (all results are available at: https://osf.io/x6rv5/). These results further our understanding of how complex crossmodal emotional integration is altered in individuals who use drugs.
Affiliation(s)
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
31
Kreifelts B, Ethofer T, Wiegand A, Brück C, Wächter S, Erb M, Lotze M, Wildgruber D. The Neural Correlates of Face-Voice-Integration in Social Anxiety Disorder. Front Psychiatry 2020;11:657. [PMID: 32765311] [PMCID: PMC7381153] [DOI: 10.3389/fpsyt.2020.00657]
Abstract
Faces and voices are very important sources of threat in social anxiety disorder (SAD), a common psychiatric disorder whose core elements are fears of social exclusion and negative evaluation. Previous research on social anxiety has evidenced increased cerebral responses to negative facial or vocal expressions, as well as generally increased hemodynamic responses to voices and faces. However, it is unclear whether the cerebral process of face-voice integration is also altered in SAD. Applying functional magnetic resonance imaging, we investigated the correlates of the audiovisual integration of dynamic faces and voices in SAD as compared to healthy individuals. In the bilateral midsections of the superior temporal sulcus (STS), increased integration effects were observed in SAD, driven by greater activation increases during audiovisual as compared to auditory stimulation. This effect was accompanied by increased functional connectivity with the visual association cortex and a more anterior position of the individual integration maxima along the STS in SAD. These findings demonstrate that the audiovisual integration of facial and vocal cues in SAD is systematically altered not only in intensity and connectivity but also in the individual location of the integration areas within the STS. These combined findings offer a novel perspective on the neuronal representation of social signal processing in individuals suffering from SAD.
Affiliation(s)
- Benjamin Kreifelts
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Thomas Ethofer
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany; Department for Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
- Ariane Wiegand
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Carolin Brück
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Sarah Wächter
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
- Michael Erb
- Department for Biomedical Magnetic Resonance, University of Tübingen, Tübingen, Germany
- Martin Lotze
- Functional Imaging Group, Department for Diagnostic Radiology and Neuroradiology, University of Greifswald, Greifswald, Germany
- Dirk Wildgruber
- Department of Psychiatry and Psychotherapy, University of Tübingen, Tübingen, Germany
32
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019;209:116509. [PMID: 31899288] [DOI: 10.1016/j.neuroimage.2019.116509]
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
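The lateralization index used in this study is conventionally computed as LI = (L - R)/(L + R) over hemispheric activation. A minimal sketch follows; the use of summed suprathreshold activation and the threshold parameter are assumptions about the general method, not details taken from this paper:

```python
import numpy as np

def lateralization_index(left_activation, right_activation, threshold=0.0):
    """LI = (L - R) / (L + R) over suprathreshold activation in each
    hemisphere. Positive -> left-lateralized, negative -> right-lateralized."""
    left = np.asarray(left_activation, dtype=float)
    right = np.asarray(right_activation, dtype=float)
    L = left[left > threshold].sum()
    R = right[right > threshold].sum()
    if L + R == 0:
        return 0.0
    return (L - R) / (L + R)

# Toy per-voxel activation values: a strongly left-dominant pattern
print(lateralization_index([5.0, 4.0, 3.0], [0.5, 0.0, 0.0]))  # positive -> left-lateralized
```

LI ranges from -1 (fully right-lateralized) to +1 (fully left-lateralized); a common convention treats |LI| > 0.2 as lateralized and values near zero as bilateral.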
33
Correia AI, Branco P, Martins M, Reis AM, Martins N, Castro SL, Lima CF. Resting-state connectivity reveals a role for sensorimotor systems in vocal emotional processing in children. Neuroimage 2019;201:116052. [DOI: 10.1016/j.neuroimage.2019.116052]
34
Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019;131:9-24. [PMID: 31158367] [DOI: 10.1016/j.neuropsychologia.2019.05.027]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory- and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented as voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral, reaching significance 100-200 ms post-onset for auditory, visual, and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur at later stages only; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways and their convergence within the limbic system.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain
- Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland
- Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain
- Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
35
Tuned to voices and faces: Cerebral responses linked to social anxiety. Neuroimage 2019;197:450-456. [PMID: 31075391] [DOI: 10.1016/j.neuroimage.2019.05.018]
Abstract
Voices and faces are the most common sources of threat in social anxiety (SA) where the fear of negative evaluation and social exclusion is the central element. SA itself is spectrally distributed among the general population and its clinical manifestation, termed social anxiety disorder, is one of the most common anxiety disorders. While heightened cerebral responses to angry or contemptuous facial or vocal expressions are well documented, it remains unclear if the brain of socially anxious individuals is generally more sensitive to voices and faces. Using functional magnetic resonance imaging, we investigated how SA affects the cerebral processing of voices and faces as compared to various other stimulus types in a study population with greatly varying SA (N = 50, 26 female). While cerebral voice-sensitivity correlated positively with SA in the left temporal voice area (TVA) and the left amygdala, an association of face-sensitivity and SA was observed in the right fusiform face area (FFA) and the face processing area of the right posterior superior temporal sulcus (pSTSFA). These results demonstrate that the increase of cerebral responses associated with social anxiety is not limited to facial or vocal expressions of social threat but that the respective sensory and emotion processing structures are also generally tuned to voices and faces.
36
Positive bias in the processing of emotional speech in neonates: Evidence from event-related potentials. Acta Psychologica Sinica 2019. [DOI: 10.3724/sp.j.1041.2019.00462]
37
Zhang D, Chen Y, Hou X, Wu YJ. Near-infrared spectroscopy reveals neural perception of vocal emotions in human neonates. Hum Brain Mapp 2019;40:2434-2448. [PMID: 30697881] [DOI: 10.1002/hbm.24534]
Abstract
Processing affective prosody, that is, the emotional tone of a speaker, is fundamental to human communication and adaptive behaviors. Previous studies have mainly focused on adults and infants; thus, the neural mechanisms underlying the processing of affective prosody in newborns remain unclear. Here, we used near-infrared spectroscopy to examine the ability of 0-to-4-day-old neonates to discriminate emotions conveyed by speech prosody in their maternal language and a foreign language. Happy, fearful, and angry prosodies enhanced neural activation in the right superior temporal gyrus relative to neutral prosody in the maternal but not the foreign language. Happy prosody elicited greater activation than negative prosody in the left superior frontal gyrus and the left angular gyrus, regions that have not been associated with affective prosody processing in infants or adults. These findings suggest that sensitivity to affective prosody is formed through prenatal exposure to vocal stimuli of the maternal language. Furthermore, the sensitive neural correlates appeared more distributed in neonates than in infants, indicating a high level of neural specialization between the neonatal stage and early infancy. Finally, neonates showed preferential neural responses to positive over negative prosody, which is contrary to the "negativity bias" phenomenon established in adult and infant studies.
Affiliation(s)
- Dandan Zhang
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China; Shenzhen Key Laboratory of Affective and Social Cognitive Science, Shenzhen University, Shenzhen, China
- Yu Chen
- College of Psychology and Sociology, Shenzhen University, Shenzhen, China
- Xinlin Hou
- Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yan Jing Wu
- Faculty of Foreign Languages, Ningbo University, Ningbo, China
38
Cerebral resting state markers of biased perception in social anxiety. Brain Struct Funct 2018;224:759-777. [PMID: 30506458] [DOI: 10.1007/s00429-018-1803-1]
Abstract
Social anxiety (SA) comprises a multitude of persistent fears around the central element of dreaded negative evaluation and exclusion. This very common anxiety is spectrally distributed among the general population and associated with social perception biases deemed causal in its maintenance. Here, we investigated cerebral resting state markers linking SA and biased social perception. To this end, resting state functional connectivity (RSFC) was assessed as the neurobiological marker, using fMRI, in a study population with greatly varying SA in the first step of the experiment. One month later, the impact of unattended laughter (exemplifying social threat) on a face rating task was evaluated as a measure of biased social perception. Applying a dimensional approach, SA-related cognitive biases tied to the valence, dominance, and arousal of the threat signal, and their underlying RSFC patterns among central nodes of the cerebral emotion, voice, and face processing networks, were identified. In particular, the connectivity patterns between the amygdalae and the right temporal voice area met all criteria for a cerebral mediation of the association between SA and the laughter valence-related interpretation bias. Thus, beyond this identification of non-state-dependent cerebral markers of biased perception in SA, this study highlights both a starting point and targets for future research on the causal relationships between cerebral connectivity patterns, SA, and biased perception, potentially via neurofeedback methods.
39
Martinelli A, Kreifelts B, Wildgruber D, Ackermann K, Bernhard A, Freitag CM, Schwenck C. Aggression modulates neural correlates of hostile intention attribution to laughter in children. Neuroimage 2018;184:621-631. [PMID: 30266262] [DOI: 10.1016/j.neuroimage.2018.09.066]
Abstract
The tendency to interpret nonverbal social signals as hostile in intention is associated with aggressive responding, poor social functioning and mental illness, and can already be observed in childhood. To investigate the neural correlates of such hostile attributions of social intention, we performed a functional magnetic resonance imaging study in 10- to 18-year-old children and adolescents. Fifty healthy participants rated videos of laughter, which they were told to imagine as being directed towards them, as friendly versus hostile in social intention. Hostile intention ratings were associated with neural response in the right temporal voice area (TVA). Moreover, self-reported trait physical aggression modulated this relationship in both the right TVA and bilateral lingual gyrus, with stronger associations between hostile intention ratings and neural activation in children with higher trait physical aggression scores. Functional connectivity results showed decreased connectivity between the right TVA and left dorsolateral prefrontal cortex with increasing trait physical aggression for making hostile social intention attributions. We conclude that children's social intention attributions are more strongly related to activation of early face and voice-processing regions with increasing trait physical aggression.
Affiliation(s)
- A Martinelli, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital Frankfurt, Goethe University, Deutschordenstrasse 50, 60327 Frankfurt am Main, Germany
- B Kreifelts, Department of Psychiatry and Psychotherapy, University of Tübingen, Calwerstrasse 14, 72076 Tübingen, Germany
- D Wildgruber, Department of Psychiatry and Psychotherapy, University of Tübingen, Calwerstrasse 14, 72076 Tübingen, Germany
- K Ackermann, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital Frankfurt, Goethe University, Deutschordenstrasse 50, 60327 Frankfurt am Main, Germany
- A Bernhard, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital Frankfurt, Goethe University, Deutschordenstrasse 50, 60327 Frankfurt am Main, Germany
- C M Freitag, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital Frankfurt, Goethe University, Deutschordenstrasse 50, 60327 Frankfurt am Main, Germany
- C Schwenck, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital Frankfurt, Goethe University, Deutschordenstrasse 50, 60327 Frankfurt am Main, Germany; Department of Special Needs Educational and Clinical Child and Adolescent Psychology, University of Giessen, Otto-Behaghel-Straße 10C, 35394 Giessen, Germany
40
|
Lindström R, Lepistö-Paisley T, Makkonen T, Reinvall O, Nieminen-von Wendt T, Alén R, Kujala T. Atypical perceptual and neural processing of emotional prosodic changes in children with autism spectrum disorders. Clin Neurophysiol 2018;129:2411-2420. [PMID: 30278390] [DOI: 10.1016/j.clinph.2018.08.018]
Abstract
OBJECTIVE The present study explored the processing of emotional speech prosody in school-aged children with autism spectrum disorders (ASD) but without marked language impairments (children with ASD [no LI]). METHODS The mismatch negativity (MMN)/the late discriminative negativity (LDN), reflecting pre-attentive auditory discrimination processes, and the P3a, indexing involuntary orienting to attention-catching changes, were recorded to natural word stimuli uttered with different emotional connotations (neutral, sad, scornful and commanding). Perceptual prosody discrimination was addressed with a behavioral sound-discrimination test. RESULTS Overall, children with ASD (no LI) were slower in behaviorally discriminating prosodic features of speech stimuli than typically developed control children. Further, smaller standard-stimulus event-related potentials (ERPs) and MMN/LDNs were found in children with ASD (no LI) than in controls. In addition, the P3a was diminished in amplitude and differently distributed on the scalp in children with ASD (no LI) compared with control children. CONCLUSIONS Processing of words and changes in emotional speech prosody is impaired at various levels of information processing in school-aged children with ASD (no LI). SIGNIFICANCE The results suggest that low-level speech sound discrimination and orienting deficits might contribute to emotional speech prosody processing impairments observed in ASD.
Affiliation(s)
- R Lindström, Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- T Lepistö-Paisley, Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Pediatric Neurology, Helsinki University Hospital, Helsinki, Finland
- T Makkonen, Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- O Reinvall, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Pediatric Neurology, Helsinki University Hospital, Helsinki, Finland
- T Nieminen-von Wendt, Neuropsychiatric Rehabilitation and Medical Centre NeuroMental, Helsinki, Finland
- R Alén, Department of Child Neurology, Central Finland Central Hospital, Jyväskylä, Finland
- T Kujala, Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
41
|
Sammler D, Cunitz K, Gierhan SME, Anwander A, Adermann J, Meixensberger J, Friederici AD. White matter pathways for prosodic structure building: A case study. Brain Lang 2018;183:1-10. [PMID: 29758365] [DOI: 10.1016/j.bandl.2018.05.001]
Abstract
The relevance of left dorsal and ventral fiber pathways for syntactic and semantic comprehension is well established, while pathways for prosody are little explored. The present study examined linguistic prosodic structure building in a patient whose right arcuate/superior longitudinal fascicles and posterior corpus callosum were transiently compromised by a vasogenic peritumoral edema. Compared to ten matched healthy controls, the patient's ability to detect irregular prosodic structure significantly improved between pre- and post-surgical assessment. This recovery was accompanied by an increase in average fractional anisotropy (FA) in right dorsal and posterior transcallosal fiber tracts. Neither general cognitive abilities nor (non-prosodic) syntactic comprehension nor FA in right ventral and left dorsal fiber tracts showed a similar pre-post increase. Together, these findings suggest a contribution of right dorsal and inter-hemispheric pathways to prosody perception, including the right-dorsal tracking and structuring of prosodic pitch contours that is transcallosally informed by concurrent syntactic information.
Affiliation(s)
- Daniela Sammler, Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Katrin Cunitz, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Department of Child and Adolescent Psychiatry and Psychotherapy, University Hospital Ulm, Steinhövelstraße 5, 89075 Ulm, Germany
- Sarah M E Gierhan, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
- Alfred Anwander, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
- Jens Adermann, University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
- Jürgen Meixensberger, University Hospital Leipzig, Clinic and Policlinic for Neurosurgery, Liebigstraße 20, 04103 Leipzig, Germany
- Angela D Friederici, Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany; Berlin School of Mind and Brain, Humboldt University Berlin, Unter den Linden 6, 10099 Berlin, Germany
42
|
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018;90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019]
43
Aryani A, Conrad M, Schmidtke D, Jacobs A. Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making. PLoS One 2018;13:e0198430. [PMID: 29874293] [PMCID: PMC5991420] [DOI: 10.1371/journal.pone.0198430]
Abstract
Most language users agree that some words sound harsh (e.g. grotesque) whereas others sound soft and pleasing (e.g. lagoon). While this prominent feature of human language has always been creatively deployed in art and poetry, it is still largely unknown whether the sound of a word in itself makes any contribution to the word's meaning as perceived and interpreted by the listener. In a large-scale lexicon analysis, we focused on the affective substrates of words' meaning (i.e. affective meaning) and words' sound (i.e. affective sound); both being measured on a two-dimensional space of valence (ranging from pleasant to unpleasant) and arousal (ranging from calm to excited). We tested the hypothesis that the sound of a word possesses affective iconic characteristics that can implicitly influence listeners when evaluating the affective meaning of that word. The results show that a significant portion of the variance in affective meaning ratings of printed words depends on a number of spectral and temporal acoustic features extracted from these words after converting them to their spoken form (Study 1). In order to test the affective nature of this effect, we independently assessed the affective sound of these words using two different methods: through direct rating (Study 2a), and through acoustic models that we implemented based on pseudoword materials (Study 2b). In line with our hypothesis, the estimated contribution of words' sound to ratings of words' affective meaning was indeed associated with the affective sound of these words; with a stronger effect for arousal than for valence. Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in 'piss') feel more arousing and negative.
Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they refer to. Rather, even in silent reading, words' acoustic profiles provide affective perceptual cues that language users may implicitly use to construct words' overall meaning.
Affiliation(s)
- Arash Aryani, Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Markus Conrad, Department of Cognitive, Social and Organizational Psychology, University of La Laguna, La Laguna, Spain
- David Schmidtke, Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany
- Arthur Jacobs, Department of Experimental and Neurocognitive Psychology, Freie Universität Berlin, Berlin, Germany; Centre for Cognitive Neuroscience Berlin (CCNB), Berlin, Germany
44
|
Wright A, Saxena S, Sheppard SM, Hillis AE. Selective impairments in components of affective prosody in neurologically impaired individuals. Brain Cogn 2018;124:29-36. [PMID: 29723680] [DOI: 10.1016/j.bandc.2018.04.001]
Abstract
The intent and feelings of the speaker are often conveyed less by what they say than by how they say it, in terms of the affective prosody: modulations in pitch, loudness, rate, and rhythm of speech that convey emotion. Here we propose a cognitive architecture of the perceptual, cognitive, and motor processes underlying recognition and generation of affective prosody. We developed the architecture on the basis of the computational demands of the task, and obtained evidence for various components by identifying neurologically impaired patients with relatively specific deficits in one component. We report analysis of performance across tasks of recognizing and producing affective prosody by four patients (three with right hemisphere stroke and one with frontotemporal dementia). Their distinct patterns of performance across tasks and the quality of their abnormal performance provide preliminary evidence that some of the components of the proposed architecture can be selectively impaired by focal brain damage.
Affiliation(s)
- Amy Wright, Johns Hopkins University School of Medicine, Department of Neurology, USA
- Sadhvi Saxena, Johns Hopkins University School of Medicine, Department of Neurology, USA
- Shannon M Sheppard, Johns Hopkins University School of Medicine, Department of Neurology, USA
- Argye E Hillis, Johns Hopkins University School of Medicine, Department of Neurology, USA; Johns Hopkins University School of Medicine, Department of Physical Medicine & Rehabilitation, USA; Johns Hopkins University, Department of Cognitive Science, USA
45
|
Klasen M, von Marschall C, Isman G, Zvyagintsev M, Gur RC, Mathiak K. Prosody production networks are modulated by sensory cues and social context. Soc Cogn Affect Neurosci 2018. [PMID: 29514331] [PMCID: PMC5928400] [DOI: 10.1093/scan/nsy015]
Abstract
The neurobiology of emotional prosody production is not well investigated. In particular, the effects of cues and social context are not known. The present study sought to differentiate cued from free emotion generation and the effect of social feedback from a human listener. Online speech filtering enabled functional magnetic resonance imaging during prosodic communication in 30 participants. Emotional vocalizations were (i) free, (ii) auditorily cued, (iii) visually cued or (iv) with interactive feedback. In addition to distributed language networks, cued emotions increased activity in auditory and, in the case of visual stimuli, visual cortex. Responses were larger in posterior superior temporal gyrus at the right hemisphere and the ventral striatum when participants were listened to and received feedback from the experimenter. Sensory, language and reward networks contributed to prosody production and were modulated by cues and social context. The right posterior superior temporal gyrus is a central hub for communication in social interactions, in particular for interpersonal evaluation of vocal emotions.
Affiliation(s)
- Martin Klasen, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Clara von Marschall, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Güldehen Isman, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany
- Mikhail Zvyagintsev, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
- Ruben C Gur, Department of Psychiatry, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Klaus Mathiak, Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; JARA - Translational Brain Medicine, 52074 Aachen, Germany
46
|
Krause AL, Colic L, Borchardt V, Li M, Strauss B, Buchheim A, Wildgruber D, Fonagy P, Nolte T, Walter M. Functional connectivity changes following interpersonal reactivity. Hum Brain Mapp 2018;39:866-879. [PMID: 29164726] [PMCID: PMC6866275] [DOI: 10.1002/hbm.23888]
Abstract
Attachment experiences substantially influence emotional and cognitive development. Narratives comprising attachment-dependent content were proposed to modulate activation of cognitive-emotional schemata in listeners. We studied the effects of listening to prototypical attachment narratives on wellbeing and countertransference reactions in 149 healthy participants. Neural correlates of these cognitive-emotional schema activations were investigated in a 7-Tesla rest-task-rest fMRI study (23 healthy males) using functional connectivity (FC) analysis of the social approach network (seed regions: left and right Caudate Nucleus, CN). Reduced FC between left CN and bilateral dorsolateral prefrontal cortex (DLPFC) represented a general effect of prior auditory stimulation. After presentation of the insecure-dismissing narrative, FC between left CN and bilateral temporo-parietal junction, and right dorsal posterior Cingulum was reduced, compared to baseline. Post-narrative FC patterns of insecure-dismissing and insecure-preoccupied narratives differed in strength between left CN and right DLPFC. Neural correlates of the moderating effect of individual attachment anxiety were represented in a reduced CN-DLPFC FC as a function of individual neediness-levels. These findings suggest specific neural processing of prolonged mood-changes and schema activation induced by attachment-specific speech patterns. Individual desire for interpersonal proximity was predicted by attachment anxiety and furthermore modulated FC of the social approach network in those exposed to such narratives.
Affiliation(s)
- A L Krause, Clinical Affective Neuroimaging Laboratory, Magdeburg, Germany; Department of Psychiatry and Psychotherapy, Otto von Guericke University, Magdeburg, Germany
- L Colic, Clinical Affective Neuroimaging Laboratory, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- V Borchardt, Clinical Affective Neuroimaging Laboratory, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- M Li, Clinical Affective Neuroimaging Laboratory, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- B Strauss, University Hospital Jena, Institute of Psychosocial Medicine and Psychotherapy, Jena, Germany
- A Buchheim, Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- D Wildgruber, Clinic for Psychiatry and Psychotherapy, Eberhard-Karls University, Tuebingen, Germany
- P Fonagy, Research Department of Clinical, Educational and Health Psychology, University College London, United Kingdom; Anna Freud National Centre for Children and Families, London, United Kingdom
- T Nolte, Anna Freud National Centre for Children and Families, London, United Kingdom; Wellcome Trust Centre for Neuroimaging, University College London, United Kingdom
- M Walter, Clinical Affective Neuroimaging Laboratory, Magdeburg, Germany; Department of Psychiatry and Psychotherapy, Otto von Guericke University, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany; Clinic for Psychiatry and Psychotherapy, Eberhard-Karls University, Tuebingen, Germany; Center for Behavioral Brain Sciences (CBBS), Magdeburg, Germany
47
|
Speech prosodies of different emotional categories activate different brain regions in adult cortex: an fNIRS study. Sci Rep 2018;8:218. [PMID: 29317758] [PMCID: PMC5760650] [DOI: 10.1038/s41598-017-18683-2]
Abstract
Emotional expressions of others embedded in speech prosodies are important for social interactions. This study used functional near-infrared spectroscopy to investigate how speech prosodies of different emotional categories are processed in the cortex. The results demonstrated several cerebral areas critical for emotional prosody processing. We confirmed that the superior temporal cortex, especially the right middle and posterior parts of superior temporal gyrus (BA 22/42), primarily works to discriminate between emotional and neutral prosodies. Furthermore, the results suggested that categorization of emotions occurs within a high-level brain region, the frontal cortex, since the brain activation patterns were distinct when positive (happy) prosody was contrasted with negative (fearful and angry) prosody in the left middle part of inferior frontal gyrus (BA 45) and the frontal eye field (BA 8), and when angry prosody was contrasted with neutral prosody in bilateral orbital frontal regions (BA 10/11). These findings verified and extended previous fMRI findings in adult brain and also provided a "developed version" of brain activation for our following neonatal study.
48
Dricu M, Ceravolo L, Grandjean D, Frühholz S. Biased and unbiased perceptual decision-making on vocal emotions. Sci Rep 2017;7:16274. [PMID: 29176612] [PMCID: PMC5701116] [DOI: 10.1038/s41598-017-16594-w]
Abstract
Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region of interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region following decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data show that different types of perceptual decision-making on auditory emotions have distinct patterns of activations and functional coupling that follow the decisional strategies and cognitive mechanisms involved during these perceptual decisions.
Affiliation(s)
- Mihai Dricu, Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202 Geneva, Switzerland; Department of Experimental Psychology and Neuropsychology, University of Bern, 3012 Bern, Switzerland
- Leonardo Ceravolo, Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202 Geneva, Switzerland; Department of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland
- Didier Grandjean, Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202 Geneva, Switzerland; Department of Psychology and Educational Sciences, University of Geneva, 1205 Geneva, Switzerland
- Sascha Frühholz, Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202 Geneva, Switzerland; Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
49
Carminati M, Fiori-Duharcourt N, Isel F. Neurophysiological differentiation between preattentive and attentive processing of emotional expressions on French vowels. Biol Psychol 2017;132:55-63. [PMID: 29102707] [DOI: 10.1016/j.biopsycho.2017.10.013]
Abstract
The present electrophysiological study investigated the processing of emotional prosody by minimizing as much as possible the effect of emotional information conveyed by the lexical-semantic context. Emotionally colored French vowels (i.e., happiness, sadness, fear, and neutral) were presented in a mismatch negativity (MMN) oddball paradigm. Both the MMN, i.e., an event-related potential (ERP) component thought to reflect preattentive change detection, and the P3a, i.e., an ERP marker of involuntary orientation of attention toward deviant stimuli, were significantly modulated by the emotional deviants compared to the neutral ones. Critically, the largest amplitude (MMN, P3a) and the shortest peak latency (MMN) were observed for fear deviants, all other things being equal. Taken together, the present findings lend support to a sequential neurocognitive model of emotion processing (Scherer, 2001) which postulates, among other checks, a first stage of automatic emotion detection (MMN) followed by a second stage of subjective evaluation of the stimulus or event (P3a). Consistent with previous studies, our data suggest that among the six universal emotions, fear could have a special status, probably because of its adaptive role in the evolution of the human species.
Affiliation(s)
- Mathilde Carminati, Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France
- Nicole Fiori-Duharcourt, Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France
- Frédéric Isel, University Paris Nanterre - Paris Lumières, CNRS, UMR 7114 Models, Dynamics, Corpora, France
50
Lavan N, McGettigan C. Increased discriminability of authenticity from multimodal laughter is driven by auditory information. Q J Exp Psychol (Hove) 2017;70:2159-2168. [DOI: 10.1080/17470218.2016.1226370]
Abstract
We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.
Affiliation(s)
- Nadine Lavan, Department of Psychology, Royal Holloway, University of London, Egham, UK
- Carolyn McGettigan, Department of Psychology, Royal Holloway, University of London, Egham, UK; Institute of Cognitive Neuroscience, University College London, London, UK