1. Maekawa T, Sasaoka T, Inui T, Fermin ASR, Yamawaki S. Heart rate and insula activity increase in response to music in individuals with high interoceptive sensitivity. PLoS One 2024; 19:e0299091. PMID: 39172913; PMCID: PMC11340984; DOI: 10.1371/journal.pone.0299091.
Abstract
Interoception plays an important role in emotion processing. However, the neurobiological substrates of the relationship between visceral responses and emotional experiences remain unclear. In the present study, we measured interoceptive sensitivity using the heartbeat discrimination task and investigated the effects of individual differences in interoceptive sensitivity on changes in pulse rate and insula activity in relation to subjective emotional intensity. We found a positive correlation between heart rate and valence level during music listening only in the high interoceptive sensitivity group. The valence level was also positively correlated with music-elicited anterior insula activity. Furthermore, a region-of-interest analysis of insula subregions revealed significant activity in the left dorsal dysgranular insula in individuals with high interoceptive sensitivity relative to individuals with low interoceptive sensitivity while listening to high-valence music pieces. Our results suggest that individuals with high interoceptive sensitivity use their physiological responses to assess their emotional level when listening to music. In addition, insula activity may reflect the use of interoceptive signals to estimate emotions.
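As context for the interoceptive measure used here: performance in heartbeat tasks is conventionally summarized as a proportion-correct score, which is then used to split participants into high- and low-sensitivity groups. The following is a minimal sketch of such scoring; the function name and toy trial data are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

def heartbeat_discrimination_score(responses, is_synchronous):
    """Proportion of correct synchronous/delayed judgements, a common
    index of interoceptive accuracy in heartbeat discrimination tasks."""
    responses = np.asarray(responses, dtype=bool)
    is_synchronous = np.asarray(is_synchronous, dtype=bool)
    return float(np.mean(responses == is_synchronous))

# Toy example: ground truth for 10 trials and one participant's judgements.
truth = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0], dtype=bool)
resp = np.array([1, 0, 1, 0, 0, 0, 1, 1, 1, 0], dtype=bool)
print(heartbeat_discrimination_score(resp, truth))  # 0.8
```

Participants can then be assigned to high- and low-sensitivity groups by a median split or threshold on this score, matching the group contrast described in the abstract.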
Affiliation(s)
- Toru Maekawa: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Minami-Ku, Hiroshima, Japan
- Takafumi Sasaoka: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Minami-Ku, Hiroshima, Japan
- Alan S. R. Fermin: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Minami-Ku, Hiroshima, Japan
- Shigeto Yamawaki: Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Minami-Ku, Hiroshima, Japan
2. Vigl J, Talamini F, Strauss H, Zentner M. Prosodic discrimination skills mediate the association between musical aptitude and vocal emotion recognition ability. Sci Rep 2024; 14:16462. PMID: 39014043; PMCID: PMC11252295; DOI: 10.1038/s41598-024-66889-y.
Abstract
The current study tested the hypothesis that the association between musical ability and vocal emotion recognition skills is mediated by accuracy in prosody perception. Furthermore, it was investigated whether this association is primarily related to musical expertise, operationalized by long-term engagement in musical activities, or musical aptitude, operationalized by a test of musical perceptual ability. To this end, we conducted three studies: In Study 1 (N = 85) and Study 2 (N = 93), we developed and validated a new instrument for the assessment of prosodic discrimination ability. In Study 3 (N = 136), we examined whether the association between musical ability and vocal emotion recognition was mediated by prosodic discrimination ability. We found evidence for a full mediation, though only in relation to musical aptitude and not in relation to musical expertise. Taken together, these findings suggest that individuals with high musical aptitude have superior prosody perception skills, which in turn contribute to their vocal emotion recognition skills. Importantly, our results suggest that these benefits are not unique to musicians, but extend to non-musicians with high musical aptitude.
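Because the key analysis in Study 3 is a mediation model (musical aptitude to prosodic discrimination to vocal emotion recognition), a minimal sketch of how an indirect effect with a bootstrap confidence interval can be estimated may be useful; the toy data and effect sizes below are assumptions for illustration, not the study's data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

def indirect_effect(x, m, y, n_boot=2000):
    """Indirect effect a*b in a single-mediator model (x -> m -> y),
    with a percentile bootstrap confidence interval."""
    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                      # a path: x -> m
        X = np.column_stack([np.ones_like(xs), xs, ms])   # y ~ 1 + x + m
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]      # b path: m -> y given x
        return a * b
    n = len(x)
    boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    return ab(np.arange(n)), np.percentile(boots, [2.5, 97.5])

# Toy data mimicking Study 3's sample size (N = 136).
n = 136
aptitude = rng.normal(size=n)
prosody = 0.6 * aptitude + rng.normal(scale=0.8, size=n)
emotion = 0.5 * prosody + rng.normal(scale=0.8, size=n)
print(indirect_effect(aptitude, prosody, emotion))
```

A full mediation, as reported in the abstract, corresponds to an indirect effect whose interval excludes zero while the direct path from predictor to outcome becomes non-significant.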
Affiliation(s)
- Julia Vigl: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Francesca Talamini: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Hannah Strauss: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
- Marcel Zentner: Department of Psychology, University of Innsbruck, Universitätsstraße 15, 6020 Innsbruck, Austria
3. Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024; 115:206-225. PMID: 37851369; DOI: 10.1111/bjop.12684.
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues such as fundamental frequency (F0) and timbre. Yet how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany; Voice Research Unit, Friedrich Schiller University, Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
4. Cui Z, Meng L, Zhang Q, Lou J, Lin Y, Sun Y. White and Gray Matter Abnormalities in Young Adult Females with Dependent Personality Disorder: A Diffusion-Tensor Imaging and Voxel-Based Morphometry Study. Brain Topogr 2024; 37:102-115. PMID: 37831323; DOI: 10.1007/s10548-023-01013-3.
Abstract
We applied diffusion-tensor imaging (DTI), including measurements of fractional anisotropy (FA), a parameter of neuronal fiber integrity, and mean diffusivity (MD), a parameter of brain tissue integrity, together with voxel-based morphometry (VBM), a measure of gray and white matter volume, to improve our understanding of the neurobiological basis of dependent personality disorder (DPD). DTI was performed on young adult females with DPD (N = 17) and young female healthy controls (HC; N = 17). Tract-based spatial statistics (TBSS) were used to examine microstructural characteristics, and gray matter volume differences between the two groups were investigated using VBM. Pearson correlation analysis was used to examine the relationship between white and gray matter measures in distinct brain areas and the Dy (dependency) score on the MMPI. The DPD group had significantly higher FA values than the HC group in the right retrolenticular part of the internal capsule, the right external capsule, the corpus callosum, the right posterior thalamic radiation (including the optic radiation), and the right cerebral peduncle (p < 0.05), and these values were strongly positively correlated with the MMPI Dy score. Gray matter volume in the right postcentral gyrus and left cuneus was significantly increased in DPD (p < 0.05) and was also strongly positively correlated with the Dy score (r = 0.467, p = 0.005; r = 0.353, p = 0.04, respectively). Our results provide new insights into changes in brain structure in DPD and suggest that such alterations might be implicated in the pathophysiology of DPD, possibly linking visual and somatosensory areas with motor circuits.
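The brain-behaviour relationship reported above is a simple bivariate correlation between a regional imaging measure and the MMPI Dy score. A minimal sketch with simulated stand-in values (not the study's data) follows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated stand-ins: one mean FA value per subject from a significant
# white-matter cluster, and each subject's MMPI Dy score (34 subjects,
# mirroring 17 DPD + 17 controls).
dy = rng.normal(60, 10, size=34)
fa = 0.45 + 0.002 * (dy - dy.mean()) + rng.normal(0, 0.01, size=34)

r, p = stats.pearsonr(dy, fa)
print(f"r = {r:.3f}, p = {p:.3g}")
```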
Affiliation(s)
- Zhixia Cui: Weifang Mental Health Center, Weifang, Shandong, China
- Qing Zhang: Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, China
- Jing Lou: Beijing Normal University, Beijing, China
- Yuan Lin: First Clinical Department, Dalian Medical University, Dalian, China
- Yueji Sun: Department of Psychiatry and Behavioral Sciences, Dalian Medical University, Dalian, China
5. Childress A, Lou M. Illness Narratives in Popular Music: An Untapped Resource for Medical Education. J Med Humanit 2023; 44:533-552. PMID: 37566168; DOI: 10.1007/s10912-023-09813-1.
Abstract
Illness narratives convey a person's feelings, thoughts, beliefs, and descriptions of suffering and healing as a result of physical or mental breakdown. Recognized genres include fiction, nonfiction, poetry, plays, and films. Like poets and playwrights, musicians also use their life experiences as fodder for their art. However, illness narratives as expressed through popular music are an understudied and underutilized source of insights into the experience of suffering, healing, and coping with illness, disease, and death. Greater attention to the value of music within medical education is needed to improve students' perspective-taking and communication. Like reading a good book, songs that resonate with listeners speak to shared experiences or invite them into a universe of possibilities that they had not yet imagined. In this article, we show how uncovering these themes in popular music might be integrated into medical education, thus creating a space for reflection on the nature and meaning of illness and the fragility of the human condition. We describe three kinds of illness narratives that may be found in popular music (autobiographical, biographical, and metaphorical) and show how developing skills of close listening through exposure to these narrative forms can improve patient-physician communication and expand students' moral imaginations.
Affiliation(s)
- Andrew Childress: Humanities Expression and Arts Lab, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Monica Lou: Department of Medicine, Baylor College of Medicine, Houston, TX, USA
6. Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nat Rev Psychol 2023; 2:333-346. PMID: 38143935; PMCID: PMC10745197; DOI: 10.1038/s44159-023-00182-z.
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh: Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr: Yale Child Study Center, Yale University, New Haven, CT, USA; School of Psychology, University of Auckland, Auckland, New Zealand
7. Cui W, Wang S, Chen B, Fan G. White matter structural network alterations in congenital bilateral profound sensorineural hearing loss children: A graph theory analysis. Hear Res 2022; 422:108521. PMID: 35660126; DOI: 10.1016/j.heares.2022.108521.
Abstract
Functional magnetic resonance imaging (fMRI) studies have revealed functional reorganization in patients with sensorineural hearing loss (SNHL), and the structural basis of these functional changes has also been investigated recently. Graph theory analysis brings a new understanding of the structural connectome and topological features in central nervous system diseases. However, little is known about structural network connectome changes in SNHL patients, especially in children. We explored differences in topological organization, rich-club organization, and structural connections between children with congenital bilateral profound SNHL and children with normal hearing under the age of three, using graph theory analysis and probabilistic tractography. Compared with the normal-hearing (NH) group, the SNHL group showed no difference in global or nodal topological parameters. Increased structural connection strength was found in the right cortico-striatal-thalamo-cortical circuit. Decreased cross-hemisphere connections were found between the right precuneus and both the left auditory cortex and left subcortical regions. Rich-club organization analysis found increased local connections in the SNHL group. These results reveal structural reorganization after hearing deprivation in children with congenital bilateral profound SNHL.
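To illustrate the kind of graph-theoretical summary used in such connectome studies, here is a minimal sketch computing global topological parameters and the rich-club coefficient on a toy weighted network; the matrix is random, and published analyses additionally normalize rich-club values against degree-preserving random networks.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

# Toy structural connectome, e.g., streamline-weighted connections between
# 30 parcellated regions from probabilistic tractography (simulated here).
n_nodes = 30
w = rng.random((n_nodes, n_nodes))
w = np.triu(w, 1) + np.triu(w, 1).T   # symmetric, zero diagonal
w[w < 0.7] = 0                        # keep only the strongest edges

G = nx.from_numpy_array(w)

# Global topological parameters commonly compared between groups.
print("global efficiency:", nx.global_efficiency(G))
print("mean clustering:", nx.average_clustering(G, weight="weight"))

# Unnormalized rich-club coefficient per degree level.
print(nx.rich_club_coefficient(G, normalized=False))
```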
Affiliation(s)
- Wenzhuo Cui: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, LN, China
- Shanshan Wang: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, LN, China
- Boyu Chen: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, LN, China
- Guoguang Fan: Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, LN, China
8. Bálint A, Eleőd H, Magyari L, Kis A, Gácsi M. Differences in dogs' event-related potentials in response to human and dog vocal stimuli; a non-invasive study. R Soc Open Sci 2022; 9:211769. PMID: 35401994; PMCID: PMC8984299; DOI: 10.1098/rsos.211769.
Abstract
Recent advances in the field of canine neuro-cognition allow for non-invasive research of brain mechanisms in family dogs. Considering the striking similarities between dog and human (infant) socio-cognition at the behavioural level, both similarities and differences in neural background can be of particular relevance. The current study investigates brain responses of n = 17 family dogs to human and conspecific emotional vocalizations using a fully non-invasive event-related potential (ERP) paradigm. We found that, similarly to humans, dogs show a differential ERP response depending on the species of the caller, demonstrated by a more positive ERP response to human vocalizations compared to dog vocalizations in a time window between 250 and 650 ms after stimulus onset. A later time window, between 800 and 900 ms, also revealed a valence-sensitive ERP response in interaction with the species of the caller. Our results are, to our knowledge, the first ERP evidence of the species sensitivity of vocal neural processing in dogs, along with indications of valence-sensitive processes in later post-stimulus time periods.
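The species effect above is the kind of contrast that reduces, analytically, to comparing mean ERP amplitudes within a latency window across conditions. A minimal sketch on simulated single-channel data follows; the sampling rate, trial counts, and injected effect are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-channel epochs (trials x samples), 1000 Hz sampling,
# spanning -100 to 900 ms around stimulus onset.
sfreq = 1000
times = np.arange(-100, 900) / sfreq              # seconds
human = rng.normal(size=(40, times.size)) + 1.0   # toy "more positive" ERP
dog = rng.normal(size=(40, times.size))

def mean_amplitude(epochs, lo, hi):
    """Mean amplitude across trials within a latency window (seconds)."""
    mask = (times >= lo) & (times <= hi)
    return epochs[:, mask].mean()

# Species contrast in the 250-650 ms window reported in the abstract.
diff = mean_amplitude(human, 0.25, 0.65) - mean_amplitude(dog, 0.25, 0.65)
print(f"human - dog mean amplitude (250-650 ms): {diff:.3f}")
```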
Affiliation(s)
- Anna Bálint: MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary; Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary
- Huba Eleőd: Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary; Doctoral School of Biology, Institute of Biology, ELTE Eötvös Loránd University, Budapest, Hungary
- Lilla Magyari: MTA-ELTE ‘Lendület’ Neuroethology of Communication Research Group, Hungarian Academy of Sciences, ELTE Eötvös Loránd University, Budapest, Hungary; Department of Social Studies, University of Stavanger, Stavanger, Norway
- Anna Kis: Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary; Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
- Márta Gácsi: MTA-ELTE Comparative Ethology Research Group, Budapest, Hungary; Department of Ethology, ELTE Eötvös Loránd University, Budapest, Hungary
9. Bedoya D, Arias P, Rachman L, Liuni M, Canonne C, Goupil L, Aucouturier JJ. Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200396. PMID: 34719254; PMCID: PMC8558776; DOI: 10.1098/rstb.2020.0396.
Abstract
A wealth of theoretical and empirical arguments suggest that music triggers emotional responses by resembling the inflections of expressive vocalizations, but they have done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to the human voice. Here, we take advantage of recently available computational models that allow the simulation of three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this held not only for singing voice with and without musical background, but also for purely instrumental material. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
Affiliation(s)
- D Bedoya: Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- P Arias: Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France; Department of Cognitive Science, Lund University, Lund, Sweden
- L Rachman: Faculty of Medical Sciences, University of Groningen, Groningen, The Netherlands
- M Liuni: Alta Voce SAS, Houilles, France
- C Canonne: Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- L Goupil: BabyDevLab, University of East London, London, UK
- J-J Aucouturier: FEMTO-ST Institute, Université de Bourgogne Franche-Comté/CNRS, Besançon, France
10. Nussbaum C, Schweinberger SR.
Abstract
Links between musicality and vocal emotion perception skills have only recently emerged as a focus of study. Here we review current evidence for or against such links. Based on a systematic literature search, we identified 33 studies that addressed either (a) vocal emotion perception in musicians and nonmusicians, (b) vocal emotion perception in individuals with congenital amusia, (c) the role of individual differences (e.g., musical interests, psychoacoustic abilities), or (d) effects of musical training interventions on both the normal hearing population and cochlear implant users. Overall, the evidence supports a link between musicality and vocal emotion perception abilities. We discuss potential factors moderating the link between emotions and music, and possible directions for future research.
Affiliation(s)
- Christine Nussbaum: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
11. Putkinen V, Nazari-Farsani S, Seppälä K, Karjalainen T, Sun L, Karlsson HK, Hudson M, Heikkilä TT, Hirvonen J, Nummenmaa L. Decoding Music-Evoked Emotions in the Auditory and Motor Cortex. Cereb Cortex 2021; 31:2549-2560. PMID: 33367590; DOI: 10.1093/cercor/bhaa373.
Abstract
Music can induce strong subjective experience of emotions, but it is debated whether these responses engage the same neural circuits as emotions elicited by biologically significant events. We examined the functional neural basis of music-induced emotions in a large sample (n = 102) of subjects who listened to emotionally engaging (happy, sad, fearful, and tender) pieces of instrumental music while their hemodynamic brain activity was measured with functional magnetic resonance imaging (fMRI). Ratings of the four categorical emotions and liking were used to predict hemodynamic responses in general linear model (GLM) analysis of the fMRI data. Multivariate pattern analysis (MVPA) was used to reveal discrete neural signatures of the four categories of music-induced emotions. To map neural circuits governing non-musical emotions, the subjects were scanned while viewing short emotionally evocative film clips. The GLM revealed that most emotions were associated with activity in the auditory, somatosensory, and motor cortices, cingulate gyrus, insula, and precuneus. Fear and liking also engaged the amygdala. In contrast, the film clips strongly activated limbic and cortical regions implicated in emotional processing. MVPA revealed that activity in the auditory cortex and primary motor cortices reliably discriminated the emotion categories. Our results indicate that different music-induced basic emotions have distinct representations in regions supporting auditory processing, motor control, and interoception but do not strongly rely on limbic and medial prefrontal regions critical for emotions with survival value.
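The MVPA step described above amounts to cross-validated classification of emotion labels from voxel patterns within a region. A minimal scikit-learn sketch on simulated data follows; the region, feature counts, and injected signal are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)

# Simulated trial-wise response patterns (e.g., auditory-cortex voxels)
# for four music-induced emotion categories.
n_per_class, n_voxels = 25, 200
labels = np.repeat(["happy", "sad", "fearful", "tender"], n_per_class)
X = rng.normal(size=(labels.size, n_voxels))
X[labels == "happy", :20] += 0.5     # inject weak class information
X[labels == "sad", 20:40] += 0.5

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, labels, cv=5)
print("mean decoding accuracy:", scores.mean(), "(chance = 0.25)")
```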
Affiliation(s)
- Vesa Putkinen: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland
- Sanaz Nazari-Farsani: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland
- Kerttu Seppälä: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland
- Tomi Karjalainen: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland
- Lihua Sun: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland
- Henry K Karlsson: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland
- Matthew Hudson: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland; National College of Ireland, D01 K6W2 Dublin, Ireland
- Timo T Heikkilä: Department of Psychology, University of Turku, FI-20014 Turku, Finland
- Jussi Hirvonen: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland; Department of Radiology, Turku University Hospital, 20520 Turku, Finland
- Lauri Nummenmaa: Turku PET Centre and Turku University Hospital, University of Turku, 20520 Turku, Finland; Department of Psychology, University of Turku, FI-20014 Turku, Finland
12. Frontotemporal dementia, music perception and social cognition share neurobiological circuits: A meta-analysis. Brain Cogn 2021; 148:105660. PMID: 33421942; DOI: 10.1016/j.bandc.2020.105660.
Abstract
Frontotemporal dementia (FTD) is a neurodegenerative disease that presents with profound changes in social cognition. Music might be a sensitive probe for social cognition abilities, but underlying neurobiological substrates are unclear. We performed a meta-analysis of voxel-based morphometry studies in FTD patients and functional MRI studies for music perception and social cognition tasks in cognitively normal controls to identify robust patterns of atrophy (FTD) or activation (music perception or social cognition). Conjunction analyses were performed to identify overlapping brain regions. In total 303 articles were included: 53 for FTD (n = 1153 patients, 42.5% female; 1337 controls, 53.8% female), 28 for music perception (n = 540, 51.8% female) and 222 for social cognition in controls (n = 5664, 50.2% female). We observed considerable overlap in atrophy patterns associated with FTD, and functional activation associated with music perception and social cognition, mostly encompassing the ventral language network. We further observed overlap across all three modalities in mesolimbic, basal forebrain and striatal regions. The results of our meta-analysis suggest that music perception and social cognition share neurobiological circuits that are affected in FTD. This supports the idea that music might be a sensitive probe for social cognition abilities with implications for diagnosis and monitoring.
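The conjunction step described above reduces to a minimum-statistic test: a voxel counts as overlapping only if it survives threshold in every modality map. A toy sketch follows; the grid size and threshold are illustrative assumptions, and real inputs would be the thresholded meta-analytic maps.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy z-maps on a common voxel grid, one per modality (FTD atrophy,
# music perception, social cognition); simulated stand-ins here.
shape = (20, 20, 20)
ftd, music, social = (rng.normal(size=shape) for _ in range(3))

z_thresh = 3.1
# Minimum-statistic conjunction: supra-threshold in all three maps.
conjunction = np.minimum.reduce([ftd, music, social]) > z_thresh
print("voxels overlapping across all three modalities:", int(conjunction.sum()))
```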
13. Ciarrusta J, Dimitrova R, McAlonan G. Early maturation of the social brain: How brain development provides a platform for the acquisition of social-cognitive competence. Prog Brain Res 2020; 254:49-70. PMID: 32859293; DOI: 10.1016/bs.pbr.2020.05.004.
Abstract
Across the last century, psychology has provided considerable insight into social-cognitive competence. Recognizing facial expressions, joint attention, discrimination of cues and experiencing empathy are just a few examples of the social skills humans acquire from birth to adolescence. However, how very early brain maturation provides a platform to support the attainment of highly complex social behavior later in development remains poorly understood. Magnetic resonance imaging provides a safe means to investigate the typical and atypical maturation of brain regions responsible for social cognition as early as the perinatal period. Here, we first review some technical challenges and advances in using functional magnetic resonance imaging with developing infants, and then describe current knowledge on the development of diverse systems associated with social function. We then explain how these characteristics might differ in infants with genetic or environmental risk factors, who are vulnerable to atypical neurodevelopment. Finally, given the rapid early development of systems necessary for social skills, we propose a new framework to investigate sensitive time windows of development during which neural substrates might be more vulnerable to impairment from a genetic or environmental insult.
Affiliation(s)
- Judit Ciarrusta: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom; Sackler Institute for Translational Neurodevelopment and Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Ralica Dimitrova: Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, King's College London, St Thomas' Hospital, London, United Kingdom; Sackler Institute for Translational Neurodevelopment and Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom
- Grainne McAlonan: Sackler Institute for Translational Neurodevelopment and Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, United Kingdom; MRC Centre for Neurodevelopmental Disorders, King's College London, London, United Kingdom; South London and Maudsley NHS Foundation Trust, London, United Kingdom
14. Proverbio AM, Santoni S, Adorni R. ERP Markers of Valence Coding in Emotional Speech Processing. iScience 2020; 23:100933. PMID: 32151976; PMCID: PMC7063241; DOI: 10.1016/j.isci.2020.100933.
Abstract
How is auditory emotional information processed? The study's aim was to compare cerebral responses to emotionally positive and negative spoken phrases matched for structure and content. Twenty participants listened to 198 vocal stimuli while detecting filler phrases containing first names. EEG was recorded from 128 sites. Three event-related potential (ERP) components were quantified and found to be sensitive to emotional valence from 350 ms of latency onward. P450 and late positivity were enhanced by positive content, whereas anterior negativity was larger in response to negative content. A similar set of markers (P300, N400, LP) was found previously for the processing of positive versus negative affective vocalizations, prosody, and music, which suggests a common neural mechanism for extracting the emotional content of auditory information. swLORETA applied to potentials recorded between 350 and 550 ms showed that negative speech activated the right temporo/parietal areas (BA40, BA20/21), whereas positive speech activated the left homologous and inferior frontal areas.
Affiliation(s)
- Alice Mado Proverbio: Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
- Sacha Santoni: Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
- Roberta Adorni: Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
15. Siponkoski ST, Martínez-Molina N, Kuusela L, Laitinen S, Holma M, Ahlfors M, Jordan-Kilkki P, Ala-Kauhaluoma K, Melkas S, Pekkola J, Rodriguez-Fornells A, Laine M, Ylinen A, Rantanen P, Koskinen S, Lipsanen J, Särkämö T. Music Therapy Enhances Executive Functions and Prefrontal Structural Neuroplasticity after Traumatic Brain Injury: Evidence from a Randomized Controlled Trial. J Neurotrauma 2020; 37:618-634. DOI: 10.1089/neu.2019.6413.
Affiliation(s)
- Sini-Tuuli Siponkoski: Department of Psychology and Logopedics, Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Noelia Martínez-Molina: Department of Psychology and Logopedics, Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Linda Kuusela: HUS Medical Imaging Center, Department of Radiology, Helsinki Central University Hospital and University of Helsinki, Helsinki, Finland; Department of Physics, University of Helsinki, Helsinki, Finland
- Milla Holma: Musiikkiterapiaosuuskunta InstruMental (Music Therapy Cooperative InstruMental), Helsinki, Finland
- Katja Ala-Kauhaluoma: Ludus Oy Tutkimus- ja kuntoutuspalvelut (Assessment and Intervention Services), Helsinki, Finland
- Susanna Melkas: Department of Neurology and Brain Injury Outpatient Clinic, Helsinki University Central Hospital, Helsinki, Finland
- Johanna Pekkola: HUS Medical Imaging Center, Department of Radiology, Helsinki Central University Hospital and University of Helsinki, Helsinki, Finland
- Antoni Rodriguez-Fornells: Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, Barcelona, Spain
- Matti Laine: Department of Psychology, Åbo Akademi University, Turku, Finland
- Aarne Ylinen: Department of Neurology and Brain Injury Outpatient Clinic, Helsinki University Central Hospital, Helsinki, Finland; Tampere University Hospital, Tampere, Finland
- Sanna Koskinen: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Jari Lipsanen: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Teppo Särkämö: Department of Psychology and Logopedics, Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
16. Sachs ME, Habibi A, Damasio A, Kaplan JT. Dynamic intersubject neural synchronization reflects affective responses to sad music. Neuroimage 2019; 218:116512. PMID: 31901418; DOI: 10.1016/j.neuroimage.2019.116512.
Abstract
Psychological theories of emotion often highlight the dynamic quality of the affective experience, yet neuroimaging studies of affect have traditionally relied on static stimuli that lack ecological validity. Consequently, the brain regions that represent emotions and feelings as they unfold remain unclear. Recently, dynamic, model-free analytical techniques have been employed with naturalistic stimuli to better capture time-varying patterns of activity in the brain; yet few studies have focused on relating these patterns to changes in subjective feelings. Here, we address this gap, using intersubject correlation and phase synchronization to assess how stimulus-driven changes in brain activity and connectivity are related to two aspects of emotional experience: emotional intensity and enjoyment. During fMRI scanning, healthy volunteers listened to a full-length piece of music selected to induce sadness. After scanning, participants listened to the piece twice while simultaneously rating the intensity of felt sadness or felt enjoyment. Activity in the auditory cortex, insula, and inferior frontal gyrus was significantly synchronized across participants. Synchronization in auditory, visual, and prefrontal regions was significantly greater in participants with higher scores on a subscale of trait empathy related to feeling emotions in response to music. When assessed dynamically, continuous enjoyment ratings positively predicted a moment-to-moment measure of intersubject synchronization in auditory, default mode, and striatal networks, as well as the orbitofrontal cortex, whereas sadness predicted intersubject synchronization in limbic and striatal networks. The results suggest that stimulus-driven patterns of neural communication in emotional processing and high-level cortical regions carry meaningful information about our feelings in response to a naturalistic stimulus.
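Intersubject correlation, the core measure here, is typically computed leave-one-out: each subject's regional time course is correlated with the mean time course of all remaining subjects. A minimal sketch with simulated data follows; subject counts and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated regional fMRI time courses (subjects x timepoints) while all
# subjects hear the same piece of music.
n_subj, n_tp = 20, 300
shared = rng.normal(size=n_tp)                         # stimulus-driven signal
data = shared + rng.normal(scale=2.0, size=(n_subj, n_tp))

def loo_isc(data):
    """Leave-one-out intersubject correlation per subject."""
    iscs = []
    for s in range(data.shape[0]):
        rest = np.delete(data, s, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[s], rest)[0, 1])
    return np.array(iscs)

print("mean ISC:", loo_isc(data).mean())
```

Time-resolved variants compute the same quantity in sliding windows so that it can be regressed against continuous enjoyment or sadness ratings, as in the study.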
Affiliation(s)
- Matthew E Sachs: Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089-2921, USA; Center for Science and Society, Columbia University in the City of New York, 1180 Amsterdam Avenue, New York, NY 10027, USA
- Assal Habibi: Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089-2921, USA
- Antonio Damasio: Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089-2921, USA
- Jonas T Kaplan: Brain and Creativity Institute, University of Southern California, 3620A McClintock Avenue, Los Angeles, CA 90089-2921, USA
17. Proverbio AM, Benedetto F, Guazzone M. Shared neural mechanisms for processing emotions in music and vocalizations. Eur J Neurosci 2019; 51:1987-2007. DOI: 10.1111/ejn.14650.
Affiliation(s)
- Alice Mado Proverbio: Department of Psychology, University of Milano-Bicocca, Milan, Italy; Milan Center for Neuroscience, Milan, Italy
- Francesco Benedetto: Department of Psychology, University of Milano-Bicocca, Milan, Italy; Milan Center for Neuroscience, Milan, Italy
- Martina Guazzone: Department of Psychology, University of Milano-Bicocca, Milan, Italy; Milan Center for Neuroscience, Milan, Italy
18. Kuo PC, Tseng YL, Zilles K, Suen S, Eickhoff SB, Lee JD, Cheng PE, Liou M. Brain dynamics and connectivity networks under natural auditory stimulation. Neuroimage 2019; 202:116042. PMID: 31344485; DOI: 10.1016/j.neuroimage.2019.116042.
Abstract
The analysis of functional magnetic resonance imaging (fMRI) data is challenging when subjects are under exposure to natural sensory stimulation. In this study, a two-stage approach was developed to enable the identification of connectivity networks involved in the processing of information in the brain under natural sensory stimulation. In the first stage, the degree of concordance between the results of inter-subject and intra-subject correlation analyses is assessed statistically. The microstructurally (i.e., cytoarchitectonically) defined brain areas are designated either as concordant, in which the results of both correlation analyses are in agreement, or as discordant, in which one analysis method shows a higher proportion of supra-threshold voxels than the other. In the second stage, connectivity networks are identified using the time courses of supra-threshold voxels in brain areas, contingent upon the classifications derived in the first stage. In an empirical study, fMRI data were collected from 40 young adults (19 males; average age 22.76 ± 3.25 years), who underwent auditory stimulation involving sound clips of human voices and animal vocalizations under two operational conditions (eyes-closed and eyes-open). The operational conditions were designed to assess confounding effects due to auditory instructions or visual perception. The proposed two-stage analysis demonstrated that stress-modulation (affective) and language networks in the limbic and cortical structures, respectively, were engaged during sound stimulation and presented considerable variability among subjects. The network involved in regulating visuomotor control was sensitive to the eyes-open instruction and presented only small variations among subjects. A high degree of concordance was observed between the two analyses in the primary auditory cortex, which was highly sensitive to the pitch of the sound clips. Our results indicate that brain areas can be identified as concordant or discordant based on the two correlation analyses, which may further facilitate the search for connectivity networks involved in the processing of information under natural sensory stimulation.
Affiliation(s)
- Po-Chih Kuo: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Yi-Li Tseng: Department of Electrical Engineering, Fu Jen Catholic University, New Taipei City, Taiwan
- Karl Zilles: Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Summit Suen: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Simon B Eickhoff: Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Juin-Der Lee: Graduate Institute of Business Administration, National Chengchi University, Taipei, Taiwan
- Philip E Cheng: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Michelle Liou: Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
19. Pralus A, Fornoni L, Bouet R, Gomot M, Bhatara A, Tillmann B, Caclin A. Emotional prosody in congenital amusia: Impaired and spared processes. Neuropsychologia 2019; 134:107234. DOI: 10.1016/j.neuropsychologia.2019.107234.
20. Manno FAM, Lau C, Fernandez-Ruiz J, Manno SHC, Cheng SH, Barrios FA. The human amygdala disconnecting from auditory cortex preferentially discriminates musical sound of uncertain emotion by altering hemispheric weighting. Sci Rep 2019; 9:14787. PMID: 31615998; PMCID: PMC6794305; DOI: 10.1038/s41598-019-50042-1.
Abstract
How do humans discriminate emotion from non-emotion? The specific psychophysical cues and neural responses involved in resolving emotional information in sound are unknown. In this study we used a discrimination psychophysical-fMRI sparse sampling paradigm to locate threshold responses to happy and sad acoustic stimuli. The fine structure and envelope of auditory signals were covaried to vary emotional certainty. We report that emotion identification at threshold in music utilizes fine structure cues. The auditory cortex was activated but did not vary with emotional uncertainty. Amygdala activation was modulated by emotion identification and was absent when emotional stimuli were identifiable only at chance, especially in the left hemisphere. The right amygdala was considerably more deactivated in response to uncertain emotion. The threshold of emotion was signified by right amygdala deactivation together with a change in left amygdala activation exceeding that of the right amygdala. Functional sex differences were noted during binaural presentations of uncertain emotional stimuli, where the right amygdala showed larger activation in females. Negative control (silent stimuli) experiments investigated sparse sampling of silence to ensure that modulation effects were inherent to emotional resolvability. No functional modulation of Heschl's gyrus occurred during silence; however, during rest the amygdala baseline state was asymmetrically lateralized. The evidence indicates that changing patterns of activation and deactivation between the left and right amygdala are a hallmark feature of discriminating emotion from non-emotion in music.
Affiliation(s)
- Francis A M Manno: School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, New South Wales, Australia; Department of Physics, City University of Hong Kong, HKSAR, China
- Condon Lau: Department of Physics, City University of Hong Kong, HKSAR, China
- Juan Fernandez-Ruiz: Departamento de Fisiología, Facultad de Medicina, Universidad Nacional Autónoma de México, México City, 04510, Mexico
- Shuk Han Cheng: Department of Biomedical Sciences, City University of Hong Kong, HKSAR, China
- Fernando A Barrios: Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, Mexico
21. Di Mauro M, Toffalini E, Grassi M, Petrini K. Effect of Long-Term Music Training on Emotion Perception From Drumming Improvisation. Front Psychol 2018; 9:2168. PMID: 30473677; PMCID: PMC6237981; DOI: 10.3389/fpsyg.2018.02168.
Abstract
Long-term music training has been shown to affect different cognitive and perceptual abilities. However, it is less well known whether it can also affect the perception of emotion from music, especially purely rhythmic music. Hence, we asked a group of 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive or negative), and the category of emotion perceived from 96 drumming improvisation clips (audio-only, video-only, and audio-video) that varied in several musical features (e.g., musical genre, tempo, complexity, drummer's expressiveness, and drummer's style). Our results show that the level and type of music training influence the perceived expressiveness, valence, and emotion from solo drumming improvisation. Overall, non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the music performance; for example, musicians (with and without drumming experience) gave greater weight to the visual performance than non-musicians when making their emotional judgments. These findings suggest that besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.
Affiliation(s)
- Martina Di Mauro: Department of General Psychology, University of Padua, Padua, Italy
- Enrico Toffalini: Department of General Psychology, University of Padua, Padua, Italy
- Massimo Grassi: Department of General Psychology, University of Padua, Padua, Italy
- Karin Petrini: Department of Psychology, University of Bath, Bath, United Kingdom
22. Akkermans J, Schapiro R, Müllensiefen D, Jakubowski K, Shanahan D, Baker D, Busch V, Lothwesen K, Elvers P, Fischinger T, Schlemmer K, Frieler K. Decoding emotions in expressive music performances: A multi-lab replication and extension study. Cogn Emot 2018; 33:1099-1118. PMID: 30409082; DOI: 10.1080/02699931.2018.1541312.
Abstract
With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers' abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though related studies have been published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g., happy, sad, angry) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated how well each emotion matched the emotional quality using a 0-10 scale. The same instruments from the original study (violin, voice, and flute) were used, with the addition of piano. To increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Results showed overall high decoding accuracy (57%) when emotion ratings were aggregated across the sample of participants, similar to the method of analysis in the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised linear mixed-effects regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.
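The two accuracy figures above come from two different scoring schemes. A minimal sketch of the participant-level scheme, in which a trial counts as decoded correctly when the intended emotion receives the highest rating, follows; the rating data below are simulated, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)

n_emotions, n_melodies = 7, 3
n_trials = n_emotions * n_melodies        # 21 recordings per instrument

# Intended emotion index per recording, and one listener's 0-10 ratings
# on each of the seven emotion scales (simulated with a bias toward the
# intended emotion).
intended = np.tile(np.arange(n_emotions), n_melodies)
ratings = rng.integers(0, 11, size=(n_trials, n_emotions)).astype(float)
ratings[np.arange(n_trials), intended] += 4

# Participant-level decoding accuracy: intended emotion rated highest.
correct = ratings.argmax(axis=1) == intended
print("decoding accuracy:", correct.mean())
```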
Affiliation(s)
- Jessica Akkermans: Department of Psychology, Goldsmiths, University of London, London, UK
- Renee Schapiro: Department of Psychology, Goldsmiths, University of London, London, UK
- Daniel Shanahan: College of Humanities and Social Sciences, Louisiana State University, Baton Rouge, LA, USA
- David Baker: College of Humanities and Social Sciences, Louisiana State University, Baton Rouge, LA, USA
- Veronika Busch: Department of Musicology and Music Education, University of Bremen, Bremen, Germany
- Kai Lothwesen: Department of Musicology and Music Education, University of Bremen, Bremen, Germany
- Paul Elvers: Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Timo Fischinger: Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Kathrin Schlemmer: Music Department, Catholic University of Eichstätt-Ingolstadt, Eichstätt, Germany
- Klaus Frieler: Institute for Musicology, University of Music "Franz Liszt" Weimar, Weimar, Germany
23. Liang B, Du Y. The Functional Neuroanatomy of Lexical Tone Perception: An Activation Likelihood Estimation Meta-Analysis. Front Neurosci 2018; 12:495. PMID: 30087589; PMCID: PMC6066585; DOI: 10.3389/fnins.2018.00495.
Abstract
In a tonal language such as Chinese, lexical tone serves as a phonemic feature in determining word meaning; at the same time, it is close to prosody in terms of suprasegmental pitch variations and larynx-based articulation. The important yet mixed nature of lexical tone has evoked considerable study, but no consensus has been reached on its functional neuroanatomy. This meta-analysis aimed to uncover the neural network of lexical tone perception in comparison with those of phoneme and prosody in a unified framework. Independent activation likelihood estimation meta-analyses were conducted for different linguistic elements: lexical tone perceived by native tonal-language speakers, lexical tone perceived by non-tonal-language speakers, phoneme, word-level prosody, and sentence-level prosody. Results showed that lexical tone and prosody studies demonstrated more extensive activations in the right than the left auditory cortex, whereas the opposite pattern was found for phoneme studies. Only tonal-language speakers consistently recruited the left anterior superior temporal gyrus (STG) for processing lexical tone, an area implicated in phoneme processing and word-form recognition. Moreover, an anterior-lateral to posterior-medial gradient of activation as a function of element timescale was revealed in the right STG, in which the activation for lexical tone lay between that for phoneme and that for prosody. Another topological pattern was shown in the left precentral gyrus (preCG), with the activation for lexical tone overlapping that for prosody but ventral to that for phoneme. These findings provide evidence that the neural network for lexical tone perception is hybrid with those for phoneme and prosody. That is, resembling prosody, lexical tone perception, regardless of language experience, involved the right auditory cortex, with activation localized between sites engaged by phonemic and prosodic processing, suggesting a hierarchical organization of representations in the right auditory cortex. For tonal-language speakers, lexical tone additionally engaged the left STG lexical mapping network, consistent with a phonemic representation. Similarly, when processing lexical tone, only tonal-language speakers engaged the left preCG site implicated in prosody perception, consistent with tonal-language speakers having stronger articulatory representations for lexical tone in the laryngeal sensorimotor network. A dynamic dual-stream model for lexical tone perception was proposed and discussed.
Affiliation(s)
- Baishen Liang: CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yi Du: CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
24. The right touch: Stroking of CT-innervated skin promotes vocal emotion processing. Cogn Affect Behav Neurosci 2018; 17:1129-1140. PMID: 28933047; PMCID: PMC5709431; DOI: 10.3758/s13415-017-0537-5.
Abstract
Research has revealed a special mechanoreceptor, called C-tactile (CT) afferent, that is situated in hairy skin and that seems relevant for the processing of social touch. We pursued a possible role of this receptor in the perception of other social signals such as a person’s voice. Participants completed three sessions in which they heard surprised and neutral vocal and nonvocal sounds and detected rare sound repetitions. In a given session, participants received no touch or soft brushstrokes to the arm (CT innervated) or palm (CT free). Event-related potentials elicited to sounds revealed that stroking to the arm facilitated the integration of vocal and emotional information. The late positive potential was greater for surprised vocal relative to neutral vocal and nonvocal sounds, and this effect was greater for arm touch relative to both palm touch and no touch. Together, these results indicate that stroking to the arm facilitates the allocation of processing resources to emotional voices, thus supporting the possibility that CT stimulation benefits social perception cross-modally.
Collapse
|
25
|
Sachs ME, Habibi A, Damasio A, Kaplan JT. Decoding the neural signatures of emotions expressed through sound. Neuroimage 2018; 174:1-10. [DOI: 10.1016/j.neuroimage.2018.02.058] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 02/23/2018] [Accepted: 02/27/2018] [Indexed: 12/15/2022] Open
|
26
|
Paquette S, Takerkart S, Saget S, Peretz I, Belin P. Cross-classification of musical and vocal emotions in the auditory cortex. Ann N Y Acad Sci 2018; 1423:329-337. [PMID: 29741242 DOI: 10.1111/nyas.13666] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 02/05/2018] [Accepted: 02/13/2018] [Indexed: 12/17/2022]
Abstract
Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated. Yet neuroimaging studies do not provide a clear picture, mainly due to a lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains (the Montreal Affective Voices and the Musical Emotional Bursts), which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing are performed within the same timbre category. More importantly, classifier performance generalized well across timbres in cross-classification schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, possibly with a cost for the voice due to its evolutionary significance.
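The cross-timbre scheme can be sketched in a few lines: train a linear classifier on emotion labels using patterns from one timbre and test it on patterns from another. The simulated "voxel" patterns below are placeholders built under the assumption of a shared emotion code; they are not the study's data.

import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 500
emotions = rng.integers(0, 3, n_trials)       # 0=happy, 1=fear, 2=sad

# Simulated auditory-cortex patterns sharing an emotion code across timbres
code = rng.normal(0.0, 1.0, (3, n_voxels))
voice = code[emotions] + rng.normal(0.0, 2.0, (n_trials, n_voxels))
violin = code[emotions] + rng.normal(0.0, 2.0, (n_trials, n_voxels))

clf = LinearSVC(max_iter=10000).fit(voice, emotions)   # train within the voice timbre
pred = clf.predict(violin)                             # test across the timbre boundary
print("cross-timbre accuracy:", accuracy_score(emotions, pred))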
Collapse
Affiliation(s)
- Sébastien Paquette
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
| | - Sylvain Takerkart
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
| | - Shinji Saget
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
| | - Isabelle Peretz
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
| | - Pascal Belin
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| |
Collapse
|
27
|
Schirmer A, Gunter TC. Temporal signatures of processing voiceness and emotion in sound. Soc Cogn Affect Neurosci 2018; 12:902-909. [PMID: 28338796 PMCID: PMC5472162 DOI: 10.1093/scan/nsx020] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2016] [Accepted: 02/07/2017] [Indexed: 12/22/2022] Open
Abstract
This study explored the temporal course of vocal and emotional sound processing. Participants detected rare repetitions in a stimulus stream comprising neutral and surprised non-verbal exclamations and spectrally rotated control sounds. Spectral rotation preserved some acoustic and emotional properties of the vocal originals. Event-related potentials elicited by unrepeated sounds revealed effects of voiceness and emotion. Relative to non-vocal sounds, vocal sounds elicited a larger centro-parietally distributed N1. This effect was followed by greater positivity to vocal relative to non-vocal sounds beginning with the P2 and extending throughout the recording epoch (N4, late positive potential), with larger amplitudes in female than in male listeners. Emotion effects overlapped with the voiceness effects but were smaller and differed topographically. Voiceness and emotion interacted only for the late positive potential, which was greater for vocal-emotional as compared with all other sounds. Taken together, these results point to a multi-stage process in which voiceness and emotionality are represented independently before being integrated in a manner that biases responses to stimuli with socio-emotional relevance.
Collapse
Affiliation(s)
- Annett Schirmer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Psychology, Chinese University of Hong Kong, Hong Kong
| | - Thomas C Gunter
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
28
|
Cespedes-Guevara J, Eerola T. Music Communicates Affects, Not Basic Emotions - A Constructionist Account of Attribution of Emotional Meanings to Music. Front Psychol 2018; 9:215. [PMID: 29541041 PMCID: PMC5836201 DOI: 10.3389/fpsyg.2018.00215] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Accepted: 02/08/2018] [Indexed: 12/24/2022] Open
Abstract
Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that musical expressivity is constrained to a limited set of basic emotions. Several scholars have suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perceiving emotions in music arises from the interaction of music's ability to express core affects with the influence of top-down and contextual information in the listener's mind. We start by reviewing the problems with the concept of Basic Emotions and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings that musical expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a more parsimonious explanation, in which musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how listeners' reliable identification of basic emotions in music arises not from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as the use of stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal for a constructionist account of the perception of emotions in music, and spell out the ways in which this approach can resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be vital to move beyond the popular Basic Emotion paradigm and start untangling the emergence of emotional experiences with music in the actual contexts in which they occur.
Collapse
Affiliation(s)
| | - Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
| |
Collapse
|
29
|
Koelsch S, Skouras S, Lohmann G. The auditory cortex hosts network nodes influential for emotion processing: An fMRI study on music-evoked fear and joy. PLoS One 2018; 13:e0190057. [PMID: 29385142 PMCID: PMC5791961 DOI: 10.1371/journal.pone.0190057] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Accepted: 12/07/2017] [Indexed: 01/12/2023] Open
Abstract
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions.
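Eigenvector centrality, the graph measure used above, scores each node by the corresponding entry of the leading eigenvector of the connectivity matrix, so that nodes connected to other influential nodes score highly. A minimal sketch on an invented five-node matrix:

import numpy as np

# Symmetric, non-negative connectivity matrix (e.g., thresholded correlations; invented)
W = np.array([[0.0, 0.8, 0.2, 0.0, 0.1],
              [0.8, 0.0, 0.5, 0.3, 0.0],
              [0.2, 0.5, 0.0, 0.6, 0.4],
              [0.0, 0.3, 0.6, 0.0, 0.7],
              [0.1, 0.0, 0.4, 0.7, 0.0]])

vals, vecs = np.linalg.eigh(W)                 # eigendecomposition of a symmetric matrix
centrality = np.abs(vecs[:, np.argmax(vals)])  # leading eigenvector, sign-corrected
centrality /= centrality.sum()                 # normalize for readability
print("eigenvector centrality per node:", centrality.round(3))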
Collapse
Affiliation(s)
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
| | - Stavros Skouras
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
| | - Gabriele Lohmann
- Department of Biomedical Magnetic Resonance, University Clinic Tübingen, Tübingen, Germany
- Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| |
Collapse
|
30
|
Risk of depression enhances auditory pitch discrimination in the brain as indexed by the mismatch negativity. Clin Neurophysiol 2017; 128:1923-1936. [DOI: 10.1016/j.clinph.2017.07.004] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Revised: 06/08/2017] [Accepted: 07/01/2017] [Indexed: 11/19/2022]
|
31
|
Bravo F, Cross I, Hawkins S, Gonzalez N, Docampo J, Bruno C, Stamatakis EA. Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus. Neuropsychologia 2017; 102:144-162. [PMID: 28602997 DOI: 10.1016/j.neuropsychologia.2017.05.029] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2016] [Revised: 05/24/2017] [Accepted: 05/31/2017] [Indexed: 01/03/2023]
Abstract
We frequently infer others' intentions from non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance levels in terms of positive/negative valence. While consistent with previous studies proposing facilitated encoding of consonance, our results demonstrated that distinct levels of consonance/dissonance exerted differential influences on the right angular gyrus, an area implicated in mental state attribution and attention-reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the Autism-Spectrum Quotient (AQ) were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs.
Collapse
Affiliation(s)
- Fernando Bravo
- University of Cambridge, Centre for Music and Science, Cambridge, UK; TU Dresden, Institut für Kunst- und Musikwissenschaft (E.A.R.S.), Dresden, Germany.
| | - Ian Cross
- University of Cambridge, Centre for Music and Science, Cambridge, UK
| | - Sarah Hawkins
- University of Cambridge, Centre for Music and Science, Cambridge, UK
| | - Nadia Gonzalez
- Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
| | - Jorge Docampo
- Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
| | - Claudio Bruno
- Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
| | | |
Collapse
|
32
|
Abstract
The specific efficacy of antipsychotics on negative symptoms is questionable, suggesting an urgent need for specific treatments for negative symptoms. This review includes studies published since 2014 with a primary or secondary focus on treating negative symptoms in schizophrenia. Special emphasis is given to recently published meta-analyses. Topics include novel pharmacological approaches, including glutamatergic-based and nicotinic-acetylcholinergic treatments, treatments approved for other indications by the US FDA (or other regulatory bodies) (antipsychotics, antidepressants, and mood stabilizers), brain stimulation, and behavioral- and activity-based approaches, including physical exercise. Potential complications regarding the design of current negative symptom trials are discussed and include inconsistent placebo effects, lack of reliable biomarkers, negative symptom scale and inclusion criteria variability, attempts to distinguish between primary and secondary negative symptoms, lack of focus on early psychosis, and the potential iatrogenic bias of clinical trials.
Collapse
Affiliation(s)
- Joshua T Kantrowitz
- Schizophrenia Research Center, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, 10962, USA; Division of Experimental Therapeutics, Department of Psychiatry, Columbia University, New York, NY, 10032, USA; New York State Psychiatric Institute, 1051 Riverside Drive, New York, NY, 10023, USA.
| |
Collapse
|
33
|
Schirmer A, Adolphs R. Emotion Perception from Face, Voice, and Touch: Comparisons and Convergence. Trends Cogn Sci 2017; 21:216-228. [PMID: 28173998 DOI: 10.1016/j.tics.2017.01.001] [Citation(s) in RCA: 140] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2016] [Revised: 12/23/2016] [Accepted: 01/03/2017] [Indexed: 11/30/2022]
Abstract
Historically, research on emotion perception has focused on facial expressions, and findings from this modality have come to dominate our thinking about other modalities. Here we examine emotion perception through a wider lens by comparing facial with vocal and tactile processing. We review stimulus characteristics and ensuing behavioral and brain responses and show that audition and touch do not simply duplicate visual mechanisms. Each modality provides a distinct input channel and engages partly nonoverlapping neuroanatomical systems with different processing specializations (e.g., specific emotions versus affect). Moreover, processing of signals across the different modalities converges, first into multi- and later into amodal representations that enable holistic emotion judgments.
Collapse
Affiliation(s)
- Annett Schirmer
- Chinese University of Hong Kong, Hong Kong; Max Planck Institute for Human Cognitive and Brain Sciences, Germany; National University of Singapore, Singapore.
| | - Ralph Adolphs
- California Institute of Technology, Pasadena, CA, USA.
| |
Collapse
|
34
|
Schirmer A, Meck WH, Penney TB. The Socio-Temporal Brain: Connecting People in Time. Trends Cogn Sci 2016; 20:760-772. [DOI: 10.1016/j.tics.2016.08.002] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2016] [Revised: 08/05/2016] [Accepted: 08/08/2016] [Indexed: 10/21/2022]
|
35
|
Brauer J, Xiao Y, Poulain T, Friederici AD, Schirmer A. Frequency of Maternal Touch Predicts Resting Activity and Connectivity of the Developing Social Brain. Cereb Cortex 2016; 26:3544-52. [PMID: 27230216 PMCID: PMC4961023 DOI: 10.1093/cercor/bhw137] [Citation(s) in RCA: 82] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022] Open
Abstract
Previous behavioral research points to a positive relationship between maternal touch and early social development. Here, we explored the brain correlates of this relationship. The frequency of maternal touch was recorded for 43 five-year-old children during a 10 min standardized play session. Additionally, all children completed a resting-state functional magnetic resonance imaging session. Investigating the default mode network revealed a positive relation between the frequency of maternal touch and activity in the right posterior superior temporal sulcus (pSTS) extending into the temporo-parietal junction. Using this effect as a seed in a functional connectivity analysis identified a network including extended bilateral regions along the temporal lobe, bilateral frontal cortex, and left insula. Compared with children with low maternal touch, children with high maternal touch showed additional connectivity with the right dorso-medial prefrontal cortex. Together these results support the notion that childhood tactile experiences shape the developing "social brain" with a particular emphasis on a network involved in mentalizing.
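The connectivity analysis described above follows the familiar seed-based logic: correlate a seed region's time course with the time course of every other voxel. A schematic version with simulated signals follows; the region labels and dimensions are placeholders, not the study's data.

import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_voxels = 200, 1000
data = rng.normal(0.0, 1.0, (n_timepoints, n_voxels))  # resting-state BOLD (fake)
seed = data[:, :10].mean(axis=1)                       # pretend voxels 0-9 are the pSTS seed

# Pearson correlation of the seed with each voxel, via standardized signals
z = (data - data.mean(axis=0)) / data.std(axis=0)
zs = (seed - seed.mean()) / seed.std()
conn_map = zs @ z / n_timepoints                       # one r value per voxel
print("strongest seed-voxel coupling r =", round(float(conn_map.max()), 2))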
Collapse
Affiliation(s)
- Jens Brauer
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Yaqiong Xiao
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Tanja Poulain
- LIFE Research Center, University of Leipzig, Leipzig, Germany
| | - Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Annett Schirmer
- Department of Psychology and LSI Neurobiology/Ageing Programme, National University of Singapore, Singapore; Duke/NUS Graduate Medical School, Singapore
| |
Collapse
|
36
|
The sound of emotions - Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782 DOI: 10.1016/j.neubiorev.2016.05.002] [Citation(s) in RCA: 117] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Revised: 05/01/2016] [Accepted: 05/04/2016] [Indexed: 12/15/2022]
Abstract
Affective sounds are an integral part of the natural and social environment, shaping and influencing behavior across a multitude of species. In humans, these affective sounds span a repertoire of environmental sounds as well as our own vocalizations and music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience of these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of emotional meaning from a wide range of sounds, rather than the traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.
Collapse
|
37
|
Korb S, Frühholz S, Grandjean D. Reappraising the voices of wrath. Soc Cogn Affect Neurosci 2015; 10:1644-60. [PMID: 25964502 PMCID: PMC4666101 DOI: 10.1093/scan/nsv051] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2014] [Revised: 04/08/2015] [Accepted: 05/07/2015] [Indexed: 11/12/2022] Open
Abstract
Cognitive reappraisal recruits prefrontal and parietal cortical areas. Because past research has relied almost exclusively on visual stimuli to elicit emotions, it is unknown whether the same neural substrates underlie the reappraisal of emotions induced through other sensory modalities. Here, participants reappraised their emotions in order to increase or decrease their emotional response to angry prosody, or maintained their attention to it in a control condition. Neural activity was monitored with fMRI, and connectivity was investigated using psychophysiological interaction (PPI) analyses. A right-sided network encompassing the superior temporal gyrus, the superior temporal sulcus and the inferior frontal gyrus was found to underlie the processing of angry prosody. During reappraisal to increase the emotional response, the left superior frontal gyrus showed increased activity and became functionally coupled to right auditory cortices. During reappraisal to decrease the emotional response, a network that included the medial frontal gyrus and posterior parietal areas showed increased activation and greater functional connectivity with bilateral auditory regions. Activations pertaining to this network were more extended in the right hemisphere. Although directionality cannot be inferred from PPI analyses, the findings suggest a similar frontoparietal network for the reappraisal of visually and auditorily induced negative emotions.
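For readers unfamiliar with PPI, the core of the method is an interaction regressor: the element-wise product of a (centered) task regressor and a seed time course, entered into a regression alongside the main effects. Proper PPI pipelines also deconvolve the seed signal to the neural level before forming the product; that step is omitted in this simulated sketch.

import numpy as np

rng = np.random.default_rng(3)
n = 300
psych = np.repeat([0.0, 1.0], n // 2)       # task regressor: 0=maintain, 1=reappraise
physio = rng.normal(0.0, 1.0, n)            # seed (e.g., auditory) time course (fake)

psych -= psych.mean()                       # center the psychological regressor
ppi = psych * physio                        # the interaction term

design = np.column_stack([psych, physio, ppi, np.ones(n)])
target = rng.normal(0.0, 1.0, n)            # a target voxel time course (fake)
beta, *_ = np.linalg.lstsq(design, target, rcond=None)
print("PPI beta:", round(float(beta[2]), 3))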
Collapse
Affiliation(s)
- Sebastian Korb
- International School for Advanced Studies (SISSA), Trieste, Italy
| | - Sascha Frühholz
- Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland
| | - Didier Grandjean
- Swiss Center for Affective Sciences, Geneva, Switzerland, and Department of Psychology and Educational Sciences, University of Geneva, Switzerland
| |
Collapse
|
38
|
Weisgerber A, Vermeulen N, Peretz I, Samson S, Philippot P, Maurage P, De Graeuwe D'Aoust C, De Jaegere A, Delatte B, Gillain B, De Longueville X, Constant E. Facial, vocal and musical emotion recognition is altered in paranoid schizophrenic patients. Psychiatry Res 2015. [PMID: 26210647 DOI: 10.1016/j.psychres.2015.07.042] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Disturbed processing of emotional faces and voices is typically observed in schizophrenia. This deficit leads to impaired social cognition and interactions. In this study, we investigated whether impaired processing of emotions also extends to musical stimuli, which are widely present in daily life and known for their emotional impact. Thirty schizophrenic patients and 30 matched healthy controls evaluated the emotional content of musical, vocal and facial stimuli. Schizophrenic patients were less accurate than healthy controls in recognizing emotion in music, voices and faces. Our results confirm impaired recognition of emotion in voice and face stimuli in schizophrenic patients and extend this observation to the recognition of emotion in musical stimuli.
Collapse
Affiliation(s)
- Anne Weisgerber
- Université catholique de Louvain (UCLouvain), Psychological Sciences Research Institute (IPSY), Louvain-la-Neuve, Belgium; National Research Fund (FNR), Luxembourg.
| | - Nicolas Vermeulen
- Université catholique de Louvain (UCLouvain), Psychological Sciences Research Institute (IPSY), Louvain-la-Neuve, Belgium; Fund for Scientific Research (F.R.S.-FNRS), Belgium
| | - Isabelle Peretz
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Université de Montréal, Canada
| | - Séverine Samson
- Neuropsychology and Auditory Cognition, University Lille-Nord de France, France
| | - Pierre Philippot
- Université catholique de Louvain (UCLouvain), Psychological Sciences Research Institute (IPSY), Louvain-la-Neuve, Belgium
| | - Pierre Maurage
- Université catholique de Louvain (UCLouvain), Psychological Sciences Research Institute (IPSY), Louvain-la-Neuve, Belgium; Fund for Scientific Research (F.R.S.-FNRS), Belgium
| | - Catherine De Graeuwe D'Aoust
- Université catholique de Louvain (UCLouvain), Psychological Sciences Research Institute (IPSY), Louvain-la-Neuve, Belgium
| | - Aline De Jaegere
- Department of Adult Psychiatry, Université catholique de Louvain (UCLouvain), Institute of Neurosciences (IoNS), 1200 Brussels, Belgium
| | | | | | | | - Eric Constant
- Department of Adult Psychiatry, Université catholique de Louvain (UCLouvain), Institute of Neurosciences (IoNS), 1200 Brussels, Belgium.
| |
Collapse
|
39
|
Clark CN, Downey LE, Warren JD. Brain disorders and the biological role of music. Soc Cogn Affect Neurosci 2015; 10:444-52. [PMID: 24847111 PMCID: PMC4350491 DOI: 10.1093/scan/nsu079] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2013] [Revised: 03/07/2014] [Accepted: 05/14/2014] [Indexed: 12/16/2022] Open
Abstract
Despite its evident universality and high social value, the ultimate biological role of music and its connection to brain disorders remain poorly understood. Recent findings from basic neuroscience have shed fresh light on these old problems. New insights provided by clinical neuroscience concerning the effects of brain disorders promise to be particularly valuable in uncovering the underlying cognitive and neural architecture of music and for assessing candidate accounts of the biological role of music. Here we advance a new model of the biological role of music in human evolution and the link to brain disorders, drawing on diverse lines of evidence derived from comparative ethology, cognitive neuropsychology and neuroimaging studies in the normal and the disordered brain. We propose that music evolved from the call signals of our hominid ancestors as a means mentally to rehearse and predict potentially costly, affectively laden social routines in surrogate, coded, low-cost form: essentially, a mechanism for transforming emotional mental states efficiently and adaptively into social signals. This biological role of music has its legacy today in the disordered processing of music and mental states that characterizes certain developmental and acquired clinical syndromes of brain network disintegration.
Collapse
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
| | - Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
| |
Collapse
|
40
|
Park M, Gutyrchik E, Welker L, Carl P, Pöppel E, Zaytseva Y, Meindl T, Blautzik J, Reiser M, Bao Y. Sadness is unique: neural processing of emotions in speech prosody in musicians and non-musicians. Front Hum Neurosci 2015; 8:1049. [PMID: 25688196 PMCID: PMC4311618 DOI: 10.3389/fnhum.2014.01049] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2014] [Accepted: 12/15/2014] [Indexed: 01/30/2023] Open
Abstract
Musical training has been shown to have positive effects on several aspects of speech processing; however, the effects of musical training on the neural processing of speech prosody conveying distinct emotions are not yet well understood. We used functional magnetic resonance imaging (fMRI) to investigate whether the neural responses to speech prosody conveying happiness, sadness, and fear differ between musicians and non-musicians. Differences in the processing of emotional speech prosody between the two groups were observed only when sadness was expressed. Musicians showed increased activation in the middle frontal gyrus, the anterior medial prefrontal cortex, the posterior cingulate cortex and the retrosplenial cortex. Our results suggest an increased sensitivity of emotional processing in musicians with respect to sadness expressed in speech, possibly reflecting empathic processes.
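At its simplest, the group comparison implied above reduces to a two-sample test on per-subject contrast estimates (for example, sad prosody > neutral) at a region of interest; the values and group means below are simulated, not the study's.

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
musicians = rng.normal(0.6, 0.5, 20)        # contrast estimates, arbitrary units
non_musicians = rng.normal(0.2, 0.5, 20)

t, p = stats.ttest_ind(musicians, non_musicians)
print(f"t = {t:.2f}, p = {p:.4f}")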
Collapse
Affiliation(s)
- Mona Park
- Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany
| | - Evgeny Gutyrchik
- Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany
| | - Lorenz Welker
- Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Institute of Musicology, Ludwig-Maximilians-Universität Munich, Germany
| | - Petra Carl
- Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany
| | - Ernst Pöppel
- Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany; Department of Psychology and Key Laboratory of Machine Perception (MoE), Peking University, Beijing, China; Institute of Psychology, Chinese Academy of Sciences, Beijing, China
| | - Yuliya Zaytseva
- Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany; Moscow Research Institute of Psychiatry, Moscow, Russia; Prague Psychiatric Centre, 3rd Faculty of Medicine, Charles University in Prague, Prague, Czech Republic
| | - Thomas Meindl
- Institute of Clinical Radiology, Ludwig-Maximilians-Universität Munich, Germany
| | - Janusch Blautzik
- Institute of Clinical Radiology, Ludwig-Maximilians-Universität Munich, Germany
| | - Maximilian Reiser
- Institute of Clinical Radiology, Ludwig-Maximilians-Universität Munich, Germany
| | - Yan Bao
- Institute of Medical Psychology, Ludwig-Maximilians-Universität Munich, Germany; Human Science Center, Ludwig-Maximilians-Universität Munich, Germany; Parmenides Center for Art and Science, Pullach, Germany; Department of Psychology and Key Laboratory of Machine Perception (MoE), Peking University, Beijing, China
| |
Collapse
|
41
|
|
42
|
Pinheiro AP, Vasconcelos M, Dias M, Arrais N, Gonçalves ÓF. The music of language: an ERP investigation of the effects of musical training on emotional prosody processing. Brain Lang 2015; 140:24-34. [PMID: 25461917 DOI: 10.1016/j.bandl.2014.10.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Revised: 09/30/2014] [Accepted: 10/22/2014] [Indexed: 06/04/2023]
Abstract
Recent studies have demonstrated positive effects of musical training on the perception of vocally expressed emotion. This study investigated the effects of musical training on event-related potential (ERP) correlates of emotional prosody processing. Fourteen musicians and fourteen control subjects listened to 228 sentences with neutral semantic content that differed in prosody (one third with neutral, one third with happy and one third with angry intonation) and were presented with either intelligible semantic content (semantic content condition, SCC) or unintelligible semantic content (pure prosody condition, PPC). Reduced P50 amplitude was found in musicians. A difference between the SCC and PPC conditions was found in P50 and N100 amplitude in non-musicians only, and in P200 amplitude in musicians only. Furthermore, musicians were more accurate in recognizing angry prosody in PPC sentences. These findings suggest that the auditory expertise acquired through extensive musical training may affect different stages of vocal emotional processing.
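A sketch of how component mean amplitudes such as the P50, N100, and P200 are read off an averaged ERP waveform; the time windows below are conventional approximations rather than the study's exact definitions, and the waveform is simulated.

import numpy as np

sfreq = 500
times = np.arange(-0.1, 0.5, 1 / sfreq)
erp = np.random.default_rng(5).normal(0.0, 1.0, times.size)  # grand average (fake)

windows = {"P50": (0.04, 0.06), "N100": (0.08, 0.12), "P200": (0.15, 0.25)}
for name, (lo, hi) in windows.items():
    amp = erp[(times >= lo) & (times <= hi)].mean()
    print(f"{name} mean amplitude: {amp:.3f} (arbitrary units)")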
Collapse
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA.
| | - Margarida Vasconcelos
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
| | - Marcelo Dias
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
| | - Nuno Arrais
- Music Department, Institute of Arts and Human Sciences, University of Minho, Braga, Portugal
| | - Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital and Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
43
|
Frühholz S, Trost W, Grandjean D. The role of the medial temporal limbic system in processing emotions in voice and music. Prog Neurobiol 2014; 123:1-17. [PMID: 25291405 DOI: 10.1016/j.pneurobio.2014.09.003] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2014] [Revised: 09/16/2014] [Accepted: 09/29/2014] [Indexed: 01/15/2023]
Abstract
Subcortical brain structures of the limbic system, such as the amygdala, are thought to decode the emotional value of sensory information. Recent neuroimaging studies, as well as lesion studies in patients, have shown that the amygdala is sensitive to emotions in voice and music. Similarly, the hippocampus, another part of the temporal limbic system (TLS), is responsive to vocal and musical emotions, but its specific roles in emotional processing from music and especially from voices have been largely neglected. Here we review recent research on vocal and musical emotions, and outline commonalities and differences in the neural processing of emotions in the TLS in terms of emotional valence, emotional intensity and arousal, as well as in terms of acoustic and structural features of voices and music. We summarize the findings in a neural framework including several subcortical and cortical functional pathways between the auditory system and the TLS. This framework proposes that some vocal expressions might already receive a fast emotional evaluation via a subcortical pathway to the amygdala, whereas cortical pathways to the TLS are thought to be equally used for vocal and musical emotions. While the amygdala might be specifically involved in a coarse decoding of the emotional value of voices and music, the hippocampus might process more complex vocal and musical emotions, and might have an important role especially for the decoding of musical emotions by providing memory-based and contextual associations.
Collapse
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland.
| | - Wiebke Trost
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| | - Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
| |
Collapse
|
44
|
Kantrowitz JT, Scaramello N, Jakubovitz A, Lehrfeld JM, Laukka P, Elfenbein HA, Silipo G, Javitt DC. Amusia and protolanguage impairments in schizophrenia. Psychol Med 2014; 44:2739-2748. [PMID: 25066878 PMCID: PMC5373691 DOI: 10.1017/s0033291714000373] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
BACKGROUND Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. METHOD Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder, and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. RESULTS Patients showed highly significant deficits relative to controls across auditory tasks (p < 0.001). Moreover, significant differences in AER were seen between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. DISCUSSION This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was taken into account, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general, and melodic discrimination ability in particular, may be crucial targets for treatment development and cognitive remediation in schizophrenia.
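The covariate-adjusted association reported above (melody-domain scores predicting AER after controlling for group status and education) amounts to a multiple regression; all variables, coefficients, and the sample size below are invented for illustration.

import numpy as np

rng = np.random.default_rng(6)
n = 75
melody = rng.normal(0.0, 1.0, n)                 # MBEA melody-domain score (z-scored)
group = rng.integers(0, 2, n).astype(float)      # 0=control, 1=patient
education = rng.normal(0.0, 1.0, n)
aer = 0.5 * melody - 0.3 * group + rng.normal(0.0, 1.0, n)  # simulated AER score

X = np.column_stack([np.ones(n), melody, group, education])
beta, *_ = np.linalg.lstsq(X, aer, rcond=None)
print("effect of melody on AER, adjusted for group and education:", round(float(beta[1]), 3))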
Collapse
Affiliation(s)
- J. T. Kantrowitz
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
| | - N. Scaramello
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - A. Jakubovitz
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - J. M. Lehrfeld
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - P. Laukka
- Department of Psychology, Stockholm University, Sweden
| | - H. A. Elfenbein
- Olin Business School, Washington University, St Louis, MO, USA
| | - G. Silipo
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
| | - D. C. Javitt
- Schizophrenia Research Center, Nathan Kline Institute, Orangeburg, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
| |
Collapse
|
45
|
Aubé W, Angulo-Perkins A, Peretz I, Concha L, Armony JL. Fear across the senses: brain responses to music, vocalizations and facial expressions. Soc Cogn Affect Neurosci 2014; 10:399-407. [PMID: 24795437 DOI: 10.1093/scan/nsu067] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means of expressing emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing 'biologically relevant' emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related functional magnetic resonance imaging study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, non-linguistic vocalizations and short novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant blood oxygen level-dependent signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions might be shared with the circuitry that evolved for vocalizations. Overall, our results show that the processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information.
Collapse
Affiliation(s)
- William Aubé
- International Laboratory for Brain, Music and Sound Research (BRAMS) H2V 4P3, Centre for Research on Brain, Language and Music (CRBLM) H3G 2A8, Department of Psychology, Université de Montréal, Montreal, Canada H2V 2S9, Universidad Nacional Autónoma de México, Queretaro, Mexico C.P. 76230 and Department of Psychiatry and Douglas Mental Health University Institute, McGill University, Montreal, Canada H4H 1R3
| | - Arafat Angulo-Perkins
- International Laboratory for Brain, Music and Sound Research (BRAMS) H2V 4P3, Centre for Research on Brain, Language and Music (CRBLM) H3G 2A8, Department of Psychology, Université de Montréal, Montreal, Canada H2V 2S9, Universidad Nacional Autónoma de México, Queretaro, Mexico C.P. 76230 and Department of Psychiatry and Douglas Mental Health University Institute, McGill University, Montreal, Canada H4H 1R3
| | - Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research (BRAMS) H2V 4P3, Centre for Research on Brain, Language and Music (CRBLM) H3G 2A8, Department of Psychology, Université de Montréal, Montreal, Canada H2V 2S9, Universidad Nacional Autónoma de México, Queretaro, Mexico C.P. 76230 and Department of Psychiatry and Douglas Mental Health University Institute, McGill University, Montreal, Canada H4H 1R3
| | - Luis Concha
- International Laboratory for Brain, Music and Sound Research (BRAMS) H2V 4P3, Centre for Research on Brain, Language and Music (CRBLM) H3G 2A8, Department of Psychology, Université de Montréal, Montreal, Canada H2V 2S9, Universidad Nacional Autónoma de México, Queretaro, Mexico C.P. 76230 and Department of Psychiatry and Douglas Mental Health University Institute, McGill University, Montreal, Canada H4H 1R3
| | - Jorge L Armony
- International Laboratory for Brain, Music and Sound Research (BRAMS) H2V 4P3, Centre for Research on Brain, Language and Music (CRBLM) H3G 2A8, Department of Psychology, Université de Montréal, Montreal, Canada H2V 2S9, Universidad Nacional Autónoma de México, Queretaro, Mexico C.P. 76230 and Department of Psychiatry and Douglas Mental Health University Institute, McGill University, Montreal, Canada H4H 1R3
| |
Collapse
|
46
|
Särkämö T, Ripollés P, Vepsäläinen H, Autti T, Silvennoinen HM, Salli E, Laitinen S, Forsblom A, Soinila S, Rodríguez-Fornells A. Structural changes induced by daily music listening in the recovering brain after middle cerebral artery stroke: a voxel-based morphometry study. Front Hum Neurosci 2014; 8:245. [PMID: 24860466 PMCID: PMC4029020 DOI: 10.3389/fnhum.2014.00245] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Accepted: 04/03/2014] [Indexed: 12/28/2022] Open
Abstract
Music is a highly complex and versatile stimulus for the brain that engages many temporal, frontal, parietal, cerebellar, and subcortical areas involved in auditory, cognitive, emotional, and motor processing. Regular musical activities have been shown to effectively enhance the structure and function of many brain areas, making music a potential tool also in neurological rehabilitation. In our previous randomized controlled study, we found that listening to music on a daily basis can enhance cognitive recovery and improve mood after an acute middle cerebral artery stroke. Extending this study, a voxel-based morphometry (VBM) analysis utilizing cost function masking was performed on the acute and 6-month post-stroke stage structural magnetic resonance imaging data of the patients (n = 49) who either listened to their favorite music [music group (MG), n = 16] or verbal material [audio book group (ABG), n = 18] or did not receive any listening material [control group (CG), n = 15] during the 6-month recovery period. Although all groups showed significant gray matter volume (GMV) increases from the acute to the 6-month stage, there was a specific network of frontal areas [left and right superior frontal gyrus (SFG), right medial SFG] and limbic areas [left ventral/subgenual anterior cingulate cortex (SACC) and right ventral striatum (VS)] in patients with left hemisphere damage in which the GMV increases were larger in the MG than in the ABG and the CG. Moreover, the GM reorganization in the frontal areas correlated with enhanced recovery of verbal memory, focused attention, and language skills, whereas the GM reorganization in the SACC correlated with reduced negative mood. This study extends these previous results by showing that music listening after stroke not only enhances behavioral recovery, but also induces fine-grained neuroanatomical changes in the recovering brain.
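Stripped to its core, the group comparison of gray-matter-volume change described above is a one-way ANOVA on per-subject GMV increases in a region of interest across the three listening groups. The values below are simulated; only the group sizes are taken from the abstract.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
mg = rng.normal(0.8, 0.3, 16)    # music group: 6-month GMV change (arbitrary units)
abg = rng.normal(0.5, 0.3, 18)   # audio book group
cg = rng.normal(0.5, 0.3, 15)    # control group

f, p = stats.f_oneway(mg, abg, cg)
print(f"F = {f:.2f}, p = {p:.4f}")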
Collapse
Affiliation(s)
- Teppo Särkämö
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre of Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
| | - Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, University of Barcelona, Barcelona, Spain
| | - Henna Vepsäläinen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
| | - Taina Autti
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
| | - Heli M Silvennoinen
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
| | - Eero Salli
- Department of Radiology, HUS Medical Imaging Center, Helsinki University Central Hospital, University of Helsinki, Helsinki, Finland
| | | | - Anita Forsblom
- Department of Music, University of Jyväskylä, Jyväskylä, Finland
| | - Seppo Soinila
- Department of Neurology, Turku University Hospital, Turku, Finland
| | - Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, University of Barcelona, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| |
Collapse
|
47
|
Schirmer A, Reece C, Zhao C, Ng E, Wu E, Yen SC. Reach out to one and you reach out to many: Social touch affects third-party observers. Br J Psychol 2014; 106:107-32. [DOI: 10.1111/bjop.12068] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2013] [Revised: 01/21/2014] [Indexed: 11/30/2022]
Affiliation(s)
- Annett Schirmer
- Department of Psychology, National University of Singapore, Singapore
- Duke/NUS Graduate Medical School, Singapore
- LSI Neurobiology/Ageing Programme, National University of Singapore, Singapore
| | - Christy Reece
- Department of Psychology, National University of Singapore, Singapore
| | - Claris Zhao
- Department of Psychology, National University of Singapore, Singapore
| | - Erik Ng
- Department of Psychology, National University of Singapore, Singapore
| | - Esther Wu
- Department of Psychology, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
| | - Shih-Cheng Yen
- LSI Neurobiology/Ageing Programme, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
| |
Collapse
|
48
|
Fang J, Hu X, Han J, Jiang X, Zhu D, Guo L, Liu T. Data-driven analysis of functional brain interactions during free listening to music and speech. Brain Imaging Behav 2014; 9:162-77. [PMID: 24526569 DOI: 10.1007/s11682-014-9293-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Natural stimulus functional magnetic resonance imaging (N-fMRI), i.e., fMRI acquired while participants watch video streams or listen to audio streams, has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses measured with N-fMRI via the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of functional interactions that are both consistent and discriminative across multiple subjects listening to music and speech of multiple categories. The underlying premise is that functional interactions derived from the N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems, including attention, memory, auditory/language, emotion, and action networks, are among those most relevant to differentiating classical music, pop music and speech. Our study provides an alternative approach to investigating the human brain's mechanisms for comprehending complex natural music and speech.
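A minimal illustration of classifying listening conditions from functional-interaction features in the spirit of the approach above: vectorize the upper triangle of each subject's connectivity matrix and cross-validate a linear classifier on music versus speech labels. The simulated matrices stand in for the paper's network-scale features.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_subjects, n_nodes, n_timepoints = 60, 30, 100
labels = np.repeat([0, 1], n_subjects // 2)      # 0=music, 1=speech
iu = np.triu_indices(n_nodes, k=1)               # unique connections only

def fc_features(label):
    ts = rng.normal(0.0, 1.0, (n_nodes, n_timepoints))
    if label:                                    # speech condition: add a shared drive
        ts = ts + 0.5 * rng.normal(0.0, 1.0, n_timepoints)
    return np.corrcoef(ts)[iu]                   # vectorized connectivity matrix

feats = np.array([fc_features(lab) for lab in labels])
acc = cross_val_score(LinearSVC(max_iter=10000), feats, labels, cv=5).mean()
print("cross-validated music/speech accuracy:", round(float(acc), 2))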
Collapse
Affiliation(s)
- Jun Fang
- School of Automation, Northwestern Polytechnical University, Xi'an, China
| | | | | | | | | | | | | |
Collapse
|
49
|
Mattei TA, Rodriguez AH, Bassuner J. Selective impairment of emotion recognition through music in Parkinson's disease: does it suggest the existence of different networks for music and speech prosody processing? Front Neurosci 2013; 7:161. [PMID: 24062634 PMCID: PMC3771238 DOI: 10.3389/fnins.2013.00161] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2013] [Accepted: 08/20/2013] [Indexed: 11/13/2022] Open
Affiliation(s)
- Tobias A Mattei
- Neurosurgery Department, Ohio State University, Columbus, OH, USA
| | | | | |
Collapse
|
50
|
Juslin PN. What does music express? Basic emotions and beyond. Front Psychol 2013; 4:596. [PMID: 24046758 PMCID: PMC3764399 DOI: 10.3389/fpsyg.2013.00596] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Accepted: 08/16/2013] [Indexed: 11/19/2022] Open
Abstract
Numerous studies have investigated whether music can reliably convey emotions to listeners, and—if so—what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of “multiple layers” of musical expression of emotions. The “core” layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this “core” layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions—though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.
Collapse
Affiliation(s)
- Patrik N Juslin
- Department of Psychology, Uppsala University, Uppsala, Sweden
| |
Collapse
|