1
Bravo F, Glogowski J, Stamatakis EA, Herfert K. Dissonant music engages early visual processing. Proc Natl Acad Sci U S A 2024; 121:e2320378121. PMID: 39008675; DOI: 10.1073/pnas.2320378121.
Abstract
The neuroscientific examination of music processing in audio-visual contexts offers a valuable framework for assessing how auditory information influences the emotional encoding of visual information. Using fMRI during naturalistic film viewing, we investigated the neural mechanisms underlying the effect of music on valence inferences during mental state attribution. Thirty-eight participants watched the same short film accompanied by systematically controlled consonant or dissonant music. Subjects were instructed to think about the main character's intentions. The results revealed that increasing levels of dissonance led to more negatively valenced inferences, demonstrating the profound emotional impact of musical dissonance. Crucially, despite music being the sole manipulation, dissonance evoked a response in the primary visual cortex (V1). Functional/effective connectivity analysis showed stronger coupling between the auditory ventral stream (AVS) and V1 in response to tonal dissonance and demonstrated modulation of early visual processing via top-down feedback inputs from the AVS to V1. These V1 signal changes indicate that high-level contextual representations associated with tonal dissonance influence early visual cortices, serving to facilitate the emotional interpretation of visual information. Our results highlight the value of employing systematically controlled music, which can isolate emotional valence from the arousal dimension, for elucidating the brain's sound-to-meaning interface and its distributed crossmodal effects on early visual encoding during naturalistic film viewing.
Affiliation(s)
- Fernando Bravo
- Department of Preclinical Imaging and Radiopharmacy, University of Tübingen, Tübingen 72076, Germany
- Cognition and Consciousness Imaging Group, Division of Anaesthesia, Department of Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge CB2 0SP, United Kingdom
- Department of Clinical Neurosciences, University of Cambridge, Addenbrooke's Hospital, Cambridge CB2 0SP, United Kingdom
- Institut für Kunst- und Musikwissenschaft, Division of Musicology, Technische Universität Dresden, Dresden 01219, Germany
- Jana Glogowski
- Department of Psychology, Humboldt-Universität zu Berlin, Berlin 12489, Germany
- Emmanuel Andreas Stamatakis
- Cognition and Consciousness Imaging Group, Division of Anaesthesia, Department of Medicine, University of Cambridge, Addenbrooke's Hospital, Cambridge CB2 0SP, United Kingdom
- Department of Clinical Neurosciences, University of Cambridge, Addenbrooke's Hospital, Cambridge CB2 0SP, United Kingdom
- Kristina Herfert
- Department of Preclinical Imaging and Radiopharmacy, University of Tübingen, Tübingen 72076, Germany
2
Tsai CG, Fu YF, Li CW. Prediction errors arising from switches between major and minor modes in music: An fMRI study. Brain Cogn 2023; 169:105987. PMID: 37126951; DOI: 10.1016/j.bandc.2023.105987.
Abstract
The major and minor modes in Western music have positive and negative connotations, respectively. The present fMRI study examined listeners' neural responses to switches between major and minor modes. We manipulated the final chords of J. S. Bach's keyboard pieces so that each major-mode passage ended with either the major (Major-Major) or minor (Major-Minor) tonic chord, and each minor-mode passage ended with either the minor (Minor-Minor) or major (Minor-Major) tonic chord. If the final major and minor chords have positive and negative reward values respectively, the Major-Minor and Minor-Major stimuli would cause negative and positive reward prediction errors (RPEs) respectively in a listener's brain. We found that activity in a frontoparietal network was significantly higher for Major-Minor than for Major-Major. Based on previous research, these results support the idea that a major-to-minor switch causes negative RPE. The contrast of Minor-Major minus Minor-Minor yielded activation in the ventral insula and visual cortex, speaking against the idea that a minor-to-major switch causes positive RPE. We discuss our results in relation to executive functions and the emotional connotations of major versus minor modes.
Affiliation(s)
- Chen-Gia Tsai
- Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
- Yi-Fan Fu
- Department of Bio-Industry Communication and Development, National Taiwan University, Taipei, Taiwan
- Chia-Wei Li
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan.
3
Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. PMID: 34311153; DOI: 10.1016/j.cognition.2021.104847.
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. Whereas linguistic syntax preferentially engages the abstract rule-based control circuit, musical syntax instead employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany.
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
4
Cook ND. The Triadic Roots of Human Cognition: "Mind" Is the Ability to go Beyond Dyadic Associations. Front Psychol 2018; 9:1060. PMID: 30038590; PMCID: PMC6046464; DOI: 10.3389/fpsyg.2018.01060.
Abstract
Empirical evidence is reviewed indicating that the extraordinary aspects of the human mind are due to our species' ability to go beyond simple "dyadic associations" and to process the relations among three items of information simultaneously. Classic explanations of the "triadic" nature of human skills have been advocated by various scholars in the context of the evolution of human cognition. Here I summarize the core processes as found in (i) the syntax of language, (ii) tool-usage, and (iii) joint attention. I then review the triadic foundations of two perceptual phenomena of great importance in human aesthetics: (iv) harmony perception and (v) pictorial depth perception. In all five subfields of human psychology, most previous work has emphasized the recursive, hierarchical complexity of such "higher cognition," but a strongly reductionist approach indicates that the core mechanisms are triadic. It is concluded that the cognitive skills traditionally considered to be "uniquely" human require three-way associational processing that most non-primate animal species find difficult or impossible, but that all members of Homo sapiens, regardless of small cultural differences, find easy and inherently intriguing.
Affiliation(s)
- Norman D. Cook
- Department of Informatics, Kansai University, Osaka, Japan
5
Clark CN, Golden HL, McCallion O, Nicholas JM, Cohen MH, Slattery CF, Paterson RW, Fletcher PD, Mummery CJ, Rohrer JD, Crutch SJ, Warren JD. Music models aberrant rule decoding and reward valuation in dementia. Soc Cogn Affect Neurosci 2018; 13:192-202. PMID: 29186630; PMCID: PMC5827340; DOI: 10.1093/scan/nsx140.
Abstract
Aberrant rule- and reward-based processes underpin abnormalities of socio-emotional behaviour in major dementias. However, these processes remain poorly characterized. Here we used music to probe rule decoding and reward valuation in patients with frontotemporal dementia (FTD) syndromes and Alzheimer's disease (AD) relative to healthy age-matched individuals. We created short melodies that were either harmonically resolved ('finished') or unresolved ('unfinished'); the task was to classify each melody as finished or unfinished (rule processing) and rate its subjective pleasantness (reward valuation). Results were adjusted for elementary pitch and executive processing; neuroanatomical correlates were assessed using voxel-based morphometry. Relative to healthy older controls, patients with behavioural variant FTD showed impairments of both musical rule decoding and reward valuation; patients with semantic dementia showed impaired reward valuation but intact rule decoding; patients with AD showed impaired rule decoding but intact reward valuation; and patients with progressive non-fluent aphasia performed comparably to healthy controls. Grey matter associations with task performance were identified in anterior temporal, medial and lateral orbitofrontal cortices, previously implicated in computing diverse biological and non-biological rules and rewards. The processing of musical rules and reward distils cognitive and neuroanatomical mechanisms relevant to complex socio-emotional dysfunction in major dementias.
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Oliver McCallion
- Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK
- Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- London School of Hygiene and Tropical Medicine, University of London, London, UK
- Miriam H Cohen
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Catherine F Slattery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Ross W Paterson
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Phillip D Fletcher
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jonathan D Rohrer
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Sebastian J Crutch
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
6
Abstract
The foundations of human music have long puzzled philosophers, mathematicians, psychologists, and neuroscientists. Although virtually all cultures use combinations of tones as a basis for musical expression, why humans favor some tone combinations over others has been debated for millennia. Here we show that our attraction to specific tone combinations played simultaneously (chords) is predicted by their spectral similarity to voiced speech sounds. This connection between auditory aesthetics and a primary characteristic of vocalization adds to other evidence that tonal preferences arise from the biological advantages of social communication mediated by speech and language. Musical chords are combinations of two or more tones played together. While many different chords are used in music, some are heard as more attractive (consonant) than others. We have previously suggested that, for reasons of biological advantage, human tonal preferences can be understood in terms of the spectral similarity of tone combinations to harmonic human vocalizations. Using the chromatic scale, we tested this theory further by assessing the perceived consonance of all possible dyads, triads, and tetrads within a single octave. Our results show that the consonance of chords is predicted by their relative similarity to voiced speech sounds. These observations support the hypothesis that the relative attraction of musical tone combinations is due, at least in part, to the biological advantages that accrue from recognizing and responding to conspecific vocal stimuli.
7
Cook ND. Calculation of the acoustical properties of triadic harmonies. J Acoust Soc Am 2017; 142:3748. PMID: 29289060; DOI: 10.1121/1.5018342.
Abstract
The author reports that the harmonic "tension" and major/minor "valence" of pitch combinations can be calculated directly from acoustical properties without relying on concepts from traditional harmony theory. The capability to compute the well-known types of harmonic triads means that their perception is not simply a consequence of learning an arbitrary cultural "idiom" handed down from the Italian Renaissance. On the contrary, for typical listeners familiar with diatonic music, attention to certain, definable, acoustical features underlies the perception of the valence (modality) and the inherent tension (instability) of three-tone harmonies.
Affiliation(s)
- Norman D Cook
- Department of Informatics, Kansai University, 2-1 Reizenji, Takatsuki, Osaka, 569-1095, Japan
8
Bravo F, Cross I, Hawkins S, Gonzalez N, Docampo J, Bruno C, Stamatakis EA. Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus. Neuropsychologia 2017; 102:144-162. PMID: 28602997; DOI: 10.1016/j.neuropsychologia.2017.05.029.
Abstract
We frequently infer others' intentions based on non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance levels in terms of positive/negative valence. Whilst consistent with previous studies proposing facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance exerted differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits, measured with the Autism-Spectrum Quotient (AQ), in clinically typical adults. Higher scores on the social cognition scales of the AQ were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs.
Affiliation(s)
- Fernando Bravo
- University of Cambridge, Centre for Music and Science, Cambridge, UK; TU Dresden, Institut für Kunst- und Musikwissenschaft (E.A.R.S.), Dresden, Germany
- Ian Cross
- University of Cambridge, Centre for Music and Science, Cambridge, UK
- Sarah Hawkins
- University of Cambridge, Centre for Music and Science, Cambridge, UK
- Nadia Gonzalez
- Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
- Jorge Docampo
- Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
- Claudio Bruno
- Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
9
Krick CM, Argstatter H, Grapp M, Plinkert PK, Reith W. Heidelberg Neuro-Music Therapy Enhances Task-Negative Activity in Tinnitus Patients. Front Neurosci 2017; 11:384. PMID: 28736515; PMCID: PMC5500649; DOI: 10.3389/fnins.2017.00384.
Abstract
Background: Suffering from tinnitus causes mental distress in most patients. Recent findings point toward diminished activity of the brain's default-mode network (DMN) in subjects with mental disorders, including depression or anxiety, and recently also in subjects with tinnitus-related distress. We recently developed a therapeutic intervention, the Heidelberg Neuro-Music Therapy (HNMT), which effectively reduces tinnitus-related distress following a 1-week short-term treatment. This approach offers the possibility to evaluate the neural changes associated with improvements in tinnitus distress. We previously reported gray matter (GM) reorganization in DMN regions and in primary auditory areas following HNMT in cases of recent-onset tinnitus. Here we evaluate, in the same patient group and using functional MRI (fMRI), the activity of the DMN following the improvements in tinnitus-related distress associated with the HNMT intervention. Methods: DMN activity was estimated from the task-negative activation (TNA) during long inter-trial intervals in a word recognition task. The level of TNA was evaluated twice, before and after the 1-week study period, in 18 treated tinnitus patients ("treatment group," TG), 21 passive tinnitus controls (PTC), and 22 active healthy controls (AC). During the study, the participants in the TG and AC groups were treated with HNMT, whereas PTC patients did not receive any tinnitus-specific treatment. Therapy-related effects on DMN activity were assessed by comparing the pairs of fMRI records from the TG and PTC groups. Results: Treatment of the TG group with HNMT resulted in DMN activity in the posterior cingulate cortex (PCC) augmented by 2.5%, whereas no change was found in the AC and PTC groups. This enhancement of PCC activity correlated with a reduction in tinnitus distress (Spearman Rho: −0.5; p < 0.005).
Conclusion: Our findings show that an increased DMN activity, especially in the PCC, underlies the improvements in tinnitus-related distress triggered by HNMT and identify the DMN as an important network involved in therapeutic improvements.
Affiliation(s)
- Christoph M Krick
- Department for Neuroradiology, Saarland University Hospital, Homburg, Germany
- Heike Argstatter
- German Research Centre for Music Therapy Research, Heidelberg, Germany
- Miriam Grapp
- German Research Centre for Music Therapy Research, Heidelberg, Germany
- Peter K Plinkert
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital for Ear, Nose, and Throat, University of Heidelberg, Heidelberg, Germany
- Wolfgang Reith
- Department for Neuroradiology, Saarland University Hospital, Homburg, Germany
10
Sensory cortical response to uncertainty and low salience during recognition of affective cues in musical intervals. PLoS One 2017; 12:e0175991. PMID: 28422990; PMCID: PMC5396975; DOI: 10.1371/journal.pone.0175991.
Abstract
Previous neuroimaging studies have shown an increased sensory cortical response (i.e., heightened weight on sensory evidence) under higher levels of predictive uncertainty. The signal enhancement theory proposes that attention improves the quality of the stimulus representation, and therefore reduces uncertainty by increasing the gain of the sensory signal. The present study employed functional magnetic resonance imaging (fMRI) to investigate the neural correlates of ambiguous valence inferences signaled by auditory information within an emotion recognition paradigm. Participants categorized sound stimuli of three distinct levels of consonance/dissonance controlled by interval content. Separate behavioural and neuroscientific experiments were conducted. Behavioural results revealed that, compared with the consonance condition (perfect fourths, fifths and octaves) and the strong dissonance condition (minor/major seconds and tritones), the intermediate dissonance condition (minor thirds) was the most ambiguous, least salient and most cognitively demanding category (slowest reaction times). The neuroscientific findings were consistent with a heightened weight on sensory evidence whilst participants were evaluating intermediate dissonances, reflected in an increased neural response of the right Heschl's gyrus. The results support previous studies that have observed enhanced precision of sensory evidence whilst participants attempted to represent and respond to higher degrees of uncertainty, and converge with evidence showing preferential processing of complex spectral information in the right primary auditory cortex. These findings are discussed with respect to music-theoretical concepts and recent Bayesian models of perception, which have proposed that attention may heighten the weight of information coming from sensory channels to stimulate learning about unknown predictive relationships.
11
Hou J, Song B, Chen ACN, Sun C, Zhou J, Zhu H, Beauchaine TP. Review on Neural Correlates of Emotion Regulation and Music: Implications for Emotion Dysregulation. Front Psychol 2017; 8:501. PMID: 28421017; PMCID: PMC5376620; DOI: 10.3389/fpsyg.2017.00501.
Abstract
Previous studies have examined the neural correlates of emotion regulation and the neural changes that are evoked by music exposure. However, the link between music and emotion regulation is poorly understood. The objectives of this review are to (1) synthesize what is known about the neural correlates of emotion regulation and music-evoked emotions, and (2) consider the possibility of therapeutic effects of music on emotion dysregulation. Music-evoked emotions can modulate activities in both cortical and subcortical systems, and across cortical-subcortical networks. Functions within these networks are integral to the generation and regulation of emotions. Since dysfunction in these networks is observed in numerous psychiatric disorders, a better understanding of the neural correlates of music exposure may lead to more systematic and effective use of music therapy in emotion dysregulation.
Affiliation(s)
- Jiancheng Hou
- Center for Educational Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Department of Radiology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, USA
- Bei Song
- Center for Educational Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Music Conservatory of Harbin, Harbin, China
- Andrew C N Chen
- Center for Higher Brain Functions and Institute for Brain Disorders, Capital Medical University, Beijing, China
- Changan Sun
- School of Education and Public Administration, Suzhou University of Science and Technology, Suzhou, China
- Jiaxian Zhou
- Center for Educational Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Haidong Zhu
- Department of Psychology, Shihezi University, Shihezi, China
12
Agustus JL, Mahoney CJ, Downey LE, Omar R, Cohen M, White MJ, Scott SK, Mancini L, Warren JD. Functional MRI of music emotion processing in frontotemporal dementia. Ann N Y Acad Sci 2015; 1337:232-40. PMID: 25773639; PMCID: PMC4402026; DOI: 10.1111/nyas.12620.
Abstract
Frontotemporal dementia is an important neurodegenerative disorder of younger life characterized by profound emotional and social dysfunction. Here we used fMRI to assess brain mechanisms of music emotion processing in a cohort of patients with frontotemporal dementia (n = 15) relative to healthy age-matched individuals (n = 11). In a passive-listening paradigm, we manipulated levels of emotion processing in simple arpeggio chords (mode versus dissonance) and emotion modality (music versus human emotional vocalizations). A complex profile of disease-associated functional alterations was identified with separable signatures of musical mode, emotion level, and emotion modality within a common, distributed brain network, including posterior and anterior superior temporal and inferior frontal cortices and dorsal brainstem effector nuclei. Separable functional signatures were identified post hoc in patients with and without abnormal craving for music (musicophilia): a model for specific abnormal emotional behaviors in frontotemporal dementia. Our findings indicate the potential of music to delineate neural mechanisms of altered emotion processing in dementias, with implications for future disease tracking and therapeutic strategies.
Affiliation(s)
- Jennifer L Agustus
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, United Kingdom
13
14
Thaut MH, Trimarchi PD, Parsons LM. Human brain basis of musical rhythm perception: common and distinct neural substrates for meter, tempo, and pattern. Brain Sci 2014; 4:428-52. PMID: 24961770; PMCID: PMC4101486; DOI: 10.3390/brainsci4020428.
Abstract
Rhythm as the time structure of music is composed of distinct temporal components such as pattern, meter, and tempo. Each feature requires different computational processes: meter involves representing repeating cycles of strong and weak beats; pattern involves representing intervals at each local time point which vary in length across segments and are linked hierarchically; and tempo requires representing frequency rates of underlying pulse structures. We explored whether distinct rhythmic elements engage different neural mechanisms by recording brain activity of adult musicians and non-musicians with positron emission tomography (PET) as they made covert same-different discriminations of (a) pairs of rhythmic, monotonic tone sequences representing changes in pattern, tempo, and meter, and (b) pairs of isochronous melodies. Common to pattern, meter, and tempo tasks were focal activities in right, or bilateral, areas of frontal, cingulate, parietal, prefrontal, temporal, and cerebellar cortices. Meter processing alone activated areas in right prefrontal and inferior frontal cortex associated with more cognitive and abstract representations. Pattern processing alone recruited right cortical areas involved in different kinds of auditory processing. Tempo processing alone engaged mechanisms subserving somatosensory and premotor information (e.g., posterior insula, postcentral gyrus). Melody produced activity different from the rhythm conditions (e.g., right anterior insula and various cerebellar areas). These exploratory findings suggest the outlines of some distinct neural components underlying the components of rhythmic structure.
Affiliation(s)
- Michael H Thaut
- Center for Biomedical Research in Music, Colorado State University, Ft. Collins, CO 80523, USA.
15
Fritz TH, Renders W, Müller K, Schmude P, Leman M, Turner R, Villringer A. Anatomical differences in the human inferior colliculus relate to the perceived valence of musical consonance and dissonance. Eur J Neurosci 2013; 38:3099-105. PMID: 23859464; DOI: 10.1111/ejn.12305.
Abstract
Helmholtz himself speculated about a role of the cochlea in the perception of musical dissonance. Here we indirectly investigated this issue, assessing the valence judgment of musical stimuli with variable consonance/dissonance presented diotically (exactly the same dissonant signal was presented to both ears) or dichotically (a consonant signal was presented to each ear; the two consonant signals were rhythmically identical but differed by a semitone in pitch). Differences in brain organisation underlying inter-subject differences in the percept of dichotically presented dissonance were determined with voxel-based morphometry. Behavioural results showed that diotic dissonant stimuli were perceived as more unpleasant than dichotically presented dissonance, indicating that interactions within the cochlea modulated the valence percept during dissonance. However, the behavioural data also suggested that the dissonance percept did not depend crucially on the cochlea, but also arose from binaural integration when listening to dichotic dissonance. These results also showed substantial between-participant variation in the valence response to dichotic dissonance. In a voxel-based morphometry analysis, these differences were related to differences in gray matter density in the inferior colliculus, strongly substantiating a key role of the inferior colliculus in consonance/dissonance representation in humans.
Affiliation(s)
- Thomas Hans Fritz
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, Leipzig 04103, Germany; Institute for Psychoacoustics and Electronic Music, Ghent, Belgium; Department of Nuclear Medicine, University of Leipzig, Leipzig, Germany
- Wiske Renders
- Institute for Psychoacoustics and Electronic Music, Ghent, Belgium
- Karsten Müller
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, Leipzig 04103, Germany
- Paul Schmude
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, Leipzig 04103, Germany
- Marc Leman
- Institute for Psychoacoustics and Electronic Music, Ghent, Belgium
- Robert Turner
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, Leipzig 04103, Germany
- Arno Villringer
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1A, Leipzig 04103, Germany
16
Bidelman GM. The role of the auditory brainstem in processing musically relevant pitch. Front Psychol 2013; 4:264. [PMID: 23717294 PMCID: PMC3651994 DOI: 10.3389/fpsyg.2013.00264] [Received: 03/13/2013] [Accepted: 04/23/2013]
Abstract
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners' perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice, and their perceptual consonance, is well predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
17
Seger CA, Spiering BJ, Sares AG, Quraini SI, Alpeter C, David J, Thaut MH. Corticostriatal contributions to musical expectancy perception. J Cogn Neurosci 2013; 25:1062-77. [PMID: 23410032 DOI: 10.1162/jocn_a_00371]
Abstract
This study investigates the functional neuroanatomy of harmonic music perception with fMRI. We presented short pieces of Western classical music to nonmusicians. The ending of each piece was systematically manipulated in one of four ways: Standard Cadence (expected resolution), Deceptive Cadence (moderate deviation from expectation), Modulated Cadence (strong deviation from expectation but remaining within the harmonic structure of Western tonal music), and Atonal Cadence (strongest deviation from expectation, leaving the harmonic structure of Western tonal music). Music compared with baseline broadly recruited regions of the bilateral superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG). Parametric regressors scaled to the degree of deviation from harmonic expectancy identified regions sensitive to expectancy violation. Areas within the basal ganglia (BG) were significantly modulated by expectancy violation, indicating a previously unappreciated role in harmonic processing. Expectancy violation also recruited bilateral cortical regions in the IFG and anterior STG, previously associated with syntactic processing in other domains. The posterior STG was not significantly modulated by expectancy. Granger causality mapping found functional connectivity between the IFG, anterior STG, posterior STG, and BG during music perception. Our results imply that the IFG, anterior STG, and BG are recruited for higher-order harmonic processing, whereas the posterior STG is recruited for basic pitch and melodic processing.