1. Ono K, Mizuochi R, Yamamoto K, Sasaoka T, Yamawaki S. Exploring the neural underpinnings of chord prediction uncertainty: an electroencephalography (EEG) study. Sci Rep 2024; 14:4586. PMID: 38403782; PMCID: PMC10894873; DOI: 10.1038/s41598-024-55366-1.
Abstract
Predictive processing in the brain, involving interaction between interoceptive (bodily signal) and exteroceptive (sensory) processing, is essential for understanding music as it encompasses musical temporality dynamics and affective responses. This study explores the relationship between neural correlates and subjective certainty of chord prediction, focusing on the alignment between predicted and actual chord progressions in both musically appropriate chord sequences and random chord sequences. Participants were asked to predict the final chord in sequences while their brain activity was measured using electroencephalography (EEG). We found that the stimulus-preceding negativity (SPN), an EEG component associated with predictive processing of sensory stimuli, was larger for non-harmonic chord sequences than for harmonic chord progressions. Additionally, the heartbeat-evoked potential (HEP), an EEG component related to interoceptive processing, was larger for random chord sequences and correlated with prediction certainty ratings. HEP also correlated with the N5 component, found while listening to the final chord. Our findings suggest that HEP reflects subjective prediction certainty more directly than SPN does. These findings offer new insights into the neural mechanisms underlying music perception and prediction, emphasizing the importance of considering auditory prediction certainty when examining the neural basis of music cognition.
Affiliation(s)
- Kentaro Ono
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Ryohei Mizuochi
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Kazuki Yamamoto
- Graduate School of Humanities and Social Sciences, Hiroshima University, Higashihiroshima, Japan
- Takafumi Sasaoka
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Shigeto Yamawaki
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
2. Albury AW, Bianco R, Gold BP, Penhune VB. Context changes judgments of liking and predictability for melodies. Front Psychol 2023; 14:1175682. PMID: 38034280; PMCID: PMC10684779; DOI: 10.3389/fpsyg.2023.1175682.
Abstract
Predictability plays an important role in the experience of musical pleasure. By leveraging expectations, music induces pleasure through tension and surprise. However, musical predictions draw on both prior knowledge and immediate context. Similarly, musical pleasure, which has been shown to depend on predictability, may also vary relative to the individual and context. Although research has demonstrated that both long-term knowledge and stimulus features influence expectations, it is unclear how perceptions of a melody are influenced by comparisons to other music pieces heard in the same context. To examine the effects of context, we compared how listeners' judgments of two distinct sets of stimuli differed when the sets were presented alone or in combination. Stimuli were excerpts from a repertoire of Western music and a set of experimenter-created melodies. Separate groups of participants rated liking and predictability for each set of stimuli alone and in combination. We found that when heard together, the Repertoire stimuli were more liked and rated as less predictable than when heard alone, with the opposite pattern observed for the Experimental stimuli. This effect was driven by a change in ratings between the Alone and Combined conditions for each stimulus set. These findings demonstrate a context-based shift in predictability ratings and derived pleasure, suggesting that judgments stem not only from the physical properties of the stimulus but also vary relative to other options available in the immediate context.
Affiliation(s)
- Alexander W. Albury
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Roberta Bianco
- Neuroscience of Perception and Action Laboratory, Italian Institute of Technology, Rome, Italy
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Virginia B. Penhune
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
3. Czepiel A, Fink LK, Seibert C, Scharinger M, Kotz SA. Aesthetic and physiological effects of naturalistic multimodal music listening. Cognition 2023; 239:105537. PMID: 37487303; DOI: 10.1016/j.cognition.2023.105537.
Abstract
Compared to audio-only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiments 1 and 2), while peripheral signals (cardiorespiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). Factor scores of AE were significantly higher in the AV condition in both experiments. The LF/HF ratio, a cardiac measure that reflects activation of the sympathetic nervous system, was higher in the AO condition, suggesting increased arousal, likely caused by less predictable sound onsets in the AO condition. We present partial evidence that breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer's movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus ('smiling') muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefits a meaningful assessment of AE in naturalistic music performance settings.
Affiliation(s)
- Anna Czepiel
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Lauren K Fink
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany
- Christoph Seibert
- Institute for Music Informatics and Musicology, University of Music Karlsruhe, Karlsruhe, Germany
- Mathias Scharinger
- Research Group Phonetics, Department of German Linguistics, University of Marburg, Marburg, Germany; Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
4. You S, Sun L, Yang Y. The effects of contextual certainty on tension induction and resolution. Cogn Neurodyn 2023; 17:191-201. PMID: 36704622; PMCID: PMC9871111; DOI: 10.1007/s11571-022-09810-5.
Abstract
Tension is a core principle in the generation of musical emotion and meaning, and it is thought to be induced by prediction during music listening. Using EEG and behavioral ratings, the current research investigated how contextual certainty affects musical tension induction and resolution. The major results were that, in the tension induction process, incongruent conditions elicited a larger early negativity (EN) and P600 in ERP responses compared with congruent conditions, and both the amplitude of the P600 and the tension ratings were mediated by contextual certainty. In the tension resolution process, contextual certainty further affected the duration of the P600 and the tension ratings. For the certain conditions, tension ratings were higher, tension curves fluctuated faster, and a larger P600 was evoked in the incongruent condition compared with the congruent condition. For the uncertain conditions, there was no congruency effect on behavioral ratings or tension curves, but a larger P600 was elicited in the congruent condition. These results show that contextual certainty affects tension induction and resolution. Our findings provide a more comprehensive view of how musical prediction affects musical tension.
Affiliation(s)
- Siqi You
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lijun Sun
- College of Art, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
5. Chabin T, Pazart L, Gabriel D. Vocal melody and musical background are simultaneously processed by the brain for musical predictions. Ann N Y Acad Sci 2022; 1512:126-140. PMID: 35229293; DOI: 10.1111/nyas.14755.
Abstract
Musical pleasure is related to the capacity to predict and anticipate the music. By recording early cerebral responses of 16 participants with electroencephalography during periods of silence inserted in known and unknown songs, we aimed to measure the contribution of different musical attributes to musical predictions. We investigated the mismatch between past encoded musical features and the current sensory inputs when listening to lyrics associated with vocal melody, only background instrumental material, or both attributes grouped together. When participants were listening to chords and lyrics for known songs, the brain responses related to musical violation produced event-related potential responses around 150-200 ms that were of a larger amplitude than for chords or lyrics only. Microstate analysis also revealed that for chords and lyrics, the global field power had increased stability and a longer duration. Source localization identified that the right superior temporal and frontal gyri and the inferior and medial frontal gyri were activated for a longer time for chords and lyrics, likely because of the increased complexity of the stimuli. We conclude that when several musical attributes are grouped together, their broader simultaneous integration and retrieval recruits larger neuronal networks, leading to more accurate predictions.
Affiliation(s)
- Thibault Chabin
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France
- Lionel Pazart
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Bourgogne Franche-Comté, France
- Damien Gabriel
- Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
6. Czepiel A, Fink LK, Fink LT, Wald-Fuhrmann M, Tröndle M, Merrill J. Synchrony in the periphery: inter-subject correlation of physiological responses during live music concerts. Sci Rep 2021; 11:22457. PMID: 34789746; PMCID: PMC8599424; DOI: 10.1038/s41598-021-00492-3.
Abstract
While there is an increasing shift in cognitive science to study perception of naturalistic stimuli, this study extends this goal to naturalistic contexts by assessing physiological synchrony across audience members in a concert setting. Cardiorespiratory, skin conductance, and facial muscle responses were measured from participants attending live string quintet performances of full-length works from Viennese Classical, Contemporary, and Romantic styles. The concert was repeated on three consecutive days with different audiences. Using inter-subject correlation (ISC) to identify reliable responses to music, we found that highly correlated responses depicted typical signatures of physiological arousal. By relating physiological ISC to quantitative values of music features, logistic regressions revealed that high physiological synchrony was consistently predicted by faster tempi (which had higher ratings of arousing emotions and engagement), but only in Classical and Romantic styles (rated as familiar) and not the Contemporary style (rated as unfamiliar). Additionally, highly synchronised responses across all three concert audiences occurred during important structural moments in the music (identified using music-theoretical analysis), namely at transitional passages, boundaries, and phrase repetitions. Overall, our results show that specific music features induce similar physiological responses across audience members in a concert context, which are linked to arousal, engagement, and familiarity.
Affiliation(s)
- Anna Czepiel
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Lauren K Fink
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck - NYU Center for Language, Music, & Emotion (CLaME), New York, USA
- Lea T Fink
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Melanie Wald-Fuhrmann
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck - NYU Center for Language, Music, & Emotion (CLaME), New York, USA
- Julia Merrill
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Institute of Music, University of Kassel, Kassel, Germany
7. Contextual prediction modulates musical tension: evidence from behavioral and neural responses. Brain Cogn 2021; 152:105771. PMID: 34217125; DOI: 10.1016/j.bandc.2021.105771.
Abstract
Tension is a bridge between music structure and emotion. It is known that tension is affected by prediction as music listening unfolds. Combining behavioral and neural responses, the current research investigated how musical predictions influence tension during the build-up of predictions from musical context (anticipatory stage) and their integration with upcoming stimuli (integration stage). The results showed that, at the anticipatory stage, the tension curve in low-prediction conditions changed faster and was more unstable than in high-prediction conditions, and a larger N5 was elicited in the ERP response. Furthermore, at the integration stage, behavioral ratings of tension in incongruent conditions were higher than in congruent conditions regardless of the predictability of the final chord; a right negativity and a P600 were elicited, and the amplitude of the P600 was modulated by the predictability of the final chord. These results indicate that the effect of prediction on tension was modulated by contextual predictability. The findings provide a more comprehensive view of how musical prediction affects musical tension.
8. Zioga I, Harrison PMC, Pearce MT, Bhattacharya J, Luft CDB. Auditory but not audiovisual cues lead to higher neural sensitivity to the statistical regularities of an unfamiliar musical style. J Cogn Neurosci 2020; 32:2241-2259. PMID: 32762519; DOI: 10.1162/jocn_a_01614.
Abstract
It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.
9. Shany O, Singer N, Gold BP, Jacoby N, Tarrasch R, Hendler T, Granot R. Surprise-related activation in the nucleus accumbens interacts with music-induced pleasantness. Soc Cogn Affect Neurosci 2020; 14:459-470. PMID: 30892654; PMCID: PMC6523415; DOI: 10.1093/scan/nsz019.
Abstract
How can music, merely a stream of sounds, be enjoyable for so many people? Recent accounts of this phenomenon are inspired by predictive coding models, hypothesizing that both confirmation and violations of musical expectations associate with the hedonic response to music via recruitment of the mesolimbic system and its connections with the auditory cortex. Here we provide support for this model by revealing associations of music-induced pleasantness with musical surprises in the activity and connectivity patterns of the nucleus accumbens (NAcc), a central component of the mesolimbic system. We examined neurobehavioral responses to surprises in three naturalistic musical pieces using fMRI and subjective ratings of valence and arousal. Surprises were associated with changes in reported valence and arousal, as well as with enhanced activations in the auditory cortex, insula and ventral striatum, relative to unsurprising events. Importantly, we found that surprise-related activation in the NAcc was more pronounced among individuals who experienced greater music-induced pleasantness. These participants also exhibited stronger surprise-related NAcc-auditory cortex connectivity during the most pleasant piece, relative to participants who found the music less pleasant. These findings provide a novel demonstration of a direct link between musical surprises, NAcc activation and music-induced pleasantness.
Affiliation(s)
- Ofir Shany
- Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Neomi Singer
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Benjamin Paul Gold
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC, Canada
- Nori Jacoby
- The Center for Science and Society, Columbia University, New York, NY, USA
- Ricardo Tarrasch
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; School of Education, Tel Aviv University, Tel Aviv, Israel
- Talma Hendler
- Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Roni Granot
- Musicology Department, Hebrew University of Jerusalem, Jerusalem, Israel
10. Bannister S. Distinct varieties of aesthetic chills in response to multimedia. PLoS One 2019; 14:e0224974. PMID: 31725733; PMCID: PMC6855651; DOI: 10.1371/journal.pone.0224974.
Abstract
The experience of aesthetic chills, often defined as a subjective response accompanied by goosebumps, shivers and tingling sensations, is a phenomenon often utilized to indicate moments of peak pleasure and emotional arousal in psychological research. However, little is currently understood about how to conceptualize the experience, particularly in terms of whether chills are general markers of intense pleasure and emotion, or instead a collection of distinct phenomenological experiences. To address this, a web study was designed using images, videos, music videos, texts and music excerpts (from both an online forum dedicated to chills-eliciting stimuli and a previous musical chills study) to explore variations across chills experiences in terms of the bodily and emotional responses reported. Results suggest that across participants (N = 179), three distinct chills categories could be identified: warm chills (chills co-occurring with smiling, warmth, feeling relaxed, stimulated and happy), cold chills (chills co-occurring with frowning, cold, sadness and anger), and moving chills (chills co-occurring with tears, feeling a lump in the throat, emotional intensity, and feelings of affection, tenderness and being moved). Warm chills were linked to stimuli expressing social communion and love; cold chills were elicited by stimuli portraying entities in distress, and support from one to another; moving chills were elicited by most stimuli, but their incidence was also predicted by ratings of trait empathy. Findings are discussed in terms of being moved, the importance of differing induction mechanisms such as shared experience and empathic concern, and the implications of distinct chills categories for both individual differences and inconsistencies in the existing aesthetic chills literature.
Affiliation(s)
- Scott Bannister
- Department of Music, Durham University, Durham, County Durham, England, United Kingdom
11. Omigie D, Pearce M, Lehongre K, Hasboun D, Navarro V, Adam C, Samson S. Intracranial recordings and computational modeling of music reveal the time course of prediction error signaling in frontal and temporal cortices. J Cogn Neurosci 2019; 31:855-873. PMID: 30883293; DOI: 10.1162/jocn_a_01388.
Abstract
Prediction is held to be a fundamental process underpinning perception, action, and cognition. To examine the time course of prediction error signaling, we recorded intracranial EEG activity from nine presurgical epileptic patients while they listened to melodies whose information theoretical predictability had been characterized using a computational model. We examined oscillatory activity in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the pars orbitalis of the inferior frontal gyrus, lateral cortical areas previously implicated in auditory predictive processing. We also examined activity in anterior cingulate gyrus (ACG), insula, and amygdala to determine whether signatures of prediction error signaling may also be observable in these subcortical areas. Our results demonstrate that the information content (a measure of unexpectedness) of musical notes modulates the amplitude of low-frequency oscillatory activity (theta to beta power) in bilateral STG and right MTG from within 100 and 200 msec of note onset, respectively. Our results also show this cortical activity to be accompanied by low-frequency oscillatory modulation in the ACG and insula, areas previously associated with mediating physiological arousal. Finally, we showed that modulation of low-frequency activity is followed by that of high-frequency (gamma) power from approximately 200 msec in the STG, between 300 and 400 msec in the left insula, and between 400 and 500 msec in the ACG. We discuss these results with respect to models of neural processing that emphasize gamma activity as an index of prediction error signaling and highlight the usefulness of musical stimuli in revealing the wide-reaching neural consequences of predictive processing.
Affiliation(s)
- Diana Omigie
- Max Planck Institute for Empirical Aesthetics; Goldsmiths, University of London
- Katia Lehongre
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Vincent Navarro
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Severine Samson
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; University of Lille
12. Musical reward prediction errors engage the nucleus accumbens and motivate learning. Proc Natl Acad Sci U S A 2019; 116:3310-3315. PMID: 30728301; DOI: 10.1073/pnas.1809855116.
Abstract
Enjoying music reliably ranks among life's greatest pleasures. Like many hedonic experiences, it engages several reward-related brain areas, with activity in the nucleus accumbens (NAc) most consistently reflecting the listener's subjective response. Converging evidence suggests that this activity arises from musical "reward prediction errors" (RPEs) that signal the difference between expected and perceived musical events, but this hypothesis has not been directly tested. In the present fMRI experiment, we assessed whether music could elicit formally modeled RPEs in the NAc by applying a well-established decision-making protocol designed and validated for studying RPEs. In the scanner, participants chose between arbitrary cues that probabilistically led to dissonant or consonant music, and learned to make choices associated with the consonance, which they preferred. We modeled regressors of trial-by-trial RPEs, finding that NAc activity tracked musically elicited RPEs, to an extent that explained variance in the individual learning rates. These results demonstrate that music can act as a reward, driving learning and eliciting RPEs in the NAc, a hub of reward- and music enjoyment-related activity.
13. Sears DRW, Pearce MT, Spitzer J, Caplin WE, McAdams S. Expectations for tonal cadences: sensory and cognitive priming effects. Q J Exp Psychol (Hove) 2018; 72:1422-1438. DOI: 10.1177/1747021818814472.
Abstract
Studies examining the formation of melodic and harmonic expectations during music listening have repeatedly demonstrated that a tonal context primes listeners to expect certain (tonally related) continuations over others. However, few such studies have (1) selected stimuli using ready examples of expectancy violation derived from real-world instances of tonal music, (2) provided a consistent account for the influence of sensory and cognitive mechanisms on tonal expectancies by comparing different computational simulations, or (3) combined melodic and harmonic representations in modelling cognitive processes of expectation. To resolve these issues, this study measures expectations for the most recurrent cadence patterns associated with tonal music and then simulates the reported findings using three sensory–cognitive models of auditory expectation. In Experiment 1, participants provided explicit retrospective expectancy ratings both before and after hearing the target melodic tone and chord of the cadential formula. In Experiment 2, participants indicated as quickly as possible whether those target events were in or out of tune relative to the preceding context. Across both experiments, cadences terminating with stable melodic tones and chords elicited the highest expectancy ratings and the fastest and most accurate responses. Moreover, the model simulations supported a cognitive interpretation of tonal processing, in which listeners with exposure to tonal music generate expectations as a consequence of the frequent (co-)occurrence of events on the musical surface.
Affiliation(s)
- David RW Sears
- College of Visual & Performing Arts, Texas Tech University, Lubbock, TX, USA
- McGill University, Montreal, QC, Canada
|
14
|
Pearce MT. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation. Ann N Y Acad Sci 2018; 1423:378-395. [PMID: 29749625 PMCID: PMC6849749 DOI: 10.1111/nyas.13654] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2017] [Revised: 01/31/2018] [Accepted: 02/06/2018] [Indexed: 11/28/2022]
Abstract
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here.
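The two processes this abstract pairs, statistical learning of surface regularities and probabilistic prediction over them, can be illustrated with a toy bigram model. This is a minimal sketch, not the IDyOM implementation (which uses multiple-viewpoint, variable-order models): events are chord labels I invented, and unexpectedness is scored as information content, IC(e) = -log2 P(e | context), the measure IDyOM reports.

```python
from collections import defaultdict
import math

class BigramPredictor:
    """Toy statistical-learning model: first-order context only."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequences):
        # Statistical learning: count event co-occurrences on the musical surface.
        for seq in sequences:
            for prev, cur in zip(seq, seq[1:]):
                self.counts[prev][cur] += 1

    def probability(self, prev, cur):
        # Laplace-smoothed conditional probability P(cur | prev).
        ctx = self.counts[prev]
        vocab = {e for fol in self.counts.values() for e in fol} | set(self.counts)
        return (ctx[cur] + 1) / (sum(ctx.values()) + len(vocab))

    def information_content(self, prev, cur):
        # High IC = improbable continuation = strong expectancy violation.
        return -math.log2(self.probability(prev, cur))

# Usage: in this invented corpus G is usually followed by C (a cadence),
# so a learned model finds G -> C less surprising than G -> F.
model = BigramPredictor()
model.train([["C", "F", "G", "C"], ["C", "G", "C"], ["F", "G", "C"]])
assert model.information_content("G", "C") < model.information_content("G", "F")
```

The point of the sketch is that no syntactic rules are coded anywhere; graded expectations fall out of exposure frequencies alone, which is the enculturation claim in miniature.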
Affiliation(s)
- Marcus T. Pearce
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Centre for Music in the Brain, Aarhus University, Aarhus, Denmark
|
15
|
Applying Acoustical and Musicological Analysis to Detect Brain Responses to Realistic Music: A Case Study. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8050716] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
16
|
Haumann NT, Vuust P, Bertelsen F, Garza-Villarreal EA. Influence of Musical Enculturation on Brain Responses to Metric Deviants. Front Neurosci 2018; 12:218. [PMID: 29720932 PMCID: PMC5915898 DOI: 10.3389/fnins.2018.00218] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 03/19/2018] [Indexed: 11/13/2022] Open
Abstract
The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a “Western group” of listeners (n = 12) mainly exposed to Western music and a “Bicultural group” of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the “Western group” the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the “Bicultural group.” In support of this finding, there was also a trend for the “Western group” to rate omitted beats as more surprising on odd than even metric positions, whereas the “Bicultural group” seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET). Furthermore, source localization analyses suggest that auditory, inferior temporal, sensory-motor, superior frontal, and parahippocampal regions might be involved in eliciting the MMNm to the metric deviants. These findings suggest that effects of music enculturation can be measured on MMNm responses to attenuated tones on specific metric positions.
Collapse
Affiliation(s)
- Niels T Haumann
- Department of Aesthetics and Communication (Musicology), Faculty of Arts, Aarhus University, Aarhus, Denmark.,Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
| | - Peter Vuust
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
| | - Freja Bertelsen
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark.,Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Aarhus, Denmark
| | - Eduardo A Garza-Villarreal
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark.,Clinical Research Division, Instituto Nacional de Psiquiatría Ramón de la Fuente Muñiz (INPRFM), Mexico City, Mexico.,Department of Neurology, Faculty of Medicine and University Hospital, Universidad Autonoma de Nuevo Leon, Monterrey, Mexico
| |
Collapse
|
17
|
Pearce M, Rohrmeier M. Musical Syntax II: Empirical Perspectives. SPRINGER HANDBOOK OF SYSTEMATIC MUSICOLOGY 2018. [DOI: 10.1007/978-3-662-55004-5_26] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|
18
|
Arjmand HA, Hohagen J, Paton B, Rickard NS. Emotional Responses to Music: Shifts in Frontal Brain Asymmetry Mark Periods of Musical Change. Front Psychol 2017; 8:2044. [PMID: 29255434 PMCID: PMC5723012 DOI: 10.3389/fpsyg.2017.02044] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2016] [Accepted: 11/08/2017] [Indexed: 11/13/2022] Open
Abstract
Recent studies have demonstrated increased activity in brain regions associated with emotion and reward when listening to pleasurable music. Unexpected change in musical features such as intensity and tempo - and thereby enhanced tension and anticipation - is proposed to be one of the primary mechanisms by which music induces a strong emotional response in listeners. Whether such musical features coincide with central measures of emotional response has not, however, been extensively examined. In this study, subjective and physiological measures of experienced emotion were obtained continuously from 18 participants (12 females, 6 males; 18-38 years) who listened to four stimuli - pleasant music, unpleasant music (dissonant manipulations of their own music), neutral music, and no music - in a counter-balanced order. Each stimulus was presented twice: electroencephalograph (EEG) data were collected during the first presentation, while participants continuously rated the stimuli during the second. Frontal asymmetry (FA) indices from frontal and temporal sites were calculated, and peak periods of bias toward the left (indicating a shift toward positive affect) were identified across the sample. The music pieces were also examined to define the temporal onset of key musical features. Subjective reports of emotional experience averaged across each condition confirmed that participants rated their own music selection as very positive, the scrambled music as negative, and the neutral music and silence as neither positive nor negative. Significant effects in FA were observed at the frontal electrode pair FC3-FC4, and the greatest increase in left bias from baseline was observed in response to pleasurable music. These results are consistent with findings from previous research. Peak FA responses at this site were also found to co-occur with key musical events relating to change, for instance the introduction of a new motif, an instrument change, or a change in low-level acoustic factors such as pitch, dynamics, or texture. These findings provide empirical support for the proposal that change in basic musical features is a fundamental trigger of emotional responses in listeners.
Affiliation(s)
- Jesper Hohagen
- Institute for Systematic Musicology, University of Hamburg, Hamburg, Germany
- Bryan Paton
- Monash Biomedical Imaging, Monash University, University of Newcastle, Newcastle, NSW, Australia
- Nikki S. Rickard
- School of Psychological Sciences, Monash University, Melbourne, VIC, Australia
- Centre for Positive Psychology, Graduate School of Education, University of Melbourne, Melbourne, VIC, Australia
|
19
|
Fang L, Shang J, Chen N. Perception of Western Musical Modes: A Chinese Study. Front Psychol 2017; 8:1905. [PMID: 29163286 PMCID: PMC5671660 DOI: 10.3389/fpsyg.2017.01905] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2017] [Accepted: 10/16/2017] [Indexed: 11/13/2022] Open
Abstract
The major mode conveys positive emotion, whereas the minor mode conveys negative emotion. However, previous studies have primarily focused on the emotions induced by Western music in Western participants. The influence of musical mode (major or minor) on Chinese individuals' perception of Western music is unclear. In the present experiments, we investigated the effects of musical mode and harmonic complexity on psychological perception among Chinese participants. In Experiment 1, the participants (N = 30) evaluated 24 musical excerpts on five dimensions (pleasure, arousal, dominance, emotional tension, and liking). In Experiment 2, the participants (N = 40) evaluated 48 musical excerpts. Perceptions of the musical excerpts differed significantly according to mode, even though the stimuli were Western musical excerpts. The major-mode music induced greater pleasure and arousal and produced higher liking ratings than the minor-mode music, whereas the minor-mode music induced greater tension than the major-mode music. Mode did not influence the dominance rating. Perception of Western music was not influenced by harmonic complexity. Moreover, preference for musical mode was influenced by previous exposure to Western music. These results confirm the cross-cultural emotion induction effects of musical modes in Western music.
Affiliation(s)
- Lele Fang
- School of Psychology, Liaoning Normal University, Dalian, China
- Cooperative Innovation Center of Healthy Personality Evaluation and Cultivation of Children and Adolescents in Liaoning Province, Dalian, China
- Junchen Shang
- School of Psychology, Liaoning Normal University, Dalian, China
- Cooperative Innovation Center of Healthy Personality Evaluation and Cultivation of Children and Adolescents in Liaoning Province, Dalian, China
- Nan Chen
- School of Psychology, Liaoning Normal University, Dalian, China
- Cooperative Innovation Center of Healthy Personality Evaluation and Cultivation of Children and Adolescents in Liaoning Province, Dalian, China
|
20
|
Podlipniak P. The Role of the Baldwin Effect in the Evolution of Human Musicality. Front Neurosci 2017; 11:542. [PMID: 29056895 PMCID: PMC5635050 DOI: 10.3389/fnins.2017.00542] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 09/19/2017] [Indexed: 12/17/2022] Open
Abstract
From the biological perspective, human musicality refers to the set of abilities which enable the recognition and production of music. Since music is a complex phenomenon which consists of features that represent different stages of the evolution of human auditory abilities, the question concerning the evolutionary origin of music must focus mainly on music-specific properties and their possible biological function or functions. What usually differentiates music from other forms of human sound expression is a syntactically organized structure based on pitch classes and rhythmic units measured in reference to musical pulse. This structure is an auditory (not acoustical) phenomenon, meaning that it is a human-specific interpretation of sounds achieved thanks to certain characteristics of the nervous system. The historical and cross-cultural diversity of this structure indicates that learning is an important part of the development of human musicality. However, the fact that there is no culture without music, the syntax of which is implicitly learned and easily recognizable, suggests that human musicality may be an adaptive phenomenon. If the use of syntactically organized structure as a communicative phenomenon were adaptive, it would be so only in circumstances in which this structure is recognizable by more than one individual. It is therefore difficult to explain the adaptive value of an ability to recognize a syntactically organized structure that appeared accidentally, as the result of mutation or recombination, in an environment without such structure. A possible solution is offered by the Baldwin effect, in which a culturally invented trait is transformed into an instinctive trait by means of natural selection. It is proposed that in the beginning musical structure was invented and learned thanks to neural plasticity. Because structurally organized music proved adaptive (a phenotypic adaptation), e.g., as a tool of social consolidation, our predecessors began to spend considerable time and energy on music. In such circumstances, an individual was eventually born, by chance, with the genetically controlled development of new neural circuitry that allowed him or her to learn music faster and with less energy use.
Affiliation(s)
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
|
21
|
Gorzelańczyk EJ, Podlipniak P, Walecki P, Karpiński M, Tarnowska E. Pitch Syntax Violations Are Linked to Greater Skin Conductance Changes, Relative to Timbral Violations - The Predictive Role of the Reward System in Perspective of Cortico-subcortical Loops. Front Psychol 2017; 8:586. [PMID: 28458648 PMCID: PMC5394172 DOI: 10.3389/fpsyg.2017.00586] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2016] [Accepted: 03/29/2017] [Indexed: 12/03/2022] Open
Abstract
According to contemporary opinion, emotional reactions to syntactic violations are due to surprise resulting from the general mechanism of prediction. The classic view is that the processing of musical syntax can be explained by activity of the cerebral cortex. However, some recent studies have indicated that subcortical brain structures, including those related to the processing of emotions, are also important during the processing of syntax. In order to check whether emotional reactions play a role in the processing of pitch syntax or are only the result of the general mechanism of prediction, skin conductance responses to three types of melodies were recorded and compared. In this study, 28 subjects listened to three types of short melodies prepared as Standard MIDI files - tonally correct, tonally violated (with one out-of-key note, i.e., of high information content), and tonally correct but with one note played in a different timbre. The BioSemi ActiveTwo system with two passive Nihon Kohden electrodes was used. Skin conductance levels varied with the presented stimuli (timbral changes and tonal violations). Although changes in skin conductance levels were also observed in response to the change in timbre, the reactions to tonal violations were significantly stronger. Therefore, despite the fact that a timbral change is at least as unexpected as an out-of-key note, the processing of pitch syntax mainly generates increased activation of the sympathetic part of the autonomic nervous system. These results suggest that the cortico-subcortical loops (especially the anterior cingulate-limbic loop) may play an important role in the processing of musical syntax.
Affiliation(s)
- Edward J Gorzelańczyk
- Department of Theoretical Basis of Bio-Medical Sciences and Medical Informatics, Nicolaus Copernicus University Collegium Medicum, Bydgoszcz, Poland
- Non-Public Health Care Center Sue Ryder Home, Bydgoszcz, Poland
- Medseven-Outpatient Addiction Treatment, Bydgoszcz, Poland
- Institute of Philosophy, Kazimierz Wielki University, Bydgoszcz, Poland
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
- Piotr Walecki
- Department of Bioinformatics and Telemedicine, Jagiellonian University Collegium Medicum, Krakow, Poland
- Maciej Karpiński
- Institute of Linguistics, Adam Mickiewicz University in Poznań, Poznań, Poland
- Emilia Tarnowska
- Institute of Acoustics, Adam Mickiewicz University in Poznań, Poznań, Poland
|
22
|
Omigie D. Basic, specific, mechanistic? Conceptualizing musical emotions in the brain. J Comp Neurol 2015; 524:1676-86. [PMID: 26172307 DOI: 10.1002/cne.23854] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2015] [Revised: 07/07/2015] [Accepted: 07/07/2015] [Indexed: 11/10/2022]
Abstract
The number of studies investigating music processing in the human brain continues to increase, with a large proportion of them focussing on the correlates of so-called musical emotions. The current Review highlights the recent development whereby such studies are no longer concerned only with basic emotions such as happiness and sadness but also with so-called music-specific or "aesthetic" ones such as nostalgia and wonder. It also highlights how mechanisms such as expectancy and empathy, which are seen as inducing musical emotions, are enjoying ever-increasing investigation and substantiation with physiological and neuroimaging methods. It is proposed that a combination of these approaches, namely, investigation of the precise mechanisms through which so-called music-specific or aesthetic emotions may arise, will provide the most important advances for our understanding of the unique nature of musical experience.
Affiliation(s)
- Diana Omigie
- Music Department, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
|
23
|
Instruments, conductors, dancers, and intendants. Phys Life Rev 2015; 13:99-106. [DOI: 10.1016/j.plrev.2015.04.036] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2015] [Accepted: 04/27/2015] [Indexed: 02/05/2023]
|
24
|
Koelsch S. Music-evoked emotions: principles, brain correlates, and implications for therapy. Ann N Y Acad Sci 2015; 1337:193-201. [PMID: 25773635 DOI: 10.1111/nyas.12684] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
This paper describes principles underlying the evocation of emotion with music: evaluation, resonance, memory, expectancy/tension, imagination, understanding, and social functions. Each of these principles includes several subprinciples, and the framework on music-evoked emotions emerging from these principles and subprinciples is supposed to provide a starting point for a systematic, coherent, and comprehensive theory on music-evoked emotions that considers both reception and production of music, as well as the relevance of emotion-evoking principles for music therapy.
Affiliation(s)
- Stefan Koelsch
- Languages of Emotion, Freie Universität, Berlin, Germany
|
25
|
Guo S, Koelsch S. The effects of supervised learning on event-related potential correlates of music-syntactic processing. Brain Res 2015; 1626:232-46. [PMID: 25660849 DOI: 10.1016/j.brainres.2015.01.046] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Revised: 01/22/2015] [Accepted: 01/24/2015] [Indexed: 10/24/2022]
Abstract
Humans process music even without conscious effort according to implicit knowledge about syntactic regularities. Whether such automatic and implicit processing is modulated by veridical knowledge has remained unknown in previous neurophysiological studies. This study investigates this issue by testing whether the acquisition of veridical knowledge of a music-syntactic irregularity (acquired through supervised learning) modulates early, partly automatic, music-syntactic processes (as reflected in the early right anterior negativity, ERAN), and/or late controlled processes (as reflected in the late positive component, LPC). Excerpts of piano sonatas with syntactically regular and less regular chords were presented repeatedly (10 times) to non-musicians and amateur musicians. Participants were informed by a cue as to whether the following excerpt contained a regular or less regular chord. Results showed that the repeated exposure to several presentations of regular and less regular excerpts did not influence the ERAN elicited by less regular chords. By contrast, amplitudes of the LPC (as well as of the P3a evoked by less regular chords) decreased systematically across learning trials. These results reveal that late controlled, but not early (partly automatic), neural mechanisms of music-syntactic processing are modulated by repeated exposure to a musical piece. This article is part of a Special Issue entitled SI: Prediction and Attention.
Affiliation(s)
- Shuang Guo
- Cluster Languages of Emotion, Freie Universität Berlin, Berlin, Germany
- Stefan Koelsch
- Cluster Languages of Emotion, Freie Universität Berlin, Berlin, Germany
|
26
|
James CE, Cereghetti DM, Roullet Tribes E, Oechslin MS. Electrophysiological evidence for a specific neural correlate of musical violation expectation in primary-school children. Neuroimage 2014; 104:386-97. [PMID: 25278251 DOI: 10.1016/j.neuroimage.2014.09.047] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2014] [Revised: 09/13/2014] [Accepted: 09/20/2014] [Indexed: 10/24/2022] Open
Abstract
The majority of studies on music processing in children have used simple musical stimuli. Here, primary schoolchildren judged the appropriateness of musical closure in expressive polyphonic music, while high-density electroencephalography was recorded. Stimuli ended either regularly or contained refined in-key harmonic transgressions at closure. The children discriminated the transgressions well above chance. Regular and transgressed endings evoked opposite scalp voltage configurations peaking around 400 ms after stimulus onset, with bilateral frontal negativity for regular and centro-posterior negativity (CPN) for transgressed endings. A positive correlation could be established between the strength of the CPN response and rater sensitivity (d-prime). We also investigated whether the capacity to discriminate the transgressions was supported by auditory domain-specific or general cognitive mechanisms, and found that working memory capacity predicted transgression discrimination. The latency and distribution of the CPN are reminiscent of the N400, typically observed in response to semantic incongruities in language. Our observation is therefore intriguing, as the CPN occurred here within an intra-musical context, without any symbols referring to the external world. Moreover, the harmonic in-key transgressions that we implemented may be considered syntactical, as they transgress structural rules. Such structural incongruities in music are typically followed by an early right anterior negativity (ERAN) and an N5, but not so here. Putative contributive sources of the CPN were localized in left pre-motor, mid-posterior cingulate, and superior parietal regions of the brain that can be linked to integration processing. These results suggest that, at least in children, processing of syntax and meaning may coincide in complex intra-musical contexts.
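The rater sensitivity measure correlated with CPN strength above is d-prime from signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal CDF. A minimal sketch, with trial counts invented for illustration and extreme rates clipped (a common correction so z stays finite):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from raw signal-detection counts."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # Clip rates of 0 or 1 to avoid infinite z-scores.
    h = min(max(hits / n_signal, 1 / (2 * n_signal)), 1 - 1 / (2 * n_signal))
    f = min(max(false_alarms / n_noise, 1 / (2 * n_noise)), 1 - 1 / (2 * n_noise))
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(h) - z(f)

# Usage (hypothetical counts): a child who detects 20/24 transgressed
# endings and false-alarms on 4/24 regular endings is well above chance.
assert d_prime(20, 4, 4, 20) > 1.0
```

d' separates discrimination ability from response bias, which is why it is preferred over raw percent correct when relating behavior to an ERP amplitude.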
Affiliation(s)
- Clara E James
- HES-SO University of Applied Sciences and Arts Western Switzerland, School of Health Sciences, Geneva, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland
- Donato M Cereghetti
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Elodie Roullet Tribes
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- Mathias S Oechslin
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland; International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich, Zurich, Switzerland
|
27
|
Van Puyvelde M, Loots G, Vanfleteren P, Meys J, Simcock D, Pattyn N. Do you hear the same? Cardiorespiratory responses between mothers and infants during tonal and atonal music. PLoS One 2014; 9:e106920. [PMID: 25207803 PMCID: PMC4160208 DOI: 10.1371/journal.pone.0106920] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2014] [Accepted: 08/10/2014] [Indexed: 11/18/2022] Open
Abstract
This study examined the effects of tonal and atonal music on respiratory sinus arrhythmia (RSA) in 40 mothers and their 3-month-old infants. The tonal music fragment was composed using the structure of a harmonic series that corresponds with the pitch ratio characteristics of mother–infant vocal dialogues. The atonal fragment did not correspond with a tonal structure. Mother–infant ECG and respiration were registered along with simultaneous video recordings. RR-interval, respiration rate, and RSA were calculated. RSA was corrected for any confounding respiratory and motor activities. The results showed that the infants’ and the mothers’ RSA-responses to the tonal and atonal music differed. The infants showed significantly higher RSA-levels during the tonal fragment than during the atonal fragment and baseline, suggesting increased vagal activity during tonal music. The mothers showed RSA-responses that were equal to their infants only when the infants were lying close to their bodies and when they heard the difference between the two fragments, preferring the tonal above the atonal fragment. The results are discussed with regard to music-related topics, psychophysiological integration and mother-infant vocal interaction processes.
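A common way to quantify RSA from an ECG-derived RR series is the peak-valley method: within each respiratory cycle, RSA is the longest minus the shortest RR interval (the heart slows on exhalation and quickens on inhalation). This is only a sketch of that general method, not the study's pipeline (which additionally corrects for respiratory and motor confounds); the RR values and cycle boundaries below are invented:

```python
def rsa_peak_valley(rr_intervals_ms, cycle_bounds):
    """Mean peak-to-valley RR difference (ms) across respiratory cycles.

    cycle_bounds: list of (start, end) index pairs into the RR series,
    one pair per breath (end is exclusive, slice-style).
    """
    amplitudes = []
    for start, end in cycle_bounds:
        cycle = rr_intervals_ms[start:end]
        if len(cycle) >= 2:
            # Vagal modulation shows up as RR swing within the breath.
            amplitudes.append(max(cycle) - min(cycle))
    return sum(amplitudes) / len(amplitudes)

# Usage: two breaths, RR lengthening then shortening within each cycle.
rr = [700, 760, 820, 780, 720, 690, 750, 810, 770, 710]
assert rsa_peak_valley(rr, [(0, 5), (5, 10)]) == 120.0
```

Higher values indicate greater respiratory modulation of heart period, the proxy for vagal activity that the abstract interprets.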
Affiliation(s)
- Martine Van Puyvelde
- Research Group Interpersonal, Discursive and Narrative Studies (IDNS), Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel (VUB), Brussels, Belgium
- VIPER Research Unit, Royal Military Academy (RMA), Brussels, Belgium
- Gerrit Loots
- Research Group Interpersonal, Discursive and Narrative Studies (IDNS), Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Universidad Católica Boliviana “San Pablo” (UCB), La Paz, Bolivia
- Pol Vanfleteren
- Research Group Interpersonal, Discursive and Narrative Studies (IDNS), Faculty of Psychology and Educational Sciences, Vrije Universiteit Brussel (VUB), Brussels, Belgium
- Joris Meys
- Department of Mathematical Modeling, Statistics and Bioinformatics, Faculty of Bioscience Engineering, University of Ghent (UG), Ghent, Belgium
- David Simcock
- Institute of Food, Nutrition and Human Health, Massey University, Palmerston North, New Zealand
- Faculty of Medicine and Bioscience, James Cook University, Queensland, Australia
- Nathalie Pattyn
- VIPER Research Unit, Royal Military Academy (RMA), Brussels, Belgium
- Department of Experimental and Applied Psychology, Vrije Universiteit Brussel (VUB), Brussels, Belgium
|
28
|
Probabilistic models of expectation violation predict psychophysiological emotional responses to live concert music. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2014; 13:533-53. [PMID: 23605956 DOI: 10.3758/s13415-013-0161-y] [Citation(s) in RCA: 75] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We present the results of a study testing the often-theorized role of musical expectations in inducing listeners' emotions in a live flute concert experiment with 50 participants. Using an audience response system developed for this purpose, we measured subjective experience and peripheral psychophysiological changes continuously. To confirm the existence of the link between expectation and emotion, we used a threefold approach. (1) On the basis of an information-theoretic cognitive model, melodic pitch expectations were predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous rating scale was used by half of the audience to measure their experience of unexpectedness toward the music heard. (3) Emotional reactions were measured using a multicomponent approach: subjective feeling (valence and arousal rated continuously by the other half of the audience members), expressive behavior (facial EMG), and peripheral arousal (the latter two being measured in all 50 participants). Results confirmed the predicted relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and emotional reactions (psychologically and physiologically). Musical structures leading to expectation reactions were manifested in emotional reactions at different emotion component levels (increases in subjective arousal and autonomic nervous system activations). These results emphasize the role of musical structure in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music.
29
Bigand E, Delbé C, Poulin-Charronnat B, Leman M, Tillmann B. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory. Front Syst Neurosci 2014; 8:94. [PMID: 24936174 PMCID: PMC4047967 DOI: 10.3389/fnsys.2014.00094]
Abstract
During the last decade, it has been argued that (1) music processing involves syntactic representations similar to those observed in language, and (2) music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than a property of abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods as well as with developmental and cross-cultural approaches can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulations also raise methodological and theoretical challenges for studying musical syntax while disentangling the confounded low-level sensory influences. In order to investigate syntactic abilities in music comparable to language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.
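The kind of auditory short-term memory account this abstract describes can be sketched very roughly as follows (a hypothetical illustration invented here, not the model the authors actually simulated): each chord is a 12-dimensional pitch-class vector, and an incoming chord is compared against an exponentially decaying trace of the preceding chords, so chords that share pitch content with the recent context appear "primed" without any syntactic knowledge.

```python
import numpy as np

def pc_vector(pitch_classes):
    """12-dimensional chroma vector for a set of pitch classes (0 = C ... 11 = B)."""
    v = np.zeros(12)
    v[list(pitch_classes)] = 1.0
    return v

def sensory_priming(chords, decay=0.5):
    """Cosine similarity of each incoming chord with an exponentially
    decaying trace of the preceding chords (a leaky echoic memory)."""
    trace = np.zeros(12)
    similarities = []
    for pcs in chords:
        v = pc_vector(pcs)
        if trace.any():
            similarities.append(
                float(v @ trace / (np.linalg.norm(v) * np.linalg.norm(trace))))
        else:
            similarities.append(0.0)  # no context yet for the first chord
        trace = decay * trace + v
    return similarities

# C major context (I-IV-V): the tonic target shares pitch content with the
# memory trace, whereas a distant chord (Db major) barely overlaps it.
context = [{0, 4, 7}, {5, 9, 0}, {7, 11, 2}]
in_key = sensory_priming(context + [{0, 4, 7}])[-1]
out_of_key = sensory_priming(context + [{1, 5, 8}])[-1]
print(in_key > out_of_key)
```

In such a model, "syntactic" priming effects fall out of sensory overlap with memory, which is the confound the authors argue most harmonic-priming paradigms do not control for.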
Affiliation(s)
- Emmanuel Bigand
- LEAD, CNRS-UMR 5022, Université de Bourgogne, Dijon, France; Institut Universitaire de France, Paris, France
- Charles Delbé
- LEAD, CNRS-UMR 5022, Université de Bourgogne, Dijon, France
- Marc Leman
- Department of Musicology, IPEM, Ghent University, Ghent, Belgium
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS-UMR 5292, INSERM-UMR 1028, Université Lyon 1, Lyon, France
30
Abstract
Music is a universal feature of human societies, partly owing to its power to evoke strong emotions and influence moods. During the past decade, the investigation of the neural correlates of music-evoked emotions has been invaluable for the understanding of human emotion. Functional neuroimaging studies on music and emotion show that music can modulate activity in brain structures that are known to be crucially involved in emotion, such as the amygdala, nucleus accumbens, hypothalamus, hippocampus, insula, cingulate cortex and orbitofrontal cortex. The potential of music to modulate activity in these structures has important implications for the use of music in the treatment of psychiatric and neurological disorders.
31
Tillmann B, Poulin-Charronnat B, Bigand E. The role of expectation in music: from the score to emotions and the brain. Wiley Interdiscip Rev Cogn Sci 2014; 5:105-113. [PMID: 26304299 DOI: 10.1002/wcs.1262]
Abstract
Like discourse, music is a dynamic process that occurs over time. Listeners usually expect some events or structures of events to occur in the prolongation of a given context. Part of the musical emotional experience depends upon how composers (and improvisers) fulfill these expectancies. Musical expectations are a core phenomenon of music cognition, and the present article provides an overview of their foundation in the score as well as in listeners' behavior and brain, and of how they can be simulated by artificial neural networks. We highlight parallels to language processing and include the attentional and emotional dimensions of musical expectations. Studying musical expectations is thus valuable not only for our understanding of music perception and production but also for more general brain functioning. Some open and challenging issues are summarized in this article.
Affiliation(s)
- B Tillmann
- Lyon Neuroscience Research Center, CNRS-UMR 5292, INSERM U1028, University Lyon 1, Lyon, France
- E Bigand
- Université de Bourgogne, LEAD-CNRS 5022, Dijon, France; Institut Universitaire de France, France
32
Lehne M, Rohrmeier M, Koelsch S. Tension-related activity in the orbitofrontal cortex and amygdala: an fMRI study with music. Soc Cogn Affect Neurosci 2013; 9:1515-23. [PMID: 23974947 DOI: 10.1093/scan/nst141]
Abstract
Tonal music is characterized by a continuous flow of tension and resolution. This flow of tension and resolution is closely related to processes of expectancy and prediction and is a key mediator of music-evoked emotions. However, the neural correlates of subjectively experienced tension and resolution have not yet been investigated. We acquired continuous ratings of musical tension for four piano pieces. In a subsequent functional magnetic resonance imaging experiment, we identified blood oxygen level-dependent signal increases related to musical tension in the left lateral orbitofrontal cortex (pars orbitalis of the inferior frontal gyrus). In addition, a region of interest analysis in bilateral amygdala showed activation in the right superficial amygdala during periods of increasing tension (compared with decreasing tension). This is the first neuroimaging study investigating the time-varying changes of the emotional experience of musical tension, revealing brain activity in key areas of affective processing.
Affiliation(s)
- Moritz Lehne
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany and MIT Intelligence Initiative, Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Martin Rohrmeier
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany and MIT Intelligence Initiative, Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Stefan Koelsch
- Cluster of Excellence 'Languages of Emotion', Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany and MIT Intelligence Initiative, Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
33
Mikutta CA, Schwab S, Niederhauser S, Wuermle O, Strik W, Altorfer A. Music, perceived arousal, and intensity: psychophysiological reactions to Chopin's "Tristesse". Psychophysiology 2013; 50:909-19. [PMID: 23763714 DOI: 10.1111/psyp.12071]
Abstract
The present study investigates the relations among perceived arousal (continuous self-rating), autonomic nervous system activity (heart rate, heart rate variability), and musical characteristics (sound intensity, musical rhythm) while listening to a complex musical piece. Twenty amateur musicians listened to two performances of Chopin's "Tristesse" with different rhythmic shapes. Besides conventional statistical methods for analyzing psychophysiological reactions (heart rate, respiration rate) and musical variables, semblance analysis was used. Perceived arousal correlated strongly with sound intensity; heart rate showed only a partial response to changes in sound intensity. Larger changes in heart rate were caused by the version with more rhythmic tension. The low-/high-frequency ratio of heart rate variability increased during music listening, whereas the high-frequency component decreased. We conclude that autonomic nervous system activity can be modulated not only by sound intensity but also by the interpreter's use of rhythmic tension. Semblance analysis enables us to track the subtle correlations between musical and physiological variables.
34
Abstract
Music has existed in human societies since prehistory, perhaps because it allows expression and regulation of emotion and evokes pleasure. In this review, we present findings from cognitive neuroscience that bear on the question of how we get from perception of sound patterns to pleasurable responses. First, we identify some of the auditory cortical circuits that are responsible for encoding and storing tonal patterns and discuss evidence that cortical loops between auditory and frontal cortices are important for maintaining musical information in working memory and for the recognition of structural regularities in musical patterns, which then lead to expectancies. Second, we review evidence concerning the mesolimbic striatal system and its involvement in reward, motivation, and pleasure in other domains. Recent data indicate that this dopaminergic system mediates pleasure associated with music; specifically, reward value for music can be coded by activity levels in the nucleus accumbens, whose functional connectivity with auditory and frontal areas increases as a function of increasing musical reward. We propose that pleasure in music arises from interactions between cortical loops that enable predictions and expectancies to emerge from sound patterns and subcortical systems responsible for reward and valuation.
35
Friston KJ, Friston DA. A Free Energy Formulation of Music Generation and Perception: Helmholtz Revisited. Current Research in Systematic Musicology 2013. [DOI: 10.1007/978-3-319-00107-4_2]
36
Pannese A. A gray matter of taste: sound perception, music cognition, and Baumgarten's aesthetics. Stud Hist Philos Biol Biomed Sci 2012; 43:594-601. [PMID: 22584037 DOI: 10.1016/j.shpsc.2012.03.001]
Abstract
Music is an ancient and ubiquitous form of human expression. One important component for which music is sought after is its aesthetic value, whose appreciation has typically been associated with largely learned, culturally determined factors, such as education, exposure, and social pressure. However, neuroscientific evidence shows that the aesthetic response to music is often associated with automatic, physically and biologically grounded events, such as shivers, chills, increased heart rate, and motor synchronization, suggesting the existence of an underlying biological platform upon which contextual factors may act. Drawing on philosophical notions and neuroscientific evidence, I argue that, although there is no denying that social and cultural context play a substantial role in shaping the aesthetic response to music, these act upon largely universal, biological mechanisms involved with neural processing. I propose that the simultaneous presence of culturally influenced and biologically determined contributions to the aesthetic response to music epitomizes Baumgarten's equation of sensory perception with taste. Taking the argument one step further, I suggest that the heavily embodied aesthetic response to music bridges the cleavage between the two discrepant meanings (the one referring to sensory perception, the other to judgments of taste) traditionally attributed to the word "aesthetics" in the sciences and the humanities.
Affiliation(s)
- Alessia Pannese
- Italian Academy for Advanced Studies in America and Department of Biological Sciences, Columbia University, 903 Fairchild, Mail Code 2430, NY, USA
37
Pearce MT, Wiggins GA. Auditory Expectation: The Information Dynamics of Music Perception and Cognition. Top Cogn Sci 2012; 4:625-52. [DOI: 10.1111/j.1756-8765.2012.01214.x]
38
Tervaniemi M, Tupala T, Brattico E. Expertise in folk music alters the brain processing of Western harmony. Ann N Y Acad Sci 2012; 1252:147-51. [DOI: 10.1111/j.1749-6632.2011.06428.x]
39
Perani D, Tervaniemi M, Toiviainen P. Tuning the brain for music. Cortex 2011; 47:1023-5. [DOI: 10.1016/j.cortex.2011.05.021]
40
Carrus E, Koelsch S, Bhattacharya J. Shadows of music-language interaction on low frequency brain oscillatory patterns. Brain Lang 2011; 119:50-57. [PMID: 21683995 DOI: 10.1016/j.bandl.2011.05.009]
Abstract
Electrophysiological studies investigating similarities between music and language perception have relied exclusively on the signal averaging technique, which does not adequately represent oscillatory aspects of electrical brain activity that are relevant for higher cognition. The current study investigated the patterns of brain oscillations during simultaneous processing of music and language using visually presented sentences and auditorily presented chord sequences. Music-syntactically regular or irregular chord functions were presented in sync with syntactically or semantically correct or incorrect words. Irregular chord functions (presented simultaneously with a syntactically correct word) produced an early (150-250 ms) spectral power decrease over anterior frontal regions in the theta band (5-7 Hz) and a late (350-700 ms) power increase in both the delta and the theta band (2-7 Hz) over parietal regions. Syntactically incorrect words (presented simultaneously with a regular chord) elicited a similar late power increase in delta-theta band over parietal sites, but no early effect. Interestingly, the late effect was significantly diminished when the language-syntactic and music-syntactic irregularities occurred at the same time. Further, the presence of a semantic violation occurring simultaneously with regular chords produced a significant increase in later delta-theta power at posterior regions; this effect was marginally decreased when the identical semantic violation occurred simultaneously with a music syntactical violation. Altogether, these results show that low frequency oscillatory networks get activated during the syntactic processing of both music and language, and further, these networks may possibly be shared.
Affiliation(s)
- Elisa Carrus
- Department of Psychology, Goldsmiths, University of London, London, UK
41
Expressiveness in musical emotions. Psychol Res 2011; 76:641-53. [DOI: 10.1007/s00426-011-0361-4]
42
Nakahara H, Furuya S, Masuko T, Francis PR, Kinoshita H. Performing music can induce greater modulation of emotion-related psychophysiological responses than listening to music. Int J Psychophysiol 2011; 81:152-8. [PMID: 21704661 DOI: 10.1016/j.ijpsycho.2011.06.003]
Abstract
The present study investigated the differential effects of music-induced emotion on heart rate (HR) and its variability (HRV) while playing music on the piano and listening to a recording of the same piece of music. Sixteen pianists were monitored during tasks involving emotional piano performance, non-emotional piano performance, emotional perception, and non-emotional perception. It was found that emotional induction during both perception and performance modulated HR and HRV, and that such modulations were significantly greater during musical performance than during perception. The results confirmed that musical performance was far more effective in modulating emotion-related autonomic nerve activity than musical perception in musicians. The findings suggest the presence of a neural network of reward-emotion-associated autonomic nerve activity for musical performance that is independent of a neural network for musical perception.
Affiliation(s)
- Hidehiro Nakahara
- Morinomiya University of Medical Sciences, 1-26-16, Nankokita, Suminoe, Osaka 559-8611, Japan
43
Koelsch S. Towards a neural basis of processing musical semantics. Phys Life Rev 2011; 8:89-105. [PMID: 21601541 DOI: 10.1016/j.plrev.2011.04.004]
Abstract
Processing of meaning is critical for language perception, and therefore the majority of research on meaning processing has focused on the semantic, lexical, conceptual, and propositional processing of language. However, music is another means of communication, and meaning also emerges from the interpretation of musical information. This article provides a framework for the investigation of the processing of musical meaning, and reviews neuroscience studies investigating this issue. These studies reveal two neural correlates of meaning processing, the N400 and the N5 (both components of the event-related electric brain potential). Here I argue that the N400 can be elicited by musical stimuli due to the processing of extra-musical meaning, whereas the N5 can be elicited due to the processing of intra-musical meaning. Notably, whereas the N400 can be elicited by both linguistic and musical stimuli, the N5 has so far only been observed for the processing of meaning in music. Thus, knowledge about both the N400 and the N5 can advance our understanding of how the human brain processes meaning information.
Affiliation(s)
- Stefan Koelsch
- Cluster of Excellence der Freien Universität Berlin, Languages of Emotion, Habelschwerdter Allee 45, 14195 Berlin, Germany
44
Kim SG, Kim JS, Chung CK. The effect of conditional probability of chord progression on brain response: an MEG study. PLoS One 2011; 6:e17337. [PMID: 21364895 PMCID: PMC3045443 DOI: 10.1371/journal.pone.0017337]
Abstract
Background: Recent electrophysiological and neuroimaging studies have explored how and where musical syntax in Western music is processed in the human brain. An inappropriate chord progression elicits an event-related potential (ERP) component called an early right anterior negativity (ERAN), or simply an early anterior negativity (EAN), in an early stage of processing the musical syntax. Though the possible underlying mechanism of the EAN is assumed to be probabilistic learning, the effect of the probability of chord progressions on the EAN response has not previously been explored explicitly.
Methodology/Principal Findings: In the present study, the empirical conditional probabilities in a Western music corpus were employed as an approximation of the frequencies in participants' previous exposure. Three types of chord progression were presented to musicians and non-musicians in order to examine the correlation between the probability of chord progression and the neuromagnetic response using magnetoencephalography (MEG). Chord progressions elicited early responses whose amplitude correlated negatively with the conditional probability. The observed EANm (the magnetic counterpart of the EAN component) responses were consistent with previously reported EAN responses in terms of latency and location. The effect of conditional probability interacted with the effect of musical training. In addition, the neural response also correlated with the behavioral measures in the non-musicians.
Conclusions/Significance: Our study is the first to reveal the correlation between the probability of chord progression and the corresponding neuromagnetic response. The current results suggest that the physiological response is a reflection of the probabilistic representations of the musical syntax. Moreover, the results indicate that the probabilistic representation is related to musical training as well as to the sensitivity of an individual.
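The empirical conditional probabilities this study derives from a Western music corpus can be sketched as simple bigram statistics over chord symbols. The snippet below is an illustrative reconstruction on an invented three-sequence corpus, not the authors' corpus or code:

```python
from collections import Counter, defaultdict

def chord_conditional_probs(corpus):
    """Empirical P(next chord | previous chord) from chord-symbol sequences."""
    pair_counts = defaultdict(Counter)
    for seq in corpus:
        for prev, nxt in zip(seq, seq[1:]):
            pair_counts[prev][nxt] += 1
    return {prev: {nxt: n / sum(c.values()) for nxt, n in c.items()}
            for prev, c in pair_counts.items()}

# Hypothetical toy corpus of Roman-numeral progressions
corpus = [["I", "IV", "V", "I"],
          ["I", "ii", "V", "I"],
          ["I", "IV", "V", "vi"]]
probs = chord_conditional_probs(corpus)
print(probs["V"])  # the dominant resolves to the tonic in 2 of 3 sequences
```

In the study's framing, a final chord with low conditional probability given the preceding chord should elicit a larger early neuromagnetic response.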
Affiliation(s)
- Seung-Goo Kim
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
45
Dellacherie D, Roy M, Hugueville L, Peretz I, Samson S. The effect of musical experience on emotional self-reports and psychophysiological responses to dissonance. Psychophysiology 2011; 48:337-49. [DOI: 10.1111/j.1469-8986.2010.01075.x]
46
Chapin H, Jantzen K, Scott Kelso JA, Steinberg F, Large E. Dynamic emotional and neural responses to music depend on performance expression and listener experience. PLoS One 2010; 5:e13812. [PMID: 21179549 PMCID: PMC3002933 DOI: 10.1371/journal.pone.0013812]
Abstract
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
Affiliation(s)
- Heather Chapin
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States of America
- Kelly Jantzen
- Department of Psychology, Western Washington University, Bellingham, Washington, United States of America
- J. A. Scott Kelso
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States of America
- Intelligent Systems Research Centre, University of Ulster, Magee Campus, Derry, Northern Ireland
- Fred Steinberg
- University MRI of Boca Raton, Boca Raton, Florida, United States of America
- Edward Large
- Center for Complex Systems and Brain Sciences, Florida Atlantic University, Boca Raton, Florida, United States of America
47
Koelsch S, Jentschke S. Differences in Electric Brain Responses to Melodies and Chords. J Cogn Neurosci 2010; 22:2251-62. [DOI: 10.1162/jocn.2009.21338]
Abstract
The music we usually listen to in everyday life consists of either single melodies or harmonized melodies (i.e., of melodies “accompanied” by chords). However, differences in the neural mechanisms underlying melodic and harmonic processing have remained largely unknown. Using EEG, this study compared effects of music-syntactic processing between chords and melodies. In melody blocks, sequences consisted of five tones, the final tone being either regular or irregular (p = .5). Analogously, in chord blocks, sequences consisted of five chords, the final chord function being either regular or irregular. Melodies were derived from the top voice of chord sequences, allowing a proper comparison between melodic and harmonic processing. Music-syntactic incongruities elicited an early anterior negativity with a latency of approximately 125 msec in both the melody and the chord conditions. This effect was followed in the chord condition, but not in the melody condition, by an additional negative effect that was maximal at approximately 180 msec. Both effects were maximal at frontal electrodes, but the later effect was more broadly distributed over the scalp than the earlier effect. These findings indicate that melodic information (which is also contained in the top voice of chords) is processed earlier and with partly different neural mechanisms than harmonic information of chords.
Affiliation(s)
- Stefan Koelsch
- University of Sussex, Brighton, UK
- Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sebastian Jentschke
- Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University College London, UK
48
Perani D, Saccuman MC, Scifo P, Spada D, Andreolli G, Rovelli R, Baldoli C, Koelsch S. Functional specializations for music processing in the human newborn brain. Proc Natl Acad Sci U S A 2010; 107:4758-63. [PMID: 20176953 PMCID: PMC2842045 DOI: 10.1073/pnas.0909074107]
Abstract
In adults, specific neural systems with right-hemispheric weighting are necessary to process pitch, melody, and harmony as well as structure and meaning emerging from musical sequences. It is not known to what extent the specialization of these systems results from long-term exposure to music or from neurobiological constraints. One way to address this question is to examine how these systems function at birth, when auditory experience is minimal. We used functional MRI to measure brain activity in 1- to 3-day-old newborns while they heard excerpts of Western tonal music and altered versions of the same excerpts. Altered versions either included changes of the tonal key or were permanently dissonant. Music evoked predominantly right-hemispheric activations in primary and higher order auditory cortex. During presentation of the altered excerpts, hemodynamic responses were significantly reduced in the right auditory cortex, and activations emerged in the left inferior frontal cortex and limbic structures. These results demonstrate that the infant brain shows a hemispheric specialization in processing music as early as the first postnatal hours. Results also indicate that the neural architecture underlying music processing in newborns is sensitive to changes in tonal key as well as to differences in consonance and dissonance.
Affiliation(s)
- Daniela Perani
- Faculty of Psychology, Vita-Salute San Raffaele University, 20132 Milan, Italy
49
Omar R, Hailstone JC, Warren JE, Crutch SJ, Warren JD. The cognitive organization of music knowledge: a clinical analysis. Brain 2010; 133:1200-13. [PMID: 20142334 PMCID: PMC2850578 DOI: 10.1093/brain/awp345]
Abstract
Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer’s disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer’s disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.
Affiliation(s)
- Rohani Omar
- Dementia Research Centre, Institute of Neurology, University College, and Department of Clinical Neuroscience, Hammersmith Hospital Campus, London, UK