1
Cosper SH, Männel C, Mueller JL. Auditory associative word learning in adults: The effects of musical experience and stimulus ordering. Brain Cogn 2024;180:106207. PMID: 39053199. DOI: 10.1016/j.bandc.2024.106207.
Abstract
Evidence for sequential associative word learning in the auditory domain has been identified in infants, while adults have shown difficulties. To better understand which factors may facilitate adult auditory associative word learning, we assessed the role of auditory expertise as a learner-related property and stimulus order as a stimulus-related manipulation in the association of auditory objects and novel labels. In the first experiment, we tested auditorily trained musicians against athletes (a high-expertise control group); in the second, we manipulated stimulus ordering, contrasting object-label with label-object presentation. Learning was evaluated from event-related potentials (ERPs) during training and subsequent testing phases using a cluster-based permutation approach, as well as from accuracy-judgement responses during test. For musicians, results revealed a late positive component in the ERP during testing, but neither an N400 (400-800 ms) nor behavioral effects at test, while athletes showed no effect of learning. Moreover, the object-label-ordering group only exhibited emerging association effects during training, while the label-object-ordering group showed a trend-level late ERP effect (800-1200 ms) during test as well as above-chance accuracy-judgement scores. Thus, our results suggest that the learner-related property of auditory expertise and the stimulus-related manipulation of stimulus ordering modulate auditory associative word learning in adults.
Affiliation(s)
- Samuel H Cosper
- Chair of Lifespan Developmental Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany.
- Claudia Männel
- Department of Audiology and Phoniatrics, Charité-Universitätsmedizin Berlin, Berlin, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jutta L Mueller
- Department of Linguistics, University of Vienna, Vienna, Austria
2
Burunat I, Levitin DJ, Toiviainen P. Breaking (musical) boundaries by investigating brain dynamics of event segmentation during real-life music-listening. Proc Natl Acad Sci U S A 2024;121:e2319459121. PMID: 39186645. PMCID: PMC11388323. DOI: 10.1073/pnas.2319459121.
Abstract
The perception of musical phrase boundaries is a critical aspect of human musical experience: It allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using general linear modeling, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
Affiliation(s)
- Iballa Burunat
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
- Daniel J Levitin
- School of Social Sciences, Minerva University, San Francisco, CA 94103
- Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Petri Toiviainen
- Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
3
Wu Q, Sun L, Ding N, Yang Y. Musical tension is affected by metrical structure dynamically and hierarchically. Cogn Neurodyn 2024;18:1955-1976. PMID: 39104669. PMCID: PMC11297889. DOI: 10.1007/s11571-023-10058-w.
Abstract
As the basis of musical emotions, dynamic tension experience is felt by listeners as music unfolds over time. The effects of musical harmonic and melodic structures on tension have been widely investigated; however, the potential roles of metrical structures in tension perception remain largely unexplored. This experiment examined how different metrical structures affect tension experience and explored the underlying neural activities. The electroencephalogram (EEG) was recorded and subjective tension was rated simultaneously while participants listened to musical meter sequences. On the large time scale of whole meter sequences, different overall tension and low-frequency (1-4 Hz) steady-state evoked potentials were elicited by metrical structures with different periods of strong beats, and higher overall tension was associated with metrical structures with shorter intervals between strong beats. On the small time scale of measures, dynamic tension fluctuations within measures were found to be associated with periodic modulations of high-frequency (10-25 Hz) neural activities. Comparisons between the same beats within measures and across different meters, on both small and large time scales, verified the contextual effects of meter on tension induced by beats. Our findings suggest that overall tension is determined by the temporal intervals between strong beats, and that the dynamic tension experience may arise from cognitive processing of hierarchical temporal expectation and attention, which are discussed under the theoretical frameworks of metrical hierarchy, musical expectation, and dynamic attention.
Affiliation(s)
- Qiong Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing, 100101 China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lijun Sun
- College of Arts, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing, 100101 China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
4
Heng JG, Zhang J, Bonetti L, Lim WPH, Vuust P, Agres K, Chen SHA. Understanding music and aging through the lens of Bayesian inference. Neurosci Biobehav Rev 2024;163:105768. PMID: 38908730. DOI: 10.1016/j.neubiorev.2024.105768.
Abstract
Bayesian inference has recently gained momentum in explaining music perception and aging. A fundamental mechanism underlying Bayesian inference is the notion of prediction. This framework could explain how predictions pertaining to musical (melodic, rhythmic, harmonic) structures engender action, emotion, and learning, expanding related concepts of music research, such as musical expectancies, groove, pleasure, and tension. Moreover, a Bayesian perspective on music perception may shed new light on the beneficial effects of music in aging. Aging could be framed as an optimization process of Bayesian inference. As predictive inferences refine over time, the reliance on consolidated priors increases, while the updating of prior models through Bayesian inference attenuates. This may affect the ability of older adults to estimate uncertainties in their environment, limiting their cognitive and behavioral repertoire. With Bayesian inference as an overarching framework, this review synthesizes the literature on predictive inferences in music and aging, and details how music could be a promising tool in preventive and rehabilitative interventions for older adults through the lens of Bayesian inference.
Affiliation(s)
- Jiamin Gladys Heng
- School of Computer Science and Engineering, Nanyang Technological University, Singapore.
- Jiayi Zhang
- Interdisciplinary Graduate Program, Nanyang Technological University, Singapore; School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom; Department of Psychiatry, University of Oxford, United Kingdom; Department of Psychology, University of Bologna, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark
- Kat Agres
- Centre for Music and Health, National University of Singapore, Singapore; Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore
- Shen-Hsing Annabel Chen
- School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; National Institute of Education, Nanyang Technological University, Singapore.
5
Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024;199:108905. PMID: 38740179. DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research has shown that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongations can roughly be understood as a musical analogue to linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how an adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data from sixty-five participants listening to the musical piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors whilst controlling for audio envelope. We observed that prolongation and preparation both carry independent and distinguishable predictive value for theta-band fluctuation in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. These results show that the predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, which furthers our understanding of the perception and cognition of musical structure.
Affiliation(s)
- Steffen A Herff
- Sydney Conservatorium of Music, University of Sydney, Sydney, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
6
Roswandowitz C, Kathiresan T, Pellegrino E, Dellwo V, Frühholz S. Cortical-striatal brain network distinguishes deepfake from real speaker identity. Commun Biol 2024;7:711. PMID: 38862808. PMCID: PMC11166919. DOI: 10.1038/s42003-024-06372-6.
Abstract
Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we tested the neurocognitive sensitivity of 25 participants to accept or reject person identities as recreated in audio deepfakes. We generated high-quality voice identity clones from natural speakers using advanced deepfake technologies. During an identity matching task, participants showed intermediate performance with deepfake voices, indicating both deception by and resistance to deepfake identity spoofing. On the brain level, univariate and multivariate analyses consistently revealed a central cortico-striatal network that decoded the vocal acoustic pattern and deepfake level (auditory cortex), as well as natural speaker identities (nucleus accumbens), which are valued for their social relevance. This network is embedded in a broader neural identity and object recognition network. Humans can thus be partly tricked by deepfakes, but the neurocognitive mechanisms identified during deepfake processing open windows for strengthening human resilience to fake information.
Affiliation(s)
- Claudia Roswandowitz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland.
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland.
- Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland.
- Thayabaran Kathiresan
- Centre for Neuroscience of Speech, University of Melbourne, Melbourne, Australia
- Redenlab, Melbourne, Australia
- Elisa Pellegrino
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Volker Dellwo
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
7
Zhao C, Ong JH, Veic A, Patel AD, Jiang C, Fogel AR, Wang L, Hou Q, Das D, Crasto C, Chakrabarti B, Williams TI, Loutrari A, Liu F. Predictive processing of music and language in autism: Evidence from Mandarin and English speakers. Autism Res 2024;17:1230-1257. PMID: 38651566. DOI: 10.1002/aur.3133.
Abstract
Atypical predictive processing has been associated with autism across multiple domains, based mainly on artificial antecedents and consequents. As structured sequences in which expectations derive from implicit learning of combinatorial principles, language and music provide naturalistic stimuli for investigating predictive processing. In this study, we matched melodic and sentence stimuli in cloze probabilities and examined musical and linguistic prediction in Mandarin-speaking (Experiment 1) and English-speaking (Experiment 2) autistic and non-autistic individuals using both production and perception tasks. In the production tasks, participants listened to unfinished melodies/sentences and then produced the final notes/words to complete these items. In the perception tasks, participants provided expectedness ratings of the completed melodies/sentences based on the most frequent notes/words in the norms. Experiment 1 showed intact musical prediction but atypical linguistic prediction in autism in the Mandarin sample, in which musical training experience and receptive vocabulary skills were imbalanced between groups; this group difference disappeared in the more closely matched sample of English speakers in Experiment 2. These findings suggest the importance of taking an individual-differences approach when investigating predictive processing in music and language in autism, as difficulties in prediction in autism may not reflect a generalized problem with prediction in any type of complex sequence processing.
Affiliation(s)
- Chen Zhao
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Jia Hoong Ong
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Anamarija Veic
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, Massachusetts, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Allison R Fogel
- Department of Psychology, Tufts University, Medford, Massachusetts, USA
- Li Wang
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Qingqi Hou
- Department of Music and Dance, Nanjing Normal University of Special Education, Nanjing, China
- Dipsikha Das
- School of Psychology, Keele University, Staffordshire, UK
- Cara Crasto
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bhismadev Chakrabarti
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Tim I Williams
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Ariadne Loutrari
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
8
Ono K, Mizuochi R, Yamamoto K, Sasaoka T, Yamawaki S. Exploring the neural underpinnings of chord prediction uncertainty: an electroencephalography (EEG) study. Sci Rep 2024;14:4586. PMID: 38403782. PMCID: PMC10894873. DOI: 10.1038/s41598-024-55366-1.
Abstract
Predictive processing in the brain, involving the interaction between interoceptive (bodily signal) and exteroceptive (sensory) processing, is essential for understanding music, as it encompasses the dynamics of musical temporality and affective responses. This study explores the relationship between neural correlates and subjective certainty of chord prediction, focusing on the alignment between predicted and actual chord progressions in both musically appropriate chord sequences and random chord sequences. Participants were asked to predict the final chord in sequences while their brain activity was measured using electroencephalography (EEG). We found that the stimulus-preceding negativity (SPN), an EEG component associated with predictive processing of sensory stimuli, was larger for non-harmonic chord sequences than for harmonic chord progressions. Additionally, the heartbeat-evoked potential (HEP), an EEG component related to interoceptive processing, was larger for random chord sequences and correlated with prediction certainty ratings. The HEP also correlated with the N5 component observed while listening to the final chord. Our findings suggest that the HEP reflects subjective prediction certainty more directly than the SPN. These findings offer new insights into the neural mechanisms underlying music perception and prediction, emphasizing the importance of considering auditory prediction certainty when examining the neural basis of music cognition.
Affiliation(s)
- Kentaro Ono
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan.
- Ryohei Mizuochi
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Kazuki Yamamoto
- Graduate School of Humanities and Social Sciences, Hiroshima University, Higashihiroshima, Japan
- Takafumi Sasaoka
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Shigeto Yamawaki
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
9
Albury AW, Bianco R, Gold BP, Penhune VB. Context changes judgments of liking and predictability for melodies. Front Psychol 2023;14:1175682. PMID: 38034280. PMCID: PMC10684779. DOI: 10.3389/fpsyg.2023.1175682.
Abstract
Predictability plays an important role in the experience of musical pleasure. By leveraging expectations, music induces pleasure through tension and surprise. However, musical predictions draw on both prior knowledge and immediate context. Similarly, musical pleasure, which has been shown to depend on predictability, may also vary relative to the individual and context. Although research has demonstrated the influence of both long-term knowledge and stimulus features on expectations, it is unclear how perceptions of a melody are influenced by comparisons to other music pieces heard in the same context. To examine the effects of context, we compared how listeners' judgments of two distinct sets of stimuli differed when they were presented alone or in combination. Stimuli were excerpts from a repertoire of Western music and a set of experimenter-created melodies. Separate groups of participants rated liking and predictability for each set of stimuli alone and in combination. We found that when heard together, the Repertoire stimuli were more liked and rated as less predictable than when heard alone, with the opposite pattern observed for the Experimental stimuli. This effect was driven by a change in ratings between the Alone and Combined conditions for each stimulus set. These findings demonstrate a context-based shift in predictability ratings and derived pleasure, suggesting that judgments stem not only from the physical properties of the stimulus but also vary relative to other options available in the immediate context.
Affiliation(s)
- Alexander W. Albury
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Roberta Bianco
- Neuroscience of Perception and Action Laboratory, Italian Institute of Technology, Rome, Italy
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Virginia B. Penhune
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
10
Chander A, Aslin RN. Expectation adaptation for rare cadences in music: Item order matters in repetition priming. Cognition 2023;240:105601. PMID: 37604028. PMCID: PMC10501749. DOI: 10.1016/j.cognition.2023.105601.
Abstract
Humans make predictions about future events in many domains, including when they listen to music. Previous accounts of harmonic expectation in music have emphasised the role of implicit musical knowledge acquired in the long term through the mechanism of statistical learning. However, it is not known whether listeners can adapt their expectations for unusual harmonies in the short term through repetition priming, and whether the extent of any short-term adaptation depends on the unfolding statistical structure of the music. To explore these possibilities, we presented 150 participants with phrases from Bach chorales that ended with a cadence that was either a priori likely or unlikely based on the long-term statistical structure of the corpus of chorales. While holding the 50-50 incidence of likely vs. unlikely cadences constant, we manipulated the order in which these phrases were presented such that the local probability of hearing an unlikely cadence changed throughout the experiment. For each phrase, participants provided two judgements: (a) a prospective rating of how confident they were in their expectations for the cadence, and (b) a retrospective rating of how well the presented cadence matched their expectations. While confidence ratings increased over the course of the experiment, the rate of change decreased as the local probability of an unexpected cadence increased. Participants' expectations favoured likely cadences over unlikely cadences on average, but their expectation ratings for unlikely cadences increased at a faster rate over the course of the experiment than for likely cadences, particularly when the local probability of hearing an unlikely cadence was high. Thus, despite entrenched long-term statistics about cadences, listeners can indeed adapt to unusual musical harmonies and are sensitive to the local statistical structure of the musical environment. We suggest that this adaptation is an instance of Bayesian belief updating, a domain-general process that accounts for expectation adaptation in multiple domains.
Affiliation(s)
- Aditya Chander
- Department of Music, Yale University, 469 College St, New Haven, CT 06511, USA.
- Richard N Aslin
- Child Study Center, Yale School of Medicine, 230 S Frontage Rd, New Haven, CT 06519, USA; Department of Psychology, Yale University, 405 Temple St, New Haven, CT 06511, USA
11
Gold BP, Pearce MT, McIntosh AR, Chang C, Dagher A, Zatorre RJ. Auditory and reward structures reflect the pleasure of musical expectancies during naturalistic listening. Front Neurosci 2023;17:1209398. PMID: 37928727. PMCID: PMC10625409. DOI: 10.3389/fnins.2023.1209398.
Abstract
Enjoying music consistently engages key structures of the neural auditory and reward systems such as the right superior temporal gyrus (R STG) and ventral striatum (VS). Expectations seem to play a central role in this effect, as preferences reliably vary according to listeners' uncertainty about the musical future and surprise about the musical past. Accordingly, VS activity reflects the pleasure of musical surprise, and exhibits stronger correlations with R STG activity as pleasure grows. Yet the reward value of musical surprise - and thus the reason for these surprises engaging the reward system - remains an open question. Recent models of predictive neural processing and learning suggest that forming, testing, and updating hypotheses about one's environment may be intrinsically rewarding, and that the constantly evolving structure of musical patterns could provide ample opportunity for this procedure. Consistent with these accounts, our group previously found that listeners tend to prefer melodic excerpts taken from real music when it either validates their uncertain melodic predictions (i.e., is high in uncertainty and low in surprise) or when it challenges their highly confident ones (i.e., is low in uncertainty and high in surprise). An independent research group (Cheung et al., 2019) replicated these results with musical chord sequences, and identified their fMRI correlates in the STG, amygdala, and hippocampus but not the VS, raising new questions about the neural mechanisms of musical pleasure that the present study seeks to address. Here, we assessed concurrent liking ratings and hemodynamic fMRI signals as 24 participants listened to 50 naturalistic, real-world musical excerpts that varied across wide spectra of computationally modeled uncertainty and surprise. 
As in previous studies, liking ratings exhibited an interaction between uncertainty and surprise, with the strongest preferences for high uncertainty/low surprise and low uncertainty/high surprise. fMRI results also replicated previous findings, with music liking effects in the R STG and VS. Furthermore, we identified interactions between uncertainty and surprise on the one hand, and liking and surprise on the other, in VS activity. Altogether, these results provide important support for the hypothesized role of the VS in deriving pleasure from learning about musical structure.
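The "computationally modeled uncertainty and surprise" above are typically derived from probabilistic models of melodic expectation such as IDyOM. As a rough, hypothetical illustration (a toy first-order Markov model, not the far richer variable-order models used in the study), the two quantities are: surprise = -log2 P(event | context), and uncertainty = entropy of the predictive distribution before the event arrives. The function name and smoothing choice below are illustrative assumptions.

```python
import math
from collections import defaultdict

def markov_surprise(sequence):
    """Surprise (information content, bits) and uncertainty (entropy, bits)
    of each transition under a first-order Markov model with add-one
    smoothing, trained on the sequence itself. A toy stand-in for the
    variable-order models (e.g. IDyOM) referenced in the abstract above."""
    alphabet = sorted(set(sequence))
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur in zip(sequence, sequence[1:]):
        counts[prev][cur] += 1
    surprises, entropies = [], []
    for prev, cur in zip(sequence, sequence[1:]):
        total = sum(counts[prev].values()) + len(alphabet)
        # add-one smoothed predictive distribution given the previous event
        probs = {s: (counts[prev][s] + 1) / total for s in alphabet}
        surprises.append(-math.log2(probs[cur]))
        entropies.append(-sum(p * math.log2(p) for p in probs.values()))
    return surprises, entropies
```

A rarely seen continuation (the final D after a run of Cs) yields higher surprise than a common one, which is the sense in which "low uncertainty/high surprise" events stand out.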
Affiliation(s)
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, United States
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT), Montreal, QC, Canada
- Marcus T. Pearce
- Cognitive Science Research Group, School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Anthony R. McIntosh
- Baycrest Centre, Rotman Research Institute, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Catie Chang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, United States
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN, United States
- Alain Dagher
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT), Montreal, QC, Canada
12
Czepiel A, Fink LK, Seibert C, Scharinger M, Kotz SA. Aesthetic and physiological effects of naturalistic multimodal music listening. Cognition 2023; 239:105537. [PMID: 37487303 DOI: 10.1016/j.cognition.2023.105537] [Received: 07/04/2022] [Revised: 05/31/2023] [Accepted: 06/24/2023] [Indexed: 07/26/2023]
Abstract
Compared to audio-only (AO) conditions, audiovisual (AV) information can enhance the aesthetic experience of a music performance. However, such beneficial multimodal effects have yet to be studied in naturalistic music performance settings. Further, peripheral physiological correlates of aesthetic experiences are not well understood. Here, participants were invited to a concert hall for piano performances of Bach, Messiaen, and Beethoven, which were presented in two conditions: AV and AO. They rated their aesthetic experience (AE) after each piece (Experiments 1 and 2), while peripheral signals (cardiorespiratory measures, skin conductance, and facial muscle activity) were continuously measured (Experiment 2). Factor scores of AE were significantly higher in the AV condition in both experiments. The LF/HF ratio, a heart rhythm measure that represents activation of the sympathetic nervous system, was higher in the AO condition, suggesting increased arousal, likely caused by less predictable sound onsets in the AO condition. We present partial evidence that breathing was faster and facial muscle activity was higher in the AV condition, suggesting that observing a performer's movements likely enhances motor mimicry in these more voluntary peripheral measures. Further, zygomaticus ('smiling') muscle activity was a significant predictor of AE. Thus, we suggest physiological measures are related to AE, but at different levels: the more involuntary measures (i.e., heart rhythms) may reflect more sensory aspects, while the more voluntary measures (i.e., muscular control of breathing and facial responses) may reflect the liking aspect of an AE. In summary, we replicate and extend previous findings that AV information enhances AE in a naturalistic music performance setting. We further show that a combination of self-report and peripheral measures benefits a meaningful assessment of AE in naturalistic music performance settings.
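The LF/HF ratio named above is computed from heart rate variability: the ratio of spectral power in the low-frequency band (0.04-0.15 Hz) to power in the high-frequency band (0.15-0.40 Hz) of the interbeat (RR) interval series. A minimal sketch under standard band limits; real HRV pipelines add artifact rejection, detrending, and windowed PSD estimation, and the function name and resampling rate here are illustrative assumptions.

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate the LF/HF ratio from a series of RR intervals (in ms).

    LF = spectral power in 0.04-0.15 Hz, HF = power in 0.15-0.40 Hz.
    The irregularly sampled tachogram is resampled to a uniform grid
    (fs Hz) before a simple FFT-based power estimate."""
    t = np.cumsum(rr_ms) / 1000.0                  # beat times in seconds
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)   # uniform time grid
    tachogram = np.interp(t_uniform, t, rr_ms)     # resampled RR series
    tachogram = tachogram - tachogram.mean()       # remove DC component
    psd = np.abs(np.fft.rfft(tachogram)) ** 2
    freqs = np.fft.rfftfreq(len(tachogram), d=1.0 / fs)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```

An RR series modulated at ~0.1 Hz yields a ratio above 1 (sympathetic dominance in this index), while modulation at ~0.25 Hz (respiratory range) yields a ratio below 1.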
Affiliation(s)
- Anna Czepiel
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Lauren K Fink
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, and Emotion, Frankfurt am Main, Germany
- Christoph Seibert
- Institute for Music Informatics and Musicology, University of Music Karlsruhe, Karlsruhe, Germany
- Mathias Scharinger
- Research Group Phonetics, Department of German Linguistics, University of Marburg, Marburg, Germany; Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
13
Patrick MT, Cohn N, Mertus J, Blumstein SE. Sequences in harmony: Cognitive interactions between musical and visual narrative structure. Acta Psychol (Amst) 2023; 238:103981. [PMID: 37441849 DOI: 10.1016/j.actpsy.2023.103981] [Received: 11/06/2022] [Revised: 07/02/2023] [Accepted: 07/05/2023] [Indexed: 07/15/2023]
Abstract
From film and television to graphic storytelling, tonal music can accompany visual narratives in a variety of contexts. The apprehension of both musical and narrative sequences involves temporal categories in ordered patterning, which raises an interesting question: Do musical progressions and visual narratives rely on shared sequence processing mechanisms? If this is the case, then cues from music and sequential static images, when presented simultaneously, should interact during audiovisual online processing. We tested this question by measuring reaction times to target picture panels appearing in visual narrative (comic strip) sequences, which were presented panel by panel and synchronized with musical chord progressions. Image sequences were either coherent narratives or incoherent (random) panels, and they were aligned with musical accompaniment consisting of coherent tonal chord progressions or non-tonal (unrelated) chords. Reaction times were faster for target images in coherent sequences than incoherent sequences, and even faster for coherent images with tonal accompaniment than non-tonal chords. This indicated an interaction between sequential structures across domains. We take these results as evidence for a shared, domain-general sequence processing mechanism operating across music and visual narrative.
Affiliation(s)
- Morgan T Patrick
- Brown University, United States of America; Northwestern University, United States of America
14
Jiang J, Liu F, Zhou L, Chen L, Jiang C. Explicit processing of melodic structure in congenital amusia can be improved by redescription-associate learning. Neuropsychologia 2023; 182:108521. [PMID: 36870471 DOI: 10.1016/j.neuropsychologia.2023.108521] [Received: 06/03/2022] [Revised: 02/19/2023] [Accepted: 02/19/2023] [Indexed: 03/06/2023]
Abstract
Congenital amusia is a neurodevelopmental disorder of musical processing. Previous research demonstrates that although explicit musical processing is impaired in congenital amusia, implicit musical processing can be intact. However, little is known about whether implicit knowledge could improve explicit musical processing in individuals with congenital amusia. To this end, we developed a training method utilizing redescription-associate learning, aimed at transferring implicit representations of perceptual states into explicit forms through verbal description and then establishing associations between the reported perceptual states and responses via feedback, to investigate whether the explicit processing of melodic structure could be improved in individuals with congenital amusia. Sixteen amusics and 11 controls rated the degree of expectedness of melodies during EEG recording before and after training. In the interim, half of the amusics received nine training sessions on melodic structure, while the other half received no training. Results, based on effect size estimation, showed that at pretest, amusics but not controls failed to explicitly distinguish the regular from the irregular melodies and to exhibit an ERAN in response to the irregular endings. At posttest, trained but not untrained amusics performed as well as controls at both the behavioral and neural levels. At the 3-month follow-up, the training effects were still maintained. These findings present novel electrophysiological evidence of neural plasticity in the amusic brain, suggesting that redescription-associate learning may be an effective method to remediate impaired explicit processes for individuals with other neurodevelopmental disorders who have intact implicit knowledge.
Affiliation(s)
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai, 200234, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, RG6 6AL, UK
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, 200234, China
- Liaoliao Chen
- Foreign Languages College, Shanghai Normal University, Shanghai, 200234, China
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, 200234, China
15
Lévêque Y, Schellenberg EG, Fornoni L, Bouchet P, Caclin A, Tillmann B. Individuals with congenital amusia remember music they like. Cogn Affect Behav Neurosci 2023:10.3758/s13415-023-01084-6. [PMID: 36949277 DOI: 10.3758/s13415-023-01084-6] [Accepted: 02/22/2023] [Indexed: 03/24/2023]
Abstract
Music is better recognized when it is liked. Does this association remain evident when music perception and memory are severely impaired, as in congenital amusia? We tested 11 amusic and 11 matched control participants, asking whether liking of a musical excerpt influences subsequent recognition. In an initial exposure phase, participants-unaware that their recognition would be tested subsequently-listened to 24 musical excerpts and judged how much they liked each excerpt. In the test phase that followed, participants rated whether they recognized the previously heard excerpts, which were intermixed with an equal number of foils matched for mode, tempo, and musical genre. As expected, recognition was in general impaired for amusic participants compared with control participants. For both groups, however, recognition was better for excerpts that were liked, and the liking enhancement did not differ between groups. These results contribute to a growing body of research that examines the complex interplay between emotions and cognitive processes. More specifically, they extend previous findings related to amusics' impairments to a new memory paradigm and suggest that (1) amusic individuals are sensitive to an aesthetic and subjective dimension of the music-listening experience, and (2) emotions can support memory processes even in a population with impaired music perception and memory.
Affiliation(s)
- Yohana Lévêque
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- E Glenn Schellenberg
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Department of Psychology, University of Toronto Mississauga, Mississauga, Canada
- Lesly Fornoni
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Patrick Bouchet
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Anne Caclin
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
16
You S, Sun L, Yang Y. The effects of contextual certainty on tension induction and resolution. Cogn Neurodyn 2023; 17:191-201. [PMID: 36704622 PMCID: PMC9871111 DOI: 10.1007/s11571-022-09810-5] [Received: 11/16/2021] [Revised: 02/21/2022] [Accepted: 04/02/2022] [Indexed: 01/29/2023]
Abstract
Tension is a core principle of the generation of musical emotion and meaning, and is thought to be induced by prediction during music listening. Using EEG and behavioral ratings, the current research investigated how contextual certainty affects musical tension induction and resolution. The major results were that, in the tension induction process, incongruent conditions elicited a larger early negativity (EN) and P600 in ERP responses compared with congruent conditions, and the amplitude of the P600 and the tension ratings were modulated by contextual certainty. In the tension resolution process, contextual certainty further affected the duration of the P600 and the tension ratings. For the certain conditions, tension ratings were higher, tension curves fluctuated faster, and a larger P600 was evoked in the incongruent condition than in the congruent condition. For the uncertain conditions, there was no congruency effect on behavioral ratings or tension curves, but a larger P600 was elicited in the congruent condition. These results show that contextual certainty affects tension induction and resolution, providing a more comprehensive view of how musical prediction affects musical tension.
Affiliation(s)
- Siqi You
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lijun Sun
- College of Art, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
17
How Chanting Relates to Cognitive Function, Altered States and Quality of Life. Brain Sci 2022; 12:1456. [DOI: 10.3390/brainsci12111456] [Received: 09/30/2022] [Revised: 10/19/2022] [Accepted: 10/20/2022] [Indexed: 11/16/2022]
Abstract
Chanting is practiced in many religious and secular traditions and involves rhythmic vocalization or mental repetition of a sound or phrase. This study examined how chanting relates to cognitive function, altered states, and quality of life across a wide range of traditions. A global survey was used to assess experiences during chanting including flow states, mystical experiences, mindfulness, and mind wandering. Further, attributes of chanting were assessed to determine their association with altered states and cognitive benefits, and whether psychological correlates of chanting are associated with quality of life. Responses were analyzed from 456 English-speaking participants who regularly chant across 32 countries and various chanting traditions. Results revealed that different aspects of chanting were associated with distinctive experiential outcomes. Stronger intentionality (devotion, intention, sound) and higher chanting engagement (experience, practice duration, regularity) were associated with altered states and cognitive benefits. Participants whose main practice was call and response chanting reported higher scores of mystical experiences. Participants whose main practice was repetitive prayer reported lower mind wandering. Lastly, intentionality and engagement were associated with quality of life indirectly through altered states and cognitive benefits. This research sheds new light on the phenomenology and psychological consequences of chanting across a range of practices and traditions.
18
Mednicoff SD, Barashy S, Gonzales D, Benning SD, Snyder JS, Hannon EE. Auditory affective processing, musicality, and the development of misophonic reactions. Front Neurosci 2022; 16:924806. [PMID: 36213735 PMCID: PMC9537735 DOI: 10.3389/fnins.2022.924806] [Received: 04/20/2022] [Accepted: 09/02/2022] [Indexed: 11/13/2022]
Abstract
Misophonia can be characterized both as a condition and as a negative affective experience. Misophonia is described as feeling irritation or disgust in response to hearing certain sounds, such as eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as to music, might indicate a vulnerability for misophonia and misophonic reactions. We will review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.
Affiliation(s)
- Erin E. Hannon
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States
19
Feng Y, Quon RJ, Jobst BC, Casey MA. Evoked responses to note onsets and phrase boundaries in Mozart's K448. Sci Rep 2022; 12:9632. [PMID: 35688855 PMCID: PMC9187696 DOI: 10.1038/s41598-022-13710-3] [Received: 10/08/2021] [Accepted: 04/25/2022] [Indexed: 11/29/2022]
Abstract
Understanding the neural correlates of perception of hierarchical structure in music presents a direct window into auditory organization. To examine the hypothesis that high-level and low-level structures, i.e. phrases and notes, elicit different neural responses, we collected intracranial electroencephalography (iEEG) data from eight subjects during exposure to Mozart's K448 and directly compared event-related potentials (ERPs) due to note onsets with those elicited by phrase boundaries. Cluster-level permutation tests revealed that note-onset-related ERPs and phrase-boundary-related ERPs were significantly different at -150, 200, and 450 ms relative to note onset and phrase markers. We also observed increased activity in frontal brain regions when processing phrase boundaries. We relate these observations to (1) a process which syntactically binds notes together hierarchically to form larger phrases; and (2) positive emotions induced by successful prediction of forthcoming phrase boundaries and violations of melodic expectations at phrase boundaries.
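The cluster-level permutation approach named above controls for multiple comparisons across timepoints by comparing the mass of observed suprathreshold clusters against a null distribution built from sign-flip permutations (Maris and Oostenveld style). Below is a minimal one-dimensional sketch under assumed thresholds and array shapes; the study's actual analysis operated over iEEG channels as well as time, and the function name and defaults here are illustrative.

```python
import numpy as np

def cluster_mass_permutation(cond_a, cond_b, n_perm=1000, t_thresh=2.0, seed=0):
    """Cluster-mass permutation test for two paired conditions.

    cond_a, cond_b: (n_subjects, n_timepoints) arrays of ERP amplitudes.
    Returns (observed max cluster mass, p-value for that cluster)."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b

    def max_cluster_mass(d):
        # pointwise one-sample t statistic of the paired difference
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
        supra = np.abs(t) > t_thresh
        best, mass = 0.0, 0.0
        for above, value in zip(supra, np.abs(t)):
            mass = mass + value if above else 0.0  # reset between clusters
            best = max(best, mass)
        return best

    observed = max_cluster_mass(diff)
    # null distribution: randomly flip each subject's difference sign
    null = []
    for _ in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        null.append(max_cluster_mass(diff * signs))
    p = (1 + sum(m >= observed for m in null)) / (n_perm + 1)
    return observed, p
```

A sustained condition difference over a window of timepoints yields a large observed cluster mass and a small p-value, whereas isolated pointwise excursions rarely survive.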
Affiliation(s)
- Yijing Feng
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Robert J Quon
- Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA; Dartmouth-Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Barbara C Jobst
- Geisel School of Medicine, Dartmouth College, Hanover, NH, 03755, USA; Dartmouth-Hitchcock Medical Center, Lebanon, NH, 03756, USA
- Michael A Casey
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA; Department of Music, Dartmouth College, Hanover, NH, 03755, USA
20
Zhang N, Sun L, Wu Q, Yang Y. Tension experience induced by tonal and melodic shift at music phrase boundaries. Sci Rep 2022; 12:8304. [PMID: 35585148 PMCID: PMC9117266 DOI: 10.1038/s41598-022-11949-4] [Received: 08/09/2021] [Accepted: 04/25/2022] [Indexed: 11/20/2022]
Abstract
Music tension is a link between music structures and emotions. As music unfolds, developmental patterns induce various emotional experiences, but the relationship between developmental patterns and tension experience remains unclear. The present study compared two developmental patterns across two successive phrases (tonal shift and melodic shift) with a repetition condition to investigate their relationship with tension experience. Professional musicians rated felt tension online while their EEG responses were recorded during listening to music sequences. Behavioral results showed that tension ratings under the tonal and melodic shift conditions were higher than those under the repetition condition. ERP results showed larger potentials in the early P300 and late positive component (LPC) time windows under the tonal shift condition, and an early right anterior negativity (ERAN) and LPC under the melodic shift condition. ERSP results showed that early beta and late gamma power increased under the tonal shift condition, while theta power decreased and alpha power increased under the melodic shift condition. Our findings suggest that developmental patterns play a vital role in tension experiences: tonal shift affects tension through shift detection and integration, while melodic shift affects tension through attentional processing and working memory integration. From the perspective of the Event Structure Processing Model, these results provide solid evidence specifying time-span segmentation and reduction.
Affiliation(s)
- Ning Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lijun Sun
- College of Art, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qiong Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
21
Chabin T, Pazart L, Gabriel D. Vocal melody and musical background are simultaneously processed by the brain for musical predictions. Ann N Y Acad Sci 2022; 1512:126-140. [PMID: 35229293 DOI: 10.1111/nyas.14755] [Received: 09/20/2021] [Accepted: 01/18/2022] [Indexed: 12/18/2022]
Abstract
Musical pleasure is related to the capacity to predict and anticipate the music. By recording early cerebral responses of 16 participants with electroencephalography during periods of silence inserted in known and unknown songs, we aimed to measure the contribution of different musical attributes to musical predictions. We investigated the mismatch between past encoded musical features and the current sensory inputs when listening to lyrics associated with vocal melody, only background instrumental material, or both attributes grouped together. When participants were listening to chords and lyrics for known songs, the brain responses related to musical violation produced event-related potential responses around 150-200 ms that were of a larger amplitude than for chords or lyrics only. Microstate analysis also revealed that for chords and lyrics, the global field power had an increased stability and a longer duration. The source localization identified that the right superior temporal and frontal gyri and the inferior and medial frontal gyri were activated for a longer time for chords and lyrics, likely caused by the increased complexity of the stimuli. We conclude that when several musical attributes are grouped together, their integration and retrieval recruit larger neuronal networks, leading to more accurate predictions.
Affiliation(s)
- Thibault Chabin
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France
- Lionel Pazart
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Bourgogne Franche-Comté, France
- Damien Gabriel
- Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
22
Neural correlates of acoustic dissonance in music: The role of musicianship, schematic and veridical expectations. PLoS One 2021; 16:e0260728. [PMID: 34852008 PMCID: PMC8635369 DOI: 10.1371/journal.pone.0260728] [Received: 08/11/2021] [Accepted: 11/15/2021] [Indexed: 11/19/2022]
Abstract
In Western music, harmonic expectations can be fulfilled or broken by unexpected chords. Musical irregularities in the absence of auditory deviance elicit well-studied neural responses (e.g. ERAN, P3, N5). These responses are sensitive to schematic expectations (induced by syntactic rules of chord succession) and veridical expectations about predictability (induced by experimental regularities). However, the cognitive and sensory contributions to these responses, and their plasticity as a result of musical training, remain under debate. In the present study, we explored whether the neural processing of pure acoustic violations is affected by schematic and veridical expectations. Moreover, we investigated whether these two factors interact with long-term musical training. In Experiment 1, we registered the ERPs elicited by dissonant clusters placed either at the middle or the ending position of chord cadences. In Experiment 2, we presented listeners with a high proportion of cadences ending in a dissonant chord. In both experiments, we compared the ERPs of musicians and non-musicians. Dissonant clusters elicited distinctive neural responses (an early negativity (EN), the P3, and the N5). While the EN was not affected by syntactic rules, the P3a and P3b were larger for dissonant closures than for middle dissonant chords. Interestingly, these components were larger in musicians than in non-musicians, whereas the N5 showed the opposite pattern. Finally, the predictability of dissonant closures in our experiment did not modulate any of the ERPs. Our study suggests that, at early time windows, dissonance is processed based on acoustic deviance independently of syntactic rules. However, at longer latencies, listeners may be able to engage integration mechanisms and further processes of attentional and structural analysis dependent on musical hierarchies, which are enhanced in musicians.
23
Czepiel A, Fink LK, Fink LT, Wald-Fuhrmann M, Tröndle M, Merrill J. Synchrony in the periphery: inter-subject correlation of physiological responses during live music concerts. Sci Rep 2021; 11:22457. [PMID: 34789746 PMCID: PMC8599424 DOI: 10.1038/s41598-021-00492-3] [Received: 08/18/2021] [Accepted: 10/11/2021] [Indexed: 11/19/2022]
Abstract
While there is an increasing shift in cognitive science to study perception of naturalistic stimuli, this study extends this goal to naturalistic contexts by assessing physiological synchrony across audience members in a concert setting. Cardiorespiratory, skin conductance, and facial muscle responses were measured from participants attending live string quintet performances of full-length works from Viennese Classical, Contemporary, and Romantic styles. The concert was repeated on three consecutive days with different audiences. Using inter-subject correlation (ISC) to identify reliable responses to music, we found that highly correlated responses depicted typical signatures of physiological arousal. By relating physiological ISC to quantitative values of music features, logistic regressions revealed that high physiological synchrony was consistently predicted by faster tempi (which had higher ratings of arousing emotions and engagement), but only in the Classical and Romantic styles (rated as familiar) and not the Contemporary style (rated as unfamiliar). Additionally, highly synchronised responses across all three concert audiences occurred during important structural moments in the music, identified using music-theoretical analysis: namely at transitional passages, boundaries, and phrase repetitions. Overall, our results show that specific music features induce similar physiological responses across audience members in a concert context, which are linked to arousal, engagement, and familiarity.
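Inter-subject correlation, as used above, quantifies how similarly a physiological time series unfolds across audience members. A minimal leave-one-out sketch (one common ISC variant; the study's exact formulation may differ, and the function name is an illustrative assumption):

```python
import numpy as np

def inter_subject_correlation(signals):
    """Leave-one-out ISC: correlate each subject's time series with the
    mean time series of all remaining subjects.

    signals: (n_subjects, n_samples) array. Returns one ISC value per
    subject; high values indicate that a subject's response tracks the
    group's shared, stimulus-driven response."""
    signals = np.asarray(signals, dtype=float)
    iscs = []
    for i in range(signals.shape[0]):
        others = np.delete(signals, i, axis=0).mean(axis=0)
        r = np.corrcoef(signals[i], others)[0, 1]  # Pearson correlation
        iscs.append(r)
    return np.array(iscs)
```

Audiences driven by a shared stimulus yield ISC values near 1, while responses that are independent noise yield values near 0, which is why moments of high ISC can be aligned with structural events in the score.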
Affiliation(s)
- Anna Czepiel
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Lauren K Fink
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck - NYU Center for Language, Music, & Emotion (CLaME), New York, USA
- Lea T Fink
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Melanie Wald-Fuhrmann
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck - NYU Center for Language, Music, & Emotion (CLaME), New York, USA
- Julia Merrill
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Institute of Music, University of Kassel, Kassel, Germany
24
Abstract
Increasing evidence has uncovered associations between the cognition of abstract schemas and spatial perception. Here we examine such associations for Western musical syntax, tonality. Spatial metaphors are ubiquitous when describing tonality: stable, closural tones are considered to be spatially central and, as gravitational foci, spatially lower. We investigated whether listeners, musicians and nonmusicians alike, indeed associate tonal relationships with visuospatial dimensions, including spatial height, centrality, laterality, and size, implicitly or explicitly, and whether such mappings are consistent with established metaphors. In the explicit paradigm, participants heard a tonality-establishing prime followed by a probe tone and coupled each probe with a subjectively appropriate location (Exp.1) or size (Exp.4). The implicit paradigm used a version of the Implicit Association Test to examine associations of tonal stability with vertical position (Exp.2), lateral position (Exp.3), and size (Exp.5). Tonal stability was indeed associated with perceived physical space: the spatial distances between the locations associated with different scale-degrees significantly correlated with the tonal stability differences between those scale-degrees. However, inconsistent with musical discourse, stable tones were associated with leftward (instead of central) and higher (instead of lower) spatial positions. We speculate that these mappings are influenced by emotion, embodying the "good is up" metaphor, and by the spatial structure of music keyboards. Taken together, the results demonstrate a new type of cross-modal correspondence and a hitherto under-researched connotative function of musical structure. Importantly, the results suggest that the spatial mappings of an abstract domain may be independent of the spatial metaphors used to describe that domain.
25
Contextual prediction modulates musical tension: Evidence from behavioral and neural responses. Brain Cogn 2021; 152:105771. [PMID: 34217125 DOI: 10.1016/j.bandc.2021.105771] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 04/10/2021] [Accepted: 06/23/2021] [Indexed: 11/23/2022]
Abstract
Tension is a bridge between music structure and emotion. It is known that tension is affected by prediction as music unfolds during listening. Combining behavioral and neural responses, the current research investigated how musical predictions influence tension during the build-up of predictions from the musical context (anticipatory stage) and their integration with upcoming stimuli (integration stage). The results showed that, at the anticipatory stage, the tension curve changed faster and was less stable in low-prediction than in high-prediction conditions, and a larger N5 was elicited in the ERP response. Furthermore, at the integration stage, behavioral ratings of tension were higher in incongruent than in congruent conditions regardless of the predictability of the final chord; a right negativity and a P600 were elicited, and the amplitude of the P600 was modulated by the predictability of the final chord. These results indicate that the effect of prediction on tension was modulated by contextual predictability. The findings provide a more comprehensive view of how musical prediction affects musical tension.
26
Dell'Anna A, Rosso M, Bruno V, Garbarini F, Leman M, Berti A. Does musical interaction in a jazz duet modulate peripersonal space? Psychol Res 2021; 85:2107-2118. [PMID: 32488599 DOI: 10.1007/s00426-020-01365-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Accepted: 05/20/2020] [Indexed: 01/11/2023]
Abstract
Researchers have widely studied peripersonal space (the space within reach) over the last 20 years, with a focus on its plasticity following the use of tools and, more recently, social interactions. Ensemble music is a sophisticated joint action that is typically explored in its temporal rather than spatial dimensions, even within embodied approaches. We therefore devised a new paradigm in which two musicians performed a jazz standard either in a cooperative (correct harmony) or uncooperative (incorrect harmony) condition, under the hypothesis that their peripersonal spaces would be modulated by the interaction. We exploited a well-established audio-tactile integration task as a proxy for such a space. After the performances, we measured reaction times to tactile stimuli on the subjects' right hand and auditory stimuli delivered at two different distances (next to the subject and next to the partner). Given previous evidence that integration of two different stimuli (e.g., a tactile and an auditory stimulus) is faster in near space than in far space, we predicted that a cooperative interaction would extend the peripersonal space of the musicians towards their partner, facilitating reaction times to bimodal stimuli in both spaces. Surprisingly, we obtained the complementary result: an increase in reaction times to near tactile-auditory stimuli, but only following the uncooperative condition. We interpret this finding as a suppression of the subject's peripersonal space or as a withdrawal from the uncooperative partner. Subjective reports, and correlations between these reports and reaction times, are consistent with that interpretation. Finally, we found an overall better multisensory integration competence in musicians compared to non-musicians tested in the same task.
Affiliation(s)
- A Dell'Anna
- IPEM, Ghent University, Ghent, Belgium.
- Department of Psychology, Turin University, Turin, Italy.
- M Rosso
- IPEM, Ghent University, Ghent, Belgium
- Department of Psychology, Turin University, Turin, Italy
- V Bruno
- Department of Psychology, Turin University, Turin, Italy
- F Garbarini
- Department of Psychology, Turin University, Turin, Italy
- M Leman
- IPEM, Ghent University, Ghent, Belgium
- A Berti
- Department of Psychology, Turin University, Turin, Italy
27
Pesnot Lerousseau J, Schön D. Musical Expertise Is Associated with Improved Neural Statistical Learning in the Auditory Domain. Cereb Cortex 2021; 31:4877-4890. [PMID: 34013316 DOI: 10.1093/cercor/bhab128] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 04/16/2021] [Accepted: 04/16/2021] [Indexed: 11/14/2022] Open
Abstract
It is poorly known whether musical training is associated with improvements in general cognitive abilities, such as statistical learning (SL). In standard SL paradigms, musicians have shown better performance than nonmusicians. However, this advantage could be due to differences in auditory discrimination, in memory, or truly in the ability to learn sequence statistics. Unfortunately, these different hypotheses make similar predictions in terms of expected results. To dissociate them, we developed a Bayesian model and recorded electroencephalography (EEG). Our results confirm that musicians perform approximately 15% better than nonmusicians at predicting items in auditory sequences that embed either low- or high-order statistics. These higher performances are explained in the model by parameters governing the learning of high-order statistics and the selection-stage noise. EEG recordings reveal a neural underpinning of the musicians' advantage: the P300 amplitude correlates with the surprise elicited by each item, and does so more strongly for musicians. Finally, early EEG components correlate with the surprise elicited by low-order statistics, whereas late EEG components correlate with the surprise elicited by high-order statistics, and this effect is stronger in musicians. Overall, our results demonstrate that musical expertise is associated with improved neural SL in the auditory domain. SIGNIFICANCE STATEMENT It is poorly known whether musical training leads to improvements in general cognitive skills. One fundamental cognitive ability, SL, is thought to be enhanced in musicians, but previous studies have reported mixed results, because such a musicians' advantage admits very different explanations, such as improvements in auditory discrimination or in memory. To solve this problem, we developed a Bayesian model and recorded EEG to dissociate these explanations. Our results reveal that musical expertise is truly associated with an improved ability to learn sequence statistics, especially high-order statistics. This advantage is reflected in the electroencephalographic recordings, where the P300 amplitude is more sensitive to surprising items in musicians than in nonmusicians.
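In this framing, "low-order statistics" are first-order transition probabilities between adjacent items, and an item's surprise is its negative log probability given the preceding item. The sketch below illustrates only that definition on a toy sequence; it is not the authors' Bayesian observer model:

```python
import numpy as np
from collections import defaultdict

def first_order_surprise(sequence):
    """Surprise (-log2 P) of each transition, with transition
    probabilities estimated from the sequence itself."""
    # count adjacent-item transitions
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    # convert each observed transition to -log2 of its probability
    surprises = []
    for a, b in zip(sequence, sequence[1:]):
        p = counts[a][b] / sum(counts[a].values())
        surprises.append(-np.log2(p))
    return surprises

# 'A' is always followed by 'B' (surprise 0 bits); after 'B', 'A' and 'C'
# are equally likely (surprise 1 bit), so a listener tracking these
# statistics should find post-'B' items more surprising.
print(first_order_surprise(list("ABABCABABC")))
```

Higher-order statistics extend the same idea by conditioning on longer contexts (pairs, triples of preceding items) rather than a single predecessor.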
Affiliation(s)
- Daniele Schön
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
28
Li CW, Guo FY, Tsai CG. Predictive processing, cognitive control, and tonality stability of music: An fMRI study of chromatic harmony. Brain Cogn 2021; 151:105751. [PMID: 33991840 DOI: 10.1016/j.bandc.2021.105751] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2020] [Revised: 05/01/2021] [Accepted: 05/03/2021] [Indexed: 10/21/2022]
Abstract
The present study aimed at identifying the brain regions that preferentially respond to music with medium degrees of key stability. There were three types of auditory stimuli. Diatonic music based strictly on major and minor scales has the highest key stability, whereas atonal music has the lowest key stability. Between these two extremes, chromatic music is characterized by sophisticated uses of out-of-key notes, which challenge the internal model of musical pitch and lead to higher precision-weighted prediction error compared to diatonic and atonal music. The brain activity of 29 adults with excellent relative pitch was measured with functional magnetic resonance imaging while they listened to diatonic music, chromatic music, and atonal random note sequences. Several frontoparietal regions showed significantly greater response to chromatic music than to diatonic music and atonal sequences, including the pre-supplementary motor area (extending into the dorsal anterior cingulate cortex), dorsolateral prefrontal cortex, rostrolateral prefrontal cortex, intraparietal sulcus, and precuneus. We suggest that these frontoparietal regions may support working memory processes, hierarchical sequencing, and conflict resolution of remotely related harmonic elements during the predictive processing of chromatic music. This finding suggests a possible link between precision-weighted prediction error and the frontoparietal regions implicated in cognitive control.
Affiliation(s)
- Chia-Wei Li
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei, Taiwan
- Fong-Yi Guo
- Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan
- Chen-Gia Tsai
- Graduate Institute of Musicology, National Taiwan University, Taipei, Taiwan
- Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan
29
Bower J, Magee WL, Catroppa C, Baker FA. The Neurophysiological Processing of Music in Children: A Systematic Review With Narrative Synthesis and Considerations for Clinical Practice in Music Therapy. Front Psychol 2021; 12:615209. [PMID: 33935868 PMCID: PMC8081903 DOI: 10.3389/fpsyg.2021.615209] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 03/10/2021] [Indexed: 11/17/2022] Open
Abstract
Introduction: Evidence supporting the use of music interventions to maximize arousal and awareness in adults presenting with a disorder of consciousness continues to grow. However, the brain of a child is not simply a small adult brain, and therefore adult theories are not directly translatable to the pediatric population. The present study aims to synthesize brain imaging data about the neural processing of music in children aged 0-18 years, to form a theoretical basis for music interventions with children presenting with a disorder of consciousness following acquired brain injury. Methods: We conducted a systematic review with narrative synthesis utilizing an adaptation of the methodology developed by Popay and colleagues. Following the development of the narrative that answered the central question "what does brain imaging data reveal about the receptive processing of music in children?", discussion was centered around the clinical implications of music therapy with children following acquired brain injury. Results: The narrative synthesis included 46 studies that utilized EEG, MEG, fMRI, and fNIRS scanning techniques in children aged 0-18 years. From birth, musical stimuli elicit distinct but immature electrical responses, with components of the auditory evoked response having longer latencies and more variable amplitudes than their adult counterparts. Hemodynamic responses are observed throughout cortical and subcortical structures; however, cortical immaturity impacts musical processing and the localization of function in infants and young children. The processing of complex musical stimuli continues to mature into late adolescence. Conclusion: While the ability to process fundamental musical elements is present from birth, infants and children process music more slowly and utilize different cortical areas compared to adults. Brain injury in childhood occurs in a period of rapid development, and the ability to process music following brain injury will likely depend on pre-morbid musical processing. Further, a significant brain injury may disrupt the developmental trajectory of complex music processing. However, complex music processing may emerge earlier than comparative language processing, and recruit a more global circuitry.
Affiliation(s)
- Janeen Bower
- Faculty of Fine Arts and Music, The University of Melbourne, Melbourne, VIC, Australia
- Brain and Mind, Clinical Sciences, The Murdoch Children's Research Institute, Melbourne, VIC, Australia
- Music Therapy Department, The Royal Children's Hospital Melbourne, Melbourne, VIC, Australia
- Wendy L. Magee
- Boyer College of Music and Dance, Temple University, Philadelphia, PA, United States
- Cathy Catroppa
- Brain and Mind, Clinical Sciences, The Murdoch Children's Research Institute, Melbourne, VIC, Australia
- Melbourne School of Psychological Sciences and The Department of Paediatrics, The University of Melbourne, Melbourne, VIC, Australia
- Psychology Department, The Royal Children's Hospital Melbourne, Melbourne, VIC, Australia
- Felicity Anne Baker
- Faculty of Fine Arts and Music, The University of Melbourne, Melbourne, VIC, Australia
- Centre of Research in Music and Health, Norwegian Academy of Music, Oslo, Norway
30
Kogan VV, Reiterer SM. Eros, Beauty, and Phon-Aesthetic Judgements of Language Sound. We Like It Flat and Fast, but Not Melodious. Comparing Phonetic and Acoustic Features of 16 European Languages. Front Hum Neurosci 2021; 15:578594. [PMID: 33708080 PMCID: PMC7940689 DOI: 10.3389/fnhum.2021.578594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 01/12/2021] [Indexed: 11/13/2022] Open
Abstract
This article concerns aesthetic preferences for the sound of European foreign languages. We investigated the phonetic-acoustic dimension of linguistic aesthetic pleasure to describe the "music" found in European languages. The Romance languages French, Italian, and Spanish take the lead when people talk about melodious language, i.e., music-like effects in language (a.k.a. phonetic chill). On the other end of the melodiousness spectrum are German and Arabic, which are often considered to sound harsh and unattractive. Despite the public interest, limited research has been conducted on phonaesthetics, i.e., the subfield of phonetics concerned with the aesthetic properties of speech sounds (Crystal, 2008). Our goal is to fill this research gap by identifying the acoustic features that drive the auditory perception of language sound beauty. What is so music-like in a language that makes people say "it is music to my ears"? Forty-five central European participants listened to 16 auditorily presented European languages, rated each language on 22 binary characteristics (e.g., beautiful vs. ugly, funny vs. boring), and indicated their language familiarity, L2 background, liking of the speaker's voice, demographics, and musicality levels. Findings revealed that all factors, in complex interplay, explain a certain percentage of variance: familiarity and expertise in foreign languages, speaker voice characteristics, phonetic complexity, musical acoustic properties, and finally the musical expertise of the listener. The most important discovery was the trade-off between speech tempo and so-called linguistic melody (pitch variance): the faster the language, the flatter/more atonal it is in pitch (speech melody), making it highly appealing acoustically (sounding beautiful and sexy), but not so melodious in a "musical" sense.
Affiliation(s)
- Vita V Kogan
- School of European Culture and Languages, University of Kent, Kent, United Kingdom
- Susanne M Reiterer
- Department of Linguistics, University of Vienna, Vienna, Austria
- Teacher Education Centre, University of Vienna, Vienna, Austria
31
Musical Training and Brain Volume in Older Adults. Brain Sci 2021; 11:50. [PMID: 33466337 PMCID: PMC7824792 DOI: 10.3390/brainsci11010050] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 12/24/2020] [Accepted: 12/28/2020] [Indexed: 12/14/2022] Open
Abstract
Musical practice, including musical training and musical performance, has been found to benefit cognitive function in older adults. Less is known about the role of musical experiences on brain structure in older adults. The present study examined the role of different types of musical behaviors on brain structure in older adults. We administered the Goldsmiths Musical Sophistication Index, a questionnaire that includes questions about a variety of musical behaviors, including performance on an instrument, musical practice, allocation of time to music, musical listening expertise, and emotional responses to music. We demonstrated that musical training, defined as the extent of musical training, musical practice, and musicianship, was positively and significantly associated with the volume of the inferior frontal cortex and parahippocampus. In addition, musical training was positively associated with volume of the posterior cingulate cortex, insula, and medial orbitofrontal cortex. Together, the present study suggests that musical behaviors relate to a circuit of brain regions involved in executive function, memory, language, and emotion. As gray matter often declines with age, our study has promising implications for the positive role of musical practice on aging brain health.
32
Egermann H, Reuben F. "Beauty Is How You Feel Inside": Aesthetic Judgments Are Related to Emotional Responses to Contemporary Music. Front Psychol 2020; 11:510029. [PMID: 33281651 PMCID: PMC7691637 DOI: 10.3389/fpsyg.2020.510029] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2019] [Accepted: 10/07/2020] [Indexed: 11/13/2022] Open
Abstract
While it has extensively been argued that aesthetic categories such as beauty have a direct relationship to emotion, there has only been limited psychological research on the relationship between aesthetic judgments and emotional responses to art. Music is recognized to be an art form that elicits strong emotional responses in listeners and it is therefore pertinent to study empirically how aesthetic judgments relate to emotional responses to music listening. The aim of the presented study is to test for the impact of aesthetic judgment on various psychophysiological response measures of emotion that were assessed in parallel in two contemporary music concerts, each with a different audience and program. In order to induce different levels of aesthetic judgments in participants, we assigned them randomly to one of two groups in a between-subjects design in both concerts: One group attended a talk on the music presented, illustrating its aesthetic value, while the other group attended an unrelated talk on a non-musical topic. During the concerts, we assessed, from 41 participants in Concert 1 (10 males; mean age 23 years) and 53 in Concert 2 (14 males; mean age 24 years), different emotional response components: (a) retrospective rating of emotion; (b) activation of the peripheral nervous system (skin conductance and heart rate); (c) the activity of two facial muscles associated with emotional valence (only Concert 1). Participants listened to live performances of a selection of contemporary music pieces. After each piece, participants rated the music according to a list of commonly discussed aesthetic judgment criteria, all thought to contribute to the perceived aesthetic value of art. While preconcert talks did not significantly impact value judgment ratings, through factor analyses it was found that aesthetic judgments could be grouped into several underlying dimensions representing analytical, semantic, traditional aesthetic, and typicality values. 
All dimensions were then shown to be related to subjective and physiological responses to music. The findings reported in this study contribute to understanding the relationship between aesthetic judgment processes and emotional responses to music. The results give further evidence that cognitive-affective interactions play a significant role in the processing of music stimuli.
Affiliation(s)
- Hauke Egermann
- York Music Psychology Group, Music Science and Technology Research Cluster, Department of Music, University of York, York, United Kingdom
- Federico Reuben
- York Music Psychology Group, Music Science and Technology Research Cluster, Department of Music, University of York, York, United Kingdom
33
Sun L, Hu L, Ren G, Yang Y. Musical Tension Associated With Violations of Hierarchical Structure. Front Hum Neurosci 2020; 14:578112. [PMID: 33192408 PMCID: PMC7531224 DOI: 10.3389/fnhum.2020.578112] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 08/27/2020] [Indexed: 11/13/2022] Open
Abstract
Tension is one of the core principles of emotion evoked by music, linking objective musical events and subjective experience. The present study used continuous behavioral rating and electroencephalography (EEG) to investigate the dynamic process of tension generation and its underlying neurocognitive mechanisms; specifically, tension induced by structural violations at different hierarchical levels of music. In the experiment, twenty-four musicians rated felt tension continuously in real time while listening to music sequences with either well-formed structure, phrase violations, or period violations. The behavioral data showed that structural violations gave rise to an increasing and accumulating tension experience as the music unfolded; tension was increased dramatically by structural violations. Correspondingly, structural violations elicited an N5 at global field power (GFP) peaks and induced a decrease in neural oscillation power in the alpha frequency band (8-13 Hz). Furthermore, compared to phrase violations, period violations elicited a larger N5 and induced a longer-lasting decrease in alpha-band power, suggesting a hierarchical manner of musical processing. These results demonstrate the important role of musical structure in the generation of the experience of tension, providing support for the dynamic view of musical emotion and the hierarchical manner of tension processing.
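The alpha-band effect reported here is a decrease in 8-13 Hz spectral power. As a simplified illustration of how band power can be computed for a single EEG channel (a plain periodogram under an assumed 500 Hz sampling rate, not the authors' time-frequency analysis):

```python
import numpy as np

def band_power(signal, fs, band=(8.0, 13.0)):
    """Mean periodogram power of a 1-D signal within a frequency band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * signal.size)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# A 10 Hz ("alpha") oscillation buried in noise carries far more
# alpha-band power than noise alone, so an alpha power *decrease*
# appears as a drop in this value between conditions.
fs = 500.0
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1.0 / fs)
with_alpha = np.sin(2 * np.pi * 10.0 * t) + 0.1 * rng.standard_normal(t.size)
without_alpha = 0.1 * rng.standard_normal(t.size)
print(band_power(with_alpha, fs) > band_power(without_alpha, fs))  # True
```

Studies like this one typically use time-resolved methods (e.g. wavelet transforms) to track such power changes over the course of a trial; the periodogram above captures only the band-power definition itself.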
Affiliation(s)
- Lijun Sun
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Li Hu
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Guiqin Ren
- College of Psychology, Liaoning Normal University, Dalian, China
- Yufang Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
34
Maimon NB, Lamy D, Eitan Z. Crossmodal Correspondence Between Tonal Hierarchy and Visual Brightness: Associating Syntactic Structure and Perceptual Dimensions Across Modalities. Multisens Res 2020; 33:805-836. [PMID: 33706266 DOI: 10.1163/22134808-bja10006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2019] [Accepted: 03/02/2020] [Indexed: 11/19/2022]
Abstract
Crossmodal correspondences (CMC) systematically associate perceptual dimensions in different sensory modalities (e.g., auditory pitch and visual brightness), and affect perception, cognition, and action. While previous work typically investigated associations between basic perceptual dimensions, here we present a new type of CMC, involving a high-level, quasi-syntactic schema: music tonality. Tonality governs most Western music and regulates stability and tension in melodic and harmonic progressions. Musicians have long associated tonal stability with non-auditory domains, yet such correspondences have hardly been investigated empirically. Here, we investigated CMC between tonal stability and visual brightness, in musicians and in non-musicians, using explicit and implicit measures. On the explicit test, participants heard a tonality-establishing context followed by a probe tone, and matched each probe to one of several circles, varying in brightness. On the implicit test, we applied the Implicit Association Test to auditory (tonally stable or unstable sequences) and visual (bright or dark circles) stimuli. The findings indicate that tonal stability is associated with visual brightness both explicitly and implicitly. They further suggest that this correspondence depends only partially on conceptual musical knowledge, as it also operates through fast, unintentional, and arguably automatic processes in musicians and non-musicians alike. By showing that abstract musical structure can establish concrete connotations to a non-auditory perceptual domain, our results open a hitherto unexplored avenue for research, associating syntactical structure with connotative meaning.
Affiliation(s)
- Neta B Maimon
- The School of Psychological Sciences, Tel Aviv University, POB 39040, Tel Aviv 69978, Israel
- Dominique Lamy
- The School of Psychological Sciences, Tel Aviv University, POB 39040, Tel Aviv 69978, Israel
- The Sagol School of Neuroscience, Tel Aviv University, POB 39040, Tel Aviv 69978, Israel
- Zohar Eitan
- Buchman-Mehta School of Music, Tel Aviv University, POB 39040, Tel Aviv 69978, Israel
35
Zioga I, Harrison PMC, Pearce MT, Bhattacharya J, Luft CDB. Auditory but Not Audiovisual Cues Lead to Higher Neural Sensitivity to the Statistical Regularities of an Unfamiliar Musical Style. J Cogn Neurosci 2020; 32:2241-2259. [PMID: 32762519 DOI: 10.1162/jocn_a_01614] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.
36
Sun L, Feng C, Yang Y. Tension Experience Induced By Nested Structures In Music. Front Hum Neurosci 2020; 14:210. [PMID: 32670037 PMCID: PMC7327114 DOI: 10.3389/fnhum.2020.00210] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Accepted: 05/08/2020] [Indexed: 11/16/2022] Open
Abstract
Tension experience is the basis for music emotion. In music, discrete elements are always organized into complex nested structures to convey emotion. However, the processing of musical tension in nested structures remains unknown. The present study investigated the tension experience induced by nested structures and its underlying neural mechanisms, using a continuous tension-rating task with simultaneous electroencephalography (EEG). Thirty musicians listened to chorale sequences with non-nested, singly nested, and doubly nested structures and rated their tension experience in real time. Behavioral data indicated that the tension induced by nested structures fluctuated more than that induced by the non-nested structure, and that the difference arose mainly during tension induction rather than tension resolution. However, the EEG data showed that ending chords in the nested structures elicited larger late positive components (LPCs) than in the non-nested structure, reflecting the cost of cognitively integrating long-distance structural dependencies. The discrepancy between the experience of resolution and the neural responses reveals that emotion and cognition are not strictly parallel. Furthermore, the LPC elicited by the doubly nested structure showed a smaller scalp distribution than that elicited by the singly nested structure, indicating that the doubly nested structure was more difficult to process. These findings reveal the dynamic tension experience induced by nested structures and the influence of nesting type, shedding new light on the relationship between structure and tension in music.
Affiliation(s)
- Lijun Sun
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chen Feng
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yufang Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
|
37
|
Skov M, Nadal M. A Farewell to Art: Aesthetics as a Topic in Psychology and Neuroscience. Perspect Psychol Sci 2020; 15:630-642. [PMID: 32027577] [DOI: 10.1177/1745691619897963]
Abstract
Empirical aesthetics and neuroaesthetics study two main issues: the valuation of sensory objects and art experience. These two issues are often treated as if they were intrinsically interrelated: Research on art experience focuses on how art elicits aesthetic pleasure, and research on valuation focuses on special categories of objects or emotional processes that determine the aesthetic experience. This entanglement hampers progress in empirical aesthetics and neuroaesthetics and limits their relevance to other domains of psychology and neuroscience. Substantial progress in these fields is possible only if research on aesthetics is disentangled from research on art. We define aesthetics as the study of how and why sensory stimuli acquire hedonic value. Under this definition, aesthetics becomes a fundamental topic for psychology and neuroscience because it links hedonics (the study of what hedonic valuation is in itself) and neuroeconomics (the study of how hedonic values are integrated into decision making and behavioral control). We also propose that this definition of aesthetics leads to concrete empirical questions, such as how perceptual information comes to engage value signals in the reward circuit or why different psychological and neurobiological factors elicit different appreciation events for identical sensory objects.
Affiliation(s)
- Martin Skov
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre; Decision Neuroscience Research Cluster, Copenhagen Business School
- Marcos Nadal
- Human Evolution and Cognition Group, Institute for Cross-Disciplinary Physics and Complex Systems, University of the Balearic Islands/Spanish National Research Council
|
38
|
Shany O, Singer N, Gold BP, Jacoby N, Tarrasch R, Hendler T, Granot R. Surprise-related activation in the nucleus accumbens interacts with music-induced pleasantness. Soc Cogn Affect Neurosci 2020; 14:459-470. [PMID: 30892654] [PMCID: PMC6523415] [DOI: 10.1093/scan/nsz019]
Abstract
How can music, merely a stream of sounds, be enjoyable for so many people? Recent accounts of this phenomenon are inspired by predictive coding models, hypothesizing that both confirmations and violations of musical expectations associate with the hedonic response to music via recruitment of the mesolimbic system and its connections with the auditory cortex. Here we provide support for this model by revealing associations of music-induced pleasantness with musical surprises in the activity and connectivity patterns of the nucleus accumbens (NAcc), a central component of the mesolimbic system. We examined neurobehavioral responses to surprises in three naturalistic musical pieces using fMRI and subjective ratings of valence and arousal. Surprises were associated with changes in reported valence and arousal, as well as with enhanced activations in the auditory cortex, insula and ventral striatum, relative to unsurprising events. Importantly, we found that surprise-related activation in the NAcc was more pronounced among individuals who experienced greater music-induced pleasantness. These participants also exhibited stronger surprise-related NAcc-auditory cortex connectivity during the most pleasant piece, relative to participants who found the music less pleasant. These findings provide a novel demonstration of a direct link between musical surprises, NAcc activation and music-induced pleasantness.
Affiliation(s)
- Ofir Shany
- Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Neomi Singer
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Benjamin Paul Gold
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC, Canada
- Nori Jacoby
- The Center for Science and Society, Columbia University, New York, NY, USA
- Ricardo Tarrasch
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; School of Education, Tel Aviv University, Tel Aviv, Israel
- Talma Hendler
- Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Roni Granot
- Musicology Department, Hebrew University of Jerusalem, Jerusalem, Israel
|
39
|
Cheung VK, Harrison PM, Meyer L, Pearce MT, Haynes JD, Koelsch S. Uncertainty and Surprise Jointly Predict Musical Pleasure and Amygdala, Hippocampus, and Auditory Cortex Activity. Curr Biol 2019; 29:4084-4092.e4. [DOI: 10.1016/j.cub.2019.09.067]
|
40
|
Gold BP, Pearce MT, Mas-Herrero E, Dagher A, Zatorre RJ. Predictability and Uncertainty in the Pleasure of Music: A Reward for Learning? J Neurosci 2019; 39:9397-9409. [PMID: 31636112] [PMCID: PMC6867811] [DOI: 10.1523/jneurosci.0428-19.2019]
Abstract
Music ranks among the greatest human pleasures. It consistently engages the reward system, and converging evidence implies it exploits predictions to do so. Both prediction confirmations and errors are essential for understanding one's environment, and music offers many of each as it manipulates interacting patterns across multiple timescales. Learning models suggest that a balance of these outcomes (i.e., intermediate complexity) optimizes the reduction of uncertainty to rewarding and pleasurable effect. Yet evidence of a similar pattern in music is mixed, hampered by arbitrary measures of complexity. In the present studies, we applied a well-validated information-theoretic model of auditory expectation to systematically measure two key aspects of musical complexity: predictability (operationalized as information content [IC]) and uncertainty (entropy). In Study 1, we evaluated how these properties affect musical preferences in 43 male and female participants; in Study 2, we replicated Study 1 in an independent sample of 27 people and assessed the contribution of veridical predictability by presenting the same stimuli seven times. Both studies revealed significant quadratic effects of IC and entropy on liking that outperformed linear effects, indicating reliable preferences for music of intermediate complexity. An interaction between IC and entropy further suggested preferences for more predictability during more uncertain contexts, which would facilitate uncertainty reduction. Repeating stimuli decreased liking ratings but did not disrupt the preference for intermediate complexity. Together, these findings support long-hypothesized optimal zones of predictability and uncertainty in musical pleasure with formal modeling, relating the pleasure of music listening to the intrinsic reward of learning.
SIGNIFICANCE STATEMENT: Abstract pleasures, such as music, claim much of our time, energy, and money despite lacking any clear adaptive benefits like food or shelter. Yet as music manipulates patterns of melody, rhythm, and more, it proficiently exploits our expectations. Given the importance of anticipating and adapting to our ever-changing environments, making and evaluating uncertain predictions can have strong emotional effects. Accordingly, we present evidence that listeners consistently prefer music of intermediate predictive complexity, and that preferences shift toward expected musical outcomes in more uncertain contexts. These results are consistent with theories that emphasize the intrinsic reward of learning, both by updating inaccurate predictions and validating accurate ones, which is optimal in environments that present manageable predictive challenges (i.e., reducible uncertainty).
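The two complexity measures in this abstract have standard information-theoretic definitions: the information content (surprisal) of a note is the negative log probability a model assigns it, and entropy is the expected surprisal of the predictive distribution before the note sounds. A minimal sketch of both, with the note names and probabilities invented purely for illustration (not taken from the study):

```python
import math

def information_content(p_event: float) -> float:
    """Shannon information content (surprisal) in bits:
    less probable notes carry higher IC, i.e. lower predictability."""
    return -math.log2(p_event)

def entropy(distribution: dict) -> float:
    """Shannon entropy in bits: the listener's uncertainty about the
    next note, i.e. the expected surprisal of the distribution."""
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

# Hypothetical predictive distribution over four candidate next notes
context = {"C4": 0.70, "E4": 0.15, "G4": 0.10, "Bb4": 0.05}

uncertainty = entropy(context)            # uncertainty of this context
ic_expected = information_content(0.70)   # low IC: predictable note
ic_surprise = information_content(0.05)   # high IC: surprising note
```

In the study's terms, the quadratic effects mean liking peaks when IC and entropy are neither minimal nor maximal; the sketch only shows how the two quantities themselves are computed.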
Affiliation(s)
- Benjamin P Gold
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- International Laboratory for Brain, Music and Sound Research, Montreal, Quebec H2V 2J2, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Quebec H3A 1E3, Canada
- Marcus T Pearce
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, United Kingdom
- Centre for Music in the Brain, Aarhus University, Aarhus 8000, Denmark
- Ernest Mas-Herrero
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Alain Dagher
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- International Laboratory for Brain, Music and Sound Research, Montreal, Quebec H2V 2J2, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Quebec H3A 1E3, Canada
|
41
|
Bianco R, Gold BP, Johnson AP, Penhune VB. Music predictability and liking enhance pupil dilation and promote motor learning in non-musicians. Sci Rep 2019; 9:17060. [PMID: 31745159] [PMCID: PMC6863863] [DOI: 10.1038/s41598-019-53510-w]
Abstract
Humans can anticipate music and derive pleasure from it. Expectations facilitate the learning of movements associated with anticipated events, and they are also linked with reward, which may further facilitate learning of the anticipated rewarding events. The present study investigates the synergistic effects of predictability and hedonic responses to music on arousal and motor learning in a naïve population. Novel melodies were manipulated in their overall predictability (predictable/unpredictable), as objectively defined by a model of music expectation, and ranked as high-, medium-, or low-liked based on participants' self-reports collected during an initial listening session. During this session, we also recorded ocular pupil size as an implicit measure of listeners' arousal. During the following motor task, participants learned to play target notes of the melodies on a keyboard (notes were of similar motor and musical complexity across melodies). Pupil dilation was greater for liked melodies, particularly when predictable. Motor performance was facilitated by predictable rather than unpredictable melodies, but liked melodies were learned even in the unpredictable condition. Low-liked melodies also showed learning, but mostly in participants with higher scores of task-perceived competence. Taken together, these results highlight the effects of stimulus predictability on learning, which can, however, be overshadowed by the effects of stimulus liking or task-related intrinsic motivation.
Affiliation(s)
- R Bianco
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Ear Institute, University College London, London, UK
- B P Gold
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- A P Johnson
- Department of Psychology, Concordia University, Montreal, QC, Canada
- V B Penhune
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
|
42
|
Bannister S. Distinct varieties of aesthetic chills in response to multimedia. PLoS One 2019; 14:e0224974. [PMID: 31725733] [PMCID: PMC6855651] [DOI: 10.1371/journal.pone.0224974]
Abstract
The experience of aesthetic chills, often defined as a subjective response accompanied by goosebumps, shivers and tingling sensations, is a phenomenon often utilized to indicate moments of peak pleasure and emotional arousal in psychological research. However, little is currently understood about how to conceptualize the experience, particularly whether chills are general markers of intense pleasure and emotion or instead a collection of distinct phenomenological experiences. To address this, a web study was designed using images, videos, music videos, texts and music excerpts (drawn from an online forum dedicated to chills-eliciting stimuli and from a previous musical chills study) to explore variations in the chills experience in terms of the bodily and emotional responses reported. Results suggest that across participants (N = 179), three distinct chills categories could be identified: warm chills (chills co-occurring with smiling, warmth, feeling relaxed, stimulated and happy), cold chills (chills co-occurring with frowning, cold, sadness and anger), and moving chills (chills co-occurring with tears, feeling a lump in the throat, emotional intensity, and feelings of affection, tenderness and being moved). Warm chills were linked to stimuli expressing social communion and love; cold chills were elicited by stimuli portraying entities in distress and support offered from one individual to another; moving chills were elicited by most stimuli, but their incidence was also predicted by ratings of trait empathy. Findings are discussed in terms of being moved, the importance of differing induction mechanisms such as shared experience and empathic concern, and the implications of distinct chills categories for both individual differences and inconsistencies in the existing aesthetic chills literature.
Affiliation(s)
- Scott Bannister
- Department of Music, Durham University, Durham, County Durham, England, United Kingdom
|
43
|
Pagès-Portabella C, Toro JM. Dissonant endings of chord progressions elicit a larger ERAN than ambiguous endings in musicians. Psychophysiology 2019; 57:e13476. [PMID: 31512751] [DOI: 10.1111/psyp.13476]
Abstract
In major-minor tonal music, hierarchical relationships and patterns of tension and release are essential to its composition and experience. For most listeners, tension leads to an expectation of resolution. Thus, when musical expectations are broken, they are usually perceived as erroneous and elicit specific neural responses such as the early right anterior negativity (ERAN). In the present study, we explored whether different degrees of musical violation are processed differently after long-term musical training compared with day-to-day exposure. We registered the ERPs elicited by listening to unexpected chords in both musicians and nonmusicians. More specifically, we compared the responses to strong violations by unexpected dissonant endings and to mild violations by unexpected but consonant endings (Neapolitan chords). Our results show that, irrespective of training, irregular endings elicited the ERAN. However, the ERAN for dissonant endings was larger in musicians than in nonmusicians. More importantly, we observed a modulation of the neural responses by the degree of violation only in musicians. In this group, the amplitude of the ERAN was larger for strong than for mild violations. These results suggest an early sensitivity of musicians to dissonance, which is processed as less expected than tonal irregularities. We also found that irregular endings elicited a P3 only in musicians. Our study suggests that, even though violations of harmonic expectancies are detected by all listeners, musical training modulates how different violations of the musical context are processed.
Affiliation(s)
- Carlota Pagès-Portabella
- Language & Comparative Cognition Group, Center for Brain & Cognition, Universitat Pompeu Fabra, Barcelona, Spain
- Juan M Toro
- Language & Comparative Cognition Group, Center for Brain & Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
|
44
|
Bravo F, Cross I, Hopkins C, Gonzalez N, Docampo J, Bruno C, Stamatakis EA. Anterior cingulate and medial prefrontal cortex response to systematically controlled tonal dissonance during passive music listening. Hum Brain Mapp 2019; 41:46-66. [PMID: 31512332] [PMCID: PMC7268082] [DOI: 10.1002/hbm.24786]
Abstract
Several studies have attempted to investigate how the brain codes emotional value when processing music of contrasting levels of dissonance; however, the lack of control over specific musical structural characteristics (i.e., dynamics, rhythm, melodic contour or instrumental timbre), which are known to affect perceived dissonance, rendered results difficult to interpret. To account for this, we used functional imaging with an optimized control of the musical structure to obtain a finer characterization of brain activity in response to tonal dissonance. Behavioral findings supported previous evidence for an association between increased dissonance and negative emotion. Results further demonstrated that the manipulation of tonal dissonance through systematically controlled changes in interval content elicited contrasting valence ratings but no significant effects on either arousal or potency. Neuroscientific findings showed an engagement of the left medial prefrontal cortex (mPFC) and the left rostral anterior cingulate cortex (ACC) while participants listened to dissonant compared to consonant music, converging with studies that have proposed a core role of these regions during conflict monitoring (detection and resolution), and in the appraisal of negative emotion and fear‐related information. Both the left and right primary auditory cortices showed stronger functional connectivity with the ACC during the dissonant portion of the task, implying a demand for greater information integration when processing negatively valenced musical stimuli. This study demonstrated that the systematic control of musical dissonance could be applied to isolate valence from the arousal dimension, facilitating a novel access to the neural representation of negative emotion.
Affiliation(s)
- Fernando Bravo
- Centre for Music and Science, University of Cambridge, Cambridge, UK; TU Dresden, Institut für Kunst- und Musikwissenschaft, Dresden, Germany; Cognition and Consciousness Imaging Group, Division of Anaesthesia, Department of Medicine, University of Cambridge, Cambridge, UK
- Ian Cross
- Centre for Music and Science, University of Cambridge, Cambridge, UK
- Nadia Gonzalez
- Department of Neuroimaging, Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
- Jorge Docampo
- Department of Neuroimaging, Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
- Claudio Bruno
- Department of Neuroimaging, Fundación Científica del Sur Imaging Centre, Buenos Aires, Argentina
- Emmanuel A Stamatakis
- Cognition and Consciousness Imaging Group, Division of Anaesthesia, Department of Medicine, University of Cambridge, Cambridge, UK
|
45
|
Omigie D, Pearce M, Lehongre K, Hasboun D, Navarro V, Adam C, Samson S. Intracranial Recordings and Computational Modeling of Music Reveal the Time Course of Prediction Error Signaling in Frontal and Temporal Cortices. J Cogn Neurosci 2019; 31:855-873. [PMID: 30883293] [DOI: 10.1162/jocn_a_01388]
Abstract
Prediction is held to be a fundamental process underpinning perception, action, and cognition. To examine the time course of prediction error signaling, we recorded intracranial EEG activity from nine presurgical epileptic patients while they listened to melodies whose information theoretical predictability had been characterized using a computational model. We examined oscillatory activity in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the pars orbitalis of the inferior frontal gyrus, lateral cortical areas previously implicated in auditory predictive processing. We also examined activity in the anterior cingulate gyrus (ACG), insula, and amygdala to determine whether signatures of prediction error signaling may also be observable in these subcortical areas. Our results demonstrate that the information content (a measure of unexpectedness) of musical notes modulates the amplitude of low-frequency oscillatory activity (theta to beta power) in bilateral STG and right MTG from within 100 and 200 msec of note onset, respectively. Our results also show this cortical activity to be accompanied by low-frequency oscillatory modulation in the ACG and insula, areas previously associated with mediating physiological arousal. Finally, we showed that modulation of low-frequency activity is followed by that of high-frequency (gamma) power from approximately 200 msec in the STG, between 300 and 400 msec in the left insula, and between 400 and 500 msec in the ACG. We discuss these results with respect to models of neural processing that emphasize gamma activity as an index of prediction error signaling and highlight the usefulness of musical stimuli in revealing the wide-reaching neural consequences of predictive processing.
Affiliation(s)
- Diana Omigie
- Max Planck Institute for Empirical Aesthetics; Goldsmiths, University of London
- Katia Lehongre
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Vincent Navarro
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Severine Samson
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; University of Lille
|
46
|
Onishi A, Nakagawa S. How Does the Degree of Valence Influence Affective Auditory P300-Based BCIs? Front Neurosci 2019; 13:45. [PMID: 30837822] [PMCID: PMC6390079] [DOI: 10.3389/fnins.2019.00045]
Abstract
A brain-computer interface (BCI) translates brain signals into commands for the control of devices and for communication. BCIs enable persons with disabilities to communicate externally. Positive and negative affective sounds have been introduced to P300-based BCIs; however, how the degree of valence (e.g., very positive or positive) influences the BCI has not been investigated. To further examine the influence of affective sounds in P300-based BCIs, we applied sounds with five degrees of valence to the P300-based BCI. The sound valence ranged from very negative to very positive, as determined by Scheffe's method. The effect of sound valence on the BCI was evaluated by waveform analyses, followed by the evaluation of offline stimulus-wise classification accuracy. As a result, the late component of P300 showed significantly higher point-biserial correlation coefficients in response to very positive and very negative sounds than in response to the other sounds. The offline stimulus-wise classification accuracy was estimated from a region-of-interest. The analysis showed that the very negative sound achieved the highest accuracy and the very positive sound achieved the second highest accuracy, suggesting that the very positive sound and the very negative sound may be required to improve the accuracy.
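The stimulus-wise scoring described here rests on the point-biserial correlation, which is simply Pearson's r between a binary variable (e.g., target vs. non-target epoch) and a continuous one (e.g., ERP amplitude in a region of interest). A minimal sketch with made-up amplitude values; the study's actual channels, time windows, and data are not reproduced here:

```python
import math

def point_biserial(labels, values):
    """Point-biserial correlation between a 0/1 label and a
    continuous value (equivalent to Pearson's r on these inputs)."""
    n = len(labels)
    n1 = sum(labels)          # number of target (label 1) epochs
    n0 = n - n1               # number of non-target (label 0) epochs
    m1 = sum(v for l, v in zip(labels, values) if l == 1) / n1
    m0 = sum(v for l, v in zip(labels, values) if l == 0) / n0
    mean = sum(values) / n
    # Population standard deviation of all values
    s = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return (m1 - m0) / s * math.sqrt(n1 * n0 / n**2)

# Hypothetical single-channel P300 amplitudes (µV): target epochs
# tend to show a larger late positivity than non-target epochs.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
amps   = [6.1, 5.4, 7.0, 1.2, 0.8, 2.0, 1.5, 0.9]
r = point_biserial(labels, amps)
```

Higher r means target epochs separate more cleanly from non-target epochs, which is what drives stimulus-wise classification accuracy in a P300 speller.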
Affiliation(s)
- Akinari Onishi
- Center for Frontier Medical Engineering, Chiba University, Chiba, Japan
- Seiji Nakagawa
- Center for Frontier Medical Engineering, Chiba University, Chiba, Japan; Department of Medical Engineering, Graduate School of Engineering, Chiba University, Chiba, Japan; University Hospital Med-Tech Link Center, Chiba University, Chiba, Japan
|
47
|
Musical reward prediction errors engage the nucleus accumbens and motivate learning. Proc Natl Acad Sci U S A 2019; 116:3310-3315. [PMID: 30728301] [DOI: 10.1073/pnas.1809855116]
Abstract
Enjoying music reliably ranks among life's greatest pleasures. Like many hedonic experiences, it engages several reward-related brain areas, with activity in the nucleus accumbens (NAc) most consistently reflecting the listener's subjective response. Converging evidence suggests that this activity arises from musical "reward prediction errors" (RPEs) that signal the difference between expected and perceived musical events, but this hypothesis has not been directly tested. In the present fMRI experiment, we assessed whether music could elicit formally modeled RPEs in the NAc by applying a well-established decision-making protocol designed and validated for studying RPEs. In the scanner, participants chose between arbitrary cues that probabilistically led to dissonant or consonant music, and learned to make choices associated with the consonance, which they preferred. We modeled regressors of trial-by-trial RPEs, finding that NAc activity tracked musically elicited RPEs, to an extent that explained variance in the individual learning rates. These results demonstrate that music can act as a reward, driving learning and eliciting RPEs in the NAc, a hub of reward- and music enjoyment-related activity.
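The reward prediction error logic described above is conventionally formalized with a delta rule: the RPE is the received reward minus the current value estimate, and a learning rate scales how strongly each error updates that estimate. A minimal sketch of how such trial-by-trial RPE regressors are typically generated, with all outcomes and parameters hypothetical rather than taken from the study:

```python
def simulate_rpes(rewards, alpha=0.2, v0=0.5):
    """Delta-rule learning: on each trial the prediction error
    (reward - expected value) updates the running value estimate."""
    v = v0
    rpes = []
    for r in rewards:
        rpe = r - v          # reward prediction error for this trial
        v += alpha * rpe     # value update scaled by learning rate
        rpes.append(rpe)
    return rpes

# Hypothetical choice outcomes: 1 = consonant (preferred), 0 = dissonant
outcomes = [1, 1, 0, 1, 1, 1, 0, 1]
rpes = simulate_rpes(outcomes)
# Early consonant outcomes yield positive RPEs that shrink as the
# value estimate converges on the cue's reward probability.
```

In model-based fMRI, a series like `rpes` would be entered as a parametric regressor to test which voxels (here, the NAcc) track the trial-by-trial errors.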
|
48
|
Sears DRW, Pearce MT, Spitzer J, Caplin WE, McAdams S. Expectations for tonal cadences: Sensory and cognitive priming effects. Q J Exp Psychol (Hove) 2018; 72:1422-1438. [DOI: 10.1177/1747021818814472]
Abstract
Studies examining the formation of melodic and harmonic expectations during music listening have repeatedly demonstrated that a tonal context primes listeners to expect certain (tonally related) continuations over others. However, few such studies have (1) selected stimuli using ready examples of expectancy violation derived from real-world instances of tonal music, (2) provided a consistent account for the influence of sensory and cognitive mechanisms on tonal expectancies by comparing different computational simulations, or (3) combined melodic and harmonic representations in modelling cognitive processes of expectation. To resolve these issues, this study measures expectations for the most recurrent cadence patterns associated with tonal music and then simulates the reported findings using three sensory–cognitive models of auditory expectation. In Experiment 1, participants provided explicit retrospective expectancy ratings both before and after hearing the target melodic tone and chord of the cadential formula. In Experiment 2, participants indicated as quickly as possible whether those target events were in or out of tune relative to the preceding context. Across both experiments, cadences terminating with stable melodic tones and chords elicited the highest expectancy ratings and the fastest and most accurate responses. Moreover, the model simulations supported a cognitive interpretation of tonal processing, in which listeners with exposure to tonal music generate expectations as a consequence of the frequent (co-)occurrence of events on the musical surface.
Affiliation(s)
- David RW Sears
- College of Visual & Performing Arts, Texas Tech University, Lubbock, TX, USA
- McGill University, Montreal, QC, Canada
49
Bannister S, Eerola T. Suppressing the Chills: Effects of Musical Manipulation on the Chills Response. Front Psychol 2018; 9:2046. [PMID: 30420822] [PMCID: PMC6215865] [DOI: 10.3389/fpsyg.2018.02046]
Abstract
Research on musical chills has linked the response to multiple musical features; however, no study has attempted to manipulate musical stimuli to enable causal inferences, meaning current understanding is based mainly on correlational evidence. In the current study, participants who regularly experience chills (N = 24) listened to an original and a manipulated version of three pieces reported to elicit chills in a previous survey. Predefined chills sections were removed to create the manipulated conditions. The effects of these manipulations on the chills response were assessed through continuous self-reports and skin conductance measurements. Results show that chills were significantly less frequent following stimulus manipulation across all three pieces. Continuous measurements of chills intensity were significantly higher in the chills sections compared with control sections in the pieces; similar patterns were found for phasic skin conductance, although some differences emerged. Continuous measurements also correlated with psychoacoustic features such as loudness, brightness and roughness in two of the three pieces. Findings are discussed in terms of understanding structural and acoustic features and chills experiences within their local musical contexts, the necessity of experimental approaches to musical chills, and the possibility of different features activating different underlying mechanisms.
Affiliation(s)
- Scott Bannister
- Department of Music, Durham University, Durham, United Kingdom
- Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
50
Dolan D, Jensen HJ, Mediano PAM, Molina-Solana M, Rajpal H, Rosas F, Sloboda JA. The Improvisational State of Mind: A Multidisciplinary Study of an Improvisatory Approach to Classical Music Repertoire Performance. Front Psychol 2018; 9:1341. [PMID: 30319469] [PMCID: PMC6167963] [DOI: 10.3389/fpsyg.2018.01341]
Abstract
The recent re-introduction of improvisation as a professional practice within classical music, however cautious and still rare, allows direct and detailed contemporary comparison between improvised and "standard" approaches to performances of the same composition, comparisons which hitherto could only be inferred from impressionistic historical accounts. This study takes an interdisciplinary multi-method approach to discovering the contrasting nature and effects of prepared and improvised approaches during live chamber-music concert performances of a movement from Franz Schubert's "Shepherd on the Rock," given by a professional trio consisting of voice, flute, and piano, in the presence of an invited audience of 22 adults with varying levels of musical experience and training. The improvised performances were found to differ systematically from the prepared performances in their timing, dynamic, and timbral features as well as in the degree of risk-taking and "mind reading" between performers, which included moments of spontaneously exchanging extemporized notes. Post-performance critical reflection by the performers characterized distinct mental states underlying the two modes of performance. The amount of overall body movement was reduced in the improvised performances, which showed fewer uncoordinated movements between performers compared with the prepared performances. Audience members, who were told only that the two performances would be different, but not how, rated the improvised version as more emotionally compelling and musically convincing than the prepared version. The size of this effect was not affected by whether or not the audience could see the performers, or by levels of musical training. EEG measurements from 19 scalp locations showed higher levels of Lempel-Ziv complexity (associated with awareness and alertness) in the improvised version in both performers and audience.
Results are discussed in terms of their potential support for an "improvisatory state of mind," which may have aspects of flow (as characterized by Csikszentmihalyi, 1997) and primary states (as characterized by the Entropic Brain Hypothesis of Carhart-Harris et al., 2014). In a group setting, such as a live concert, our evidence suggests that this state of mind is communicable between performers and audience, thus contributing to a heightened quality of shared experience.
Affiliation(s)
- David Dolan
- Guildhall School of Music and Drama, London, United Kingdom
- Henrik J. Jensen
- Department of Mathematics, Centre of Complexity Science, Imperial College London, London, United Kingdom
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama, Japan
- Miguel Molina-Solana
- Department of Computing, Imperial College London, London, United Kingdom
- Data Science Institute, Imperial College London, London, United Kingdom
- Hardik Rajpal
- Department of Mathematics, Centre of Complexity Science, Imperial College London, London, United Kingdom
- Fernando Rosas
- Department of Mathematics, Centre of Complexity Science, Imperial College London, London, United Kingdom
- Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom