1. Hamilton M, Pearce M. Trajectories and revolutions in popular melody based on U.S. charts from 1950 to 2023. Sci Rep 2024; 14:14749. PMID: 38965245. DOI: 10.1038/s41598-024-64571-x.
Abstract
In the past century, the history of popular music has been analyzed from many different perspectives, with sociologists, musicologists and philosophers all offering distinct narratives characterizing its evolution. However, quantitative studies on this subject began only in the last decade and have focused on features extracted from raw audio, limiting their scope to low-level components of music. The present study investigates the evolution of a more abstract dimension of popular music, namely melody, using a new dataset of popular melodies spanning 1950 to 2023. To identify "melodic revolutions", changepoint detection was applied to a multivariate time series of features describing the pitch and rhythmic structure of the melodies. Two major revolutions, in 1975 and 2000, and one smaller revolution in 1996 were identified, each characterized by a significant decrease in complexity. The revolutions divided the time series into three eras, which were modeled separately with autoregression, linear regression and vector autoregression. Linear regression of autoregression residuals underscored inter-feature relationships, which became stronger in post-2000 melodies. The overriding pattern emerging from these analyses is one of decreasing complexity and increasing note density in popular melodies over time, especially since 2000.
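The abstract does not specify which changepoint algorithm was used, so the following is only a minimal sketch of the idea: given a one-dimensional "complexity" series (data invented), find the single split point that minimises total within-segment squared error, i.e. the most likely mean-shift point.

```python
def sse(xs):
    """Sum of squared deviations from the segment mean."""
    if not xs:
        return 0.0
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def best_changepoint(series, min_size=2):
    """Return the split index that minimises total within-segment
    squared error (a single mean-shift changepoint)."""
    best_k, best_cost = None, float("inf")
    for k in range(min_size, len(series) - min_size + 1):
        cost = sse(series[:k]) + sse(series[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# A toy "complexity" series with an abrupt drop halfway through:
complexity = [5.0, 5.1, 4.9, 5.0, 5.2, 2.0, 2.1, 1.9, 2.0, 2.1]
print(best_changepoint(complexity))  # → 5
```

Real analyses of this kind extend the same cost-minimisation logic to multiple changepoints and multivariate features.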
Affiliation(s)
- Madeline Hamilton
- Music Cognition Lab, Queen Mary University of London, London, E1 4NS, UK.
- Marcus Pearce
- Music Cognition Lab, Queen Mary University of London, London, E1 4NS, UK
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
2. Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024; 199:108905. PMID: 38740179. DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research has shown that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongation can roughly be understood as a musical analogue of linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how an adjective implies the presence of a noun in 'I like spicy …'). Source-reconstructed MEG data from sixty-five participants listening to the piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain from the number of open prolongation and preparation dependencies, whilst controlling for the audio envelope. Prolongation and preparation each carried independent, distinguishable predictive value for theta-band fluctuations in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. The results show that the predictions of precisely formalised music-theoretical models are reflected in listeners' brain activity, furthering our understanding of the perception and cognition of musical structure.
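The study's key predictor, the number of open dependencies at each point, can be illustrated with a toy depth counter. The event encoding below is hypothetical and much simpler than the expert annotations used in the paper.

```python
def open_dependency_profile(events):
    """events: a list of 'open' / 'close' tokens, one per harmonic step.
    Returns the number of unresolved (open) dependencies after each step."""
    depth, profile = 0, []
    for e in events:
        if e == "open":
            depth += 1
        elif e == "close":
            depth -= 1
        profile.append(depth)
    return profile

# e.g. a dependency opened at steps 1-2 and resolved at steps 3-4:
print(open_dependency_profile(["open", "open", "close", "close"]))
# → [1, 2, 1, 0]
```

A profile like this, computed separately for prolongation and preparation, is the kind of continuous regressor that can be entered into a mixed-effects model of the theta envelope.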
Affiliation(s)
- Steffen A Herff
- Sydney Conservatorium of Music, University of Sydney, Sydney, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
3. Heng JG, Zhang J, Bonetti L, Lim WPH, Vuust P, Agres K, Chen SHA. Understanding music and aging through the lens of Bayesian inference. Neurosci Biobehav Rev 2024; 163:105768. PMID: 38908730. DOI: 10.1016/j.neubiorev.2024.105768.
Abstract
Bayesian inference has recently gained momentum as an explanation of music perception and aging. A fundamental mechanism underlying Bayesian inference is prediction. This framework can explain how predictions about musical (melodic, rhythmic, harmonic) structures engender action, emotion, and learning, extending related concepts in music research such as musical expectancy, groove, pleasure, and tension. Moreover, a Bayesian perspective on music perception may offer new insights into the beneficial effects of music in aging. Aging can be framed as an optimization of Bayesian inference: as predictive inferences are refined over time, reliance on consolidated priors increases, while the updating of prior models through Bayesian inference attenuates. This may impair older adults' ability to estimate uncertainties in their environment, limiting their cognitive and behavioral repertoire. With Bayesian inference as an overarching framework, this review synthesizes the literature on predictive inference in music and aging, and details how music could be a promising tool in preventive and rehabilitative interventions for older adults.
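The review's central claim, that stronger reliance on consolidated priors means weaker updating from new evidence, is exactly what a standard conjugate Gaussian update shows. All numbers below are invented for illustration.

```python
def gaussian_update(prior_mean, prior_prec, obs, obs_prec):
    """Conjugate update of a Gaussian belief: the posterior mean is a
    precision-weighted average of the prior mean and the observation."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

# The same surprising observation under a weak vs a strongly consolidated prior:
weak_mean, _ = gaussian_update(0.0, prior_prec=1.0, obs=1.0, obs_prec=1.0)
strong_mean, _ = gaussian_update(0.0, prior_prec=9.0, obs=1.0, obs_prec=1.0)
print(weak_mean, strong_mean)  # → 0.5 0.1
```

The high-precision prior moves only a tenth of the way toward the evidence, mirroring the proposed attenuation of belief updating with age.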
Affiliation(s)
- Jiamin Gladys Heng
- School of Computer Science and Engineering, Nanyang Technological University, Singapore.
- Jiayi Zhang
- Interdisciplinary Graduate Program, Nanyang Technological University, Singapore; School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom; Department of Psychiatry, University of Oxford, United Kingdom; Department of Psychology, University of Bologna, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus, Aalborg, Denmark
- Kat Agres
- Centre for Music and Health, National University of Singapore, Singapore; Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore
- Shen-Hsing Annabel Chen
- School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; National Institute of Education, Nanyang Technological University, Singapore.
4. Bechtold TA, Curry B, Witek M. The perceived catchiness of music affects the experience of groove. PLoS One 2024; 19:e0303309. PMID: 38748741. PMCID: PMC11095763. DOI: 10.1371/journal.pone.0303309.
Abstract
Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a contributing factor in experiencing groove, but quantitative evidence for such a relationship has been missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular-music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of the music; perceived catchiness proved multi-dimensional, subjective, and strongly associated with pleasure. (2) There was a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex: further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) Comparing the factors that promote perceived catchiness and PLUMM, listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.
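Effects like the suppression reported here can be illustrated with a toy ordinary-least-squares example (all ratings invented; this is not the paper's actual model): the bivariate effect of catchiness on urge-to-move can disappear once a correlated covariate such as pleasure enters the regression.

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def simple_slope(x, y):
    """Bivariate OLS slope of y on x."""
    return cov(x, y) / cov(x, x)

def partial_slopes(x1, x2, y):
    """Joint OLS slopes of y on x1 and x2 (2x2 normal equations)."""
    s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
    s1y, s2y = cov(x1, y), cov(x2, y)
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

# Invented ratings: urge-to-move tracks pleasure, and pleasure tracks catchiness.
catchy   = [0.0, 1.0, 2.0, 3.0]
pleasure = [0.0, 2.0, 2.0, 4.0]
urge     = [0.0, 2.0, 2.0, 4.0]

print(simple_slope(catchy, urge))              # → 1.2 (apparent bivariate effect)
print(partial_slopes(catchy, pleasure, urge))  # → (0.0, 1.0)
```

With pleasure in the model, the unique contribution of catchiness drops to zero: its apparent effect was carried entirely by its correlation with pleasure.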
Affiliation(s)
- Toni Amadeus Bechtold
- Department of Music, University of Birmingham, Birmingham, United Kingdom
- Lucerne School of Music, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Ben Curry
- Department of Music, University of Birmingham, Birmingham, United Kingdom
- Maria Witek
- Department of Music, University of Birmingham, Birmingham, United Kingdom
5. Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci 2024; 1535:121-136. PMID: 38566486. DOI: 10.1111/nyas.15131.
Abstract
While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
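The abstract does not say which computational surprisal model was used (studies of this kind often use probabilistic sequence models such as IDyOM). As a minimal sketch, per-note surprisal can be computed from smoothed bigram probabilities over a toy note corpus: predictable continuations get low surprisal, unheard transitions get high surprisal.

```python
import math
from collections import Counter

def bigram_surprisal(sequence, corpus, alpha=1.0):
    """Mean surprisal (-log2 P) of each transition in `sequence`,
    under add-alpha-smoothed bigram counts estimated from `corpus`."""
    bigrams = Counter(zip(corpus, corpus[1:]))
    contexts = Counter(corpus[:-1])
    vocab = len(set(corpus))
    pairs = list(zip(sequence, sequence[1:]))
    total = 0.0
    for a, b in pairs:
        p = (bigrams[(a, b)] + alpha) / (contexts[a] + alpha * vocab)
        total += -math.log2(p)
    return total / len(pairs)

# A toy corpus of a highly repetitive (elevator-music-like) arpeggio:
corpus = ["C", "E", "G", "C", "E", "G", "C", "E", "G", "C"]
print(bigram_surprisal(["C", "E", "G"], corpus) <
      bigram_surprisal(["C", "G", "E"], corpus))  # → True
```

The familiar continuation is far less surprising than the reordered one, which is the sense in which predictable music yields low model surprisal.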
Affiliation(s)
- Ellie Bean Abrams
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richa Namballa
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richard He
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- David Poeppel
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
6. Clemente A, Kaplan TM, Pearce MT. Perceptual representations mediate effects of stimulus properties on liking for music. Ann N Y Acad Sci 2024; 1533:169-180. PMID: 38319962. DOI: 10.1111/nyas.15106.
Abstract
Perceptual pleasure and its concomitant hedonic value play an essential role in everyday life, motivating behavior and thus influencing how individuals choose to spend their time and resources. However, how pleasure arises from perception of sensory information remains relatively poorly understood. In particular, research has neglected the question of how perceptual representations mediate the relationships between stimulus properties and liking (e.g., stimulus symmetry can only affect liking if it is perceived). The present research addresses this gap for the first time, analyzing perceptual and liking ratings of 96 nonmusicians (power of 0.99) and finding that perceptual representations mediate effects of feature-based and information-based stimulus properties on liking for a novel set of melodies varying in balance, contour, symmetry, or complexity. Moreover, variability due to individual differences and stimuli accounts for most of the variance in liking. These results have broad implications for psychological research on sensory valuation, advocating a more explicit account of random variability and the mediating role of perceptual representations of stimulus properties.
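The mediation logic can be sketched with a product-of-coefficients example (data invented; this is only an illustration of the general technique, not the paper's analysis): the indirect effect is the slope from stimulus property to percept (a) times the slope from percept to liking controlling for the stimulus property (b).

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def slope(x, y):
    """Bivariate OLS slope of y on x."""
    return cov(x, y) / cov(x, x)

def indirect_effect(stimulus, percept, liking):
    """Product-of-coefficients mediation: a (stimulus -> percept)
    times b (percept -> liking, controlling for stimulus)."""
    a = slope(stimulus, percept)
    # Partial slope of liking on percept, controlling for stimulus:
    sxx, smm, sxm = cov(stimulus, stimulus), cov(percept, percept), cov(stimulus, percept)
    sxy, smy = cov(stimulus, liking), cov(percept, liking)
    det = sxx * smm - sxm * sxm
    b = (sxx * smy - sxm * sxy) / det
    return a * b

# Invented toy data: liking follows the percept, which tracks the stimulus.
x = [0.0, 1.0, 2.0, 3.0]   # stimulus property (e.g. objective complexity)
m = [0.0, 1.0, 1.0, 3.0]   # perceptual representation of that property
y = [0.0, 1.0, 1.0, 3.0]   # liking
print(indirect_effect(x, m, y))  # → 0.9 (a = 0.9, b = 1.0)
```

Here the stimulus property influences liking only insofar as it is perceived, which is the hypothesis the paper tests.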
Affiliation(s)
- Ana Clemente
- Human Evolution and Cognition Research Group, University of the Balearic Islands, Palma de Mallorca, Spain
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Unit, Bellvitge Institute for Biomedical Research, L'Hospitalet De Llobregat, Spain
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Thomas M Kaplan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Marcus T Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
7. Háden GP, Bouwer FL, Honing H, Winkler I. Beat processing in newborn infants cannot be explained by statistical learning based on transition probabilities. Cognition 2024; 243:105670. PMID: 38016227. DOI: 10.1016/j.cognition.2023.105670.
Abstract
Newborn infants have been shown to extract temporal regularities from sound sequences, both in the form of learning regular sequential properties, and extracting periodicity in the input, commonly referred to as a regular pulse or the 'beat'. However, these two types of regularities are often indistinguishable in isochronous sequences, as both statistical learning and beat perception can be elicited by the regular alternation of accented and unaccented sounds. Here, we manipulated the isochrony of sound sequences in order to disentangle statistical learning from beat perception in sleeping newborn infants in an EEG experiment, as previously done in adults and macaque monkeys. We used a binary accented sequence that induces a beat when presented with isochronous timing, but not when presented with randomly jittered timing. We compared mismatch responses to infrequent deviants falling on either accented or unaccented (i.e., odd and even) positions. Results showed a clear difference between metrical positions in the isochronous sequence, but not in the equivalent jittered sequence. This suggests that beat processing is present in newborns. Despite previous evidence for statistical learning in newborns, the effects of this ability were not detected in the jittered condition. These results show that statistical learning by itself does not fully explain beat processing in newborn infants.
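The stimulus manipulation can be sketched in a few lines (parameters invented): both conditions keep the same average inter-onset interval, but only the jitter-free version is periodic and therefore beat-inducing.

```python
import random

def onsets(n_events, ioi, jitter=0.0, seed=1):
    """Event onset times (seconds) with average inter-onset interval `ioi`.
    jitter=0 gives an isochronous sequence (steady beat); jitter>0
    displaces each onset uniformly, destroying the periodicity."""
    rng = random.Random(seed)
    return [i * ioi + rng.uniform(-jitter, jitter) for i in range(n_events)]

iso = onsets(8, ioi=0.5)                 # isochronous condition
jit = onsets(8, ioi=0.5, jitter=0.12)    # jittered condition
iso_iois = [b - a for a, b in zip(iso, iso[1:])]
print(all(abs(x - 0.5) < 1e-9 for x in iso_iois))  # → True
```

In the jittered sequence the inter-onset intervals vary from event to event, so any statistical structure of the accent pattern survives while the periodic pulse does not.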
Affiliation(s)
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Magyar tudósok körútja 2, H-1117 Budapest, Hungary; Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Magyar tudósok körútja 2, 1117 Budapest, Hungary.
- Fleur L Bouwer
- Music Cognition Group, Institute for Logic, Language, and Computation, University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, the Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands; Department of Psychology, Brain & Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands; Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, 2333 AK Leiden, the Netherlands.
- Henkjan Honing
- Music Cognition Group, Institute for Logic, Language, and Computation, University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, the Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands.
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Magyar tudósok körútja 2, H-1117 Budapest, Hungary.
8. Cheung VKM, Harrison PMC, Koelsch S, Pearce MT, Friederici AD, Meyer L. Cognitive and sensory expectations independently shape musical expectancy and pleasure. Philos Trans R Soc Lond B Biol Sci 2024; 379:20220420. PMID: 38104601. PMCID: PMC10725761. DOI: 10.1098/rstb.2022.0420.
Abstract
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
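The abstract reports Bayesian model comparison; as a rough stand-in for that procedure, competing models can be compared with BIC computed from residual error, which trades goodness of fit against parameter count. All fit numbers below are invented for illustration.

```python
import math

def bic(rss, n, k):
    """Bayesian Information Criterion for a Gaussian error model:
    n * ln(RSS / n) + k * ln(n). Lower is better; the k * ln(n)
    term penalises extra parameters."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical fits to expectancy ratings of n = 1000 chords:
models = {
    "sensory only":        bic(rss=420.0, n=1000, k=3),
    "cognitive only":      bic(rss=310.0, n=1000, k=3),
    "cognitive + sensory": bic(rss=285.0, n=1000, k=5),
}
best = min(models, key=models.get)
print(best)  # → cognitive + sensory
```

In this toy setup the combined model wins despite its penalty for two extra parameters, mirroring the paper's finding that cognitive and sensory expectations make independent, non-overlapping contributions.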
Affiliation(s)
- Vincent K. M. Cheung
- Sony Computer Science Laboratories, Inc., Shinagawa-ku, Tokyo 141-0022, Japan
- Department of Neuropsychology, Sony Computer Science Laboratories, Inc., Shinagawa-ku, Tokyo 141-0022, Japan
- Institute of Information Science, Academia Sinica, Taipei 115, Taiwan
- Peter M. C. Harrison
- Centre for Music and Science, University of Cambridge, Faculty of Music, 11 West Road, Cambridge, CB3 9DP, UK
- Centre for Digital Music, Queen Mary University of London, E1 4NS, UK
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, 5009, Norway
- Marcus T. Pearce
- Centre for Digital Music, Queen Mary University of London, E1 4NS, UK
- Department of Clinical Medicine, Aarhus University, Aarhus N, 8200, Denmark
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster, 48149, Germany
9. Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024; 34:444-450.e5. PMID: 38176416. DOI: 10.1016/j.cub.2023.12.019.
Abstract
The appreciation of music is a universal trait of humankind [1-3]. Evidence supporting this notion includes the ubiquity of music across cultures [4-7] and the natural predisposition toward music that humans display early in development [8-10]. Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation [11]. Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We presented music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. The monkeys exhibited higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compared human and monkey neural responses to the same stimuli and found a species-dependent contribution of two fundamental musical features, pitch and timing [12], in generating expectations: while timing- and pitch-based expectations [13] are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
Affiliation(s)
- Roberta Bianco
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy.
- Nathaniel J Zuk
- Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy.
10. Ten Oever S, Martin AE. Interdependence of "What" and "When" in the Brain. J Cogn Neurosci 2024; 36:167-186. PMID: 37847823. DOI: 10.1162/jocn_a_02067.
Abstract
From a brain's-eye view, when a stimulus occurs and what it is are interrelated aspects of interpreting the perceptual world. Yet in practice, the putative perceptual inferences about sensory content and timing are often dichotomized and not investigated as an integrated process. We argue here that neural temporal dynamics can influence what is perceived and, in turn, that stimulus content can influence the time at which perception is achieved. This computational principle follows from the highly interdependent relationship of what and when in the environment. Both brain processes and perceptual events display strong temporal variability that is not always modeled; we argue that understanding, and at a minimum modeling, this temporal variability is key for theories of how the brain generates unified and consistent neural representations, and that we ignore temporal variability in our analysis practice at the peril of both data interpretation and theory-building. Here, we review what-when interactions in the brain, demonstrate via simulations how temporal variability can lead to misguided interpretations and conclusions, and outline how to integrate and synthesize what and when in theories and models of brain computation.
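The kind of simulation the authors describe can be sketched in a few lines (response shapes and parameters invented): single-trial responses with variable latency average to a flatter waveform than perfectly aligned ones, so unmodeled timing variability looks like a weaker response.

```python
import math
import random

def evoked(t, latency, width=0.05):
    """A Gaussian-shaped single-trial response peaking at `latency`."""
    return math.exp(-((t - latency) ** 2) / (2 * width ** 2))

def average_peak(latencies, n_samples=500):
    """Peak amplitude of the across-trial average response over [0, 1) s."""
    ts = [i / n_samples for i in range(n_samples)]
    avg = [sum(evoked(t, lat) for lat in latencies) / len(latencies) for t in ts]
    return max(avg)

aligned = [0.30] * 20                                   # no latency variability
rng = random.Random(0)
jittered = [0.30 + rng.gauss(0, 0.05) for _ in range(20)]  # trial-to-trial jitter
print(average_peak(aligned) > average_peak(jittered))   # → True
```

Every single trial still peaks at full amplitude; only the trial-averaged waveform is smeared, which is exactly the misinterpretation risk the review warns about.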
Affiliation(s)
- Sanne Ten Oever
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
- Maastricht University, The Netherlands
- Andrea E Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Centre for Cognitive Neuroimaging, Nijmegen, The Netherlands
11. Albury AW, Bianco R, Gold BP, Penhune VB. Context changes judgments of liking and predictability for melodies. Front Psychol 2023; 14:1175682. PMID: 38034280. PMCID: PMC10684779. DOI: 10.3389/fpsyg.2023.1175682.
Abstract
Predictability plays an important role in the experience of musical pleasure. By leveraging expectations, music induces pleasure through tension and surprise. However, musical predictions draw on both prior knowledge and immediate context. Similarly, musical pleasure, which has been shown to depend on predictability, may also vary relative to the individual and context. Although research has demonstrated the influence of both long-term knowledge and stimulus features on expectations, it is unclear how perceptions of a melody are shaped by comparisons with other pieces heard in the same context. To examine the effects of context, we compared listeners' judgments of two distinct stimulus sets presented either alone or in combination. The stimuli were excerpts from a repertoire of Western music and a set of experimenter-created melodies. Separate groups of participants rated liking and predictability for each stimulus set alone and in combination. When heard together, the Repertoire stimuli were liked more and rated as less predictable than when heard alone, with the opposite pattern observed for the Experimental stimuli. This effect was driven by a change in ratings between the Alone and Combined conditions for each stimulus set. These findings demonstrate a context-based shift in predictability ratings and derived pleasure, suggesting that judgments stem not only from the physical properties of a stimulus but also vary relative to the other options available in the immediate context.
Affiliation(s)
- Alexander W. Albury
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Roberta Bianco
- Neuroscience of Perception and Action Laboratory, Italian Institute of Technology, Rome, Italy
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Virginia B. Penhune
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
12. Chander A, Aslin RN. Expectation adaptation for rare cadences in music: Item order matters in repetition priming. Cognition 2023; 240:105601. PMID: 37604028. PMCID: PMC10501749. DOI: 10.1016/j.cognition.2023.105601.
Abstract
Humans make predictions about future events in many domains, including when they listen to music. Previous accounts of harmonic expectation in music have emphasised the role of implicit musical knowledge acquired in the long term through the mechanism of statistical learning. However, it is not known whether listeners can adapt their expectations for unusual harmonies in the short term through repetition priming, and whether the extent of any short-term adaptation depends on the unfolding statistical structure of the music. To explore these possibilities, we presented 150 participants with phrases from Bach chorales that ended with a cadence that was either a priori likely or unlikely based on the long-term statistical structure of the corpus of chorales. While holding the 50-50 incidence of likely vs. unlikely cadences constant, we manipulated the order in which these phrases were presented such that the local probability of hearing an unlikely cadence changed throughout the experiment. For each phrase, participants provided two judgements: (a) a prospective rating of how confident they were in their expectations for the cadence, and (b) a retrospective rating of how well the presented cadence matched their expectations. While confidence ratings increased over the course of the experiment, the rate of change decreased as the local probability of an unexpected cadence increased. Participants' expectations favoured likely cadences over unlikely cadences on average, but their expectation ratings for unlikely cadences increased at a faster rate over the course of the experiment than for likely cadences, particularly when the local probability of hearing an unlikely cadence was high. Thus, despite entrenched long-term statistics about cadences, listeners can indeed adapt to unusual musical harmonies and are sensitive to the local statistical structure of the musical environment. We suggest that this adaptation is an instance of Bayesian belief updating, a domain-general process that accounts for expectation adaptation in multiple domains.
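The Bayesian belief-updating account can be sketched as a Beta-Bernoulli update of a listener's estimate of the local probability of an unlikely cadence. This is an illustrative toy model, not the authors' analysis; the prior and the cadence sequence below are hypothetical.

```python
def update_belief(alpha, beta, cadence_is_unlikely):
    """One Beta-Bernoulli update after hearing a single cadence."""
    return (alpha + 1, beta) if cadence_is_unlikely else (alpha, beta + 1)

def expected_probability(alpha, beta):
    """Posterior mean estimate of the local unlikely-cadence probability."""
    return alpha / (alpha + beta)

# Entrenched long-term statistics: unlikely cadences are rare a priori.
alpha, beta = 1.0, 9.0                      # prior mean = 0.10 (hypothetical)
for unlikely in [True, True, False, True]:  # a locally unlikely-rich stretch
    alpha, beta = update_belief(alpha, beta, unlikely)
print(round(expected_probability(alpha, beta), 3))  # → 0.286
```

The posterior mean rises toward the locally observed rate while the prior keeps it anchored below it, mirroring the gradual adaptation the study reports.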
Collapse
Affiliation(s)
- Aditya Chander
- Department of Music, Yale University, 469 College St, New Haven, CT 06511, USA.
| | - Richard N Aslin
- Child Study Center, Yale School of Medicine, 230 S Frontage Rd, New Haven, CT 06519, USA; Department of Psychology, Yale University, 405 Temple St, New Haven, CT 06511, USA
| |
Collapse
|
13
|
Milne AJ, Dean RT, Bulger D. The effects of rhythmic structure on tapping accuracy. Atten Percept Psychophys 2023; 85:2673-2699. [PMID: 37817052 PMCID: PMC10600317 DOI: 10.3758/s13414-023-02778-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/17/2023] [Indexed: 10/12/2023]
Abstract
Prior investigations of simple rhythms in familiar time signatures have shown the importance of several mechanisms; notably, those related to metricization and grouping. But there has been limited study of complex rhythms, including those in unfamiliar time signatures, such as are found outside mainstream Western music. Here, we investigate how the structures of 91 rhythms with nonisochronous onsets (mostly complex, several in unfamiliar time signatures) influence the accuracy, velocity, and timing of taps made by participants attempting to synchronize with these onsets. The onsets were piano-tone cues sounded at a well-formed subset of isochronous cymbal pulses; the latter occurring every 234 ms. We modelled tapping at both the rhythm level and the pulse level; the latter provides insight into how rhythmic structure makes some cues easier to tap and why incorrect (uncued) taps may occur. In our models, we use a wide variety of quantifications of rhythmic features, several of which are novel and many of which are indicative of underlying mechanisms, strategies, or heuristics. The results show that, for these tricky rhythms, taps are disrupted by unfamiliar period lengths and are guided by crude encodings of each rhythm: the density of rhythmic cues, their circular mean and variance, and recognizing common small patterns and the approximate positions of groups of cues. These lossy encodings are often counterproductive for discriminating between cued and uncued pulses, and are quite different to the mechanisms, such as metricization and emphasizing group boundaries, thought to guide tapping behaviours in learned and familiar rhythms.
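The "circular mean and variance" encoding of a rhythm can be computed by mapping each cue onset onto an angle around the metric cycle. A minimal sketch; the onset pattern below is hypothetical, not one of the study's 91 rhythms.

```python
import math

def circular_stats(onsets, cycle_length):
    """Circular mean position and circular variance (1 - R) of onset
    times mapped onto one metric cycle."""
    angles = [2 * math.pi * (t % cycle_length) / cycle_length for t in onsets]
    c = sum(math.cos(a) for a in angles) / len(angles)
    s = sum(math.sin(a) for a in angles) / len(angles)
    r = math.hypot(c, s)                         # mean resultant length
    mean_angle = math.atan2(s, c) % (2 * math.pi)
    return mean_angle * cycle_length / (2 * math.pi), 1 - r

# Cues clustered near the start of a 12-pulse cycle (each pulse = 234 ms
# in the study; this particular pattern is invented for illustration):
mean_pos, var = circular_stats([0, 1, 2, 11], 12)
print(round(mean_pos, 2), round(var, 3))  # → 0.5 0.163
```

A tight cluster of cues gives a low circular variance; cues spread evenly around the cycle push it toward 1, which is why this lossy summary cannot by itself discriminate cued from uncued pulses.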
Collapse
Affiliation(s)
- Andrew J Milne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia.
| | - Roger T Dean
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia
| | - David Bulger
- Department of Mathematics and Statistics, Macquarie University, Sydney, Australia
| |
Collapse
|
14
|
Gold BP, Pearce MT, McIntosh AR, Chang C, Dagher A, Zatorre RJ. Auditory and reward structures reflect the pleasure of musical expectancies during naturalistic listening. Front Neurosci 2023; 17:1209398. [PMID: 37928727 PMCID: PMC10625409 DOI: 10.3389/fnins.2023.1209398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Accepted: 10/05/2023] [Indexed: 11/07/2023] Open
Abstract
Enjoying music consistently engages key structures of the neural auditory and reward systems such as the right superior temporal gyrus (R STG) and ventral striatum (VS). Expectations seem to play a central role in this effect, as preferences reliably vary according to listeners' uncertainty about the musical future and surprise about the musical past. Accordingly, VS activity reflects the pleasure of musical surprise, and exhibits stronger correlations with R STG activity as pleasure grows. Yet the reward value of musical surprise (and thus the reason for these surprises engaging the reward system) remains an open question. Recent models of predictive neural processing and learning suggest that forming, testing, and updating hypotheses about one's environment may be intrinsically rewarding, and that the constantly evolving structure of musical patterns could provide ample opportunity for this procedure. Consistent with these accounts, our group previously found that listeners tend to prefer melodic excerpts taken from real music when it either validates their uncertain melodic predictions (i.e., is high in uncertainty and low in surprise) or when it challenges their highly confident ones (i.e., is low in uncertainty and high in surprise). An independent research group (Cheung et al., 2019) replicated these results with musical chord sequences, and identified their fMRI correlates in the STG, amygdala, and hippocampus but not the VS, raising new questions about the neural mechanisms of musical pleasure that the present study seeks to address. Here, we assessed concurrent liking ratings and hemodynamic fMRI signals as 24 participants listened to 50 naturalistic, real-world musical excerpts that varied across wide spectra of computationally modeled uncertainty and surprise. As in previous studies, liking ratings exhibited an interaction between uncertainty and surprise, with the strongest preferences for high uncertainty/low surprise and low uncertainty/high surprise. fMRI results also replicated previous findings, with music liking effects in the R STG and VS. Furthermore, we identify interactions between uncertainty and surprise on the one hand, and liking and surprise on the other, in VS activity. Altogether, these results provide important support for the hypothesized role of the VS in deriving pleasure from learning about musical structure.
Collapse
Affiliation(s)
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, United States
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT), Montreal, QC, Canada
| | - Marcus T. Pearce
- Cognitive Science Research Group, School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Anthony R. McIntosh
- Baycrest Centre, Rotman Research Institute, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
| | - Catie Chang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, United States
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN, United States
| | - Alain Dagher
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
| | - Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT), Montreal, QC, Canada
| |
Collapse
|
15
|
Silas S, Müllensiefen D, Kopiez R. Singing Ability Assessment: Development and validation of a singing test based on item response theory and a general open-source software environment for singing data. Behav Res Methods 2023:10.3758/s13428-023-02188-0. [PMID: 37672190 DOI: 10.3758/s13428-023-02188-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/30/2023] [Indexed: 09/07/2023]
Abstract
We describe the development of the Singing Ability Assessment (SAA) open-source test environment. The SAA captures and scores different aspects of human singing ability and melodic memory in the context of item response theory. Taking perspectives from both melodic recall and singing accuracy literature, we present results from two online experiments (N = 247; N = 910). On-the-fly audio transcription is produced via a probabilistic algorithm and scored via latent variable approaches. Measures of the ability to sing long notes indicate a three-dimensional principal components analysis solution representing pitch accuracy, pitch volatility and changes in pitch stability (proportion variance explained: 35%; 33%; 32%). For melody singing, a mixed-effects model uses features of melodic structure (e.g., tonality, melody length) to predict overall sung melodic recall performance via a composite score [R²c = .42; R²m = .16]. Additionally, two separate mixed-effects models were constructed to explain performance in singing back melodies in a rhythmic [R²c = .42; R²m = .13] and an arhythmic [R²c = .38; R²m = .11] condition. Results showed that the yielded SAA melodic scores are significantly associated with previously described measures of singing accuracy, the long note singing accuracy measures, demographic variables, and features of participants' hardware setup. Consequently, we release five R packages which facilitate deploying melodic stimuli online and in laboratory contexts, constructing audio production tests, transcribing audio in the R environment, and deploying the test elements and their supporting models. These are published as open-source, easy to access, and flexible to adapt.
Collapse
Affiliation(s)
- Sebastian Silas
- Goldsmiths University of London, London, UK.
- Hanover Music Lab, Hanover University of Music, Drama and Media, Neues Haus 1, 30175, Hannover, Germany.
| | - Daniel Müllensiefen
- Goldsmiths University of London, London, UK
- Hanover Music Lab, Hanover University of Music, Drama and Media, Neues Haus 1, 30175, Hannover, Germany
| | - Reinhard Kopiez
- Hanover Music Lab, Hanover University of Music, Drama and Media, Neues Haus 1, 30175, Hannover, Germany
| |
Collapse
|
16
|
Klarlund M, Brattico E, Pearce M, Wu Y, Vuust P, Overgaard M, Du Y. Worlds apart? Testing the cultural distance hypothesis in music perception of Chinese and Western listeners. Cognition 2023; 235:105405. [PMID: 36807031 DOI: 10.1016/j.cognition.2023.105405] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Revised: 02/02/2023] [Accepted: 02/08/2023] [Indexed: 02/21/2023]
Abstract
According to the cultural distance hypothesis (CDH), individuals learn culture-specific statistical structures in music as internal stylistic models and use these models in predictive processing of music, with musical structures closer to their home culture being easier to predict. This cultural distance effect may be affected by domain-specific (musical ability) and domain-general individual characteristics (openness, implicit cultural bias). To test the CDH and its modulation by individual characteristics, we recruited Chinese and Western adults to categorize stylistically ambiguous and unambiguous Chinese and Western melodies by cultural origin. Categorization performance was better for unambiguous (low cultural distance, CD) than ambiguous melodies (high CD), and for in-culture melodies regardless of ambiguity for both groups, providing evidence for the CDH. Musical ability, but not other traits, correlated positively with melody categorization, suggesting that musical ability refines internal stylistic models. Therefore, both cultures show musical enculturation in their home culture with a modulatory effect of individual musical ability.
Collapse
Affiliation(s)
- Mathias Klarlund
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark.
| | - Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy
| | - Marcus Pearce
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Music Cognition Lab, Queen Mary University of London, London, England, UK
| | - Yiyang Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
| | - Morten Overgaard
- Center for Functionally Integrative Neuroscience, Dept of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China; Chinese Institute for Brain Research, Beijing, China.
| |
Collapse
|
17
|
Basiński K, Quiroga-Martinez DR, Vuust P. Temporal hierarchies in the predictive processing of melody - From pure tones to songs. Neurosci Biobehav Rev 2023; 145:105007. [PMID: 36535375 DOI: 10.1016/j.neubiorev.2022.105007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 11/30/2022] [Accepted: 12/14/2022] [Indexed: 12/23/2022]
Abstract
Listening to musical melodies is a complex task that engages perceptual and memory-related processes. The processes underlying melody cognition happen simultaneously on different timescales, ranging from milliseconds to minutes. Although attempts have been made, research on melody perception is yet to produce a unified framework of how melody processing is achieved in the brain. This may in part be due to the difficulty of integrating concepts such as perception, attention and memory, which pertain to different temporal scales. Recent theories on brain processing, which hold prediction as a fundamental principle, offer potential solutions to this problem and may provide a unifying framework for explaining the neural processes that enable melody perception on multiple temporal levels. In this article, we review empirical evidence for predictive coding on the levels of pitch formation, basic pitch-related auditory patterns, more complex regularity processing extracted from basic patterns and long-term expectations related to musical syntax. We also identify areas that would benefit from further inquiry and suggest future directions in research on musical melody perception.
Collapse
Affiliation(s)
- Krzysztof Basiński
- Division of Quality of Life Research, Medical University of Gdańsk, Poland
| | - David Ricardo Quiroga-Martinez
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, USA; Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
| | - Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
| |
Collapse
|
18
|
Weiss MW, Peretz I. Improvisation is a novel tool to study musicality. Sci Rep 2022; 12:12595. [PMID: 35869086 PMCID: PMC9307610 DOI: 10.1038/s41598-022-15312-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2021] [Accepted: 06/22/2022] [Indexed: 11/10/2022] Open
Abstract
Humans spontaneously invent songs from an early age. Here, we exploit this natural inclination to probe implicit musical knowledge in 33 untrained and poor singers (amusia). Each sang 28 long improvisations as a response to a verbal prompt or a continuation of a melodic stem. To assess the extent to which each improvisation reflects tonality, which has been proposed to be a core organizational principle of musicality and which is present within most music traditions, we developed a new algorithm that compares a sung excerpt to a probability density function representing the tonal hierarchy of Western music. The results show signatures of tonality in both nonmusicians and individuals with congenital amusia, who have notorious difficulty performing musical tasks that require explicit responses and memory. The findings are a proof of concept that improvisation can serve as a novel, even enjoyable method for systematically measuring hidden aspects of musicality across the spectrum of musical ability.
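A classical stand-in for the tonality-scoring idea (not the authors' new algorithm) is to correlate an excerpt's pitch-class distribution with the Krumhansl-Kessler major-key profile across all 12 transpositions; the sung fragment below is hypothetical.

```python
# Krumhansl & Kessler (1982) major-key profile over the 12 pitch classes.
KK_MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
            2.52, 5.19, 2.39, 3.66, 2.29, 2.88]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def tonality_score(midi_pitches):
    """Best correlation between the excerpt's pitch-class histogram and
    the major profile over all 12 keys; higher = more tonal (Western)."""
    hist = [0.0] * 12
    for p in midi_pitches:
        hist[p % 12] += 1
    return max(pearson(hist, KK_MAJOR[k:] + KK_MAJOR[:k]) for k in range(12))

# A C-major-flavoured sung fragment (hypothetical MIDI note numbers):
print(round(tonality_score([60, 64, 67, 72, 62, 65, 67, 60]), 2))  # → 0.95
```

The published method compares against a continuous probability density rather than a discrete key profile, which better suits sung pitch that drifts between semitone categories, but the correlation-with-hierarchy idea is the same.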
Collapse
|
19
|
Fernández-Rubio G, Brattico E, Kotz SA, Kringelbach ML, Vuust P, Bonetti L. Magnetoencephalography recordings reveal the spatiotemporal dynamics of recognition memory for complex versus simple auditory sequences. Commun Biol 2022; 5:1272. [PMID: 36402843 PMCID: PMC9675809 DOI: 10.1038/s42003-022-04217-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 11/02/2022] [Indexed: 11/21/2022] Open
Abstract
Auditory recognition is a crucial cognitive process that relies on the organization of single elements over time. However, little is known about the spatiotemporal dynamics underlying the conscious recognition of auditory sequences varying in complexity. To study this, we asked 71 participants to learn and recognize simple tonal musical sequences and matched complex atonal sequences while their brain activity was recorded using magnetoencephalography (MEG). Results reveal qualitative changes in neural activity dependent on stimulus complexity: recognition of tonal sequences engages hippocampal and cingulate areas, whereas recognition of atonal sequences mainly activates the auditory processing network. Our findings reveal the involvement of a cortico-subcortical brain network for auditory recognition and support the idea that stimulus complexity qualitatively alters the neural pathways of recognition memory.
Collapse
Affiliation(s)
- Gemma Fernández-Rubio
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
| | - Sonja A. Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Morten L. Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
| | - Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
20
|
Bonetti L, Brattico E, Bruzzone SEP, Donati G, Deco G, Pantazis D, Vuust P, Kringelbach ML. Brain recognition of previously learned versus novel temporal sequences: a differential simultaneous processing. Cereb Cortex 2022; 33:5524-5537. [PMID: 36346308 PMCID: PMC10152090 DOI: 10.1093/cercor/bhac439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 10/12/2022] [Accepted: 12/13/2022] [Indexed: 11/09/2022] Open
Abstract
Memory for sequences is a central topic in neuroscience, and decades of studies have investigated the neural mechanisms underlying the coding of a wide array of sequences extended over time. Yet, little is known about the brain mechanisms underlying the recognition of previously memorized versus novel temporal sequences. Moreover, the differential brain processing of single items in an auditory temporal sequence compared to the whole superordinate sequence is not fully understood. In this magnetoencephalography (MEG) study, the items of the temporal sequence were independently linked to local and rapid (2–8 Hz) brain processing, while the whole sequence was associated with concurrent global and slower (0.1–1 Hz) processing involving a widespread network of sequentially active brain regions. Notably, the recognition of previously memorized temporal sequences was associated with stronger activity in the slow brain processing, while the novel sequences required a greater involvement of the faster brain processing. Overall, the results expand on the well-known information flow from lower- to higher-order brain regions. In fact, they reveal the differential involvement of slow and faster whole-brain processing to recognize previously learned versus novel temporal information.
Collapse
Affiliation(s)
- L Bonetti
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Stoke place 7, OX39BX, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Psychology, University of Bologna, Italy
| | - E Brattico
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy
| | - S E P Bruzzone
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Neurobiology Research Unit (NRU), Copenhagen University Hospital Rigshospitalet, Inge Lehmanns Vej 6, 2100 Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Blegdamsvej 3B, 2200 Copenhagen, Denmark
| | - G Donati
- Department of Psychology, University of Bologna, Italy
| | - G Deco
- Computational and Theoretical Neuroscience Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Edifici Merce Rodereda, C/ de Ramon Trias Fargas, 25, 08018 Barcelona, Spain
| | - D Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology (MIT), 77 Massachusetts Ave, Cambridge, MA 02139, USA
| | - P Vuust
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
| | - M L Kringelbach
- Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Stoke place 7, OX39BX, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
| |
Collapse
|
21
|
The rediscovered motor-related area 55b emerges as a core hub of music perception. Commun Biol 2022; 5:1104. [PMID: 36257973 PMCID: PMC9579133 DOI: 10.1038/s42003-022-04009-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 09/19/2022] [Indexed: 12/03/2022] Open
Abstract
Passive listening to music, without sound production or evident movement, is long known to activate motor control regions. Nevertheless, the exact neuroanatomical correlates of the auditory-motor association and its underlying neural mechanisms have not been fully determined. Here, based on a NeuroSynth meta-analysis and three original fMRI paradigms of music perception, we show that the long-ignored pre-motor region, area 55b, an anatomically unique and functionally intriguing region, is a core hub of music perception. Moreover, results of a brain-behavior correlation analysis implicate neural entrainment as the underlying mechanism of area 55b’s contribution to music perception. In view of the current results and prior literature, area 55b is proposed as a keystone of sensorimotor integration, a fundamental brain machinery underlying simple to hierarchically complex behaviors. Refining the neuroanatomical and physiological understanding of sensorimotor integration is expected to have a major impact on various fields, from brain disorders to artificial general intelligence. Functional magnetic resonance imaging data acquired during passive listening to music suggest that pre-motor area 55b acts as a core hub of music processing in humans.
Collapse
|
22
|
Lisøy RS, Pfuhl G, Sunde HF, Biegler R. Sweet spot in music—Is predictability preferred among persons with psychotic-like experiences or autistic traits? PLoS One 2022; 17:e0275308. [PMID: 36174035 PMCID: PMC9521895 DOI: 10.1371/journal.pone.0275308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 09/14/2022] [Indexed: 11/29/2022] Open
Abstract
People prefer music with an intermediate level of predictability; not so predictable as to be boring, yet not so unpredictable that it ceases to be music. This sweet spot for predictability varies due to differences in the perception of predictability. The symptoms of both psychosis and Autism Spectrum Disorder have been attributed to overestimation of uncertainty, which predicts a preference for predictable stimuli and environments. In a pre-registered study, we tested this prediction by investigating whether psychotic and autistic traits were associated with a higher preference for predictability in music. Participants from the general population were presented with twenty-nine pre-composed music excerpts, scored on their complexity by musical experts. A participant’s preferred level of predictability corresponded to the peak of the inverted U-shaped curve between music complexity and liking (i.e., a Wundt curve). We found that the sweet spot for predictability did indeed vary between individuals. Contrary to predictions, we did not find support for these variations being associated with autistic and psychotic traits. The findings are discussed in the context of the Wundt curve and the use of naturalistic stimuli. We also provide recommendations for further exploration.
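A participant's sweet spot can be located as the peak of an inverted-U (Wundt) curve fitted to liking ratings as a function of complexity. A minimal least-squares sketch, assuming a symmetric, evenly spaced complexity scale; the ratings below are hypothetical, not the study's data.

```python
def sweet_spot(complexity, liking):
    """Fit liking = a*x^2 + b*x + c by least squares (x centred for
    stability) and return the complexity value at the fitted peak."""
    n = len(complexity)
    mx = sum(complexity) / n
    u = [x - mx for x in complexity]
    su2 = sum(t * t for t in u)
    su4 = sum(t ** 4 for t in u)
    sy = sum(liking)
    suy = sum(t * y for t, y in zip(u, liking))
    su2y = sum(t * t * y for t, y in zip(u, liking))
    # Closed-form normal equations; valid when the complexity grid is
    # symmetric about its mean (odd central moments vanish).
    a = (n * su2y - su2 * sy) / (n * su4 - su2 ** 2)
    b = suy / su2
    if a >= 0:
        raise ValueError("no inverted-U (Wundt) shape in these ratings")
    return mx - b / (2 * a)

complexity = [1, 2, 3, 4, 5, 6, 7]
liking = [2.0, 3.4, 4.1, 4.3, 4.0, 3.2, 2.1]   # hypothetical ratings
print(round(sweet_spot(complexity, liking), 2))  # → 3.99
```

Comparing this peak location across participants is one way to ask whether trait measures shift the preferred level of predictability, which is the test the study reports as null for autistic and psychotic traits.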
Collapse
Affiliation(s)
- Rebekka Solvik Lisøy
- Department of Psychology, Faculty of Social and Educational Sciences, Norwegian University of Science and Technology, Trondheim, Norway
| | - Gerit Pfuhl
- Department of Psychology, Faculty of Social and Educational Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Psychology, Faculty of Health Sciences, UiT–The Arctic University of Norway, Tromsø, Norway
| | - Hans Fredrik Sunde
- Centre for Fertility and Health, Norwegian Institute of Public Health, Oslo, Norway
| | - Robert Biegler
- Department of Psychology, Faculty of Social and Educational Sciences, Norwegian University of Science and Technology, Trondheim, Norway
| |
Collapse
|
23
|
Modeling enculturated bias in entrainment to rhythmic patterns. PLoS Comput Biol 2022; 18:e1010579. [PMID: 36174063 PMCID: PMC9553061 DOI: 10.1371/journal.pcbi.1010579] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 10/11/2022] [Accepted: 09/16/2022] [Indexed: 11/19/2022] Open
Abstract
Long-term and culture-specific experience of music shapes rhythm perception, leading to enculturated expectations that make certain rhythms easier to track and more conducive to synchronized movement. However, the influence of enculturated bias on the moment-to-moment dynamics of rhythm tracking is not well understood. Recent modeling work has formulated entrainment to rhythms as a formal inference problem, where phase is continuously estimated based on precise event times and their correspondence to timing expectations: PIPPET (Phase Inference from Point Process Event Timing). Here we propose that the problem of optimally tracking a rhythm also requires an ongoing process of inferring which pattern of event timing expectations is most suitable to predict a stimulus rhythm. We formalize this insight as an extension of PIPPET called pPIPPET (PIPPET with pattern inference). The variational solution to this problem introduces terms representing the likelihood that a stimulus is based on a particular member of a set of event timing patterns, which we initialize according to culturally-learned prior expectations of a listener. We evaluate pPIPPET in three experiments. First, we demonstrate that pPIPPET can qualitatively reproduce enculturated bias observed in human tapping data for simple two-interval rhythms. Second, we simulate categorization of a continuous three-interval rhythm space by Western-trained musicians through derivation of a comprehensive set of priors for pPIPPET from metrical patterns in a sample of Western rhythms. Third, we simulate iterated reproduction of three-interval rhythms, and show that models configured with notated rhythms from different cultures exhibit both universal and enculturated biases as observed experimentally in listeners from those cultures. 
These results suggest that the influence of enculturated timing expectations on human perceptual and motor entrainment can be understood as approximating optimal inference about the rhythmic stimulus, with respect to prototypical patterns in an empirical sample of rhythms that represent the music-cultural environment of the listener.

Cross-cultural studies have highlighted that listeners from non-Western cultures can precisely tap along with complex rhythms, present in music from their culture, that are challenging for participants from Western cultures. Therefore, while most adults can synchronize movements with simple periodic patterns (e.g. a ticking clock, a metronome), the ability to precisely track more complex rhythmic patterns depends on musical experience. Many computer models have been developed to describe the remarkable precision of human “entrainment”, but they have done little to explain how this ability depends on cultural musical experience. Here, we describe this as the problem of estimating, in real time, the phase of a cycle underlying an auditory rhythm, by drawing upon learned patterns (reference structures) that could plausibly describe the structure of observed events. By creating a model that solves this inference problem, and configuring these patterns to reflect specific musical features, we are able to simulate cultural variation in synchronization to rhythm. These results highlight that while humans universally move to musical rhythm, the ability to do so depends on musical experience within a cultural tradition, as reflected by the distinct “categories” of rhythm learned during such experience.
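The core idea of pattern inference in pPIPPET — weighting candidate event-timing patterns by enculturated priors and by how well they fit the observed intervals — can be caricatured as a toy Bayesian classifier. This is a deliberately simplified sketch, not the variational PIPPET/pPIPPET equations; the patterns, priors, and noise level are all hypothetical.

```python
import numpy as np

# Candidate two-interval rhythm patterns (normalised) and hypothetical
# enculturated priors over them -- every value here is illustrative.
patterns = {"2:1": np.array([2/3, 1/3]),
            "1:1": np.array([1/2, 1/2]),
            "3:1": np.array([3/4, 1/4])}
prior = {"2:1": 0.5, "1:1": 0.3, "3:1": 0.2}
sigma = 0.05  # assumed timing noise

def pattern_posterior(observed_intervals):
    """Posterior over candidate patterns given observed inter-onset intervals,
    using a Gaussian likelihood around each pattern's expected timing."""
    obs = np.asarray(observed_intervals, dtype=float)
    obs = obs / obs.sum()  # normalise to the cycle length
    log_post = {k: np.log(prior[k]) - np.sum((obs - p) ** 2) / (2 * sigma ** 2)
                for k, p in patterns.items()}
    m = max(log_post.values())          # subtract max for numerical stability
    unnorm = {k: np.exp(v - m) for k, v in log_post.items()}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

post = pattern_posterior([0.62, 0.38])
print(max(post, key=post.get))  # pattern that best explains the rhythm
```

Enculturation enters through the prior: listeners from different musical cultures would carry different pattern sets and prior weights, biasing which interpretation wins for ambiguous rhythms.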
|
24
|
Kim SG. On the encoding of natural music in computational models and human brains. Front Neurosci 2022; 16:928841. [PMID: 36203808 PMCID: PMC9531138 DOI: 10.3389/fnins.2022.928841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Accepted: 08/15/2022] [Indexed: 11/13/2022] Open
Abstract
This article discusses recent developments and advances in the neuroscience of music to understand the nature of musical emotion. In particular, it highlights how system identification techniques and computational models of music have advanced our understanding of how the human brain processes the textures and structures of music and how the processed information evokes emotions. Musical models relate physical properties of stimuli to internal representations called features, and predictive models relate features to neural or behavioral responses and test their predictions against independent unseen data. The new frameworks do not require orthogonalized stimuli in controlled experiments to establish reproducible knowledge, which has opened up a new wave of naturalistic neuroscience. The current review focuses on how this trend has transformed the domain of the neuroscience of music.
|
25
|
Cheung VKM, Sakamoto S. Separating Uncertainty from Surprise in Auditory Processing with Neurocomputational Models: Implications for Music Perception. J Neurosci 2022; 42:5657-5659. [PMID: 35858813 PMCID: PMC9302456 DOI: 10.1523/jneurosci.0594-22.2022] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 05/19/2022] [Accepted: 05/23/2022] [Indexed: 01/22/2023] Open
Affiliation(s)
- Vincent K M Cheung
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
- Institute of Information Science, Academia Sinica, Taipei 11529, Taiwan
| | - Shu Sakamoto
- Sony Computer Science Laboratories, Inc., Tokyo 141-0022, Japan
- Graduate School of Media and Governance, Keio University, Fujisawa 252-0882, Japan
| |
|
26
|
Mencke I, Omigie D, Quiroga-Martinez DR, Brattico E. Atonal Music as a Model for Investigating Exploratory Behavior. Front Neurosci 2022; 16:793163. [PMID: 35812236 PMCID: PMC9256982 DOI: 10.3389/fnins.2022.793163] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 05/12/2022] [Indexed: 11/13/2022] Open
Abstract
Atonal music is often characterized by low predictability stemming from the absence of tonal or metrical hierarchies. In contrast, Western tonal music exhibits intrinsic predictability due to its hierarchical structure and therefore, offers a directly accessible predictive model to the listener. In consequence, a specific challenge of atonal music is that listeners must generate a variety of new predictive models. Listeners must not only refrain from applying available tonal models to the heard music, but they must also search for statistical regularities and build new rules that may be related to musical properties other than pitch, such as timbre or dynamics. In this article, we propose that the generation of such new predictive models and the aesthetic experience of atonal music are characterized by internal states related to exploration. This is a behavior well characterized in behavioral neuroscience as fulfilling an innate drive to reduce uncertainty but which has received little attention in empirical music research. We support our proposal with emerging evidence that the hedonic value is associated with the recognition of patterns in low-predictability sound sequences and that atonal music elicits distinct behavioral responses in listeners. We end by outlining new research avenues that might both deepen our understanding of the aesthetic experience of atonal music in particular, and reveal core qualities of the aesthetic experience in general.
Affiliation(s)
- Iris Mencke
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- *Correspondence: Iris Mencke,
| | - Diana Omigie
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
| | - David Ricardo Quiroga-Martinez
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
| | - Elvira Brattico
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
- Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
| |
|
27
|
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057 DOI: 10.1038/s41583-022-00578-5] [Citation(s) in RCA: 79] [Impact Index Per Article: 39.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/22/2022] [Indexed: 02/06/2023]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark.
| | - Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
| | - Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
| | - Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
| |
|
28
|
Li X, Bai X, Conway CM, Shi W, Wang X. Statistical learning for non-social and socially-meaningful stimuli in individuals with high and low levels of autistic traits. CURRENT PSYCHOLOGY 2022. [DOI: 10.1007/s12144-022-02703-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
29
|
Kern P, Heilbron M, de Lange FP, Spaak E. Cortical activity during naturalistic music listening reflects short-range predictions based on long-term experience. eLife 2022; 11:80935. [PMID: 36562532 PMCID: PMC9836393 DOI: 10.7554/elife.80935] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2022] [Accepted: 12/22/2022] [Indexed: 12/24/2022] Open
Abstract
Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music, while recording neural activity using MEG. We quantified note-level melodic surprise and uncertainty using various computational models of music, including a state-of-the-art transformer neural network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked melodic surprise particularly around 200ms and 300-500ms after note onset. This neural surprise response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by computational models that incorporated long-term statistical learning-rather than by simple, Gestalt-like principles. Yet, intriguingly, the surprise reflected primarily short-range musical contexts of less than ten notes. We present a full replication of our novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during naturalistic music listening.
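Time-resolved regression of the kind described above can be sketched as a ridge-regularised temporal response function (TRF) fit: build a time-lagged design matrix from the note-level surprise signal and solve for the response kernel. The code below uses simulated data and a hypothetical response kernel; it is not the study's MEG pipeline.

```python
import numpy as np

def lagged_design(stim, n_lags):
    """Design matrix whose column j is the stimulus delayed by j samples."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for j in range(n_lags):
        X[j:, j] = stim[:n - j]
    return X

def fit_trf(stim, resp, n_lags, alpha=0.1):
    """Ridge-regularised time-resolved regression (a TRF-style fit)."""
    X = lagged_design(stim, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ resp)

rng = np.random.default_rng(0)
surprise = rng.random(500)                # simulated note-level surprise
kernel = np.array([0.0, 0.5, 1.0, 0.3])   # hypothetical response, peaking 2 samples after onset
neural = lagged_design(surprise, 4) @ kernel + 0.01 * rng.standard_normal(500)
w = fit_trf(surprise, neural, 4)
print(int(np.argmax(w)))  # lag of the peak response
```

The recovered kernel plays the role of the neural surprise response; in the study, its peaks around 200 ms and 300-500 ms after note onset are what dissociate surprise tracking from sensory-acoustic effects.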
Affiliation(s)
- Pius Kern
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
| | - Micha Heilbron
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
| | - Floris P de Lange
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
| | - Eelke Spaak
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
| |
|
30
|
Bianco R, Novembre G, Ringer H, Kohler N, Keller PE, Villringer A, Sammler D. Lateral Prefrontal Cortex Is a Hub for Music Production from Structural Rules to Movements. Cereb Cortex 2021; 32:3878-3895. [PMID: 34965579 PMCID: PMC9476625 DOI: 10.1093/cercor/bhab454] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 11/08/2021] [Accepted: 11/09/2021] [Indexed: 11/13/2022] Open
Abstract
Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.
Affiliation(s)
- Roberta Bianco
- UCL Ear Institute, University College London, London WC1X 8EE, UK; Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
| | - Hanna Ringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Institute of Psychology, University of Leipzig, Leipzig 04109, Germany
| | - Natalie Kohler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
| | - Peter E Keller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
| | - Arno Villringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - Daniela Sammler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
| |
|
31
|
Marion G, Di Liberto GM, Shamma SA. The Music of Silence: Part I: Responses to Musical Imagery Encode Melodic Expectations and Acoustics. J Neurosci 2021; 41:7435-7448. [PMID: 34341155 PMCID: PMC8412990 DOI: 10.1523/jneurosci.0183-21.2021] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 06/23/2021] [Accepted: 06/28/2021] [Indexed: 02/06/2023] Open
Abstract
Musical imagery is the voluntary internal hearing of music in the mind without the need for physical action or external stimulation. Numerous studies have already revealed brain areas activated during imagery. However, it remains unclear to what extent imagined music responses preserve the detailed temporal dynamics of the acoustic stimulus envelope and, crucially, whether melodic expectations play any role in modulating responses to imagined music, as they prominently do during listening. These modulations are important as they reflect aspects of the human musical experience, such as its acquisition, engagement, and enjoyment. This study explored the nature of these modulations in imagined music based on EEG recordings from 21 professional musicians (6 females and 15 males). Regression analyses were conducted to demonstrate that imagined neural signals can be predicted accurately, similarly to the listening task, and were sufficiently robust to allow for accurate identification of the imagined musical piece from the EEG. In doing so, our results indicate that imagery and listening tasks elicited an overlapping but distinctive topography of neural responses to sound acoustics, which is in line with previous fMRI literature. Melodic expectation, however, evoked very similar frontal spatial activation in both conditions, suggesting that they are supported by the same underlying mechanisms. Finally, neural responses induced by imagery exhibited a specific transformation from the listening condition, which primarily included a relative delay and a polarity inversion of the response. This transformation demonstrates the top-down predictive nature of the expectation mechanisms arising during both listening and imagery.

SIGNIFICANCE STATEMENT: It is well known that the human brain is activated during musical imagery: the act of voluntarily hearing music in our mind without external stimulation.
It is unclear, however, what the temporal dynamics of this activation are, as well as what musical features are precisely encoded in the neural signals. This study uses an experimental paradigm with high temporal precision to record and analyze the cortical activity during musical imagery. This study reveals that neural signals encode music acoustics and melodic expectations during both listening and imagery. Crucially, it is also found that a simple mapping based on a time-shift and a polarity inversion could robustly describe the relationship between listening and imagery signals.
Affiliation(s)
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
| | - Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Trinity Centre for Biomedical Engineering, Trinity College Institute of Neuroscience, Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity College, University of Dublin, D02 PN40, Dublin 2, Ireland
- School of Electrical and Electronic Engineering and UCD Centre for Biomedical Engineering, University College Dublin, D04 V1W8, Dublin 4, Ireland
| | - Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, Département d'Étude Cognitive, École Normale Supérieure, PSL, 75005, Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, MD 20742
| |
|
32
|
Benhamou E, Zhao S, Sivasathiaseelan H, Johnson JCS, Requena-Komuro MC, Bond RL, van Leeuwen JEP, Russell LL, Greaves CV, Nelson A, Nicholas JM, Hardy CJD, Rohrer JD, Warren JD. Decoding expectation and surprise in dementia: the paradigm of music. Brain Commun 2021; 3:fcab173. [PMID: 34423301 PMCID: PMC8376684 DOI: 10.1093/braincomms/fcab173] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/31/2021] [Indexed: 01/08/2023] Open
Abstract
Making predictions about the world and responding appropriately to unexpected events are essential functions of the healthy brain. In neurodegenerative disorders, such as frontotemporal dementia and Alzheimer's disease, impaired processing of 'surprise' may underpin a diverse array of symptoms, particularly abnormalities of social and emotional behaviour, but is challenging to characterize. Here, we addressed this issue using a novel paradigm: music. We studied 62 patients (24 female; aged 53-88) representing major syndromes of frontotemporal dementia (behavioural variant, semantic variant primary progressive aphasia, non-fluent-agrammatic variant primary progressive aphasia) and typical amnestic Alzheimer's disease, in relation to 33 healthy controls (18 female; aged 54-78). Participants heard famous melodies containing no deviants or one of three types of deviant note: acoustic (white-noise burst), syntactic (key-violating pitch change) or semantic (key-preserving pitch change). Using a regression model that took elementary perceptual, executive and musical competence into account, we assessed accuracy detecting melodic deviants and simultaneously recorded pupillary responses and related these to deviant surprise value (information-content) and carrier melody predictability (entropy), calculated using an unsupervised machine learning model of music. Neuroanatomical associations of deviant detection accuracy and coupling of detection to deviant surprise value were assessed using voxel-based morphometry of patients' brain MRI. Whereas Alzheimer's disease was associated with normal deviant detection accuracy, behavioural and semantic variant frontotemporal dementia syndromes were associated with strikingly similar profiles of impaired syntactic and semantic deviant detection accuracy and impaired behavioural and autonomic sensitivity to deviant information-content (all P < 0.05).
On the other hand, non-fluent-agrammatic primary progressive aphasia was associated with generalized impairment of deviant discriminability (P < 0.05) due to excessive false-alarms, despite retained behavioural and autonomic sensitivity to deviant information-content and melody predictability. Across the patient cohort, grey matter correlates of acoustic deviant detection accuracy were identified in precuneus, mid and mesial temporal regions; correlates of syntactic deviant detection accuracy and information-content processing, in inferior frontal and anterior temporal cortices, putamen and nucleus accumbens; and a common correlate of musical salience coding in supplementary motor area (all P < 0.05, corrected for multiple comparisons in pre-specified regions of interest). Our findings suggest that major dementias have distinct profiles of sensory 'surprise' processing, as instantiated in music. Music may be a useful and informative paradigm for probing the predictive decoding of complex sensory environments in neurodegenerative proteinopathies, with implications for understanding and measuring the core pathophysiology of these diseases.
Affiliation(s)
- Elia Benhamou
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Sijia Zhao
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
| | - Harri Sivasathiaseelan
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Jeremy C S Johnson
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Maï-Carmen Requena-Komuro
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Rebecca L Bond
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Janneke E P van Leeuwen
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Lucy L Russell
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Caroline V Greaves
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Annabel Nelson
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Jennifer M Nicholas
- Department of Medical Statistics, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
| | - Chris J D Hardy
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Jonathan D Rohrer
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| | - Jason D Warren
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
| |
Collapse
|
33
|
Bishop L, Jensenius AR, Laeng B. Musical and Bodily Predictors of Mental Effort in String Quartet Music: An Ecological Pupillometry Study of Performers and Listeners. Front Psychol 2021; 12:653021. [PMID: 34262504 PMCID: PMC8274478 DOI: 10.3389/fpsyg.2021.653021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Accepted: 05/17/2021] [Indexed: 11/25/2022] Open
Abstract
Music performance can be cognitively and physically demanding. These demands vary across the course of a performance as the content of the music changes. More demanding passages require performers to focus their attention more intensely, or expend greater “mental effort.” To date, it remains unclear what effect different cognitive-motor demands have on performers' mental effort. It is likewise unclear how fluctuations in mental effort compare between performers and perceivers of the same music. We used pupillometry to examine the effects of different cognitive-motor demands on the mental effort used by performers and perceivers of classical string quartet music. We collected pupillometry, motion capture, and audio-video recordings of a string quartet as they performed a rehearsal and concert (for a live audience) in our lab. We then collected pupillometry data from a remote sample of musically-trained listeners, who heard the audio recordings (without video) that we captured during the concert. We used a modelling approach to assess the effects of performers' bodily effort (head and arm motion; sound level; performers' ratings of technical difficulty), musical complexity (performers' ratings of harmonic complexity; a score-based measure of harmonic tension), and expressive difficulty (performers' ratings of expressive difficulty) on performers' and listeners' pupil diameters. Our results show stimulating effects of bodily effort and expressive difficulty on performers' pupil diameters, and stimulating effects of expressive difficulty on listeners' pupil diameters. We also observed negative effects of musical complexity on both performers and listeners, and negative effects of performers' bodily effort on listeners, which we suggest may reflect the complex relationships that these features share with other aspects of musical structure.
Looking across the concert, we found that both of the quartet violinists (who exchanged places halfway through the concert) showed more dilated pupils during their turns as 1st violinist than when playing as 2nd violinist, suggesting that they experienced greater arousal when “leading” the quartet in the 1st violin role. This study shows how eye tracking and motion capture technologies can be used in combination in an ecological setting to investigate cognitive processing in music performance.
Affiliation(s)
- Laura Bishop
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Musicology, University of Oslo, Oslo, Norway
| | - Alexander Refsum Jensenius
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Musicology, University of Oslo, Oslo, Norway
| | - Bruno Laeng
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway; Department of Psychology, University of Oslo, Oslo, Norway
| |
|
34
|
Sauvé SA, Cho A, Zendel BR. Mapping Tonal Hierarchy in the Brain. Neuroscience 2021; 465:187-202. [PMID: 33774126 DOI: 10.1016/j.neuroscience.2021.03.019] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 03/09/2021] [Accepted: 03/16/2021] [Indexed: 11/25/2022]
Abstract
In Western tonal music, pitches are organized hierarchically based on their perceived fit in a specific tonal context. This hierarchy forms scales that are commonly used in Western tonal music. The hierarchical nature of tonal structure is well established behaviourally; however, the neural underpinnings are largely unknown. In this study, EEG data and goodness-of-fit ratings were collected from 34 participants who listened to an arpeggio followed by a probe tone, where the probe tone could be any chromatic scale degree and the context any of the major keys. Goodness-of-fit ratings corresponded to the classic tonal hierarchy. N1, P2 and the Early Right Anterior Negativity (ERAN) were significantly modulated by scale degree. Furthermore, neural marker amplitudes and latencies were significantly correlated, to a similar degree, with both pitch height and goodness-of-fit ratings. This differs from the clearer divide between pitch height correlating with early neural markers (100-200 ms) and tonal hierarchy correlating with late neural markers (200-1000 ms) reported by Sankaran et al. (2020) and Quiroga-Martinez et al. (2019). Finally, individual differences were greater than any main effects detected when pooling participants, and brain-behavior correlations varied widely (r = -0.8 to 0.8).
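The probe-tone logic above — checking whether a listener's goodness-of-fit ratings track the canonical tonal hierarchy — amounts to correlating the twelve chromatic-degree ratings with a reference key profile. A minimal sketch follows; the listener data are simulated, and the Krumhansl–Kessler profile values are quoted from memory and should be verified against the original source.

```python
import numpy as np

# Krumhansl-Kessler major-key profile: average goodness-of-fit ratings for
# the 12 chromatic scale degrees (values quoted from memory; verify first).
kk_major = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def tonal_hierarchy_fit(ratings, profile=kk_major):
    """Pearson correlation between a listener's probe-tone ratings
    and the reference tonal-hierarchy profile."""
    return float(np.corrcoef(ratings, profile)[0, 1])

# Hypothetical listener: a noisy version of the canonical profile.
rng = np.random.default_rng(1)
ratings = kk_major + 0.1 * rng.standard_normal(12)
print(round(tonal_hierarchy_fit(ratings), 2))
```

The per-participant correlation is one simple way to express the individual differences the study reports, which it found ranging from strongly negative to strongly positive.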
Collapse
Affiliation(s)
- Sarah A Sauvé
- Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1C 5S7, Canada.
| | - Alex Cho
- Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1C 5S7, Canada
| | - Benjamin Rich Zendel
- Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1C 5S7, Canada; Aging Research Centre - Newfoundland and Labrador, Memorial University of Newfoundland, Grenfell Campus, Memorial University, Canada
| |
Collapse
|
35
|
Friston KJ, Sajid N, Quiroga-Martinez DR, Parr T, Price CJ, Holmes E. Active listening. Hear Res 2021; 399:107998. [PMID: 32732017 PMCID: PMC7812378 DOI: 10.1016/j.heares.2020.107998] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 05/11/2020] [Accepted: 05/13/2020] [Indexed: 11/27/2022]
Abstract
This paper introduces active listening as a unified framework for synthesising and recognising speech. The notion of active listening inherits from active inference, which considers perception and action under one universal imperative: to maximise the evidence for our (generative) models of the world. First, we describe a generative model of spoken words that simulates (i) how discrete lexical, prosodic, and speaker attributes give rise to continuous acoustic signals; and conversely (ii) how continuous acoustic signals are recognised as words. The 'active' aspect involves (covertly) segmenting spoken sentences and borrows ideas from active vision. It casts speech segmentation as the selection of internal actions, corresponding to the placement of word boundaries. Practically, word boundaries are selected that maximise the evidence for an internal model of how individual words are generated. We establish face validity by simulating speech recognition and showing how the inferred content of a sentence depends on prior beliefs and background noise. Finally, we consider predictive validity by associating neuronal or physiological responses, such as the mismatch negativity and P300, with belief updating under active listening, which is greatest in the absence of accurate prior beliefs about what will be heard next.
Collapse
Affiliation(s)
- Karl J Friston
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK.
| | - Noor Sajid
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK.
| | | | - Thomas Parr
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK.
| | - Cathy J Price
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK.
| | - Emma Holmes
- The Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, WC1N 3AR, UK.
| |
Collapse
|
36
|
Politimou N, Douglass-Kirk P, Pearce M, Stewart L, Franco F. Melodic expectations in 5- and 6-year-old children. J Exp Child Psychol 2020; 203:105020. [PMID: 33271397 DOI: 10.1016/j.jecp.2020.105020] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2020] [Revised: 10/01/2020] [Accepted: 10/02/2020] [Indexed: 11/28/2022]
Abstract
It has been argued that children implicitly acquire the rules relating to the structure of music in their environment using domain-general mechanisms such as statistical learning. Closely linked to statistical learning is the ability to form expectations about future events. Whether children as young as 5 years can make use of such internalized regularities to form expectations about the next note in a melody is still unclear. The possible effect of the home musical environment on the strength of musical expectations has also been under-explored. Using a newly developed melodic priming task that included melodies with either "expected" or "unexpected" endings according to rules of Western music theory, we tested 5- and 6-year-old children (N = 46). The stimuli in this task were constructed using the information dynamics of music (IDyOM) system, a probabilistic model estimating the level of "unexpectedness" of a note given the preceding context. Results showed that responses to expected versus unexpected tones were faster and more accurate, indicating that children have already formed robust melodic expectations at 5 years of age. Aspects of the home musical environment significantly predicted the strength of melodic expectations, suggesting that implicit musical learning may be influenced by the quantity of informal exposure to the surrounding musical environment.
Collapse
Affiliation(s)
- Nina Politimou
- Department of Psychology, Middlesex University London, The Burroughs, Hendon, London NW4 4BT, UK.
| | - Pedro Douglass-Kirk
- Department of Psychology, Goldsmiths University of London, New Cross, London SE14 6NW, UK
| | - Marcus Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Bethnal Green, London E1 4NS, UK; Center for Music in the Brain, Aarhus University, 8000 Aarhus, Denmark
| | - Lauren Stewart
- Department of Psychology, Goldsmiths University of London, New Cross, London SE14 6NW, UK
| | - Fabia Franco
- Department of Psychology, Middlesex University London, The Burroughs, Hendon, London NW4 4BT, UK
| |
Collapse
|
37
|
Wiggins GA. Response to commentaries on "Creativity, information, consciousness: The information dynamics of thinking". Phys Life Rev 2020; 34-35:57-61. [PMID: 33229299 DOI: 10.1016/j.plrev.2020.07.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 07/29/2020] [Indexed: 11/16/2022]
Affiliation(s)
- Geraint A Wiggins
- Vrije Universiteit Brussel, Belgium; Queen Mary University of London, UK.
| |
Collapse
|
38
|
Harrison PMC, Bianco R, Chait M, Pearce MT. PPM-Decay: A computational model of auditory prediction with memory decay. PLoS Comput Biol 2020; 16:e1008304. [PMID: 33147209 PMCID: PMC7668605 DOI: 10.1371/journal.pcbi.1008304] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Revised: 11/16/2020] [Accepted: 09/04/2020] [Indexed: 12/19/2022] Open
Abstract
Statistical learning and probabilistic prediction are fundamental processes in auditory cognition. A prominent computational model of these processes is Prediction by Partial Matching (PPM), a variable-order Markov model that learns by internalizing n-grams from training sequences. However, PPM has limitations as a cognitive model: in particular, it has a perfect memory that weights all historic observations equally, which is inconsistent with memory capacity constraints and recency effects observed in human cognition. We address these limitations with PPM-Decay, a new variant of PPM that introduces a customizable memory decay kernel. In three studies (one with artificially generated sequences, one with chord sequences from Western music, and one with new behavioral data from an auditory pattern detection experiment), we show how this decay kernel improves the model's predictive performance for sequences whose underlying statistics change over time, and enables the model to capture effects of memory constraints on auditory pattern detection. The resulting model is available in our new open-source R package, ppm (https://github.com/pmcharrison/ppm).
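The core idea in this abstract (a variable-order n-gram predictor whose counts decay over time) can be illustrated with a minimal Python sketch. This is only an illustration of the general mechanism: the class name, the exponential kernel, and all parameters below are assumptions for exposition, not the interface of the authors' ppm R package.

```python
from collections import defaultdict
import math

class DecayingNgramModel:
    """Sketch of a variable-order n-gram predictor with memory decay.

    Illustrative only: PPM-Decay uses a customizable decay kernel and
    PPM-style escape probabilities; here we assume a simple exponential
    decay and add-one smoothing for brevity.
    """

    def __init__(self, order=2, half_life=10.0):
        self.order = order
        self.decay = 0.5 ** (1.0 / half_life)  # per-event decay factor
        self.counts = defaultdict(lambda: defaultdict(float))

    def _contexts(self, history):
        # Suffixes of the history, longest (up to model order) first.
        for n in range(min(self.order, len(history)), -1, -1):
            yield tuple(history[len(history) - n:])

    def update(self, history, symbol):
        # Decay all existing counts, then record the new observation
        # under every context length (this is the "variable order" part).
        for ctx_counts in self.counts.values():
            for s in ctx_counts:
                ctx_counts[s] *= self.decay
        for ctx in self._contexts(history):
            self.counts[ctx][symbol] += 1.0

    def prob(self, history, symbol, alphabet):
        # Back off from the longest matching context; add-one smoothing.
        for ctx in self._contexts(history):
            ctx_counts = self.counts.get(ctx)
            if ctx_counts:
                total = sum(ctx_counts.values())
                return (ctx_counts.get(symbol, 0.0) + 1.0) / (total + len(alphabet))
        return 1.0 / len(alphabet)

def information_content(model, history, symbol, alphabet):
    # Surprise in bits: -log2 p(symbol | history).
    return -math.log2(model.prob(history, symbol, alphabet))
```

Because every update multiplies old counts by the decay factor, recent observations dominate the predictive distribution, which is what lets such a model track sequences whose statistics change over time.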
Collapse
Affiliation(s)
- Peter M. C. Harrison
- Computational Auditory Perception Research Group, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Cognitive Science Research Group, Queen Mary University of London, London, UK
| | - Roberta Bianco
- UCL Ear Institute, University College London, London, UK
| | - Maria Chait
- UCL Ear Institute, University College London, London, UK
| | - Marcus T. Pearce
- Cognitive Science Research Group, Queen Mary University of London, London, UK
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| |
Collapse
|
39
|
Andermann M, Günther M, Patterson RD, Rupp A. Early cortical processing of pitch height and the role of adaptation and musicality. Neuroimage 2020; 225:117501. [PMID: 33169697 DOI: 10.1016/j.neuroimage.2020.117501] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Revised: 10/19/2020] [Accepted: 10/21/2020] [Indexed: 02/06/2023] Open
Abstract
Pitch is an important perceptual feature; however, it is poorly understood how its cortical correlates are shaped by absolute vs relative fundamental frequency (f0), and by neural adaptation. In this study, we assessed transient and sustained auditory evoked fields (AEFs) at the onset, progression, and offset of short pitch height sequences, taking into account the listener's musicality. We show that neuromagnetic activity reflects absolute f0 at pitch onset and offset, and relative f0 at transitions within pitch sequences; further, sequences with fixed f0 lead to larger response suppression than sequences with variable f0 contour, and to enhanced offset activity. Musical listeners exhibit stronger f0-related AEFs and larger differences between their responses to fixed vs variable sequences, both within sequences and at pitch offset. The results resemble prominent psychoacoustic phenomena in the perception of pitch contours; moreover, they suggest a strong influence of adaptive mechanisms on cortical pitch processing which, in turn, might be modulated by a listener's musical expertise.
Collapse
Affiliation(s)
- Martin Andermann
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany.
| | - Melanie Günther
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
| | - Roy D Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, Downing Street, Cambridge, CB2 3EG, United Kingdom
| | - André Rupp
- Section of Biomagnetism, Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
| |
Collapse
|
40
|
Civai C, Teodorini R, Carrus E. Does unfairness sound wrong? A cross-domain investigation of expectations in music and social decision-making. ROYAL SOCIETY OPEN SCIENCE 2020; 7:190048. [PMID: 33047004 PMCID: PMC7540783 DOI: 10.1098/rsos.190048] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2019] [Accepted: 08/24/2020] [Indexed: 06/11/2023]
Abstract
This study investigated the existence of a shared psychological mechanism for the processing of expectations across domains. The literature on music and language shows that violations of expectations produce similar neural responses, and that violating an expectation in one domain may influence the processing of stimuli in the other domain. Like music and language, our social world is governed by a system of inherent rules or norms, such as fairness. The study therefore aimed to draw a parallel to the social domain and investigate whether a manipulation of melodic expectation can influence the processing of higher-level expectations of fairness. Specifically, we aimed to investigate whether the presence of an unexpected melody enhances or reduces participants' sensitivity to violations of fairness and the behavioural reactions associated with these. We embedded a manipulation of melodic expectation within a social decision-making paradigm, whereby musically expected and unexpected stimuli were simultaneously presented with fair and unfair divisions in a third-party altruistic punishment game. Behavioural and electroencephalographic responses were recorded. Results from the pre-planned analyses show that participants are less likely to punish when melodies are more unexpected and that violations of fairness norms elicit medial frontal negativity (MFN)-like effects. Because no significant interactions between melodic expectancy and fairness of the division were found, the results fail to provide evidence of a shared mechanism for the processing of expectations. Exploratory analyses show two additional effects: (i) unfair divisions elicit an early attentional component (P2), probably associated with stimulus saliency, and (ii) mid-value divisions elicit a late MFN-like component, probably reflecting stimulus ambiguity.
Future studies could build on these results to further investigate the effect of the cross-domain influence of music on the processing of social stimuli on these early and late components.
Collapse
Affiliation(s)
| | | | - Elisa Carrus
- Division of Psychology, School of Applied Sciences, London South Bank University, London, UK
| |
Collapse
|
41
|
Zioga I, Harrison PMC, Pearce MT, Bhattacharya J, Luft CDB. Auditory but Not Audiovisual Cues Lead to Higher Neural Sensitivity to the Statistical Regularities of an Unfamiliar Musical Style. J Cogn Neurosci 2020; 32:2241-2259. [PMID: 32762519 DOI: 10.1162/jocn_a_01614] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.
Collapse
|
42
|
Bianco R, Harrison PMC, Hu M, Bolger C, Picken S, Pearce MT, Chait M. Long-term implicit memory for sequential auditory patterns in humans. eLife 2020; 9:e56073. [PMID: 32420868 PMCID: PMC7338054 DOI: 10.7554/elife.56073] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Accepted: 05/18/2020] [Indexed: 11/17/2022] Open
Abstract
Memory, on multiple timescales, is critical to our ability to discover the structure of our surroundings, and efficiently interact with the environment. We combined behavioural manipulation and modelling to investigate the dynamics of memory formation for rarely reoccurring acoustic patterns. In a series of experiments, participants detected the emergence of regularly repeating patterns within rapid tone-pip sequences. Unbeknownst to them, a few patterns reoccurred every ~3 min. All sequences consisted of the same 20 frequencies and were distinguishable only by the order of tone-pips. Despite this, reoccurring patterns were associated with a rapidly growing detection-time advantage over novel patterns. This effect was implicit, robust to interference, and persisted for 7 weeks. The results implicate an interplay between short (a few seconds) and long-term (over many minutes) integration in memory formation and demonstrate the remarkable sensitivity of the human auditory system to sporadically reoccurring structure within the acoustic environment.
Collapse
Affiliation(s)
- Roberta Bianco
- UCL Ear Institute, University College London, London, United Kingdom
| | - Peter MC Harrison
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
| | - Mingyue Hu
- UCL Ear Institute, University College London, London, United Kingdom
| | - Cora Bolger
- UCL Ear Institute, University College London, London, United Kingdom
| | - Samantha Picken
- UCL Ear Institute, University College London, London, United Kingdom
| | - Marcus T Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| | - Maria Chait
- UCL Ear Institute, University College London, London, United Kingdom
| |
Collapse
|
43
|
Quiroga-Martinez DR, Hansen NC, Højlund A, Pearce M, Brattico E, Vuust P. Decomposing neural responses to melodic surprise in musicians and non-musicians: Evidence for a hierarchy of predictions in the auditory system. Neuroimage 2020; 215:116816. [PMID: 32276064 DOI: 10.1016/j.neuroimage.2020.116816] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 02/18/2020] [Accepted: 03/24/2020] [Indexed: 01/20/2023] Open
Abstract
Neural responses to auditory surprise are typically studied with highly unexpected, disruptive sounds. Consequently, little is known about auditory prediction in everyday contexts that are characterized by fine-grained, non-disruptive fluctuations of auditory surprise. To address this issue, we used IDyOM, a computational model of auditory expectation, to obtain continuous surprise estimates for a set of newly composed melodies. Our main goal was to assess whether the neural correlates of non-disruptive surprising sounds in a musical context are affected by musical expertise. Using magnetoencephalography (MEG), auditory responses were recorded from musicians and non-musicians while they listened to the melodies. Consistent with a previous study, the amplitude of the N1m component increased with higher levels of computationally estimated surprise. This effect, however, was not different between the two groups. Further analyses offered an explanation for this finding: Pitch interval size itself, rather than probabilistic prediction, was responsible for the modulation of the N1m, thus pointing to low-level sensory adaptation as the underlying mechanism. In turn, the formation of auditory regularities and proper probabilistic prediction were reflected in later components: The mismatch negativity (MMNm) and the P3am, respectively. Overall, our findings reveal a hierarchy of expectations in the auditory system and highlight the need to properly account for sensory adaptation in research addressing statistical learning.
Collapse
Affiliation(s)
- D R Quiroga-Martinez
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark.
| | - N C Hansen
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia; Aarhus Institute of Advanced Studies (AIAS), Aarhus University, Denmark
| | - A Højlund
- Center of Functionally Integrative Neuroscience, Aarhus University, Denmark
| | - M Pearce
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark; School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
| | - E Brattico
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark; Department of Educational Sciences, Psychology and Communication, University of Bari Aldo Moro, Italy
| | - P Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
| |
Collapse
|
44
|
Di Liberto GM, Pelofi C, Bianco R, Patel P, Mehta AD, Herrero JL, de Cheveigné A, Shamma S, Mesgarani N. Cortical encoding of melodic expectations in human temporal cortex. eLife 2020; 9:e51784. [PMID: 32122465 PMCID: PMC7053998 DOI: 10.7554/elife.51784] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Accepted: 01/20/2020] [Indexed: 01/14/2023] Open
Abstract
Human engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions, demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.
Collapse
Affiliation(s)
- Giovanni M Di Liberto
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
| | - Claire Pelofi
- Department of Psychology, New York University, New York, United States
- Institut de Neurosciences des Systèmes, UMR S 1106, INSERM, Aix Marseille Université, Marseille, France
| | | | - Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, United States
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| | - Ashesh D Mehta
- Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States
- Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
| | - Jose L Herrero
- Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States
- Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
| | - Alain de Cheveigné
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- UCL Ear Institute, London, United Kingdom
| | - Shihab Shamma
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, United States
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, United States
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
| |
Collapse
|
45
|
A Set of 200 Musical Stimuli Varying in Balance, Contour, Symmetry, and Complexity: Behavioral and Computational Assessments. Behav Res Methods 2020; 52:1491-1509. [DOI: 10.3758/s13428-019-01329-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
46
|
Bianco R, Ptasczynski LE, Omigie D. Pupil responses to pitch deviants reflect predictability of melodic sequences. Brain Cogn 2020; 138:103621. [DOI: 10.1016/j.bandc.2019.103621] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2019] [Revised: 10/08/2019] [Accepted: 10/15/2019] [Indexed: 10/25/2022]
|
47
|
Zioga I, Harrison PM, Pearce MT, Bhattacharya J, Di Bernardi Luft C. From learning to creativity: Identifying the behavioural and neural correlates of learning to predict human judgements of musical creativity. Neuroimage 2020; 206:116311. [DOI: 10.1016/j.neuroimage.2019.116311] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Revised: 10/18/2019] [Accepted: 10/22/2019] [Indexed: 10/25/2022] Open
|
48
|
Quiroga‐Martinez DR, Hansen NC, Højlund A, Pearce M, Brattico E, Vuust P. Musical prediction error responses similarly reduced by predictive uncertainty in musicians and non‐musicians. Eur J Neurosci 2020; 51:2250-2269. [DOI: 10.1111/ejn.14667] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 11/26/2019] [Accepted: 12/23/2019] [Indexed: 12/14/2022]
Affiliation(s)
| | - Niels C. Hansen
- The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, NSW, Australia
| | - Andreas Højlund
- Center for Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
| | - Marcus Pearce
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Aarhus, Denmark
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Elvira Brattico
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Aarhus, Denmark
| | - Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Aarhus, Denmark
| |
Collapse
|
49
|
Bianco R, Gold BP, Johnson AP, Penhune VB. Music predictability and liking enhance pupil dilation and promote motor learning in non-musicians. Sci Rep 2019; 9:17060. [PMID: 31745159 PMCID: PMC6863863 DOI: 10.1038/s41598-019-53510-w] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 10/21/2019] [Indexed: 01/28/2023] Open
Abstract
Humans can anticipate music and derive pleasure from it. Expectations facilitate the learning of movements associated with anticipated events, and they are also linked with reward, which may further facilitate learning of the anticipated rewarding events. The present study investigates the synergistic effects of predictability and hedonic responses to music on arousal and motor learning in a naïve population. Novel melodies were manipulated in their overall predictability (predictable/unpredictable), as objectively defined by a model of music expectation, and ranked as high-, medium- or low-liked based on participants' self-reports collected during an initial listening session. During this session, we also recorded ocular pupil size as an implicit measure of listeners' arousal. During the following motor task, participants learned to play target notes of the melodies on a keyboard (notes were of similar motor and musical complexity across melodies). Pupil dilation was greater for liked melodies, particularly when predictable. Motor performance was facilitated in predictable rather than unpredictable melodies, but liked melodies were learned even in the unpredictable condition. Low-liked melodies also showed learning, but mostly in participants with higher scores of task perceived competence. Taken together, these results highlight the effects of stimulus predictability on learning, which can, however, be overshadowed by the effects of stimulus liking or task-related intrinsic motivation.
Collapse
Affiliation(s)
- R Bianco
- Department of Psychology, Concordia University, Montreal, QC, Canada.
- Ear Institute, University College London, London, UK.
| | - B P Gold
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
| | - A P Johnson
- Department of Psychology, Concordia University, Montreal, QC, Canada
| | - V B Penhune
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
| |
Collapse
|
50
|
Predictability and Uncertainty in the Pleasure of Music: A Reward for Learning? J Neurosci 2019; 39:9397-9409. [PMID: 31636112 DOI: 10.1523/jneurosci.0428-19.2019] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Revised: 09/30/2019] [Accepted: 10/01/2019] [Indexed: 12/23/2022] Open
Abstract
Music ranks among the greatest human pleasures. It consistently engages the reward system, and converging evidence implies it exploits predictions to do so. Both prediction confirmations and errors are essential for understanding one's environment, and music offers many of each as it manipulates interacting patterns across multiple timescales. Learning models suggest that a balance of these outcomes (i.e., intermediate complexity) optimizes the reduction of uncertainty to rewarding and pleasurable effect. Yet evidence of a similar pattern in music is mixed, hampered by arbitrary measures of complexity. In the present studies, we applied a well-validated information-theoretic model of auditory expectation to systematically measure two key aspects of musical complexity: predictability (operationalized as information content [IC]), and uncertainty (entropy). In Study 1, we evaluated how these properties affect musical preferences in 43 male and female participants; in Study 2, we replicated Study 1 in an independent sample of 27 people and assessed the contribution of veridical predictability by presenting the same stimuli seven times. Both studies revealed significant quadratic effects of IC and entropy on liking that outperformed linear effects, indicating reliable preferences for music of intermediate complexity. An interaction between IC and entropy further suggested preferences for more predictability during more uncertain contexts, which would facilitate uncertainty reduction. Repeating stimuli decreased liking ratings but did not disrupt the preference for intermediate complexity. Together, these findings support long-hypothesized optimal zones of predictability and uncertainty in musical pleasure with formal modeling, relating the pleasure of music listening to the intrinsic reward of learning. SIGNIFICANCE STATEMENT: Abstract pleasures, such as music, claim much of our time, energy, and money despite lacking any clear adaptive benefits like food or shelter.
Yet as music manipulates patterns of melody, rhythm, and more, it proficiently exploits our expectations. Given the importance of anticipating and adapting to our ever-changing environments, making and evaluating uncertain predictions can have strong emotional effects. Accordingly, we present evidence that listeners consistently prefer music of intermediate predictive complexity, and that preferences shift toward expected musical outcomes in more uncertain contexts. These results are consistent with theories that emphasize the intrinsic reward of learning, both by updating inaccurate predictions and validating accurate ones, which is optimal in environments that present manageable predictive challenges (i.e., reducible uncertainty).
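The two complexity measures this study relies on, information content (IC) and entropy, are standard information-theoretic quantities and can be illustrated with a short sketch. The predictive distributions below are hypothetical toy values chosen for exposition, not output from the authors' expectation model.

```python
import math

def information_content(dist, outcome):
    """Surprise of the observed event in bits: -log2 p(outcome)."""
    return -math.log2(dist[outcome])

def entropy(dist):
    """Uncertainty of the prediction in bits: -sum_x p(x) log2 p(x)."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical predictive distributions over the next note:
confident = {'C': 0.85, 'D': 0.05, 'E': 0.05, 'G': 0.05}  # low entropy
uncertain = {'C': 0.25, 'D': 0.25, 'E': 0.25, 'G': 0.25}  # high entropy
```

With four equiprobable continuations, entropy is exactly 2 bits and every outcome carries 2 bits of IC; under the confident distribution, the expected note 'C' carries little surprise while 'D' carries over 4 bits. The study's quadratic effects mean liking peaks at intermediate values of these quantities rather than rising or falling monotonically.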
Collapse
|