1.
Wu Q, Sun L, Ding N, Yang Y. Musical tension is affected by metrical structure dynamically and hierarchically. Cogn Neurodyn 2024;18:1955-1976. PMID: 39104669; PMCID: PMC11297889; DOI: 10.1007/s11571-023-10058-w.
Abstract
As the basis of musical emotions, dynamic tension is experienced by listeners as music unfolds over time. The effects of harmonic and melodic structures on musical tension have been widely investigated; however, the potential roles of metrical structures in tension perception remain largely unexplored. This experiment examined how different metrical structures affect the experience of tension and explored the underlying neural activities. The electroencephalogram (EEG) was recorded while participants listened to musical meter sequences and simultaneously rated their subjective tension. On the large time scale of whole meter sequences, metrical structures with different periods of strong beats elicited different overall tension and different low-frequency (1-4 Hz) steady-state evoked potentials, with higher overall tension associated with metrical structures having shorter intervals between strong beats. On the small time scale of measures, dynamic tension fluctuations within measures were associated with periodic modulations of high-frequency (10-25 Hz) neural activities. Comparisons between the same beats within measures and across different meters, on both time scales, verified the contextual effects of meter on beat-induced tension. Our findings suggest that overall tension is determined by the temporal intervals between strong beats and that the dynamic experience of tension may arise from the cognitive processing of hierarchical temporal expectation and attention, which we discuss under the theoretical frameworks of metrical hierarchy, musical expectation, and dynamic attention.
Affiliation(s)
- Qiong Wu
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lijun Sun
  - College of Arts, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Nai Ding
  - Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Yufang Yang
  - CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, No. 16 Lincui Road, Chaoyang District, Beijing 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
2.
Heng JG, Zhang J, Bonetti L, Lim WPH, Vuust P, Agres K, Chen SHA. Understanding music and aging through the lens of Bayesian inference. Neurosci Biobehav Rev 2024;163:105768. PMID: 38908730; DOI: 10.1016/j.neubiorev.2024.105768.
Abstract
Bayesian inference has recently gained momentum in explaining music perception and aging. A fundamental mechanism underlying Bayesian inference is the notion of prediction. This framework could explain how predictions pertaining to musical (melodic, rhythmic, harmonic) structures engender action, emotion, and learning, expanding related concepts of music research such as musical expectancies, groove, pleasure, and tension. Moreover, a Bayesian perspective on music perception may offer new insights into the beneficial effects of music in aging. Aging could be framed as an optimization process of Bayesian inference: as predictive inferences are refined over time, reliance on consolidated priors increases, while the updating of prior models through Bayesian inference attenuates. This may affect the ability of older adults to estimate uncertainties in their environment, limiting their cognitive and behavioral repertoire. With Bayesian inference as an overarching framework, this review synthesizes the literature on predictive inferences in music and aging, and details how music could be a promising tool in preventive and rehabilitative interventions for older adults.
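The review's central claim - that consolidated priors attenuate belief updating - can be illustrated with a minimal conjugate-Gaussian sketch. This is not from the paper; the numbers and the `update` function are illustrative assumptions only.

```python
# Minimal conjugate-Gaussian sketch of the review's framing (illustrative only;
# all numbers are arbitrary, not taken from the paper).
#
# A belief about an environmental feature is a Gaussian N(mu, 1/precision).
# Observing n samples with mean x_mean and likelihood precision lam_obs gives:
#   prec_post = prec_prior + n * lam_obs
#   mu_post   = (prec_prior * mu_prior + n * lam_obs * x_mean) / prec_post

def update(mu_prior, prec_prior, x_mean, lam_obs=1.0, n=1):
    """One conjugate-Gaussian belief update; returns (mu_post, prec_post)."""
    prec_post = prec_prior + n * lam_obs
    mu_post = (prec_prior * mu_prior + n * lam_obs * x_mean) / prec_post
    return mu_post, prec_post

# The same surprising evidence (x = 1.0) against a weak prior and a highly
# consolidated prior, both centred on 0: the consolidated prior barely moves.
evidence = 1.0
mu_weak, _ = update(mu_prior=0.0, prec_prior=1.0, x_mean=evidence)
mu_consolidated, _ = update(mu_prior=0.0, prec_prior=10.0, x_mean=evidence)

print(f"belief shift, weak prior:         {mu_weak:.3f}")          # 0.500
print(f"belief shift, consolidated prior: {mu_consolidated:.3f}")  # 0.091
```

The shrinking shift for the high-precision prior mirrors the review's account of why older adults may rely more on consolidated models and update less from new sensory evidence.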
Affiliation(s)
- Jiamin Gladys Heng
  - School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Jiayi Zhang
  - Interdisciplinary Graduate Program, Nanyang Technological University, Singapore
  - School of Social Sciences, Nanyang Technological University, Singapore
  - Centre for Research and Development in Learning, Nanyang Technological University, Singapore
- Leonardo Bonetti
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
  - Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom
  - Department of Psychiatry, University of Oxford, United Kingdom
  - Department of Psychology, University of Bologna, Italy
- Peter Vuust
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Kat Agres
  - Centre for Music and Health, National University of Singapore, Singapore
  - Yong Siew Toh Conservatory of Music, National University of Singapore, Singapore
- Shen-Hsing Annabel Chen
  - School of Social Sciences, Nanyang Technological University, Singapore
  - Centre for Research and Development in Learning, Nanyang Technological University, Singapore
  - Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
  - National Institute of Education, Nanyang Technological University, Singapore
3.
Teng X, Larrouy-Maestri P, Poeppel D. Segmenting and predicting musical phrase structure exploits neural gain modulation and phase precession. J Neurosci 2024;44:e1331232024. PMID: 38926087; PMCID: PMC11270514; DOI: 10.1523/jneurosci.1331-23.2024.
Abstract
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
Affiliation(s)
- Xiangbin Teng
  - Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Pauline Larrouy-Maestri
  - Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt 60322, Germany
  - Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- David Poeppel
  - Center for Language, Music, and Emotion (CLaME), New York, New York 10003
  - Department of Psychology, New York University, New York, New York 10003
  - Ernst Struengmann Institute for Neuroscience, Frankfurt 60528, Germany
  - Music and Audio Research Laboratory (MARL), New York, New York 11201
4.
Herff SA, Bonetti L, Cecchetti G, Vuust P, Kringelbach ML, Rohrmeier MA. Hierarchical syntax model of music predicts theta power during music listening. Neuropsychologia 2024;199:108905. PMID: 38740179; DOI: 10.1016/j.neuropsychologia.2024.108905.
Abstract
Linguistic research has shown that the depth of syntactic embedding is reflected in brain theta power. Here, we test whether this also extends to non-linguistic stimuli, specifically music. We used a hierarchical model of musical syntax to continuously quantify two types of expert-annotated harmonic dependencies throughout a piece of Western classical music: prolongation and preparation. Prolongation can roughly be understood as a musical analogue of linguistic coordination between constituents that share the same function (e.g., 'pizza' and 'pasta' in 'I ate pizza and pasta'). Preparation refers to the dependency between two harmonies whereby the first implies a resolution towards the second (e.g., dominant towards tonic; similar to how an adjective implies the presence of a noun in 'I like spicy … '). Source-reconstructed MEG data from sixty-five participants listening to the musical piece were then analysed. We used Bayesian mixed-effects models to predict the theta envelope in the brain, using the number of open prolongation and preparation dependencies as predictors while controlling for the audio envelope. We observed that prolongation and preparation each carry independent and distinguishable predictive value for theta-band fluctuations in key linguistic areas such as the Angular, Superior Temporal, and Heschl's Gyri, or their right-lateralised homologues, with preparation showing additional predictive value for areas associated with the reward system and prediction. Musical expertise further mediated these effects in language-related brain areas. These results show that the predictions of precisely formalised music-theoretical models are reflected in the brain activity of listeners, which furthers our understanding of the perception and cognition of musical structure.
Affiliation(s)
- Steffen A Herff
  - Sydney Conservatorium of Music, University of Sydney, Sydney, Australia
  - The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
  - Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Leonardo Bonetti
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
  - Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
  - Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Gabriele Cecchetti
  - The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
  - Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Peter Vuust
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
- Morten L Kringelbach
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Denmark
  - Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
  - Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Martin A Rohrmeier
  - Digital and Cognitive Musicology Lab, College of Humanities, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
5.
Zhao C, Ong JH, Veic A, Patel AD, Jiang C, Fogel AR, Wang L, Hou Q, Das D, Crasto C, Chakrabarti B, Williams TI, Loutrari A, Liu F. Predictive processing of music and language in autism: Evidence from Mandarin and English speakers. Autism Res 2024;17:1230-1257. PMID: 38651566; DOI: 10.1002/aur.3133.
Abstract
Atypical predictive processing has been associated with autism across multiple domains, based mainly on artificial antecedents and consequents. As structured sequences in which expectations derive from implicit learning of combinatorial principles, language and music provide naturalistic stimuli for investigating predictive processing. In this study, we matched melodic and sentence stimuli in cloze probabilities and examined musical and linguistic prediction in Mandarin-speaking (Experiment 1) and English-speaking (Experiment 2) autistic and non-autistic individuals using both production and perception tasks. In the production tasks, participants listened to unfinished melodies/sentences and then produced the final notes/words to complete these items. In the perception tasks, participants rated the expectedness of completed melodies/sentences whose endings were the most frequent notes/words in the norms. Experiment 1 showed intact musical prediction but atypical linguistic prediction in autism in a Mandarin sample in which the groups were imbalanced in musical training experience and receptive vocabulary skills; this group difference disappeared in the more closely matched English-speaking sample of Experiment 2. These findings highlight the importance of taking an individual-differences approach when investigating predictive processing in music and language in autism, as difficulties with prediction in autism may not reflect a generalized problem with prediction in any type of complex sequence processing.
Affiliation(s)
- Chen Zhao
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Jia Hoong Ong
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Anamarija Veic
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Aniruddh D Patel
  - Department of Psychology, Tufts University, Medford, Massachusetts, USA
  - Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Canada
- Cunmei Jiang
  - Music College, Shanghai Normal University, Shanghai, China
- Allison R Fogel
  - Department of Psychology, Tufts University, Medford, Massachusetts, USA
- Li Wang
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Qingqi Hou
  - Department of Music and Dance, Nanjing Normal University of Special Education, Nanjing, China
- Dipsikha Das
  - School of Psychology, Keele University, Staffordshire, UK
- Cara Crasto
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Bhismadev Chakrabarti
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Tim I Williams
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Ariadne Loutrari
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Fang Liu
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
6.
Ishida K, Ishida T, Nittono H. Decoding predicted musical notes from omitted stimulus potentials. Sci Rep 2024;14:11164. PMID: 38750185; PMCID: PMC11096333; DOI: 10.1038/s41598-024-61989-1.
Abstract
Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by violations of musical expectations. While several studies have reported that stimulus predictability can modulate ERP amplitudes, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs), which avoid contaminating top-down predictive processing with bottom-up sensory processing. Decoding of the omitted content was attempted using a support vector machine (SVM), a machine-learning classifier. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar melody condition than in the unfamiliar melody condition. The decoding accuracy for the four omitted notes was significantly higher in the familiar melody condition than in the unfamiliar melody condition. These results suggest that OSPs contain discriminable predictive information and that the higher the predictability, the more specific the representation of the expected note that is generated.
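The decoding logic of this study can be sketched with synthetic data. This is not the authors' code or data: it substitutes a nearest-centroid classifier for their SVM, invents ERP-like feature vectors, and models "familiar" versus "unfamiliar" conditions simply as low versus high trial noise.

```python
import random

# Toy illustration (hypothetical data): decode which of four "omitted notes"
# a synthetic ERP-like vector belongs to, using a nearest-centroid classifier
# as a simple stand-in for the paper's SVM, with leave-one-out cross-validation.
random.seed(0)

NOTES = ["E", "F", "A", "C"]
DIM, TRIALS_PER_NOTE = 16, 10

def simulate_trials(noise_sd):
    """Each note has a distinct template; noisier trials mimic weaker prediction."""
    trials = []
    for k, note in enumerate(NOTES):
        template = [2.0 if i % 4 == k else 0.0 for i in range(DIM)]
        for _ in range(TRIALS_PER_NOTE):
            trials.append((note, [x + random.gauss(0, noise_sd) for x in template]))
    return trials

def decode_accuracy(trials):
    """Leave-one-out: predict each trial's note from the other trials' centroids."""
    correct = 0
    for i, (true_note, vec) in enumerate(trials):
        rest = [t for j, t in enumerate(trials) if j != i]
        centroids = {}
        for note in NOTES:
            vs = [v for n, v in rest if n == note]
            centroids[note] = [sum(col) / len(vs) for col in zip(*vs)]
        pred = min(centroids,
                   key=lambda n: sum((a - b) ** 2 for a, b in zip(vec, centroids[n])))
        correct += pred == true_note
    return correct / len(trials)

acc_familiar = decode_accuracy(simulate_trials(noise_sd=0.3))    # strong prediction
acc_unfamiliar = decode_accuracy(simulate_trials(noise_sd=3.0))  # weak prediction
print(f"familiar-like: {acc_familiar:.2f}, unfamiliar-like: {acc_unfamiliar:.2f}")
```

With cleaner (more predictable) trials the four note classes separate well, reproducing in miniature the paper's pattern of higher decoding accuracy for familiar melodies.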
Affiliation(s)
- Kai Ishida
  - Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
  - Japan Society for the Promotion of Science, Tokyo, Japan
- Tomomi Ishida
  - Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Hiroshi Nittono
  - Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
7.
Li Q, Liu G, Zhang Y, Wu J, Huang R. Neural correlates of musical familiarity: a functional magnetic resonance study. Cereb Cortex 2024;34:bhae177. PMID: 38679480; DOI: 10.1093/cercor/bhae177.
Abstract
Existing neuroimaging studies on neural correlates of musical familiarity often employ a familiar vs. unfamiliar contrast analysis. This singular analytical approach reveals associations between explicit musical memory and musical familiarity. However, is the neural activity associated with musical familiarity solely related to explicit musical memory, or could it also be related to implicit musical memory? To address this, we presented 130 song excerpts of varying familiarity to 21 participants. While acquiring their brain activity using functional magnetic resonance imaging (fMRI), we asked the participants to rate the familiarity of each song on a five-point scale. To comprehensively analyze the neural correlates of musical familiarity, we examined it from four perspectives: the intensity of local neural activity, patterns of local neural activity, global neural activity patterns, and functional connectivity. The results from these four approaches were consistent and revealed that musical familiarity is related to the activity of both explicit and implicit musical memory networks. Our findings suggest that: (1) musical familiarity is also associated with implicit musical memory, and (2) there is a cooperative and competitive interaction between the two types of musical memory in the perception of music.
Affiliation(s)
- Qiang Li
  - Department of Applied Psychology, College of Education Science, Guizhou Education University, No. 115 Gaoxin Street, Wudang, Guiyang 550018, China
- Guangyuan Liu
  - Department of Electronic and Information Engineering, College of Electronic and Information Engineering, Southwest University, No. 2 Tian Sheng Road, Beibei, Chongqing 400715, China
- Yuan Zhang
  - Department of Applied Psychology, College of Education Science, Guizhou Education University, No. 115 Gaoxin Street, Wudang, Guiyang 550018, China
- Junhua Wu
  - Department of Applied Psychology, College of Education Science, Guizhou Education University, No. 115 Gaoxin Street, Wudang, Guiyang 550018, China
- Rong Huang
  - Department of Applied Psychology, College of Education Science, Guizhou Education University, No. 115 Gaoxin Street, Wudang, Guiyang 550018, China
8.
Ren Y, Brown TI. Beyond the ears: A review exploring the interconnected brain behind the hierarchical memory of music. Psychon Bull Rev 2024;31:507-530. PMID: 37723336; DOI: 10.3758/s13423-023-02376-1.
Abstract
Music is a ubiquitous element of daily life. Understanding how music memory is represented and expressed in the brain is key to understanding how music can influence human daily cognitive tasks. The current music-memory literature is built on data from very heterogeneous tasks for measuring memory, and the neural correlates appear to differ depending on which form of memory function is targeted. Such heterogeneity leaves many exceptions and conflicts in the data underexplained (e.g., hippocampal involvement in music memory is debated). This review provides an overview of existing neuroimaging results from music-memory studies and concludes that although music is a special class of event in our lives, the memory systems behind it do in fact share neural mechanisms with memories from other modalities. We suggest that dividing music memory into different levels of a hierarchy (structural level and semantic level) helps in understanding the overlap and divergence in the neural networks involved. This is grounded in the fact that memorizing a piece of music recruits brain clusters that separately support functions including, but not limited to, syntax storage and retrieval, temporal processing, prediction-versus-reality comparison, stimulus feature integration, personal memory associations, and emotion perception. The cross-talk between frontoparietal music structural processing centers and the subcortical emotion and context encoding areas explains why music is not only so easily memorable but can also serve as strong contextual information for encoding and retrieving nonmusic information in our lives.
Affiliation(s)
- Yiren Ren
  - School of Psychology, College of Science, Georgia Institute of Technology, Atlanta, GA, USA
- Thackery I Brown
  - School of Psychology, College of Science, Georgia Institute of Technology, Atlanta, GA, USA
9.
Fiorin G, Delfitto D. Syncopation as structure bootstrapping: the role of asymmetry in rhythm and language. Front Psychol 2024;15:1304485. PMID: 38440243; PMCID: PMC10911290; DOI: 10.3389/fpsyg.2024.1304485.
Abstract
Syncopation, the occurrence of a musical event on a metrically weak position preceding a rest on a metrically strong position, represents an important challenge in the study of the mapping between rhythm and meter. In this contribution, we present the hypothesis that syncopation is an effective strategy for eliciting the bootstrapping of a multi-layered, hierarchically organized metric structure from a linear rhythmic surface. The hypothesis is inspired by a parallel with the problem of linearization in natural language syntax: the problem of how hierarchically organized phrase-structure markers are mapped onto linear sequences of words. The hypothesis has important consequences for the role of meter in music perception and cognition and, in particular, for its role in the relationship between rhythm and bodily entrainment.
Affiliation(s)
- Gaetano Fiorin
  - Department of Humanities, University of Trieste, Trieste, Italy
- Denis Delfitto
  - Department of Cultures and Civilizations, University of Verona, Verona, Italy
10.
Ishida K, Nittono H. Relationship between schematic and dynamic expectations of melodic patterns in music perception. Int J Psychophysiol 2024;196:112292. PMID: 38154607; DOI: 10.1016/j.ijpsycho.2023.112292.
Abstract
Prediction is fundamental to music listening. Two types of expectations have been proposed: schematic expectations, which arise from knowledge of tonal regularities (e.g., harmony and key) acquired through long-term plasticity and learning, and dynamic expectations, which arise from short-term regularity representations (e.g., rhythmic patterns and melodic contours) extracted from the ongoing musical context. Although both expectations are indispensable in music listening, how they interact in music prediction remains unclear. The present study examined the relationship between schematic and dynamic expectations in music processing using event-related potentials (ERPs). At the final note of the melodies, the schematic expectation was violated by presenting a music-syntactically irregular note (i.e., an out-of-key note), while the dynamic expectation was violated by presenting a contour deviant based on online statistical learning of melodic patterns. Schematic and dynamic expectations were manipulated to predict the same note. ERPs were recorded for the music-syntactic irregularity and the contour deviant, which occurred independently or simultaneously. The results showed that the music-syntactic irregularity elicited an early right anterior negativity (ERAN), reflecting the prediction error in the schematic expectation, while the contour deviant elicited a mismatch negativity (MMN), reflecting the prediction error in the dynamic expectation. Both components occurred within a similar latency range. Moreover, the ERP amplitude increased multiplicatively when the irregularity and the deviance occurred simultaneously. These findings suggest that schematic and dynamic expectations function concurrently and interactively when both predict the same note.
Affiliation(s)
- Kai Ishida
  - Graduate School of Human Sciences, Osaka University, Osaka, Japan
- Hiroshi Nittono
  - Graduate School of Human Sciences, Osaka University, Osaka, Japan
11.
Cheung VKM, Harrison PMC, Koelsch S, Pearce MT, Friederici AD, Meyer L. Cognitive and sensory expectations independently shape musical expectancy and pleasure. Philos Trans R Soc Lond B Biol Sci 2024;379:20220420. PMID: 38104601; PMCID: PMC10725761; DOI: 10.1098/rstb.2022.0420.
Abstract
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
Affiliation(s)
- Vincent K. M. Cheung
  - Sony Computer Science Laboratories, Inc., Shinagawa-ku, Tokyo 141-0022, Japan
  - Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
  - Institute of Information Science, Academia Sinica, Taipei 115, Taiwan
- Peter M. C. Harrison
  - Centre for Music and Science, Faculty of Music, University of Cambridge, 11 West Road, Cambridge CB3 9DP, UK
  - Centre for Digital Music, Queen Mary University of London, London E1 4NS, UK
- Stefan Koelsch
  - Department of Biological and Medical Psychology, University of Bergen, Bergen 5009, Norway
- Marcus T. Pearce
  - Centre for Digital Music, Queen Mary University of London, London E1 4NS, UK
  - Department of Clinical Medicine, Aarhus University, Aarhus N 8200, Denmark
- Angela D. Friederici
  - Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer
  - Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
  - Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster 48149, Germany
12.
de Jesus Dias Martins M. Cognitive and neural representations of fractals in vision, music, and action. Adv Neurobiol 2024;36:935-951. PMID: 38468070; DOI: 10.1007/978-3-031-47606-8_46.
Abstract
The concept of the fractal was popularized by Mandelbrot as a tool to tame the geometrical structure of objects with infinite hierarchical depth. The key aspect of fractals is the use of simple, parsimonious rules and initial conditions which, when applied recursively, can generate unbounded complexity. Fractals are ubiquitous in nature, being present in coastlines, bacterial colonies, trees, and physiological time series. However, within the field of cognitive science, the core question is not which phenomena can generate fractal structures, but whether human or animal minds can represent recursive processes, and if so, in which domains. In this chapter, we explore the cognitive and neural mechanisms underlying the representation of recursive hierarchical embedding. Language is the domain in which this capacity is best studied: humans can generate an infinite array of hierarchically structured sentences, and this capacity distinguishes us from other species. However, recent research suggests that humans can represent similar structures in the domains of music, vision, and action, and has provided additional cues as to how these capacities are cognitively implemented. Using a comparative approach, we map the commonalities and differences across domains and offer a roadmap to understanding the neurobiological implementation of fractal cognition.
Affiliation(s)
- Mauricio de Jesus Dias Martins
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, SCAN-Unit, University of Vienna, Vienna, Austria.
13
Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting Rhythm as Parsing: Syntactic-Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Cogn Sci 2023; 47:e13389. [PMID: 38038624 DOI: 10.1111/cogs.13389]
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events, and, computationally, such musical interpretation represents an analogous combinatorial task to syntactic processing in language. While this perspective has been primarily addressed in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment-by-moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of structural integration and memory storage operations, necessary for parsing hierarchical dependencies, derived from the dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while listening to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was found to be a significant predictor of the observed displacement between the reported and the objective location of the flashes. Overall, this study presents a theoretical approach and a first empirical proof-of-concept for modeling the cognitive process resulting in such interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. 
Results from the present small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounding factors and may form the basis for further large-scale and ecological testing.
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Cédric A Tomasini
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
14
Zheng Y, Gao P, Li X. The modulating effect of musical expertise on lexical-semantic prediction in speech-in-noise comprehension: Evidence from an EEG study. Psychophysiology 2023; 60:e14371. [PMID: 37350401 DOI: 10.1111/psyp.14371]
Abstract
Musical expertise has been proposed to facilitate speech perception and comprehension in noisy environments. This study further examined the open question of whether musical expertise modulates high-level lexical-semantic prediction to aid online speech comprehension in noisy backgrounds. Musicians and nonmusicians listened to semantically strongly/weakly constraining sentences during EEG recording. At verbs prior to target nouns, both groups showed a positivity-ERP effect (Strong vs. Weak) associated with the predictability of incoming nouns; this correlation effect was stronger in musicians than in nonmusicians. After the target nouns appeared, both groups showed an N400 reduction effect (Strong vs. Weak) associated with noun predictability, but musicians exhibited an earlier onset latency and stronger effect size of this correlation effect than nonmusicians. To determine whether musical expertise enhances anticipatory semantic processing in general, the same group of participants participated in a control reading comprehension experiment. The results showed that, compared with nonmusicians, musicians demonstrated more delayed ERP correlation effects of noun predictability at words preceding the target nouns; musicians also exhibited more delayed and reduced N400 decrease effects correlated with noun predictability at the target nouns. Taken together, these results suggest that musical expertise enhances lexical-semantic predictive processing in speech-in-noise comprehension. This musical-expertise effect may be related to the strengthened hierarchical speech processing in particular.
Affiliation(s)
- Yuanyi Zheng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Panke Gao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
15
Fiveash A, Ferreri L, Bouwer FL, Kösem A, Moghimi S, Ravignani A, Keller PE, Tillmann B. Can rhythm-mediated reward boost learning, memory, and social connection? Perspectives for future research. Neurosci Biobehav Rev 2023; 149:105153. [PMID: 37019245 DOI: 10.1016/j.neubiorev.2023.105153]
Abstract
Studies of rhythm processing and of reward have progressed separately, with little connection between the two. However, consistent links between rhythm and reward are beginning to surface, with research suggesting that synchronization to rhythm is rewarding, and that this rewarding element may in turn also boost synchronization. The current mini review shows that the combined study of rhythm and reward can help us better understand their independent and combined roles across two central aspects of cognition that have so far been studied largely independently: 1) learning and memory, and 2) social connection and interpersonal synchronization. From this basis, we discuss how connections between rhythm and reward can be applied to learning and memory and to social connection across different populations, taking into account individual differences, clinical populations, human development, and animal research. Future research will need to consider the rewarding nature of rhythm, and that rhythm can in turn boost reward, potentially enhancing other cognitive and social processes.
Affiliation(s)
- A Fiveash
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France; University of Lyon 1, Lyon, France; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- L Ferreri
- Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy; Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, Lyon, France
- F L Bouwer
- Department of Psychology, Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
- A Kösem
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France
- S Moghimi
- Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, INSERM U1105, Amiens, France
- A Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- P E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- B Tillmann
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France; University of Lyon 1, Lyon, France; Laboratory for Research on Learning and Development, LEAD - CNRS UMR5022, Université de Bourgogne, Dijon, France
16
Basiński K, Quiroga-Martinez DR, Vuust P. Temporal hierarchies in the predictive processing of melody - From pure tones to songs. Neurosci Biobehav Rev 2023; 145:105007. [PMID: 36535375 DOI: 10.1016/j.neubiorev.2022.105007]
Abstract
Listening to musical melodies is a complex task that engages perceptual and memory-related processes. The processes underlying melody cognition happen simultaneously on different timescales, ranging from milliseconds to minutes. Although attempts have been made, research on melody perception has yet to produce a unified framework of how melody processing is achieved in the brain. This may in part be due to the difficulty of integrating concepts such as perception, attention and memory, which pertain to different temporal scales. Recent theories on brain processing, which hold prediction as a fundamental principle, offer potential solutions to this problem and may provide a unifying framework for explaining the neural processes that enable melody perception on multiple temporal levels. In this article, we review empirical evidence for predictive coding on the levels of pitch formation, basic pitch-related auditory patterns, more complex regularity processing extracted from basic patterns, and long-term expectations related to musical syntax. We also identify areas that would benefit from further inquiry and suggest future directions in research on musical melody perception.
Affiliation(s)
- Krzysztof Basiński
- Division of Quality of Life Research, Medical University of Gdańsk, Poland
- David Ricardo Quiroga-Martinez
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, USA; Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
- Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
17
Fernández-Rubio G, Brattico E, Kotz SA, Kringelbach ML, Vuust P, Bonetti L. Magnetoencephalography recordings reveal the spatiotemporal dynamics of recognition memory for complex versus simple auditory sequences. Commun Biol 2022; 5:1272. [PMID: 36402843 PMCID: PMC9675809 DOI: 10.1038/s42003-022-04217-8]
Abstract
Auditory recognition is a crucial cognitive process that relies on the organization of single elements over time. However, little is known about the spatiotemporal dynamics underlying the conscious recognition of auditory sequences varying in complexity. To study this, we asked 71 participants to learn and recognize simple tonal musical sequences and matched complex atonal sequences while their brain activity was recorded using magnetoencephalography (MEG). Results reveal qualitative changes in neural activity dependent on stimulus complexity: recognition of tonal sequences engages hippocampal and cingulate areas, whereas recognition of atonal sequences mainly activates the auditory processing network. Our findings reveal the involvement of a cortico-subcortical brain network for auditory recognition and support the idea that stimulus complexity qualitatively alters the neural pathways of recognition memory.
Affiliation(s)
- Gemma Fernández-Rubio
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari, Italy
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
18
Lisøy RS, Pfuhl G, Sunde HF, Biegler R. Sweet spot in music - Is predictability preferred among persons with psychotic-like experiences or autistic traits? PLoS One 2022; 17:e0275308. [PMID: 36174035 PMCID: PMC9521895 DOI: 10.1371/journal.pone.0275308]
Abstract
People prefer music with an intermediate level of predictability; not so predictable as to be boring, yet not so unpredictable that it ceases to be music. This sweet spot for predictability varies due to differences in the perception of predictability. The symptoms of both psychosis and Autism Spectrum Disorder have been attributed to overestimation of uncertainty, which predicts a preference for predictable stimuli and environments. In a pre-registered study, we tested this prediction by investigating whether psychotic and autistic traits were associated with a higher preference for predictability in music. Participants from the general population were presented with twenty-nine pre-composed music excerpts, scored on their complexity by musical experts. A participant's preferred level of predictability corresponded to the peak of the inverted U-shaped curve between music complexity and liking (i.e., a Wundt curve). We found that the sweet spot for predictability did indeed vary between individuals. Contrary to predictions, we did not find support for these variations being associated with autistic and psychotic traits. The findings are discussed in the context of the Wundt curve and the use of naturalistic stimuli. We also provide recommendations for further exploration.
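The abstract above defines each listener's preferred level of predictability as the peak of an inverted-U (Wundt) curve relating music complexity to liking. A minimal sketch of that peak estimate follows; the ratings, the 1-7 complexity scale, and the quadratic fit are illustrative assumptions, not the study's actual data or analysis.

```python
import numpy as np

# Made-up liking ratings for seven excerpts of increasing complexity
complexity = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
liking = np.array([2.0, 3.5, 4.6, 5.0, 4.4, 3.2, 1.9])

# Least-squares quadratic fit: liking ≈ a*c**2 + b*c + d
a, b, d = np.polyfit(complexity, liking, deg=2)

# The parabola's vertex is the "sweet spot"; it is only meaningful
# as a preference peak when a < 0, i.e. the curve is an inverted U.
sweet_spot = -b / (2 * a)
print(round(sweet_spot, 2))
```

In the study itself the peak was estimated per participant, so that individual differences in the sweet spot could be related to autistic and psychotic traits.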
Affiliation(s)
- Rebekka Solvik Lisøy
- Department of Psychology, Faculty of Social and Educational Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Gerit Pfuhl
- Department of Psychology, Faculty of Social and Educational Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Department of Psychology, Faculty of Health Sciences, UiT–The Arctic University of Norway, Tromsø, Norway
- Hans Fredrik Sunde
- Centre for Fertility and Health, Norwegian Institute of Public Health, Oslo, Norway
- Robert Biegler
- Department of Psychology, Faculty of Social and Educational Sciences, Norwegian University of Science and Technology, Trondheim, Norway
19
Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784 DOI: 10.1016/j.neuroimage.2022.119310]
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem-processing towards the left and a bias for song-processing on the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
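The melodic measure described above, the proportion of significant autocorrelations (PSA) of a recorded pitch contour, can be sketched as follows. The lag range and the large-sample ±1.96/√N significance bound are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def psa(pitch, max_lag=20):
    """Proportion of lags 1..max_lag whose autocorrelation is significant."""
    x = np.asarray(pitch, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.dot(x, x)
    bound = 1.96 / np.sqrt(n)  # approximate 95% significance threshold
    r = [np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)]
    return np.mean(np.abs(r) > bound)

# A strongly periodic (melody-like) contour yields a high PSA,
# while an unstructured white-noise contour yields a low one.
t = np.arange(200)
periodic = np.sin(2 * np.pi * t / 10)
rng = np.random.default_rng(0)
noise = rng.normal(size=200)
print(psa(periodic), psa(noise))
```

The intuition is that sung settings, with their repeating melodic patterns, produce more significant autocorrelations in the pitch series than spoken recitations of the same words.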
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany
- Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
20
Mencke I, Omigie D, Quiroga-Martinez DR, Brattico E. Atonal Music as a Model for Investigating Exploratory Behavior. Front Neurosci 2022; 16:793163. [PMID: 35812236 PMCID: PMC9256982 DOI: 10.3389/fnins.2022.793163]
Abstract
Atonal music is often characterized by low predictability stemming from the absence of tonal or metrical hierarchies. In contrast, Western tonal music exhibits intrinsic predictability due to its hierarchical structure and therefore, offers a directly accessible predictive model to the listener. In consequence, a specific challenge of atonal music is that listeners must generate a variety of new predictive models. Listeners must not only refrain from applying available tonal models to the heard music, but they must also search for statistical regularities and build new rules that may be related to musical properties other than pitch, such as timbre or dynamics. In this article, we propose that the generation of such new predictive models and the aesthetic experience of atonal music are characterized by internal states related to exploration. This is a behavior well characterized in behavioral neuroscience as fulfilling an innate drive to reduce uncertainty but which has received little attention in empirical music research. We support our proposal with emerging evidence that the hedonic value is associated with the recognition of patterns in low-predictability sound sequences and that atonal music elicits distinct behavioral responses in listeners. We end by outlining new research avenues that might both deepen our understanding of the aesthetic experience of atonal music in particular, and reveal core qualities of the aesthetic experience in general.
Affiliation(s)
- Iris Mencke
- Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Diana Omigie
- Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- David Ricardo Quiroga-Martinez
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
- Elvira Brattico
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University and Royal Academy of Music, Aarhus, Denmark
- Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
21
Zhang N, Sun L, Wu Q, Yang Y. Tension experience induced by tonal and melodic shift at music phrase boundaries. Sci Rep 2022; 12:8304. [PMID: 35585148 PMCID: PMC9117266 DOI: 10.1038/s41598-022-11949-4]
Abstract
Music tension is a link between music structures and emotions. As music unfolds, developmental patterns induce various emotional experiences, but the relationship between developmental patterns and tension experience remains unclear. The present study compared two developmental patterns across two successive phrases (tonal shift and melodic shift) with a repetition condition to investigate their relationship with tension experience. Professional musicians rated felt tension online while their EEG responses were recorded during listening to music sequences. Behavioral results showed that tension ratings under the tonal and melodic shift conditions were higher than those under the repetition condition. ERP results showed larger potentials in the early P300 and late positive component (LPC) time windows under the tonal shift condition, and an early right anterior negativity (ERAN) and LPC under the melodic shift condition. ERSP results showed that early beta and late gamma power increased under the tonal shift condition, whereas theta power decreased and alpha power increased under the melodic shift condition. Our findings suggest that developmental patterns play a vital role in tension experience: tonal shift affects tension via shift detection and integration, while melodic shift affects tension via attentional processing and working-memory integration. From the perspective of the Event Structure Processing Model, these results provide solid evidence specifying time-span segmentation and reduction.
Affiliation(s)
- Ning Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Lijun Sun
- College of Art, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qiong Wu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yufang Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
22
Haiduk F, Fitch WT. Understanding Design Features of Music and Language: The Choric/Dialogic Distinction. Front Psychol 2022; 13:786899. [PMID: 35529579 PMCID: PMC9075586 DOI: 10.3389/fpsyg.2022.786899]
Abstract
Music and spoken language share certain characteristics: both consist of sequences of acoustic elements that are combinatorially combined, and these elements partition the same continuous acoustic dimensions (frequency, formant space and duration). However, the resulting categories differ sharply: scale tones and note durations of small integer ratios appear in music, while speech uses phonemes, lexical tone, and non-isochronous durations. Why did music and language diverge into the two systems we have today, differing in these specific features? We propose a framework based on information theory and a reverse-engineering perspective, suggesting that design features of music and language are a response to their differential deployment along three different continuous dimensions. These include the familiar propositional-aesthetic ('goal') and repetitive-novel ('novelty') dimensions, and a dialogic-choric ('interactivity') dimension that is our focus here. Specifically, we hypothesize that music exhibits specializations enhancing coherent production by several individuals concurrently, the 'choric' context. In contrast, language is specialized for exchange in tightly coordinated turn-taking, 'dialogic' contexts. We examine the evidence for our framework, both from humans and non-human animals, and conclude that many proposed design features of music and language follow naturally from their use in distinct dialogic and choric communicative contexts. Furthermore, the hybrid nature of intermediate systems like poetry, chant, or solo lament follows from their deployment in the less typical interactive context.
Affiliation(s)
- Felix Haiduk
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- W. Tecumseh Fitch
- Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
23
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057 DOI: 10.1038/s41583-022-00578-5]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Department of Psychiatry, University of Oxford, Oxford, UK
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
24
Grzywacz NM, Aleem H. Does Amount of Information Support Aesthetic Values? Front Neurosci 2022; 16:805658. [PMID: 35392414 PMCID: PMC8982361 DOI: 10.3389/fnins.2022.805658]
Abstract
Obtaining information from the world is important for survival. The brain, therefore, has special mechanisms to extract as much information as possible from sensory stimuli. Hence, given its importance, the amount of available information may underlie aesthetic values. Such information-based aesthetic values would be significant because they would compete with others to drive decision-making. In this article, we ask, "What is the evidence that amount of information supports aesthetic values?" An important concept in the measurement of informational volume is entropy. Research on aesthetic values has thus used Shannon entropy to evaluate the contribution of quantity of information. We review here the concepts of information and aesthetic values, and research on the visual and auditory systems, to probe whether the brain uses entropy or other relevant measures, especially Fisher information, in aesthetic decisions. We conclude that information measures contribute to these decisions in two ways. First, the absolute quantity of information can modulate aesthetic preferences for certain sensory patterns. However, the preference for volume of information is highly individualized, with information measures competing with organizing principles such as rhythm and symmetry. In addition, people tend to be resistant to too much entropy, but not necessarily to high amounts of Fisher information. We show that this resistance may stem in part from the distribution of amount of information in natural sensory stimuli. Second, the measurement of entropic-like quantities over time reveals that they can modulate aesthetic decisions by varying degrees of surprise given temporally integrated expectations. We propose that amount of information underpins complex aesthetic values, possibly informing the brain on the allocation of resources or the situational appropriateness of some cognitive models.
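To make the entropy measure in this abstract concrete, a minimal sketch (using the standard definition of Shannon entropy over an empirical distribution of events, not the authors' analysis code) might look like this:

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of the empirical distribution of a
    sequence of discrete sensory events (e.g., pitches in a melody)."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A uniform distribution carries the most information per event;
# a skewed one carries less.
print(shannon_entropy(["C", "D", "E", "G"]))  # → 2.0 (uniform over four pitches)
print(shannon_entropy(["C", "C", "C", "G"]))  # lower entropy (≈ 0.81 bits)
```

Under the abstract's account, listeners' aesthetic preferences would vary with this quantity non-monotonically: moderate entropy is often preferred over both very low and very high values.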
Affiliation(s)
- Norberto M. Grzywacz
- Department of Psychology, Loyola University Chicago, Chicago, IL, United States
- Department of Molecular Pharmacology and Neuroscience, Loyola University Chicago, Chicago, IL, United States
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
- Hassan Aleem
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
25
Chabin T, Pazart L, Gabriel D. Vocal melody and musical background are simultaneously processed by the brain for musical predictions. Ann N Y Acad Sci 2022; 1512:126-140. [PMID: 35229293] [DOI: 10.1111/nyas.14755]
Abstract
Musical pleasure is related to the capacity to predict and anticipate the music. By recording early cerebral responses of 16 participants with electroencephalography during periods of silence inserted in known and unknown songs, we aimed to measure the contribution of different musical attributes to musical predictions. We investigated the mismatch between past encoded musical features and the current sensory inputs when listening to lyrics associated with vocal melody, only background instrumental material, or both attributes grouped together. When participants were listening to chords and lyrics for known songs, the brain responses related to musical violation produced event-related potential responses around 150-200 ms that were of a larger amplitude than for chords or lyrics only. Microstate analysis also revealed that for chords and lyrics, the global field power had an increased stability and a longer duration. The source localization identified that the right superior temporal and frontal gyri and the inferior and medial frontal gyri were activated for a longer time for chords and lyrics, likely caused by the increased complexity of the stimuli. We conclude that when several musical attributes are grouped together, their broader simultaneous integration and retrieval recruits larger neuronal networks, leading to more accurate predictions.
Affiliation(s)
- Thibault Chabin
- Centre Hospitalier Universitaire de Besançon, Centre d'Investigation Clinique INSERM CIC 1431, Besançon, France
- Lionel Pazart
- Plateforme de Neuroimagerie Fonctionnelle et Neurostimulation Neuraxess, Centre Hospitalier Universitaire de Besançon, Université de Bourgogne Franche-Comté, Bourgogne Franche-Comté, France
- Damien Gabriel
- Laboratoire de Recherches Intégratives en Neurosciences et Psychologie Cognitive, Université Bourgogne Franche-Comté, Besançon, France
26
Tsogli V, Jentschke S, Koelsch S. Unpredictability of the “when” influences prediction error processing of the “what” and “where”. PLoS One 2022; 17:e0263373. [PMID: 35113946] [PMCID: PMC8812910] [DOI: 10.1371/journal.pone.0263373]
Abstract
The capability to establish accurate predictions is an integral part of learning. Whether predictions about different dimensions of a stimulus interact with each other, and whether such an interaction affects learning, has remained elusive. We conducted a statistical learning study with EEG (electroencephalography), where a stream of consecutive sound triplets was presented with deviants that were either: (a) statistical, depending on the triplet ending probability, (b) physical, due to a change in sound location or (c) double deviants, i.e. a combination of the two. We manipulated the predictability of stimulus-onset by using random stimulus-onset asynchronies. Temporal unpredictability due to random onsets reduced the neurophysiological responses to statistical and location deviants, as indexed by the statistical mismatch negativity (sMMN) and the location MMN. Our results demonstrate that the predictability of one stimulus attribute influences the processing of prediction error signals of other stimulus attributes, and thus also learning of those attributes.
Affiliation(s)
- Vera Tsogli
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
27
Kern P, Heilbron M, de Lange FP, Spaak E. Cortical activity during naturalistic music listening reflects short-range predictions based on long-term experience. eLife 2022; 11:80935. [PMID: 36562532] [PMCID: PMC9836393] [DOI: 10.7554/elife.80935]
Abstract
Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music, while recording neural activity using MEG. We quantified note-level melodic surprise and uncertainty using various computational models of music, including a state-of-the-art transformer neural network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked melodic surprise particularly around 200 ms and 300-500 ms after note onset. This neural surprise response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by computational models that incorporated long-term statistical learning, rather than by simple, Gestalt-like principles. Yet, intriguingly, the surprise reflected primarily short-range musical contexts of less than ten notes. We present a full replication of our novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during naturalistic music listening.
Affiliation(s)
- Pius Kern
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Micha Heilbron
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Floris P de Lange
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
- Eelke Spaak
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
28
Mencke I, Quiroga-Martinez DR, Omigie D, Michalareas G, Schwarzacher F, Haumann NT, Vuust P, Brattico E. Prediction under uncertainty: Dissociating sensory from cognitive expectations in highly uncertain musical contexts. Brain Res 2021; 1773:147664. [PMID: 34560052] [DOI: 10.1016/j.brainres.2021.147664]
Abstract
Predictive models in the brain rely on the continuous extraction of regularities from the environment. These models are thought to be updated by novel information, as reflected in prediction error responses such as the mismatch negativity (MMN). However, although in real life individuals often face situations in which uncertainty prevails, it remains unclear whether and how predictive models emerge in high-uncertainty contexts. Recent research suggests that uncertainty affects the magnitude of MMN responses in the context of music listening. However, musical predictions are typically studied with MMN stimulation paradigms based on Western tonal music, which are characterized by relatively high predictability. Hence, we developed an MMN paradigm to investigate how the high uncertainty of atonal music modulates predictive processes as indexed by the MMN and behavior. Using MEG in a group of 20 subjects without musical training, we demonstrate that the magnetic MMN in response to pitch, intensity, timbre, and location deviants is evoked in both tonal and atonal melodies, with no significant differences between conditions. In contrast, in a separate behavioral experiment involving 39 non-musicians, participants detected pitch deviants more accurately and rated confidence higher in the tonal than in the atonal musical context. These results indicate that contextual tonal uncertainty modulates processing stages in which conscious awareness is involved, although deviants robustly elicit low-level pre-attentive responses such as the MMN. The achievement of robust MMN responses, despite high tonal uncertainty, is relevant for future studies comparing groups of listeners' MMN responses to increasingly ecological music stimuli.
Affiliation(s)
- Iris Mencke
- Department of Music, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt/Main, Germany; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Nørrebrogade 44, 8000 Aarhus C, Denmark
- David Ricardo Quiroga-Martinez
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Nørrebrogade 44, 8000 Aarhus C, Denmark
- Diana Omigie
- Department of Psychology, Goldsmiths, University of London, SE14 6NW London, United Kingdom
- Georgios Michalareas
- Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, 60322 Frankfurt/Main, Germany
- Franz Schwarzacher
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Nørrebrogade 44, 8000 Aarhus C, Denmark
- Niels Trusbak Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Nørrebrogade 44, 8000 Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Nørrebrogade 44, 8000 Aarhus C, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Nørrebrogade 44, 8000 Aarhus C, Denmark; Department of Education, Psychology and Communication, University of Bari Aldo Moro, Piazza Umberto I, 70121 Bari, Italy
29
Verosky NJ, Morgan E. Pitches that Wire Together Fire Together: Scale Degree Associations Across Time Predict Melodic Expectations. Cogn Sci 2021; 45:e13037. [PMID: 34606140] [DOI: 10.1111/cogs.13037]
Abstract
The ongoing generation of expectations is fundamental to listeners' experience of music, but research into types of statistical information that listeners extract from musical melodies has tended to emphasize transition probabilities and n-grams, with limited consideration given to other types of statistical learning that may be relevant. Temporal associations between scale degrees represent a different type of information present in musical melodies that can be learned from musical corpora using expectation networks, a computationally simple method based on activation and decay. Expectation networks infer the expectation of encountering one scale degree followed in the near (but not necessarily immediate) future by another given scale degree, with previous work suggesting that scale degree associations learned by expectation networks better predict listener ratings of pitch similarity than transition probabilities. The current work outlines how these learned scale degree associations can be combined to predict melodic continuations and tests the resulting predictions on a dataset of listener responses to a musical cloze task previously used to compare two other models of melodic expectation, a variable-order Markov model (IDyOM) and Temperley's music-theoretically motivated model. Under multinomial logistic regression, all three models explain significant unique variance in human melodic expectations, with coefficient estimates highest for expectation networks. These results suggest that generalized scale degree associations informed by both adjacent and nonadjacent relationships between melodic notes influence listeners' melodic predictions above and beyond n-gram context, highlighting the need to consider a broader range of statistical learning processes that may underlie listeners' expectations for upcoming musical events.
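The activation-and-decay scheme summarized above can be made concrete with a small sketch (the decay constant and representation here are illustrative assumptions, not Verosky and Morgan's implementation): each incoming scale degree strengthens the association from every still-active past degree to itself, so nonadjacent as well as adjacent pairs contribute, with recency weighted by exponential decay.

```python
def expectation_network(melody, n_degrees=12, decay=0.5):
    """Sketch of an activation/decay expectation network: learns how strongly
    one scale degree predicts encountering another in the near (not
    necessarily immediate) future."""
    weights = [[0.0] * n_degrees for _ in range(n_degrees)]
    activation = [0.0] * n_degrees
    for degree in melody:
        # Every still-active past degree strengthens its link to the current one.
        for past in range(n_degrees):
            weights[past][degree] += activation[past]
        # Decay old activations, then fully activate the current degree.
        activation = [a * decay for a in activation]
        activation[degree] = 1.0
    return weights

w = expectation_network([0, 4, 7, 0])  # scale degrees of an arpeggio returning home
```

Because activations decay rather than vanish, `w[0][7]` is positive even though degree 7 never immediately follows degree 0 in this melody, capturing exactly the nonadjacent associations the abstract contrasts with transition probabilities and n-grams.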
Affiliation(s)
- Emily Morgan
- Department of Linguistics, University of California, Davis
30
Zheng Y, Zhao Z, Yang X, Li X. The impact of musical expertise on anticipatory semantic processing during online speech comprehension: An electroencephalography study. Brain Lang 2021; 221:105006. [PMID: 34392023] [DOI: 10.1016/j.bandl.2021.105006]
Abstract
Musical experience has been found to aid speech perception. This electroencephalography study further examined whether and how musical expertise affects high-level predictive semantic processing in speech comprehension. Musicians and non-musicians listened to semantically strongly/weakly constraining sentences, with each sentence being primed by a congruent/incongruent sentence-prosody. At the target nouns, an N400 reduction effect (strongly vs. weakly constraining) was observed in both groups, with the onset-latency of this effect being delayed for incongruent (vs. congruent) priming. At the transitive verbs preceding these target nouns, musicians' event-related-potential amplitude (in incongruent-priming) and beta-band oscillatory power (in congruent- and incongruent-priming) showed a semantic-constraint effect, and were correlated with the predictability of incoming nouns; non-musicians only demonstrated an event-related-potential semantic-constraint effect, which was correlated with the predictability of the current verbs. These results indicate that musical expertise enhances the tendency toward semantic prediction in speech comprehension, and that this effect might not be merely an aftereffect of facilitated acoustic/phonological processing.
Affiliation(s)
- Yuanyi Zheng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
- Zitong Zhao
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
- Xiaohong Yang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
- Xiaoqing Li
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100149, China
31
Edalati M, Mahmoudzadeh M, Safaie J, Wallois F, Moghimi S. Violation of rhythmic expectancies can elicit late frontal gamma activity nested in theta oscillations. Psychophysiology 2021; 58:e13909. [PMID: 34310719] [PMCID: PMC9285090] [DOI: 10.1111/psyp.13909]
Abstract
Rhythm processing involves building expectations according to the hierarchical temporal structure of auditory events. Although rhythm processing has been addressed in the context of predictive coding, the properties of the oscillatory response in different cortical areas are still not clear. We explored the oscillatory properties of the neural response to rhythmic incongruence and the cross-frequency coupling between multiple frequencies to further investigate the mechanisms underlying rhythm perception. We designed an experiment to investigate the neural response to rhythmic deviations in which the tone either arrived earlier than expected or the tone in the same metrical position was omitted. These two manipulations modulate the rhythmic structure differently, with the former creating a larger violation of the general structure of the musical stimulus than the latter. Both deviations resulted in an MMN response, whereas only the rhythmic deviant resulted in a subsequent P3a. Rhythmic deviants due to the early occurrence of a tone, but not omission deviants, seemed to elicit a late high gamma response (60-80 Hz) at the end of the P3a over the left frontal region, which, interestingly, correlated with the P3a amplitude over the same region and was also nested in theta oscillations. The timing of the elicited high-frequency gamma oscillations related to rhythmic deviation suggests that it might be related to the update of the predictive neural model, corresponding to the temporal structure of the events in higher-level cortical areas.
Affiliation(s)
- M Edalati
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- M Mahmoudzadeh
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens, France
- J Safaie
- Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- F Wallois
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens, France
- S Moghimi
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, CURS, Amiens, France; Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran; Inserm UMR1105, EFSN Pédiatriques, CHU Amiens sud, Amiens, France
32
Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153] [DOI: 10.1016/j.cognition.2021.104847]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferably engages the abstract rule-based control circuit, musical syntax rather employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
33
Miles SA, Rosen DS, Barry S, Grunberg D, Grzywacz N. What to Expect When the Unexpected Becomes Expected: Harmonic Surprise and Preference Over Time in Popular Music. Front Hum Neurosci 2021; 15:578644. [PMID: 33994972] [PMCID: PMC8121146] [DOI: 10.3389/fnhum.2021.578644]
Abstract
Previous work demonstrates that music with more surprising chords tends to be perceived as more enjoyable than music with more conventional harmonic structures. In that work, harmonic surprise was computed based upon a static distribution of chords. This would assume that harmonic surprise is constant over time, and the effect of harmonic surprise on music preference is similarly static. In this study we assess that assumption and establish that the relationship between harmonic surprise (as measured according to a specific time period) and music preference is not constant as time goes on. Analyses of harmonic surprise and preference from 1958 to 1991 showed increased harmonic surprise over time, and that this increase was significantly more pronounced in preferred songs. Separate analyses showed similar increases over the years from 2000 to 2019. As such, these findings provide evidence that the human perception of tonality is influenced by exposure. Baseline harmonic expectations that were developed through listening to the music of “yesterday” are violated in the music of “today,” leading to preference. Then, once the music of “today” provides the baseline expectations for the music of “tomorrow,” more pronounced violations—and with them, higher harmonic surprise values—become associated with preference formation. We call this phenomenon the “Inflationary-Surprise Hypothesis.” Support for this hypothesis could impact the understanding of how the perception of tonality, and other statistical regularities, are developed in the human brain.
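Under the static chord distribution the earlier work assumed, a chord's surprise is conventionally quantified as its negative log-probability. A minimal sketch (the toy corpus and the add-one smoothing choice are ours, not the study's pipeline):

```python
import math
from collections import Counter

def harmonic_surprise(corpus_chords, song_chords):
    """Mean surprisal (bits) of a song's chords under a corpus-wide chord
    distribution, with add-one smoothing so unseen chords stay finite."""
    counts = Counter(corpus_chords)
    vocab = set(corpus_chords) | set(song_chords)
    total = sum(counts.values()) + len(vocab)  # add-one smoothing mass

    def p(chord):
        return (counts[chord] + 1) / total

    return sum(-math.log2(p(c)) for c in song_chords) / len(song_chords)

corpus = ["I", "IV", "V", "I", "I", "V", "IV", "I"]
print(harmonic_surprise(corpus, ["I", "V"]))      # common chords: low mean surprise
print(harmonic_surprise(corpus, ["bVI", "bII"]))  # chords absent from the corpus: high
```

The "Inflationary-Surprise Hypothesis" then amounts to the claim that the reference distribution (`corpus_chords` here) is itself a moving target: as surprising chords enter the repertoire, the distribution flattens, and ever larger deviations are needed to produce the same surprisal.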
Affiliation(s)
- Scott A Miles
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States; Secret Chord Laboratories, Norfolk, VA, United States
- David S Rosen
- Secret Chord Laboratories, Norfolk, VA, United States; Music and Entertainment Technology Laboratory, Drexel University, Philadelphia, PA, United States
- Shaun Barry
- Secret Chord Laboratories, Norfolk, VA, United States
- Norberto Grzywacz
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States; Department of Psychology, Loyola University Chicago, Chicago, IL, United States; Department of Molecular Pharmacology and Neuroscience, Loyola University Chicago, Chicago, IL, United States
34
Unraveling the Temporal Dynamics of Reward Signals in Music-Induced Pleasure with TMS. J Neurosci 2021; 41:3889-3899. [DOI: 10.1523/jneurosci.0727-20.2020]
Abstract
Music's ability to induce feelings of pleasure has been the subject of intense neuroscientific research lately. Prior neuroimaging studies have shown that music-induced pleasure engages cortico-striatal circuits related to the anticipation and receipt of biologically relevant rewards/incentives, but these reports are necessarily correlational. Here, we studied both the causal role of this circuitry and its temporal dynamics by applying transcranial magnetic stimulation (TMS) over the left dorsolateral PFC combined with fMRI in 17 male and female participants. Behaviorally, we found that, in accord with previous findings, excitation of fronto-striatal pathways enhanced subjective reports of music-induced pleasure and motivation, whereas inhibition of the same circuitry led to the reduction of both. fMRI activity patterns indicated that these behavioral changes were driven by bidirectional TMS-induced alteration of fronto-striatal function. Specifically, changes in activity in the NAcc predicted modulation of both hedonic and motivational responses, with a dissociation between pre-experiential versus experiential components of musical reward. In addition, TMS-induced changes in the fMRI functional connectivity between the NAcc and frontal and auditory cortices predicted the degree of modulation of hedonic responses. These results indicate that the engagement of cortico-striatal pathways and the NAcc, in particular, is indispensable to experience rewarding feelings from music.
Significance statement: Neuroimaging studies have shown that music-induced pleasure engages cortico-striatal circuits involved in the processing of biologically relevant rewards. Yet, these reports are necessarily correlational. Here, we studied both the causal role of this circuitry and its temporal dynamics by combining brain stimulation over the frontal cortex with functional imaging. Behaviorally, we found that excitation and inhibition of fronto-striatal pathways enhanced and disrupted, respectively, subjective reports of music-induced pleasure and motivation. These changes were associated with changes in NAcc activity and NAcc coupling with frontal and auditory cortices, dissociating between pre-experiential versus experiential components of musical reward. These results indicate that the engagement of cortico-striatal pathways, and the NAcc in particular, is indispensable to experience rewarding feelings from music.
35
Modifications in the Topological Structure of EEG Functional Connectivity Networks during Listening Tonal and Atonal Concert Music in Musicians and Non-Musicians. Brain Sci 2021; 11:159. [PMID: 33530384] [PMCID: PMC7910933] [DOI: 10.3390/brainsci11020159]
Abstract
The present work aims to test the hypothesis that atonal music modifies the topological structure of electroencephalographic (EEG) connectivity networks relative to tonal music. To this end, monopolar EEG recordings were taken from musicians and non-musicians while they listened to tonal, atonal, and pink-noise sound excerpts. EEG functional connectivities (FC) among channels were computed with a phase-synchronization index, thresholded beforehand using a surrogate-data test. The effects of the sounds on the topological structure of graph-based networks assembled from the EEG-FCs at different frequency bands were analyzed using graph metrics and network-based statistics (NBS). Normalized local and global efficiency measures (NLE and NGE, normalized against random networks), which assess network information exchange, discriminated the two musical styles irrespective of group and frequency band. During tonal audition, NLE and NGE values in the beta-band network approached those of a small-world network, while during atonal audition, and even more so during noise, the structure moved away from small-worldness. These effects were attributed to the different timbre characteristics (spectral centroid and entropy of the sounds) and the different musical structure. Topographic network maps of node strength and NLE, together with the FC subnets obtained from the NBS, discriminated the musical styles and showed different strength, NLE, and FC in musicians compared with non-musicians.
36
Pesek M, Medvešek Š, Podlesek A, Tkalčič M, Marolt M. A Comparison of Human and Computational Melody Prediction Through Familiarity and Expertise. Front Psychol 2020; 11:557398. [PMID: 33362622] [PMCID: PMC7756065] [DOI: 10.3389/fpsyg.2020.557398]
Abstract
Melody prediction is an important aspect of music listening. The success of prediction, i.e., whether the next note played in a song is the same as the one predicted by the listener, depends on various factors. In this paper, we present two studies assessing how music familiarity and music expertise influence melody prediction in human listeners and, expressed in appropriate data-driven and algorithmic terms, in computational models. To gather data on human listeners, we designed a melody prediction user study in which familiarity was controlled through two different music collections, while expertise was assessed by adapting the Music Sophistication Index instrument to the Slovenian language. In the second study, we evaluated the melody prediction accuracy of two computational models, the SymCHM and the Implication-Realization model, which differ substantially in how they approach melody prediction. Our results show that both music familiarity and expertise affect the prediction accuracy of human listeners, as well as that of computational models.
Affiliation(s)
- Matevž Pesek
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
- Špela Medvešek
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
- Anja Podlesek
- Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
- Marko Tkalčič
- Faculty of Mathematics, Natural Sciences and Information Technologies, University of Primorska, Koper, Slovenia
- Matija Marolt
- Faculty of Computer and Information Science, University of Ljubljana, Ljubljana, Slovenia
37
Friedman R. Themes of advanced information processing in the primate brain. AIMS Neurosci 2020; 7:373-388. [PMID: 33263076 PMCID: PMC7701368 DOI: 10.3934/neuroscience.2020023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 10/09/2020] [Indexed: 11/30/2022] Open
Abstract
This is a review of several empirical examples of information processing in the primate cerebral cortex, including visual processing, object identification and perception, information encoding, and memory. It also discusses higher-scale neural organization, a mainly theoretical topic, which suggests hypotheses on how the brain internally represents objects. Altogether, these examples support the general attributes of the mechanisms of brain computation, such as efficiency, resiliency, data compression, and a modularization of neural function and its pathways. Moreover, the specific neural encoding schemes are expected to be stochastic, abstract, and not easily decoded by theoretical or empirical approaches.
Affiliation(s)
- Robert Friedman
- Department of Biological Sciences, University of South Carolina, Columbia 29208, USA
38
Learning to predict: Neuronal signatures of auditory expectancy in human event-related potentials. Neuroimage 2020; 225:117472. [PMID: 33099012 PMCID: PMC9215305 DOI: 10.1016/j.neuroimage.2020.117472] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Revised: 10/08/2020] [Accepted: 10/15/2020] [Indexed: 12/31/2022] Open
Abstract
Learning to anticipate future states of the world based on statistical regularities in the environment is a key component of perception and is vital for the survival of many organisms. Such statistical learning and prediction are crucial for acquiring language and music appreciation. Importantly, learned expectations can be implicitly derived from exposure to sensory input, without requiring explicit information regarding contingencies in the environment. Whereas many previous studies of statistical learning have demonstrated larger neuronal responses to unexpected versus expected stimuli, the neuronal bases of the expectations themselves remain poorly understood. Here we examined behavioral and neuronal signatures of learned expectancy via human scalp-recorded event-related brain potentials (ERPs). Participants were instructed to listen to a series of sounds and press a response button as quickly as possible upon hearing a target noise burst, which was either reliably or unreliably preceded by one of three pure tones in low-, mid-, and high-frequency ranges. Participants were not informed about the statistical contingencies between the preceding tone ‘cues’ and the target. Over the course of a stimulus block, participants responded more rapidly to reliably cued targets. This behavioral index of learned expectancy was paralleled by a negative ERP deflection, designated as a neuronal contingency response (CR), which occurred immediately prior to the onset of the target. The amplitude and latency of the CR were systematically modulated by the strength of the predictive relationship between the cue and the target. Re-averaging ERPs with respect to the latency of behavioral responses revealed no consistent relationship between the CR and the motor response, suggesting that the CR represents a neuronal signature of learned expectancy or anticipatory attention. Our results demonstrate that statistical regularities in an auditory input stream can be implicitly learned and exploited to influence behavior. Furthermore, we uncover a potential ‘prediction signal’ that reflects this fundamental learning process.
39
Abstract
Music listening is one of the most pleasurable activities in our life. As a rewarding stimulus, pleasant music could induce long-term memory improvements for the items encoded in close temporal proximity. In the present study, we behaviourally investigated (1) whether musical pleasure and musical hedonia enhance verbal episodic memory, and (2) whether such enhancement takes place even when the pleasant stimulus is not present during the encoding. Participants (N = 100) were asked to encode words presented in different auditory contexts (highly and lowly pleasant classical music, and control white noise), played before and during (N = 49), or only before (N = 51) the encoding. The Barcelona Music Reward Questionnaire was used to measure participants’ sensitivity to musical reward. 24 h later, participants’ verbal episodic memory was tested (old/new recognition and remember/know paradigm). Results revealed that participants with a high musical reward sensitivity present an increased recollection performance, especially for words encoded in a highly pleasant musical context. Furthermore, this effect persists even when the auditory stimulus is not concurrently present during the encoding of target items. Taken together, these findings suggest that musical pleasure might constitute a helpful encoding context able to drive memory improvements via reward mechanisms.
40
Cardona G, Rodriguez-Fornells A, Nye H, Rifà-Ros X, Ferreri L. The impact of musical pleasure and musical hedonia on verbal episodic memory. Sci Rep 2020; 10:16113. [PMID: 32999309 PMCID: PMC7527554 DOI: 10.1038/s41598-020-72772-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Accepted: 08/26/2020] [Indexed: 12/14/2022] Open
Abstract
Music listening is one of the most pleasurable activities in our life. As a rewarding stimulus, pleasant music could induce long-term memory improvements for the items encoded in close temporal proximity. In the present study, we behaviourally investigated (1) whether musical pleasure and musical hedonia enhance verbal episodic memory, and (2) whether such enhancement takes place even when the pleasant stimulus is not present during the encoding. Participants (N = 100) were asked to encode words presented in different auditory contexts (highly and lowly pleasant classical music, and control white noise), played before and during (N = 49), or only before (N = 51) the encoding. The Barcelona Music Reward Questionnaire was used to measure participants' sensitivity to musical reward. 24 h later, participants' verbal episodic memory was tested (old/new recognition and remember/know paradigm). Results revealed that participants with a high musical reward sensitivity present an increased recollection performance, especially for words encoded in a highly pleasant musical context. Furthermore, this effect persists even when the auditory stimulus is not concurrently present during the encoding of target items. Taken together, these findings suggest that musical pleasure might constitute a helpful encoding context able to drive memory improvements via reward mechanisms.
Affiliation(s)
- Gemma Cardona
- Department of Cognition, Development and Educational Psychology, University of Barcelona, 08035, Barcelona, Spain.
- Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, 08907, Barcelona, Spain.
- Antoni Rodriguez-Fornells
- Department of Cognition, Development and Educational Psychology, University of Barcelona, 08035, Barcelona, Spain
- Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, 08907, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, 08010, Barcelona, Spain
- Harry Nye
- Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, 08907, Barcelona, Spain
- Xavier Rifà-Ros
- Department of Cognition, Development and Educational Psychology, University of Barcelona, 08035, Barcelona, Spain
- Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, 08907, Barcelona, Spain
- Laura Ferreri
- Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, 69676, Lyon, France
41
Sun L, Feng C, Yang Y. Tension Experience Induced By Nested Structures In Music. Front Hum Neurosci 2020; 14:210. [PMID: 32670037 PMCID: PMC7327114 DOI: 10.3389/fnhum.2020.00210] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Accepted: 05/08/2020] [Indexed: 11/16/2022] Open
Abstract
Tension experience is the basis for music emotion. In music, discrete elements are always organized into complex nested structures to convey emotion. However, the processing of music tension in the nested structure remains unknown. The present study investigated the tension experience induced by the nested structure and the underlying neural mechanisms, using a continuous tension rating task and electroencephalography (EEG) at the same time. Thirty musicians listened to music chorale sequences with non-nested, singly nested and doubly nested structures and were required to rate their real-time tension experience. Behavioral data indicated that the tension experience induced by the nested structure had more fluctuations than the non-nested structure, and the difference was mainly exhibited in the process of tension induction rather than tension resolution. However, the EEG data showed that larger late positive components (LPCs) were elicited by the ending chords in the nested structure compared with the non-nested structure, reflecting the difference in cognitive integration for long-distance structural dependence. The discrepancy between resolution experience and neural responses revealed the non-parallel relations between emotion and cognition. Furthermore, the LPC elicited by the doubly nested structure showed a smaller scalp distribution than the singly nested structure, indicating the more difficult processing of the doubly nested structure. These findings revealed the dynamic tension experience induced by the nested structure and the influence of nested type, shedding new light on the relationship between structure and tension in music.
Affiliation(s)
- Lijun Sun
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chen Feng
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yufang Yang
- Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
42
Calma-Roddin N, Drury JE. Music, Language, and The N400: ERP Interference Patterns Across Cognitive Domains. Sci Rep 2020; 10:11222. [PMID: 32641708 PMCID: PMC7343814 DOI: 10.1038/s41598-020-66732-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 04/03/2020] [Indexed: 11/09/2022] Open
Abstract
Studies of the relationship of language and music have suggested these two systems may share processing resources involved in the computation/maintenance of abstract hierarchical structure (syntax). One type of evidence comes from ERP interference studies involving concurrent language/music processing showing interaction effects when both processing streams are simultaneously perturbed by violations (e.g., syntactically incorrect words paired with incongruent completion of a chord progression). Here, we employ this interference methodology to target the mechanisms supporting long term memory (LTM) access/retrieval in language and music. We used melody stimuli from previous work showing out-of-key or unexpected notes may elicit a musical analogue of language N400 effects, but only for familiar melodies, and not for unfamiliar ones. Target notes in these melodies were time-locked to visually presented target words in sentence contexts manipulating lexical/conceptual semantic congruity. Our study succeeded in eliciting expected N400 responses from each cognitive domain independently. Among several new findings we argue to be of interest, these data demonstrate that: (i) language N400 effects are delayed in onset by concurrent music processing only when melodies are familiar, and (ii) double violations with familiar melodies (but not with unfamiliar ones) yield a sub-additive N400 response. In addition: (iii) early negativities (RAN effects), which previous work has connected to musical syntax, along with the music N400, were together delayed in onset for familiar melodies relative to the timing of these effects reported in the previous music-only study using these same stimuli, and (iv) double violation cases involving unfamiliar/novel melodies also delayed the RAN effect onset. These patterns constitute the first demonstration of N400 interference effects across these domains and together contribute previously undocumented types of interactions to the available pool of findings relevant to understanding whether language and music may rely on shared underlying mechanisms.
Affiliation(s)
- Nicole Calma-Roddin
- Department of Behavioral Sciences, New York Institute of Technology, Old Westbury, New York, USA.
- Department of Psychology, Stony Brook University, New York, USA.
- John E Drury
- School of Linguistic Sciences and Arts, Jiangsu Normal University, Xuzhou, China
43
Recursive music elucidates neural mechanisms supporting the generation and detection of melodic hierarchies. Brain Struct Funct 2020; 225:1997-2015. [PMID: 32591927 PMCID: PMC7473971 DOI: 10.1007/s00429-020-02105-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Accepted: 06/16/2020] [Indexed: 12/17/2022]
Abstract
The ability to generate complex hierarchical structures is a crucial component of human cognition, and it can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not specifically target the representation of the rules that generate hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps according to three different rules: the Recursive rule, which generates new hierarchical levels at each step; the Iterative rule, which adds tones within a fixed hierarchical level without generating new levels; and a control (Repetition) rule that simply repeats the third step. Using fMRI, we compared brain activity across these rules when participants imagined the fourth step after listening to the third (generation phase), and when they listened to a fourth step (test sound phase) that was either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step using the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of the hippocampus and inferior frontal gyrus may reflect the processing of unexpected melodic sequences, rather than hierarchy generation per se.
44
Dell'Anna A, Buhmann J, Six J, Maes PJ, Leman M. Timing Markers of Interaction Quality During Semi-Hocket Singing. Front Neurosci 2020; 14:619. [PMID: 32625057 PMCID: PMC7315043 DOI: 10.3389/fnins.2020.00619] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Accepted: 05/19/2020] [Indexed: 01/30/2023] Open
Abstract
Music is believed to work as a bio-social tool enabling groups of people to establish joint action and group bonding experiences. However, little is known about the quality of the group members' interaction needed to bring about these effects. To investigate the role of interaction quality, and its effect on joint action and bonding experience, we asked dyads (two singers) to perform music in medieval "hocket" style, in order to engage their co-regulatory activity. The music contained three relative inter-onset-interval (IOI) classes: quarter note, dotted quarter note and eighth note, marking time intervals between successive onsets (generated by both singers). We hypothesized that singers co-regulated their activity by minimizing prediction errors in view of stable IOI-classes. Prediction errors were measured using a dynamic Bayesian inference approach that allows us to identify three different types of error called fluctuation (micro-timing errors measured in milliseconds), narration (omission errors or misattribution of an IOI to a wrong IOI class), and collapse errors (macro-timing errors that cause the breakdown of a performance). These three types of errors were correlated with the singers' estimated quality of the performance and the experienced sense of joint agency. We let the singers perform either while moving or standing still, under the hypothesis that the moving condition would reduce timing errors and increase We-agency as opposed to Shared-agency (the former portraying a condition in which the performers blend into one another, the latter portraying a joint, but distinct, control of the performance). The results show that estimated quality correlates with fluctuation and narration errors, while agency correlates (to a lesser degree) with narration errors. Somewhat unexpectedly, there was a minor effect of movement, and it was beneficial only for good performers. Joint agency resulted in a "shared," rather than a "we," sense of joint agency. The methodology and findings open up promising avenues for future research on social embodied music interaction.
Affiliation(s)
- Alessandro Dell'Anna
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Department of Psychology, University of Turin, Turin, Italy
- Jeska Buhmann
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Joren Six
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Pieter-Jan Maes
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Marc Leman
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
45
Why do we move to the beat? A multi-scale approach, from physical principles to brain dynamics. Neurosci Biobehav Rev 2020; 112:553-584. [DOI: 10.1016/j.neubiorev.2019.12.024] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 10/20/2019] [Accepted: 12/13/2019] [Indexed: 01/08/2023]
46
Köksal Ersöz E, Aguilar C, Chossat P, Krupa M, Lavigne F. Neuronal mechanisms for sequential activation of memory items: Dynamics and reliability. PLoS One 2020; 15:e0231165. [PMID: 32298290 PMCID: PMC7161983 DOI: 10.1371/journal.pone.0231165] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2019] [Accepted: 03/17/2020] [Indexed: 11/19/2022] Open
Abstract
In this article we present a biologically inspired model of activation of memory items in a sequence. Our model produces two types of sequences, corresponding to two different types of cerebral functions: activation of regular or irregular sequences. The switch between the two types of activation occurs through the modulation of biological parameters, without altering the connectivity matrix. Some of the parameters included in our model are neuronal gain, strength of inhibition, synaptic depression and noise. We investigate how these parameters enable the existence of sequences and influence the type of sequences observed. In particular we show that synaptic depression and noise drive the transitions from one memory item to the next and neuronal gain controls the switching between regular and irregular (random) activation.
Affiliation(s)
- Carlos Aguilar
- Lab by MANTU, Amaris Research Unit, Route des Colles, Biot, France
- Pascal Chossat
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
- Martin Krupa
- Project Team MathNeuro, INRIA-CNRS-UNS, Sophia Antipolis, France
- Université Côte d'Azur, Laboratoire Jean-Alexandre Dieudonné, Nice, France
47
Bonnasse-Gahot L. Efficient Communication in Written and Performed Music. Cogn Sci 2020; 44:e12826. [PMID: 32215961 DOI: 10.1111/cogs.12826] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Revised: 02/05/2020] [Accepted: 02/13/2020] [Indexed: 11/29/2022]
Abstract
Since its inception, Shannon's information theory has attracted interest for the study of language and music. Recently, a wide range of converging studies have shown how efficient communication pervades language, from phonetics to syntax. Efficient principles imply that more resources should be assigned to highly informative items. For instance, average information content was shown to be a better predictor of word length than frequency, revisiting the famous Zipf's law. However, in spite of the success of the efficient communication framework in the study of language and speech, very little work has investigated its relevance in the analysis of music. Here, we examine the organization of harmonic information in two large corpora of Western music, one made of MIDI files directly sequenced from scores, and the other made of MIDI recordings of live performances of highly skilled piano players. We show that there is a clear positive relationship between (contextual) information content of harmonic sequences and two essential musical properties, namely duration and loudness: the more unexpected a harmonic event is, the longer and the louder it is.
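The contextual information content this abstract relies on can be illustrated with a deliberately minimal sketch (invented toy data and a hypothetical helper, not the authors' corpus models): estimate -log2 P(event | previous event) from bigram counts, so that rare transitions score as more surprising than frequent ones.

```python
import math
from collections import Counter

def bigram_information_content(sequence):
    """Estimate the contextual information content -log2 P(x | previous)
    of each event in a symbolic sequence, using bigram counts from the
    sequence itself with add-one (Laplace) smoothing over its alphabet."""
    alphabet = sorted(set(sequence))
    bigrams = Counter(zip(sequence, sequence[1:]))
    context_totals = Counter(sequence[:-1])
    ics = []
    for prev, cur in zip(sequence, sequence[1:]):
        # Smoothing gives unseen transitions nonzero probability mass
        p = (bigrams[(prev, cur)] + 1) / (context_totals[prev] + len(alphabet))
        ics.append(-math.log2(p))
    return ics

# The frequent C->G transition is less surprising (lower IC)
# than the single, rare C->A transition at the end.
chords = ["C", "G", "C", "G", "C", "G", "C", "A"]
ics = bigram_information_content(chords)
```

Under the efficient-communication account summarized above, events with higher values in `ics` would be expected to receive more "resources" in performance, i.e., longer durations and greater loudness.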
48
Zioga I, Harrison PM, Pearce MT, Bhattacharya J, Di Bernardi Luft C. From learning to creativity: Identifying the behavioural and neural correlates of learning to predict human judgements of musical creativity. Neuroimage 2020; 206:116311. [DOI: 10.1016/j.neuroimage.2019.116311] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Revised: 10/18/2019] [Accepted: 10/22/2019] [Indexed: 10/25/2022] Open
49
Cheung VK, Harrison PM, Meyer L, Pearce MT, Haynes JD, Koelsch S. Uncertainty and Surprise Jointly Predict Musical Pleasure and Amygdala, Hippocampus, and Auditory Cortex Activity. Curr Biol 2019; 29:4084-4092.e4. [DOI: 10.1016/j.cub.2019.09.067] [Citation(s) in RCA: 71] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2019] [Revised: 09/11/2019] [Accepted: 09/25/2019] [Indexed: 12/11/2022]
50
Shared neural resources of rhythm and syntax: An ALE meta-analysis. Neuropsychologia 2019; 137:107284. [PMID: 31783081 DOI: 10.1016/j.neuropsychologia.2019.107284] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Accepted: 11/25/2019] [Indexed: 11/20/2022]
Abstract
A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these abilities may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimation (ALE) to localize the shared neural structures engaged in a representative set of musical rhythm (rhythm, beat, and meter) and linguistic syntax (merge, movement, and reanalysis) operations. Rhythm engaged a bilateral sensorimotor network throughout the brain consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, intraparietal lobule, and putamen. By contrast, syntax mostly recruited a left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersections between the rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula, neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis providing detailed anatomical overlap of sensorimotor regions recruited for musical rhythm and linguistic syntax.