1
Cheung VKM, Harrison PMC, Koelsch S, Pearce MT, Friederici AD, Meyer L. Cognitive and sensory expectations independently shape musical expectancy and pleasure. Philos Trans R Soc Lond B Biol Sci 2024; 379:20220420. PMID: 38104601; PMCID: PMC10725761; DOI: 10.1098/rstb.2022.0420.
Abstract
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
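As an illustrative aside, the model-comparison logic described in this abstract can be sketched in a few lines; the simulated ratings and predictor names below are hypothetical, and a BIC-based regression comparison stands in for the paper's full Bayesian model comparison:

```python
# Sketch (not the authors' code): compare regressions predicting expectancy
# ratings from cognitive and/or sensory surprisal. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # e.g., one rating per chord
cognitive = rng.normal(size=n)             # long-term, knowledge-based surprisal
sensory = rng.normal(size=n)               # short-term, acoustic surprisal
ratings = 0.6 * cognitive + 0.3 * sensory + rng.normal(scale=0.5, size=n)

def bic(X, y):
    """BIC of an ordinary least-squares fit with Gaussian residuals."""
    X = np.column_stack([np.ones(len(y)), X])          # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return X.shape[1] * np.log(len(y)) - 2 * loglik    # k*ln(n) - 2*logL

models = {
    "cognitive only": cognitive[:, None],
    "sensory only": sensory[:, None],
    "cognitive + sensory": np.column_stack([cognitive, sensory]),
}
for name, X in models.items():
    print(f"{name:20s} BIC = {bic(X, ratings):.1f}")
# Independent, non-overlapping contributions show up as the combined model
# winning (lowest BIC) over either predictor alone.
```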
Affiliation(s)
- Vincent K. M. Cheung
- Sony Computer Science Laboratories, Inc., Shinagawa-ku, Tokyo 141-0022, Japan
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Institute of Information Science, Academia Sinica, Taipei 115, Taiwan
- Peter M. C. Harrison
- Centre for Music and Science, Faculty of Music, University of Cambridge, 11 West Road, Cambridge CB3 9DP, UK
- Centre for Digital Music, Queen Mary University of London, London E1 4NS, UK
- Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen 5009, Norway
- Marcus T. Pearce
- Centre for Digital Music, Queen Mary University of London, London E1 4NS, UK
- Department of Clinical Medicine, Aarhus University, Aarhus N 8200, Denmark
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster 48149, Germany
2
Pousson JE, Shen YW, Lin YP, Voicikas A, Pipinis E, Bernhofs V, Burmistrova L, Griskova-Bulanova I. Exploring Spatio-Spectral Electroencephalogram Modulations of Imbuing Emotional Intent During Active Piano Playing. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4347-4356. PMID: 37883285; DOI: 10.1109/TNSRE.2023.3327740.
Abstract
Imbuing emotional intent serves as a crucial modulator of music improvisation during active musical instrument playing. However, most neural studies of improvisation have been conducted without considering the emotional context. This study attempts to characterize reproducible spatio-spectral electroencephalogram (EEG) modulations of emotional intent using a data-driven independent component analysis framework in an ecological multiday piano-playing experiment. Using a four-day 32-channel EEG dataset from 10 professional players, we showed that EEG patterns were substantially affected by both intra- and inter-individual variability underlying the emotional intent of the dichotomized valence (positive vs. negative) and arousal (high vs. low) categories. Fewer than half (3-4) of the 10 participants exhibited day-reproducible (≥ 3 days) spectral modulations at the right frontal beta in response to the valence contrast, and at the frontal central gamma and the superior parietal alpha for the arousal contrast. The observed frontal engagement, in particular, may help clarify the role of the frontal cortex (e.g., dorsolateral prefrontal cortex and anterior cingulate cortex) in mediating emotional processes and in expressing spectral signatures that are relatively resistant to natural EEG variability. Such ecologically vivid EEG findings may inform the development of a brain-computer music interface capable of guiding the training, performance, and appreciation of emotional improvisatory states, or of actuating music interaction via emotional context.
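As an illustrative aside, the kind of spectral feature extraction such analyses build on can be sketched as follows; the simulated recording, sampling rate, and band definitions are assumptions, not the study's exact ICA-based pipeline:

```python
# Sketch: per-channel band power from multichannel EEG via Welch's method.
# Data are simulated; band edges follow convention, not the paper's settings.
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate (Hz), assumed
n_ch, n_samp = 32, fs * 60                # 32 channels, 60 s of playing data
rng = np.random.default_rng(1)
eeg = rng.normal(size=(n_ch, n_samp))     # stand-in for one playing epoch

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)   # psd: (n_ch, n_freqs)

band_power = {
    name: psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
    for name, (lo, hi) in bands.items()
}
# band_power["beta"][ch] would then be compared across emotional-intent
# conditions (e.g., positive vs. negative valence) and across days.
```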
3
Chander A, Aslin RN. Expectation adaptation for rare cadences in music: Item order matters in repetition priming. Cognition 2023; 240:105601. PMID: 37604028; PMCID: PMC10501749; DOI: 10.1016/j.cognition.2023.105601.
Abstract
Humans make predictions about future events in many domains, including when they listen to music. Previous accounts of harmonic expectation in music have emphasised the role of implicit musical knowledge acquired in the long term through the mechanism of statistical learning. However, it is not known whether listeners can adapt their expectations for unusual harmonies in the short term through repetition priming, and whether the extent of any short-term adaptation depends on the unfolding statistical structure of the music. To explore these possibilities, we presented 150 participants with phrases from Bach chorales that ended with a cadence that was either a priori likely or unlikely based on the long-term statistical structure of the corpus of chorales. While holding the 50-50 incidence of likely vs. unlikely cadences constant, we manipulated the order in which these phrases were presented such that the local probability of hearing an unlikely cadence changed throughout the experiment. For each phrase, participants provided two judgements: (a) a prospective rating of how confident they were in their expectations for the cadence, and (b) a retrospective rating of how well the presented cadence matched their expectations. While confidence ratings increased over the course of the experiment, the rate of change decreased as the local probability of an unexpected cadence increased. Participants' expectations favoured likely cadences over unlikely cadences on average, but their expectation ratings for unlikely cadences increased at a faster rate over the course of the experiment than for likely cadences, particularly when the local probability of hearing an unlikely cadence was high. Thus, despite entrenched long-term statistics about cadences, listeners can indeed adapt to unusual musical harmonies and are sensitive to the local statistical structure of the musical environment. We suggest that this adaptation is an instance of Bayesian belief updating, a domain-general process that accounts for expectation adaptation in multiple domains.
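As an illustrative aside, the Bayesian belief-updating account suggested in this abstract can be sketched with a Beta-Bernoulli learner; the prior counts below are hypothetical stand-ins for entrenched long-term cadence statistics:

```python
# Sketch: a Beta-Bernoulli learner updates its belief that the next cadence
# will be "unlikely" after each trial. Prior counts (a, b) are hypothetical.
from typing import List

def update_beliefs(trials: List[int], a: float = 2.0, b: float = 8.0) -> List[float]:
    """trials: 1 = unlikely cadence heard, 0 = likely cadence heard.
    Returns the posterior-mean P(unlikely) held before each trial."""
    expectations = []
    for outcome in trials:
        expectations.append(a / (a + b))   # current belief
        a += outcome                       # Beta posterior update
        b += 1 - outcome
    return expectations

# A local run of unlikely cadences pushes expectations up, mimicking
# faster adaptation when their local probability is high:
print(update_beliefs([0, 1, 1, 1, 0, 1]))
```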
Affiliation(s)
- Aditya Chander
- Department of Music, Yale University, 469 College St, New Haven, CT 06511, USA.
- Richard N Aslin
- Child Study Center, Yale School of Medicine, 230 S Frontage Rd, New Haven, CT 06519, USA; Department of Psychology, Yale University, 405 Temple St, New Haven, CT 06511, USA
4
Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. PMID: 37005063; PMCID: PMC10505454; DOI: 10.1093/cercor/bhad087.
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen
- Department of Cognitive Sciences, Rice University, TX 77005, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere
- Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley
- Psychology & Language Sciences, UCL, London, WC1N 1PF, United Kingdom
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
5
Jiang J, Liu F, Zhou L, Chen L, Jiang C. Explicit processing of melodic structure in congenital amusia can be improved by redescription-associate learning. Neuropsychologia 2023; 182:108521. PMID: 36870471; DOI: 10.1016/j.neuropsychologia.2023.108521.
Abstract
Congenital amusia is a neurodevelopmental disorder of musical processing. Previous research demonstrates that although explicit musical processing is impaired in congenital amusia, implicit musical processing can be intact. However, little is known about whether implicit knowledge could improve explicit musical processing in individuals with congenital amusia. To this end, we developed a training method utilizing redescription-associate learning, aimed at transferring implicit representations of perceptual states into explicit forms through verbal description and then establishing associations between the reported perceptual states and responses via feedback, to investigate whether the explicit processing of melodic structure could be improved in individuals with congenital amusia. Sixteen amusics and 11 controls rated the degree of expectedness of melodies during EEG recording before and after training. In the interim, half of the amusics received nine training sessions on melodic structure, while the other half received no training. Results, based on effect size estimation, showed that at pretest, amusics but not controls failed to explicitly distinguish the regular from the irregular melodies and to exhibit an ERAN in response to the irregular endings. At posttest, trained but not untrained amusics performed as well as controls at both the behavioral and neural levels. At the 3-month follow-up, the training effects were still maintained. These findings present novel electrophysiological evidence of neural plasticity in the amusic brain, suggesting that redescription-associate learning may be an effective method to remediate impaired explicit processes for individuals with other neurodevelopmental disorders who have intact implicit knowledge.
Affiliation(s)
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai 200234, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading RG6 6AL, UK
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai 200234, China
- Liaoliao Chen
- Foreign Languages College, Shanghai Normal University, Shanghai 200234, China
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai 200234, China
6
Basiński K, Quiroga-Martinez DR, Vuust P. Temporal hierarchies in the predictive processing of melody - From pure tones to songs. Neurosci Biobehav Rev 2023; 145:105007. PMID: 36535375; DOI: 10.1016/j.neubiorev.2022.105007.
Abstract
Listening to musical melodies is a complex task that engages perceptual and memory-related processes. The processes underlying melody cognition happen simultaneously on different timescales, ranging from milliseconds to minutes. Although attempts have been made, research on melody perception is yet to produce a unified framework of how melody processing is achieved in the brain. This may in part be due to the difficulty of integrating concepts such as perception, attention and memory, which pertain to different temporal scales. Recent theories on brain processing, which hold prediction as a fundamental principle, offer potential solutions to this problem and may provide a unifying framework for explaining the neural processes that enable melody perception on multiple temporal levels. In this article, we review empirical evidence for predictive coding on the levels of pitch formation, basic pitch-related auditory patterns, more complex regularity processing extracted from basic patterns, and long-term expectations related to musical syntax. We also identify areas that would benefit from further inquiry and suggest future directions in research on musical melody perception.
Affiliation(s)
- Krzysztof Basiński
- Division of Quality of Life Research, Medical University of Gdańsk, Poland
- David Ricardo Quiroga-Martinez
- Helen Wills Neuroscience Institute & Department of Psychology, University of California, Berkeley, USA
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
- Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music, Denmark
7
Schiavio A, Maes PJ, van der Schyff D. The dynamics of musical participation. Musicae Scientiae 2022; 26:604-626. PMID: 36090466; PMCID: PMC9449429; DOI: 10.1177/1029864920988319.
Abstract
In this paper we argue that our comprehension of musical participation (the complex network of interactive dynamics involved in collaborative musical experience) can benefit from an analysis inspired by the existing frameworks of dynamical systems theory and coordination dynamics. These approaches can offer novel theoretical tools to help music researchers describe a number of central aspects of joint musical experience in greater detail, such as prediction, adaptivity, social cohesion, reciprocity, and reward. While most musicians involved in collective forms of musicking already have some familiarity with these terms and their associated experiences, we currently lack an analytical vocabulary to approach them in a more targeted way. To fill this gap, we adopt insights from these frameworks to suggest that musical participation may be advantageously characterized as an open, non-equilibrium, dynamical system. In particular, we suggest that research informed by dynamical systems theory might stimulate new interdisciplinary scholarship at the crossroads of musicology, psychology, philosophy, and cognitive (neuro)science, pointing toward new understandings of the core features of musical participation.
Affiliation(s)
- Andrea Schiavio
- Centre for Systematic Musicology, University of Graz, Glacisstraße 27a, 8010 Graz, Austria
- Pieter-Jan Maes
- IPEM, Department of Art, Music, and Theatre Sciences, Ghent University, Belgium
8
Kern P, Heilbron M, de Lange FP, Spaak E. Cortical activity during naturalistic music listening reflects short-range predictions based on long-term experience. eLife 2022; 11:e80935. PMID: 36562532; PMCID: PMC9836393; DOI: 10.7554/eLife.80935.
Abstract
Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music, while recording neural activity using MEG. We quantified note-level melodic surprise and uncertainty using various computational models of music, including a state-of-the-art transformer neural network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked melodic surprise, particularly around 200 ms and 300-500 ms after note onset. This neural surprise response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by computational models that incorporated long-term statistical learning, rather than by simple, Gestalt-like principles. Yet, intriguingly, the surprise reflected primarily short-range musical contexts of less than ten notes. We present a full replication of our novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during naturalistic music listening.
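As an illustrative aside, a time-resolved regression of the kind described in this abstract can be sketched as follows; the data, lag grid, and ridge penalty are hypothetical stand-ins for the authors' pipeline, which would also include nuisance regressors (e.g., acoustics) and cross-validation:

```python
# Sketch: regress note-locked sensor activity at each lag after note onset
# on model-derived, note-level surprise. All data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_notes, n_lags = 500, 60                  # 60 lags ~ 0-600 ms at 100 Hz
surprise = rng.normal(size=n_notes)        # surprise estimate per note
# neural: note-locked epochs for one sensor, shape (n_notes, n_lags),
# with a planted effect peaking around lag 25 (~250 ms)
neural = 0.4 * surprise[:, None] * np.exp(-((np.arange(n_lags) - 25) ** 2) / 50) \
         + rng.normal(scale=1.0, size=(n_notes, n_lags))

X = np.column_stack([np.ones(n_notes), surprise])      # intercept + surprise
ridge = 1.0 * np.eye(X.shape[1])                       # small ridge penalty
# Solve (X'X + lambda*I) beta = X'y at every lag simultaneously:
betas = np.linalg.solve(X.T @ X + ridge, X.T @ neural)  # shape (2, n_lags)

peak = int(np.argmax(np.abs(betas[1])))
print(f"surprise effect peaks at lag {peak} (~{peak * 10} ms after note onset)")
```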
Affiliation(s)
- Pius Kern
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Micha Heilbron
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
- Eelke Spaak
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, Netherlands
9
Neural correlates of acoustic dissonance in music: The role of musicianship, schematic and veridical expectations. PLoS One 2021; 16:e0260728. PMID: 34852008; PMCID: PMC8635369; DOI: 10.1371/journal.pone.0260728.
Abstract
In Western music, harmonic expectations can be fulfilled or broken by unexpected chords. Musical irregularities in the absence of auditory deviance elicit well-studied neural responses (e.g., ERAN, P3, N5). These responses are sensitive to schematic expectations (induced by syntactic rules of chord succession) and veridical expectations about predictability (induced by experimental regularities). However, the cognitive and sensory contributions to these responses, and their plasticity as a result of musical training, remain under debate. In the present study, we explored whether the neural processing of purely acoustic violations is affected by schematic and veridical expectations. Moreover, we investigated whether these two factors interact with long-term musical training. In Experiment 1, we registered the ERPs elicited by dissonant clusters placed either at the middle or the ending position of chord cadences. In Experiment 2, we presented listeners with a high proportion of cadences ending in a dissonant chord. In both experiments, we compared the ERPs of musicians and non-musicians. Dissonant clusters elicited distinctive neural responses: an early negativity (EN), the P3, and the N5. While the EN was not affected by syntactic rules, the P3a and P3b were larger for dissonant closures than for middle dissonant chords. Interestingly, these components were larger in musicians than in non-musicians, whereas the N5 showed the opposite pattern. Finally, the predictability of dissonant closures in our experiment did not modulate any of the ERPs. Our study suggests that, at early time windows, dissonance is processed based on acoustic deviance independently of syntactic rules. At longer latencies, however, listeners may engage integration mechanisms and further processes of attentional and structural analysis dependent on musical hierarchies, which are enhanced in musicians.
10
Pousson JE, Voicikas A, Bernhofs V, Pipinis E, Burmistrova L, Lin YP, Griškova-Bulanova I. Spectral Characteristics of EEG during Active Emotional Musical Performance. Sensors (Basel) 2021; 21:7466. PMID: 34833541; PMCID: PMC8620396; DOI: 10.3390/s21227466.
Abstract
Research on the neural correlates of intentional emotion communication by music performers is still limited. In this study, we evaluated EEG patterns recorded from musicians who were instructed to perform a simple piano score while manipulating their manner of play to express specific contrasting emotions, and to self-rate the emotion they conveyed on scales of arousal and valence. In the emotional playing task, participants were instructed to improvise variations in a manner that communicates the targeted emotion. In contrast, in the neutral playing task, participants were asked to play the same piece precisely as written, to obtain control data on the general patterns of motor and sensory activation during playing. Spectral analysis of the signal was applied as an initial step to connect the findings to the wider field of music-emotion research. The experimental contrast of emotional vs. neutral playing was employed to probe brain activity patterns differentially involved in distinct emotional states. The emotional and neutral playing tasks differed considerably with respect to the intended arousal and valence levels of the emotion to be conveyed. EEG activity differences were observed between distressed/excited and neutral/depressed/relaxed playing.
Affiliation(s)
- Jachin Edward Pousson
- Jāzeps Vītols Latvian Academy of Music, LV-1050 Riga, Latvia
- Aleksandras Voicikas
- Department of Neurobiology and Biophysics, Vilnius University, LT-10257 Vilnius, Lithuania
- Valdis Bernhofs
- Jāzeps Vītols Latvian Academy of Music, LV-1050 Riga, Latvia
- Evaldas Pipinis
- Department of Neurobiology and Biophysics, Vilnius University, LT-10257 Vilnius, Lithuania
- Lana Burmistrova
- Jāzeps Vītols Latvian Academy of Music, LV-1050 Riga, Latvia
- Yuan-Pin Lin
- Institute of Medical Science and Technology, National Sun Yat-sen University, Kaohsiung 80424, Taiwan
- Department of Electrical Engineering, National Sun Yat-sen University, Lienhai Road, Kaohsiung 80424, Taiwan
- Inga Griškova-Bulanova
- Department of Neurobiology and Biophysics, Vilnius University, LT-10257 Vilnius, Lithuania
- Correspondence: Tel.: +37-067110954
11
Recursive music elucidates neural mechanisms supporting the generation and detection of melodic hierarchies. Brain Struct Funct 2020; 225:1997-2015. PMID: 32591927; PMCID: PMC7473971; DOI: 10.1007/s00429-020-02105-7.
Abstract
The ability to generate complex hierarchical structures is a crucial component of human cognition which can be expressed in the musical domain in the form of hierarchical melodic relations. The neural underpinnings of this ability have been investigated by comparing the perception of well-formed melodies with unexpected sequences of tones. However, these contrasts do not specifically target the representation of the rules generating hierarchical structure. Here, we present a novel paradigm in which identical melodic sequences are generated in four steps, according to three different rules: the Recursive rule, which generates new hierarchical levels at each step; the Iterative rule, which adds tones within a fixed hierarchical level without generating new levels; and a control (Repetition) rule that simply repeats the third step. Using fMRI, we compared brain activity across these rules when participants imagined the fourth step after listening to the third (generation phase), and when they listened to a fourth step (test sound phase) that was either well-formed or a violation. We found that, in comparison with Repetition and Iteration, imagining the fourth step using the Recursive rule activated the superior temporal gyrus (STG). During the test sound phase, we found fronto-temporo-parietal activity and hippocampal de-activation when processing violations, but no differences between rules. STG activation during the generation phase suggests that generating new hierarchical levels from previous steps might rely on retrieving appropriate melodic hierarchy schemas. Previous findings highlighting the role of the hippocampus and inferior frontal gyrus may reflect the processing of unexpected melodic sequences, rather than hierarchy generation per se.
12
Tillmann B, Poulin-Charronnat B, Gaudrain E, Akhoun I, Delbé C, Truy E, Collet L. Implicit Processing of Pitch in Postlingually Deafened Cochlear Implant Users. Front Psychol 2019; 10:1990. PMID: 31572253; PMCID: PMC6749036; DOI: 10.3389/fpsyg.2019.01990.
Abstract
Cochlear implant (CI) users can only access limited pitch information through their device, which hinders music appreciation. Poor music perception may not only be due to CI technical limitations; lack of training or negative attitudes toward the electric sound might also contribute to it. Our study investigated with an implicit (indirect) method whether poorly transmitted pitch information, presented as musical chords, can activate listeners' knowledge about musical structures acquired prior to deafness. Seven postlingually deafened adult CI users participated in a musical priming paradigm investigating pitch processing without explicit judgments. Sequences made of eight sung chords that ended on either a musically related (expected) target chord or a less-related (less-expected) target chord were presented. The use of a priming task based on linguistic features allowed CI patients to perform fast judgments on target chords in the sung music. If listeners' musical knowledge is activated and allows for tonal expectations (as in normal-hearing listeners), faster response times would be expected for related targets than for less-related targets. However, if the pitch percept is too different and does not activate musical knowledge acquired prior to deafness, storing pitch information in a short-term memory buffer would predict the opposite pattern. If the transmitted pitch information is too poor, no difference in response times should be observed. Results showed that CI patients were able to perform the linguistic task on the sung chords, but correct response times indicated sensory priming, with faster response times observed for the less-related targets: CI patients processed at least some of the pitch information of the musical sequences, which was stored in auditory short-term memory and influenced chord processing. This finding suggests that the signal transmitted via electric hearing led to a pitch percept that was too different from that based on acoustic hearing, so that it did not automatically activate listeners' previously acquired knowledge of musical structure. However, the transmitted signal seems sufficiently informative to lead to sensory priming. These findings are encouraging for the development of pitch-related training programs for CI patients, despite the current technological limitations of CI coding.
Affiliation(s)
- Barbara Tillmann
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France
- Bénédicte Poulin-Charronnat
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; LEAD-CNRS, UMR5022, Université Bourgogne Franche-Comté, Dijon, France
- Etienne Gaudrain
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Idrick Akhoun
- School of Psychological Sciences, The University of Manchester, Manchester, United Kingdom
- Charles Delbé
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; LEAD-CNRS, UMR5022, Université Bourgogne Franche-Comté, Dijon, France
- Eric Truy
- University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; CNRS UMR5292, INSERM U1028, Brain Dynamics and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Lionel Collet
- University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France
13
Zhou L, Liu F, Jiang J, Jiang H, Jiang C. Abnormal neural responses to harmonic syntactic structures in congenital amusia. Psychophysiology 2019; 56:e13394. PMID: 31111968; DOI: 10.1111/psyp.13394.
Abstract
In music, harmonic syntactic structures are organized hierarchically through local and long-distance dependencies. This study investigated whether congenital amusia, a neurodevelopmental disorder of pitch perception, is associated with impaired processing of harmonic syntactic structures. For stimuli, we used harmonic sequences containing two phrases, where the first phrase ended with a half cadence and the second with an authentic cadence. In Experiment 1, we manipulated the ending chord of the authentic cadence to be either syntactically regular or irregular based on local dependencies. Sixteen amusics and 16 controls judged the expectedness of these chords while their EEG waveforms were recorded. In comparison to the regular endings, irregular endings elicited an ERAN, an N5, and a late positive component in controls but not in amusics, indicating that amusics were impaired in processing local syntactic dependencies. In Experiment 2, we manipulated the half cadence of the harmonic sequences to either adhere to or violate long-distance syntactic dependencies. In response to irregular harmonic sequences, an ERAN-like component and an N5 were elicited in controls but not in amusics, suggesting that amusics were impaired in processing long-distance syntactic dependencies. Furthermore, for controls, the neural processing of local and long-distance syntactic dependencies was correlated at the later integration stage but not at the early detection stage. These findings indicate that amusia is associated with impairment in the detection and integration of local and long-distance syntactic violations. The implications of these findings in terms of hierarchical music-syntactic processing are discussed.
Affiliation(s)
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Hanyuan Jiang
- Faculty of Humanities and Arts, Macau University of Science and Technology, Macau, China
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
14
Sears DRW, Pearce MT, Spitzer J, Caplin WE, McAdams S. Expectations for tonal cadences: Sensory and cognitive priming effects. Q J Exp Psychol (Hove) 2018; 72:1422-1438. DOI: 10.1177/1747021818814472.
Abstract
Studies examining the formation of melodic and harmonic expectations during music listening have repeatedly demonstrated that a tonal context primes listeners to expect certain (tonally related) continuations over others. However, few such studies have (1) selected stimuli using ready examples of expectancy violation derived from real-world instances of tonal music, (2) provided a consistent account for the influence of sensory and cognitive mechanisms on tonal expectancies by comparing different computational simulations, or (3) combined melodic and harmonic representations in modelling cognitive processes of expectation. To resolve these issues, this study measures expectations for the most recurrent cadence patterns associated with tonal music and then simulates the reported findings using three sensory–cognitive models of auditory expectation. In Experiment 1, participants provided explicit retrospective expectancy ratings both before and after hearing the target melodic tone and chord of the cadential formula. In Experiment 2, participants indicated as quickly as possible whether those target events were in or out of tune relative to the preceding context. Across both experiments, cadences terminating with stable melodic tones and chords elicited the highest expectancy ratings and the fastest and most accurate responses. Moreover, the model simulations supported a cognitive interpretation of tonal processing, in which listeners with exposure to tonal music generate expectations as a consequence of the frequent (co-)occurrence of events on the musical surface.
Affiliation(s)
- David RW Sears
- College of Visual & Performing Arts, Texas Tech University, Lubbock, TX, USA
- McGill University, Montreal, QC, Canada
15
Sun Y, Lu X, Ho HT, Johnson BW, Sammler D, Thompson WF. Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia. Neuroimage Clin 2018; 19:640-651. PMID: 30013922; PMCID: PMC6022360; DOI: 10.1016/j.nicl.2018.05.032.
Abstract
Evidence is accumulating that similar cognitive resources are engaged to process syntactic structure in music and language. Congenital amusia, a neurodevelopmental disorder that primarily affects music perception, including musical syntax, provides a special opportunity to understand the nature of this overlap. Using electroencephalography (EEG), we investigated whether individuals with congenital amusia have parallel deficits in processing language syntax in comparison to control participants. Twelve amusic participants (eight females) and 12 control participants (eight females) were presented with melodies in one session, and spoken sentences in another session, both of which contained syntactic-congruent and -incongruent stimuli. They were asked to complete a music-related and a language-related task that were irrelevant to the syntactic incongruities. Our results show that amusic participants exhibit impairments in the early stages of both music- and language-syntactic processing. Specifically, we found that two event-related potential (ERP) components, namely the Early Right Anterior Negativity (ERAN) and the Left Anterior Negativity (LAN), associated with music- and language-syntactic processing respectively, were absent in the amusia group. However, at later processing stages, amusics showed similar brain responses as controls to syntactic incongruities in both music and language. This was reflected in a normal N5 in response to melodies and a normal P600 to spoken sentences. Notably, amusics' parallel music- and language-syntactic impairments were not accompanied by deficits in semantic processing (indexed by a normal N400 in response to semantic incongruities). Together, our findings provide further evidence for shared music and language syntactic processing, particularly at early stages of processing. Highlights: Amusics displayed abnormal brain responses to music-syntactic irregularities. They also exhibited abnormal brain responses to language-syntactic irregularities. These impairments affect an early stage of syntactic processing, not a later stage. Music and language involve similar cognitive mechanisms for processing syntax.
Affiliation(s)
- Yanan Sun
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Xuejing Lu
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia; CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Hao Tam Ho
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa 56126, Italy; School of Psychology, University of Sydney, New South Wales 2006, Australia
- Blake W Johnson
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia
16
Abstract
Over tens of thousands of years of human genetic and cultural evolution, many types and varieties of music and language have emerged; however, the fundamental components of each of these modes of communication seem to be common to all human cultures and social groups. In this brief review, rather than focusing on the development of different musical techniques and practices over time, the main issues addressed here concern: (i) when, and speculations as to why, modern Homo sapiens evolved musical behaviors, (ii) the evolutionary relationship between music and language, and (iii) why humans, perhaps unique among all living species, universally continue to possess two complementary but distinct communication streams. Did music exist before language, or vice versa, or was there a common precursor that in some way separated into two distinct yet still overlapping systems when cognitively modern H. sapiens evolved? A number of theories put forward to explain the origin and persistent universality of music are considered, but emphasis is given, supported by recent neuroimaging, physiological, and psychological findings, to the role that music can play in promoting trust, altruistic behavior, social bonding, and cooperation within groups of culturally compatible but not necessarily genetically related humans. It is argued that, early in our history, the unique socializing and harmonizing power of music acted as an essential counterweight to the new and evolving sense of self, to an emerging sense of individuality and mortality that was linked to the development of an advanced cognitive capacity and articulate language capability.
Affiliation(s)
- Alan R Harvey
- School of Human Sciences, The University of Western Australia, Perron Institute for Neurological and Translational Science, Perth, WA, Australia
17
Tichko P, Skoe E. Musical Experience, Sensorineural Auditory Processing, and Reading Subskills in Adults. Brain Sci 2018; 8:77. PMID: 29702572; PMCID: PMC5977068; DOI: 10.3390/brainsci8050077.
Abstract
Developmental research suggests that sensorineural auditory processing, reading subskills (e.g., phonological awareness and rapid naming), and musical experience are related during early periods of reading development. Interestingly, recent work suggests that these relations may extend into adulthood, with indices of sensorineural auditory processing relating to global reading ability. However, it is largely unknown whether sensorineural auditory processing relates to specific reading subskills, such as phonological awareness and rapid naming, as well as musical experience in mature readers. To address this question, we recorded electrophysiological responses to a repeating click (auditory stimulus) in a sample of adult readers. We then investigated relations between electrophysiological responses to sound, reading subskills, and musical experience in this same set of adult readers. Analyses suggest that sensorineural auditory processing, reading subskills, and musical experience are related in adulthood, with faster neural conduction times and greater musical experience associated with stronger rapid-naming skills. These results are similar to the developmental findings that suggest reading subskills are related to sensorineural auditory processing and musical experience in children.
Affiliation(s)
- Parker Tichko
- Department of Psychological Sciences, Developmental Psychology Division, University of Connecticut, Storrs, CT 06269, USA
- Erika Skoe
- Department of Psychological Sciences, Developmental Psychology Division, University of Connecticut, Storrs, CT 06269, USA
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT 06269, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT 06269, USA
18
Roncaglia-Denissen MP, Bouwer FL, Honing H. Decision Making Strategy and the Simultaneous Processing of Syntactic Dependencies in Language and Music. Front Psychol 2018; 9:38. PMID: 29441035; PMCID: PMC5797648; DOI: 10.3389/fpsyg.2018.00038.
Abstract
Despite differences in their function and domain-specific elements, syntactic processing in music and language is believed to share cognitive resources. This study aims to investigate whether the simultaneous processing of language and music shares a common syntactic processor or draws on more general attentional resources. To investigate this matter we tested musicians and non-musicians using visually presented sentences and aurally presented melodies containing local and long-distance syntactic dependencies. Accuracy rates and reaction times of participants' responses were collected. In both sentences and melodies, unexpected syntactic anomalies were introduced. This is the first study to address the processing of local and long-distance dependencies in language and music combined while reducing the effect of sensory memory. Participants were instructed to focus on language (language session), music (music session), or both (dual session). In the language session, musicians and non-musicians performed comparably in terms of accuracy rates and reaction times. As expected, group differences appeared in the music session, with musicians being more accurate in their responses than non-musicians, and only the latter showing an interaction between the accuracy rates for music and language syntax. In the dual session, musicians were overall more accurate than non-musicians. However, both groups showed comparable behavior, displaying an interaction between the accuracy rates for language and music syntax responses. In our study, accuracy rates seem to better capture the interaction between language and music syntax, and this interaction seems to indicate the use of distinct but interacting mechanisms as part of a decision-making strategy, subject to attentional load and domain proficiency. Our study contributes to the long-standing debate about the commonalities between language and music by providing evidence for their interaction at a more domain-general level.
Affiliation(s)
- M. P. Roncaglia-Denissen
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Fleur L. Bouwer
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
19
Barrett FS, Janata P. Neural responses to nostalgia-evoking music modeled by elements of dynamic musical structure and individual differences in affective traits. Neuropsychologia 2016; 91:234-246. DOI: 10.1016/j.neuropsychologia.2016.08.012.
20
Patel AD, Morgan E. Exploring Cognitive Relations Between Prediction in Language and Music. Cogn Sci 2016; 41 Suppl 2:303-320. DOI: 10.1111/cogs.12411.
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology, Tufts University
- Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto
21
Discrimination of tonal and atonal music in congenital amusia: The advantage of implicit tasks. Neuropsychologia 2016; 85:10-18. DOI: 10.1016/j.neuropsychologia.2016.02.027.
22
Mathias B, Tillmann B, Palmer C. Sensory, Cognitive, and Sensorimotor Learning Effects in Recognition Memory for Music. J Cogn Neurosci 2016; 28:1111-1126. PMID: 27027544; DOI: 10.1162/jocn_a_00958.
Abstract
Recent research suggests that perception and action are strongly interrelated and that motor experience may aid memory recognition. We investigated the role of motor experience in auditory memory recognition processes by musicians using behavioral, ERP, and neural source current density measures. Skilled pianists learned one set of novel melodies by producing them and another set by perception only. Pianists then completed an auditory memory recognition test during which the previously learned melodies were presented with or without an out-of-key pitch alteration while the EEG was recorded. Pianists indicated whether each melody was altered from or identical to one of the original melodies. Altered pitches elicited a larger N2 ERP component than original pitches, and pitches within previously produced melodies elicited a larger N2 than pitches in previously perceived melodies. Cortical motor planning regions were more strongly activated within the time frame of the N2 following altered pitches in previously produced melodies compared with previously perceived melodies, and larger N2 amplitudes were associated with greater detection accuracy following production learning than perception learning. Early sensory (N1) and later cognitive (P3a) components elicited by pitch alterations correlated with predictions of sensory echoic and schematic tonality models, respectively, but only for the perception learning condition, suggesting that production experience alters the extent to which performers rely on sensory and tonal recognition cues. These findings provide evidence for distinct time courses of sensory, schematic, and motoric influences within the same recognition task and suggest that learned auditory-motor associations influence responses to out-of-key pitches.
23
Maes PJ. Sensorimotor Grounding of Musical Embodiment and the Role of Prediction: A Review. Front Psychol 2016; 7:308. PMID: 26973587; PMCID: PMC4778011; DOI: 10.3389/fpsyg.2016.00308.
Abstract
In a previous article, we reviewed empirical evidence demonstrating action-based effects on music perception to substantiate the musical embodiment thesis (Maes et al., 2014). That evidence was largely based on studies demonstrating that music perception automatically engages motor processes, or that body states/movements influence music perception. Here, we argue that more rigorous evidence is needed before any decisive conclusion in favor of a "radical" musical embodiment thesis can be posited. In the current article, we provide a focused review of recent research to collect further evidence for the "radical" embodiment thesis that music perception is a dynamic process firmly rooted in the natural disposition of sounds and the human auditory and motor system. We emphasize, though, that on top of these natural dispositions, long-term processes operate, rooted in repeated sensorimotor experiences and leading to learning, prediction, and error minimization. This approach sheds new light on the development of musical repertoires, and may refine our understanding of action-based effects on music perception as discussed in our previous article (Maes et al., 2014). Additionally, we discuss two of our recent empirical studies demonstrating that music performance relies on similar principles of sensorimotor dynamics and predictive processing.
Affiliation(s)
- Pieter-Jan Maes
- IPEM, Department of Art, Music, and Theatre Sciences, Ghent University, Belgium
24
Tillmann B, Bigand E. Response: A commentary on: "Neural overlap in processing music and speech". Front Hum Neurosci 2015; 9:491. PMID: 26441591; PMCID: PMC4584969; DOI: 10.3389/fnhum.2015.00491.
Affiliation(s)
- Barbara Tillmann
- Centre National de la Recherche Scientifique, UMR5292, INSERM U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France; University Lyon 1, Lyon, France
- Emmanuel Bigand
- Centre National de la Recherche Scientifique-LEAD, Université de Bourgogne, Dijon, France; Institut Universitaire de France, Paris, France