1. Cheung VKM, Harrison PMC, Koelsch S, Pearce MT, Friederici AD, Meyer L. Cognitive and sensory expectations independently shape musical expectancy and pleasure. Philos Trans R Soc Lond B Biol Sci 2024; 379:20220420. PMID: 38104601; PMCID: PMC10725761; DOI: 10.1098/rstb.2022.0420.
Abstract
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
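The cognitive-versus-sensory comparison described above can be illustrated with a toy regression: if "cognitive" surprisal (long-term statistics of chord progressions) and "sensory" surprisal (local acoustic novelty) carry non-overlapping information, a joint model should explain more variance in ratings than either alone. A minimal sketch with synthetic data; all numbers are hypothetical, and the actual study used Bayesian model comparison rather than plain least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two predictor families:
# 'cognitive' = surprisal from long-term chord statistics,
# 'sensory'   = acoustic novelty of each chord in its local context.
n = 1000
cognitive = rng.gamma(2.0, 1.0, n)
sensory = rng.gamma(2.0, 1.0, n)

# Simulate ratings depending on both sources plus noise, mimicking the
# finding of independent, non-overlapping contributions.
ratings = 0.6 * cognitive + 0.3 * sensory + rng.normal(0, 0.5, n)

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_cog = r_squared(cognitive[:, None], ratings)
r2_sen = r_squared(sensory[:, None], ratings)
r2_both = r_squared(np.column_stack([cognitive, sensory]), ratings)

# When the predictors carry non-overlapping information, the joint model
# explains more variance than either single-predictor model.
print(round(r2_cog, 3), round(r2_sen, 3), round(r2_both, 3))
```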
Affiliation(s)
- Vincent K. M. Cheung: Sony Computer Science Laboratories, Inc., Shinagawa-ku, Tokyo 141-0022, Japan; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Institute of Information Science, Academia Sinica, Taipei 115, Taiwan
- Peter M. C. Harrison: Centre for Music and Science, Faculty of Music, University of Cambridge, 11 West Road, Cambridge, CB3 9DP, UK; Centre for Digital Music, Queen Mary University of London, E1 4NS, UK
- Stefan Koelsch: Department of Biological and Medical Psychology, University of Bergen, Bergen, 5009, Norway
- Marcus T. Pearce: Centre for Digital Music, Queen Mary University of London, E1 4NS, UK; Department of Clinical Medicine, Aarhus University, Aarhus N, 8200, Denmark
- Angela D. Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lars Meyer: Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster, 48149, Germany
2. Patrick MT, Cohn N, Mertus J, Blumstein SE. Sequences in harmony: Cognitive interactions between musical and visual narrative structure. Acta Psychol (Amst) 2023; 238:103981. PMID: 37441849; DOI: 10.1016/j.actpsy.2023.103981.
Abstract
From film and television to graphic storytelling, tonal music can accompany visual narratives in a variety of contexts. The apprehension of both musical and narrative sequences involves temporal categories in ordered patterning, which raises an interesting question: Do musical progressions and visual narratives rely on shared sequence processing mechanisms? If this is the case, then cues from music and sequential static images, when presented simultaneously, should interact during audiovisual online processing. We tested this question by measuring reaction times to target picture panels appearing in visual narrative (comic strip) sequences, which were presented panel by panel and synchronized with musical chord progressions. Image sequences were either coherent narratives or incoherent (random) panels, and they were aligned with musical accompaniment consisting of coherent tonal chord progressions or non-tonal (unrelated) chords. Reaction times were faster for target images in coherent sequences than incoherent sequences, and even faster for coherent images with tonal accompaniment than non-tonal chords. This indicated an interaction between sequential structures across domains. We take these results as evidence for a shared, domain-general sequence processing mechanism operating across music and visual narrative.
Affiliation(s)
- Morgan T Patrick: Brown University, United States of America; Northwestern University, United States of America
3. Jiang J, Liu F, Zhou L, Chen L, Jiang C. Explicit processing of melodic structure in congenital amusia can be improved by redescription-associate learning. Neuropsychologia 2023; 182:108521. PMID: 36870471; DOI: 10.1016/j.neuropsychologia.2023.108521.
Abstract
Congenital amusia is a neurodevelopmental disorder of musical processing. Previous research demonstrates that although explicit musical processing is impaired in congenital amusia, implicit musical processing can be intact. However, little is known about whether implicit knowledge could improve explicit musical processing in individuals with congenital amusia. To this end, we developed a training method utilizing redescription-associate learning, which aims to transfer implicit representations of perceptual states into explicit forms through verbal description and then to establish associations between the reported perceptual states and responses via feedback, to investigate whether the explicit processing of melodic structure could be improved in individuals with congenital amusia. Sixteen amusics and 11 controls rated the degree of expectedness of melodies during EEG recording before and after training. In the interim, half of the amusics received nine training sessions on melodic structure, while the other half received no training. Results, based on effect size estimation, showed that at pretest, amusics but not controls failed to explicitly distinguish the regular from the irregular melodies and to exhibit an early right anterior negativity (ERAN) in response to the irregular endings. At posttest, trained but not untrained amusics performed as well as controls at both the behavioral and neural levels. At the 3-month follow-up, the training effects were still maintained. These findings present novel electrophysiological evidence of neural plasticity in the amusic brain, suggesting that redescription-associate learning may be an effective method for remediating impaired explicit processes in individuals with other neurodevelopmental disorders who have intact implicit knowledge.
Affiliation(s)
- Jun Jiang: Music College, Shanghai Normal University, Shanghai, 200234, China
- Fang Liu: School of Psychology and Clinical Language Sciences, University of Reading, Reading, RG6 6AL, UK
- Linshu Zhou: Music College, Shanghai Normal University, Shanghai, 200234, China
- Liaoliao Chen: Foreign Languages College, Shanghai Normal University, Shanghai, 200234, China
- Cunmei Jiang: Music College, Shanghai Normal University, Shanghai, 200234, China
4. Malekmohammadi A, Ehrlich SK, Cheng G. Modulation of theta and gamma oscillations during familiarization with previously unknown music. Brain Res 2023; 1800:148198. PMID: 36493897; DOI: 10.1016/j.brainres.2022.148198.
Abstract
Repeated listening to unknown music leads to gradual familiarization with musical sequences. Passively listening to musical sequences may engage an array of dynamic neural responses on the way to familiarization with the excerpts. This study elucidates the dynamic brain response and its variation over time by investigating the electrophysiological changes during familiarization with initially unknown music. Twenty subjects were asked to familiarize themselves with previously unknown 10 s classical music excerpts over three repetitions while their electroencephalogram was recorded. Dynamic spectral changes in neural oscillations were monitored by time-frequency analyses for all frequency bands (theta: 5-9 Hz, alpha: 9-13 Hz, low-beta: 13-21 Hz, high-beta: 21-32 Hz, and gamma: 32-50 Hz). Time-frequency analyses revealed sustained theta event-related desynchronization (ERD) in the frontal-midline and left pre-frontal electrodes, which decreased gradually from the first to the third repetition of the same excerpts (frontal-midline: 57.90%, left-prefrontal: 75.93%). Similarly, sustained gamma ERD decreased in the frontal-midline and bilateral frontal/temporal areas (frontal-midline: 61.47%, left-frontal: 90.88%, right-frontal: 87.74%). During familiarization, the decrease in theta ERD was greater in the first part (1-5 s), whereas the decrease in gamma ERD was greater in the second part (5-9 s) of the music excerpts. The results suggest that decreased theta ERD is associated with successfully identifying familiar sequences, whereas decreased gamma ERD is related to forming representations of unfamiliar sequences.
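The ERD percentages reported above follow the conventional definition of event-related (de)synchronization: the band-power change during an event relative to a reference (baseline) period, ERD% = (A - R) / R * 100, where negative values indicate desynchronization. A minimal sketch with synthetic signals; the sampling rate and the exact band edges are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250  # sampling rate in Hz (hypothetical)

def band_power(x, fs, lo, hi):
    """Mean spectral power of x within [lo, hi) Hz via the FFT."""
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return spec[mask].mean()

def erd_percent(baseline, event, fs, lo, hi):
    """ERD% = (A - R) / R * 100, with R = reference (baseline) power and
    A = power during the event; negative values mean desynchronization."""
    r = band_power(baseline, fs, lo, hi)
    a = band_power(event, fs, lo, hi)
    return (a - r) / r * 100

# Synthetic example: a 7 Hz (theta) oscillation that is strong at baseline
# and attenuated while listening, yielding a negative ERD value.
t = np.arange(0, 2, 1 / fs)
baseline = np.sin(2 * np.pi * 7 * t) + 0.1 * rng.normal(size=t.size)
event = 0.4 * np.sin(2 * np.pi * 7 * t) + 0.1 * rng.normal(size=t.size)

print(round(erd_percent(baseline, event, fs, 5, 9), 1))
```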
Affiliation(s)
- Alireza Malekmohammadi: Chair for Cognitive Systems, Department of Electrical Engineering, Technical University of Munich, Munich, 80333, Germany
- Stefan K Ehrlich: Chair for Cognitive Systems, Department of Electrical Engineering, Technical University of Munich, Munich, 80333, Germany
- Gordon Cheng: Chair for Cognitive Systems, Department of Electrical Engineering, Technical University of Munich, Munich, 80333, Germany
5. Ishida K, Nittono H. Relationship between early neural responses to syntactic and acoustic irregularities in music. Eur J Neurosci 2022; 56:6201-6214. PMID: 36310105; DOI: 10.1111/ejn.15856.
Abstract
Humans can detect various anomalies in a sound sequence without attending to each dimension explicitly. Event-related potentials (ERPs) have been used to examine the processes of auditory deviance detection. Previous research has shown that music-syntactic anomalies elicit early right anterior negativity (ERAN), whereas more general acoustic irregularities elicit mismatch negativity (MMN). Although these ERP components occur in a similar latency range with a similar scalp topography, the relationship between the detection processes they reflect remains unclear. This study compared these components by manipulating music-syntactic (chord progression) and acoustic (intensity) irregularities orthogonally in two experiments. Non-musicians (Experiment 1: N = 39; Experiment 2: N = 24) were asked to listen to chord sequences, each consisting of 5 four-voice chords, as they watched a silent video clip. Standard, harmonic-deviant, intensity-deviant and double-deviant chords occurred at the final position in each sequence. Deviant stimuli were presented infrequently (p = .10) in Experiment 1 and equiprobably (p = .25) in Experiment 2. Regardless of deviance probability, both harmonic and intensity deviants elicited similar negativities, which were indistinguishable in terms of latency or scalp distribution. When the two deviant types occurred simultaneously, the negativity increased in an additive manner; that is, the amplitude of the double-deviant ERP was as large as the sum of the single-deviant ERPs. These findings suggest that the detection of music-syntactic and acoustic irregularities works independently, based on different regularity representations.
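The additivity criterion used here (the double-deviant effect equals the sum of the two single-deviant effects, implying independent detection processes) can be made concrete with synthetic difference waves. The waveforms below are hypothetical and constructed under the independence hypothesis, so the additivity test passes by design; with real ERP data the same comparison would be the empirical question:

```python
import numpy as np

t = np.linspace(0, 0.6, 300)  # 0-600 ms epoch, one value per sample

def component(center, width, amp):
    """A Gaussian-shaped deflection standing in for an ERP component (in µV)."""
    return amp * np.exp(-((t - center) ** 2) / (2 * width ** 2))

standard = component(0.1, 0.02, 2.0)                 # shared obligatory response
harmonic = standard + component(0.2, 0.03, -1.5)     # adds an ERAN-like negativity
intensity = standard + component(0.18, 0.03, -1.0)   # adds an MMN-like negativity

# Under independent generators, the double deviant equals the standard
# response plus BOTH single-deviant effects:
double = standard + (harmonic - standard) + (intensity - standard)

# Additivity test: the double-deviant difference wave should match the sum
# of the two single-deviant difference waves.
lhs = double - standard
rhs = (harmonic - standard) + (intensity - standard)
print(np.allclose(lhs, rhs))
```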
Affiliation(s)
- Kai Ishida: Graduate School of Human Sciences, Osaka University, Osaka, Japan
- Hiroshi Nittono: Graduate School of Human Sciences, Osaka University, Osaka, Japan
6. Tonal structures benefit short-term memory for real music: Evidence from non-musicians and individuals with congenital amusia. Brain Cogn 2022; 161:105881. DOI: 10.1016/j.bandc.2022.105881.
7. Neural correlates of acoustic dissonance in music: The role of musicianship, schematic and veridical expectations. PLoS One 2021; 16:e0260728. PMID: 34852008; PMCID: PMC8635369; DOI: 10.1371/journal.pone.0260728.
Abstract
In Western music, harmonic expectations can be fulfilled or broken by unexpected chords. Musical irregularities in the absence of auditory deviance elicit well-studied neural responses (e.g. ERAN, P3, N5). These responses are sensitive to schematic expectations (induced by syntactic rules of chord succession) and veridical expectations about predictability (induced by experimental regularities). However, the cognitive and sensory contributions to these responses, and their plasticity as a result of musical training, remain under debate. In the present study, we explored whether the neural processing of purely acoustic violations is affected by schematic and veridical expectations. Moreover, we investigated whether these two factors interact with long-term musical training. In Experiment 1, we recorded the ERPs elicited by dissonant clusters placed either at the middle or the final position of chord cadences. In Experiment 2, we presented listeners with a high proportion of cadences ending in a dissonant chord. In both experiments, we compared the ERPs of musicians and non-musicians. Dissonant clusters elicited distinctive neural responses (an early negativity (EN), the P3 and the N5). While the EN was not affected by syntactic rules, the P3a and P3b were larger for dissonant closures than for middle dissonant chords. Interestingly, these components were larger in musicians than in non-musicians, whereas the opposite held for the N5. Finally, the predictability of dissonant closures in our experiment did not modulate any of the ERPs. Our study suggests that, in early time windows, dissonance is processed based on acoustic deviance independently of syntactic rules. At longer latencies, however, listeners may engage integration mechanisms and further processes of attentional and structural analysis dependent on musical hierarchies, which are enhanced in musicians.
8. Pesnot Lerousseau J, Schön D. Musical Expertise Is Associated with Improved Neural Statistical Learning in the Auditory Domain. Cereb Cortex 2021; 31:4877-4890. PMID: 34013316; DOI: 10.1093/cercor/bhab128.
Abstract
It is poorly known whether musical training is associated with improvements in general cognitive abilities, such as statistical learning (SL). In standard SL paradigms, musicians have shown better performances than nonmusicians. However, this advantage could be due to differences in auditory discrimination, in memory, or truly in the ability to learn sequence statistics. Unfortunately, these different hypotheses make similar predictions in terms of expected results. To dissociate them, we developed a Bayesian model and recorded electroencephalography (EEG). Our results confirm that musicians perform approximately 15% better than nonmusicians at predicting items in auditory sequences that embed either low- or high-order statistics. These higher performances are explained in the model by parameters governing the learning of high-order statistics and the selection-stage noise. EEG recordings reveal a neural underpinning of the musicians' advantage: the P300 amplitude correlates with the surprise elicited by each item, and more strongly so for musicians. Finally, early EEG components correlate with the surprise elicited by low-order statistics, whereas late EEG components correlate with the surprise elicited by high-order statistics, and this effect is stronger for musicians. Overall, our results demonstrate that musical expertise is associated with improved neural SL in the auditory domain.

SIGNIFICANCE STATEMENT: It is poorly known whether musical training leads to improvements in general cognitive skills. One fundamental cognitive ability, SL, is thought to be enhanced in musicians, but previous studies have reported mixed results. This is because such a musicians' advantage can embrace very different explanations, such as improvements in auditory discrimination or in memory. To solve this problem, we developed a Bayesian model and recorded EEG to dissociate these explanations. Our results reveal that musical expertise is truly associated with an improved ability to learn sequence statistics, especially high-order statistics. This advantage is reflected in the electroencephalographic recordings, where the P300 amplitude is more sensitive to surprising items in musicians than in nonmusicians.
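The low- versus high-order distinction can be sketched with two incremental ideal observers that assign each item a surprisal of -log2 P(item | context): order 0 uses item frequencies only, order 1 conditions on the previous item. This is an illustrative stand-in, not the authors' actual Bayesian model; the sequence and alphabet are hypothetical:

```python
import numpy as np
from collections import Counter, defaultdict

def surprisal_models(sequence, alphabet):
    """Item-by-item surprisal under two incremental observers:
    order 0 uses overall item frequencies ('low-order statistics'),
    order 1 conditions on the previous item ('higher-order statistics').
    Counts use add-one smoothing and are updated as each item arrives."""
    counts0 = Counter({a: 1 for a in alphabet})
    counts1 = defaultdict(lambda: Counter({a: 1 for a in alphabet}))
    s0, s1 = [], []
    prev = None
    for item in sequence:
        s0.append(-np.log2(counts0[item] / sum(counts0.values())))
        if prev is not None:
            c = counts1[prev]
            s1.append(-np.log2(c[item] / sum(c.values())))
            c[item] += 1
        counts0[item] += 1
        prev = item
    return np.array(s0), np.array(s1)

# A sequence with uniform item frequencies but strong pair structure:
# order-0 surprisal stays near 2 bits, while order-1 surprisal drops as the
# observer learns the transitions (mimicking SL of sequence statistics).
rng = np.random.default_rng(2)
seq = "".join(rng.choice(["AB", "CD"]) for _ in range(200))
s0, s1 = surprisal_models(seq, "ABCD")
print(round(s0.mean(), 2), round(s1[-50:].mean(), 2))
```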
Affiliation(s)
- Daniele Schön: Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
9. Sauvé SA, Cho A, Zendel BR. Mapping Tonal Hierarchy in the Brain. Neuroscience 2021; 465:187-202. PMID: 33774126; DOI: 10.1016/j.neuroscience.2021.03.019.
Abstract
In Western tonal music, pitches are organized hierarchically based on their perceived fit in a specific tonal context. This hierarchy forms the scales commonly used in Western tonal music. The hierarchical nature of tonal structure is well established behaviourally; however, its neural underpinnings are largely unknown. In this study, EEG data and goodness-of-fit ratings were collected from 34 participants who listened to an arpeggio followed by a probe tone, where the probe tone could be any chromatic scale degree and the context any of the major keys. Goodness-of-fit ratings corresponded to the classic tonal hierarchy. The N1, P2 and early right anterior negativity (ERAN) were significantly modulated by scale degree. Furthermore, neural marker amplitudes and latencies correlated significantly, and with similar magnitude, with both pitch height and goodness-of-fit ratings. This differs from the clearer divide between pitch height correlating with early neural markers (100-200 ms) and tonal hierarchy correlating with late neural markers (200-1000 ms) reported by Sankaran et al. (2020) and Quiroga-Martinez et al. (2019). Finally, individual differences were greater than any main effects detected when pooling participants, and brain-behavior correlations varied widely (r = -0.8 to 0.8).
Affiliation(s)
- Sarah A Sauvé: Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1C 5S7, Canada
- Alex Cho: Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1C 5S7, Canada
- Benjamin Rich Zendel: Division of Community Health and Humanities, Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador A1C 5S7, Canada; Aging Research Centre - Newfoundland and Labrador, Grenfell Campus, Memorial University of Newfoundland, Canada
10. Gurariy G, Randall R, Greenberg AS. Manipulation of low-level features modulates grouping strength of auditory objects. Psychol Res 2020; 85:2256-2270. PMID: 32691138; DOI: 10.1007/s00426-020-01391-4.
Abstract
A central challenge of auditory processing involves the segregation, analysis, and integration of acoustic information into auditory perceptual objects for processing by higher order cognitive operations. This study explores the influence of low-level features on auditory object perception. Participants provided perceived musicality ratings in response to randomly generated pure tone sequences. Previous work has shown that music perception relies on the integration of discrete sounds into a holistic structure. Hence, high (versus low) ratings were viewed as indicative of strong (versus weak) object formation. Additionally, participants rated sequences in which random subsets of tones were manipulated along one of three low-level dimensions (timbre, amplitude, or fade-in) at one of three strengths (low, medium, or high). Our primary findings demonstrate how low-level acoustic features modulate the perception of auditory objects, as measured by changes in musicality ratings for manipulated sequences. Secondarily, we used principal component analysis to categorize participants into subgroups based on differential sensitivities to low-level auditory dimensions, thereby highlighting the importance of individual differences in auditory perception. Finally, we report asymmetries regarding the effects of low-level dimensions; specifically, the perceptual significance of timbre. Together, these data contribute to our understanding of how low-level auditory features modulate auditory object perception.
Affiliation(s)
- Gennadiy Gurariy: Department of Biomedical Engineering, Medical College of Wisconsin & Marquette University, Milwaukee, USA
- Richard Randall: School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, USA
- Adam S Greenberg: Department of Biomedical Engineering, Medical College of Wisconsin & Marquette University, Milwaukee, USA
11. Kim CH, Seol J, Jin SH, Kim JS, Kim Y, Yi SW, Chung CK. Increased fronto-temporal connectivity by modified melody in real music. PLoS One 2020; 15:e0235770. PMID: 32639987; PMCID: PMC7343137; DOI: 10.1371/journal.pone.0235770.
Abstract
In real music, the original melody may appear intact, with little elaboration, or significantly modified. Since the melody is the most easily perceived element of music, hearing a significantly modified melody may change brain connectivity. Mozart's KV 265 comprises a theme with the original melody of “Twinkle Twinkle Little Star” and its significant variations. Using magnetoencephalography (MEG), we studied whether effective connectivity between bilateral inferior frontal gyri (IFGs) and Heschl’s gyri (HGs) changes with significantly modified melody. Among the 12 connectivities, the connectivity from the left IFG to the right HG was consistently increased for significantly modified melody compared to the original melody in 2 separate sets with the same rhythmic pattern but different melody (p = 0.005 and 0.034, Bonferroni corrected). Our findings show that modification of an original melody in real music changes brain connectivity.
Affiliation(s)
- Chan Hee Kim: Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea; Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- Jaeho Seol: Human Brain Function Laboratory, Seoul National University, Seoul, Korea; W-Mind Laboratory, Wemakeprice Inc., Seoul, Korea
- Seung-Hyun Jin: Human Brain Function Laboratory, Seoul National University, Seoul, Korea
- June Sic Kim: Human Brain Function Laboratory, Seoul National University, Seoul, Korea; Research Institute of Basic Sciences, Seoul National University, Seoul, Korea
- Youn Kim: Department of Music, School of Humanities, The University of Hong Kong, Pok Fu Lam, Hong Kong
- Suk Won Yi: College of Music, Seoul National University, Seoul, Korea; Western Music Research Institute, Seoul National University, Seoul, Korea
- Chun Kee Chung: Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea; Human Brain Function Laboratory, Seoul National University, Seoul, Korea; Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Korea; Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
12. Sun L, Thompson WF, Liu F, Zhou L, Jiang C. The human brain processes hierarchical structures of meter and harmony differently: Evidence from musicians and nonmusicians. Psychophysiology 2020; 57:e13598. PMID: 32449180; DOI: 10.1111/psyp.13598.
Abstract
The processing of temporal structure has been widely investigated, but evidence on how the brain processes temporal and nontemporal structures simultaneously is sparse. Using event-related potentials (ERPs), we examined how the brain responds to temporal (metric) and nontemporal (harmonic) structures in music simultaneously, and whether these processes are impacted by musical expertise. Fifteen musicians and 15 nonmusicians rated the degree of completeness of musical sequences with or without violations in metric or harmonic structures. In the single violation conditions, the ERP results showed that both musicians and nonmusicians exhibited an early right anterior negativity (ERAN) as well as an N5 to temporal violations ("when"), and only an N5-like response to nontemporal violations ("what"), which were consistent with the behavioral results. In the double violation condition, however, only the ERP results, but not the behavioral results, revealed a significant interaction between temporal and nontemporal violations at a later integrative stage, as manifested by an enlarged N5 effect compared to the single violation conditions. These findings provide the first evidence that the human brain uses different neural mechanisms in processing metric and harmonic structures in music, which may shed light on how the brain generates predictions for "what" and "when" events in the natural environment.
Affiliation(s)
- Lijun Sun: Department of Psychology, Shanghai Normal University, Shanghai, China
- Fang Liu: School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Linshu Zhou: Music College, Shanghai Normal University, Shanghai, China
- Cunmei Jiang: Music College, Shanghai Normal University, Shanghai, China; Institute of Psychology, Shanghai Normal University, Shanghai, China
13. Di Liberto GM, Pelofi C, Bianco R, Patel P, Mehta AD, Herrero JL, de Cheveigné A, Shamma S, Mesgarani N. Cortical encoding of melodic expectations in human temporal cortex. eLife 2020; 9:e51784. PMID: 32122465; PMCID: PMC7053998; DOI: 10.7554/eLife.51784.
Abstract
Human engagement in music rests on underlying elements such as the listener's cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions, demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.
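The temporal response function (TRF) approach referenced above regresses the neural signal on time-lagged copies of stimulus features. A minimal ridge-regression sketch on synthetic data; the kernel shape, lag range, and regularization strength are illustrative choices, not the study's settings:

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix whose columns are time-shifted copies of a
    one-dimensional stimulus feature, one column per lag."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[: n - lag]
    return X

def fit_trf(stim, eeg, lags, ridge=1.0):
    """Ridge TRF: w minimizing ||X w - eeg||^2 + ridge * ||w||^2."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ eeg)

# Synthetic example: the 'EEG' is the stimulus convolved with a known
# response kernel peaking at lag 5 samples, plus noise; the fitted TRF
# should recover that kernel, peak included.
rng = np.random.default_rng(3)
stim = rng.normal(size=5000)
kernel = np.exp(-((np.arange(20) - 5) ** 2) / 8.0)
eeg = np.convolve(stim, kernel, mode="full")[: len(stim)]
eeg += 0.5 * rng.normal(size=len(stim))

w = fit_trf(stim, eeg, lags=list(range(20)))
print(int(np.argmax(w)))  # lag (in samples) of the recovered response peak
```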
Collapse
Affiliation(s)
- Giovanni M Di Liberto
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Claire Pelofi
- Department of Psychology, New York University, New York, United States
- Institut de Neurosciences des Systèmes, UMR S 1106, INSERM, Aix Marseille Université, Marseille, France
- Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, United States
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Ashesh D Mehta
- Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States
- Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
- Jose L Herrero
- Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States
- Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
- Alain de Cheveigné
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- UCL Ear Institute, London, United Kingdom
- Shihab Shamma
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, United States
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, United States
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
14
Kim CH, Kim JS, Choi Y, Kyong JS, Kim Y, Yi SW, Chung CK. Change in left inferior frontal connectivity with less unexpected harmonic cadence by musical expertise. PLoS One 2019; 14:e0223283. [PMID: 31714920 PMCID: PMC6850538 DOI: 10.1371/journal.pone.0223283]
Abstract
In terms of harmonic expectancy, compared to an expected dominant-to-tonic and an unexpected dominant-to-supertonic cadence, a dominant-to-submediant is a less unexpected cadence, the perception of which may depend on the subject’s musical expertise. The present study used magnetoencephalography to investigate how these three cadences are processed in the networks of the bilateral inferior frontal gyri (IFGs) and superior temporal gyri (STGs). We compared the correct rate and brain connectivity in 9 music-majors (mean age, 23.5 ± 3.4 years; musical training period, 18.7 ± 4.0 years) and 10 non-music-majors (mean age, 25.2 ± 2.6 years; musical training period, 4.2 ± 1.5 years). For brain connectivity, we computed the summed partial directed coherence (PDC) values for inflows/outflows to/from each area (sPDCi/sPDCo) in the bilateral IFGs and STGs. In the behavioral responses, music-majors outperformed non-music-majors for all three cadences (p < 0.05). However, sPDCi/sPDCo was prominent only for the dominant-to-submediant in the left IFG: the sPDCi was more strongly enhanced in music-majors than in non-music-majors (p = 0.002, Bonferroni corrected), whereas the opposite held for the sPDCo (p = 0.005, Bonferroni corrected). Our data show that music-majors, with higher musical expertise, are better at identifying a less unexpected cadence than non-music-majors, with connectivity changes centered on the left IFG.
Affiliation(s)
- Chan Hee Kim
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea
- June Sic Kim
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Korea
- Research Institute of Basic Sciences, Seoul National University, Seoul, Korea
- Yunhee Choi
- Medical Research Collaborating Center, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Korea
- Jeong-Sug Kyong
- Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Audiology Institute, Hallym University of Graduate Studies, Seoul, Korea
- Youn Kim
- Department of Music, School of Humanities, The University of Hong Kong, Hong Kong, China
- Suk Won Yi
- College of Music, Seoul National University, Seoul, Korea
- Western Music Research Institute, Seoul National University, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Korea
- Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Korea
- Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Korea
- Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
15
Syntactic processing in music and language: Effects of interrupting auditory streams with alternating timbres. Int J Psychophysiol 2018; 129:31-40. [DOI: 10.1016/j.ijpsycho.2018.05.003]
16
Sun L, Liu F, Zhou L, Jiang C. Musical training modulates the early but not the late stage of rhythmic syntactic processing. Psychophysiology 2017; 55. [PMID: 28833189 DOI: 10.1111/psyp.12983]
Abstract
Syntactic processing is essential for musical understanding. Although the processing of harmonic syntax has been well studied, very little is known about the neural mechanisms underlying rhythmic syntactic processing. The present study investigated the neural processing of rhythmic syntax and whether, and to what extent, long-term musical training impacts such processing. Fourteen musicians and 14 nonmusicians listened to syntactic-regular or syntactic-irregular rhythmic sequences and judged the completeness of these sequences. Nonmusicians, as well as musicians, showed a P600 effect to syntactic-irregular endings, indicating that musical exposure and perceptual learning of music are sufficient to enable nonmusicians to process rhythmic syntax at the late stage. However, musicians, but not nonmusicians, also exhibited an early right anterior negativity (ERAN) response to syntactic-irregular endings, which suggests that musical training modulates only the early, but not the late, stage of rhythmic syntactic processing. These findings reveal for the first time the neural mechanisms underlying the processing of rhythmic syntax in music, with important implications for theories of hierarchically organized music cognition and for comparative studies of syntactic processing in music and language.
Affiliation(s)
- Lijun Sun
- College of Education, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, China
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China; Institute of Psychology, Shanghai Normal University, Shanghai, China
17
Guo S, Koelsch S. Effects of veridical expectations on syntax processing in music: Event-related potential evidence. Sci Rep 2016; 6:19064. [PMID: 26780880 PMCID: PMC4726113 DOI: 10.1038/srep19064]
Abstract
Numerous past studies have investigated neurophysiological correlates of music-syntactic processing. However, little is known about how prior knowledge of an upcoming syntactically irregular event modulates brain correlates of music-syntactic processing. Two versions of a short chord sequence were presented repeatedly to non-musicians (n = 20) and musicians (n = 20). One sequence version ended on a syntactically regular chord, and the other on a syntactically irregular chord. Participants were either informed (cued condition) or not informed (non-cued condition) about whether the sequence would end on the regular or the irregular chord. In the cued condition (compared to the non-cued condition), the early right anterior negativity (ERAN) elicited by irregular chords peaked earlier in both non-musicians and musicians. However, expectations based on knowledge of the upcoming event (veridical expectations) did not influence the amplitude of the ERAN. These results suggest that veridical expectations modulate only the speed, not the principal mechanisms, of music-syntactic processing.
Affiliation(s)
- Shuang Guo
- Department of Educational Sciences & Psychology, Freie Universität Berlin, Berlin 14195, Germany
- Stefan Koelsch
- Department of Educational Sciences & Psychology, Freie Universität Berlin, Berlin 14195, Germany
18
Poikonen H, Alluri V, Brattico E, Lartillot O, Tervaniemi M, Huotilainen M. Event-related brain responses while listening to entire pieces of music. Neuroscience 2015; 312:58-73. [PMID: 26550950 DOI: 10.1016/j.neuroscience.2015.10.061]
Abstract
Brain responses to discrete short sounds have been studied intensively using the event-related potential (ERP) method, in which the electroencephalogram (EEG) signal is divided into epochs time-locked to stimuli of interest. Here we introduce and apply a novel technique that enables one to isolate ERPs elicited in humans by continuous music. The ERPs were recorded during listening to a Tango Nuevo piece, a deep techno track, and an acoustic lullaby. Acoustic features related to timbre, harmony, and dynamics of the audio signal were computationally extracted from the musical pieces. Negative deflections occurring around 100 milliseconds after stimulus onset (N100) and positive deflections occurring around 200 milliseconds after stimulus onset (P200) in response to peak changes in the acoustic features were distinguishable and were often largest for the Tango Nuevo piece. In addition to large changes in these musical features, long phases of low feature values preceding a rapid increase (which we call Preceding Low-Feature Phases) enhanced the amplitudes of the N100 and P200 responses. These ERP responses resembled those to simpler sounds, making it possible to apply the tradition of ERP research to naturalistic paradigms.
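The epoching procedure this abstract rests on (cutting continuous EEG into fixed windows time-locked to events, here acoustic-feature peaks, and averaging them into an ERP estimate) can be sketched as follows. This is a generic illustration with made-up data, sampling rate, and window limits, not the study's actual pipeline.

```python
import numpy as np

def extract_epochs(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Cut windows around event onsets (seconds) out of continuous EEG.
    eeg: (n_channels, n_samples). Returns (n_events, n_channels, n_window),
    silently dropping events whose window would fall outside the recording."""
    start = int(tmin * sfreq)
    stop = int(tmax * sfreq)
    epochs = []
    for t in onsets:
        s = int(round(t * sfreq))
        if s + start >= 0 and s + stop <= eeg.shape[1]:
            epochs.append(eeg[:, s + start:s + stop])
    return np.stack(epochs)

sfreq = 250.0
eeg = np.random.randn(32, int(60 * sfreq))  # 60 s of fake 32-channel data
onsets = [5.0, 12.3, 33.7]                  # feature-peak times in seconds
ep = extract_epochs(eeg, onsets, sfreq)
erp = ep.mean(axis=0)                        # average across events -> ERP estimate
print(ep.shape, erp.shape)                   # (3, 32, 150) (32, 150)
```

In practice, a toolbox such as MNE-Python provides equivalent (and more robust) epoching, but the core operation is this slicing and averaging.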
Affiliation(s)
- H Poikonen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland.
- V Alluri
- Department of Music, University of Jyväskylä, P.O. Box 35, 40014 University of Jyväskylä, Finland.
- E Brattico
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland; Center for Music in the Brain (MIB), Department of Clinical Medicine, Aarhus University, Nørrebrograde 44, DK-8000 Aarhus C, Denmark.
- O Lartillot
- Department of Architecture, Design and Media Technology, University of Aalborg, Rendsburggade 14, DK-9000 Aalborg, Denmark.
- M Tervaniemi
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland; Cicero Learning, P.O. Box 9 (Siltavuorenpenger 5 A), FI-00014 University of Helsinki, Finland.
- M Huotilainen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, P.O. Box 9 (Siltavuorenpenger 1 B), FI-00014 University of Helsinki, Finland; Cicero Learning, P.O. Box 9 (Siltavuorenpenger 5 A), FI-00014 University of Helsinki, Finland; Finnish Institute of Occupational Health, Haartmaninkatu 1 A, 00250 Helsinki, Finland.
19
Skoe E, Krizman J, Spitzer E, Kraus N. Prior experience biases subcortical sensitivity to sound patterns. J Cogn Neurosci 2015; 27:124-40. [PMID: 25061926 DOI: 10.1162/jocn_a_00691]
Abstract
To make sense of our ever-changing world, our brains search out patterns. This drive can be so strong that the brain imposes patterns when there are none. The opposite can also occur: The brain can overlook patterns because they do not conform to expectations. In this study, we examined this neural sensitivity to patterns within the auditory brainstem, an evolutionarily ancient part of the brain that can be fine-tuned by experience and is integral to an array of cognitive functions. We have recently shown that this auditory hub is sensitive to patterns embedded within a novel sound stream, and we established a link between neural sensitivity and behavioral indices of learning [Skoe, E., Krizman, J., Spitzer, E., & Kraus, N. The auditory brainstem is a barometer of rapid auditory learning. Neuroscience, 243, 104-114, 2013]. We now ask whether this sensitivity to stimulus statistics is biased by prior experience and the expectations arising from this experience. To address this question, we recorded complex auditory brainstem responses (cABRs) to two patterned sound sequences formed from a set of eight repeating tones. For both patterned sequences, the eight tones were presented such that the transitional probability (TP) between neighboring tones was either 33% (low predictability) or 100% (high predictability). Although both sequences were novel to the healthy young adult listener and had similar TP distributions, one was perceived to be more musical than the other. For the more musical sequence, participants performed above chance when tested on their recognition of the most predictable two-tone combinations within the sequence (TP of 100%); in this case, the cABR differed from a baseline condition where the sound sequence had no predictable structure. In contrast, for the less musical sequence, learning was at chance, suggesting that listeners were "deaf" to the highly predictable repeating two-tone combinations in the sequence. For this condition, the cABR also did not differ from baseline. From this, we posit that the brainstem acts as a Bayesian sound processor, such that it factors in prior knowledge about the environment to index the probability of particular events within ever-changing sensory conditions.
20
Guo S, Koelsch S. The effects of supervised learning on event-related potential correlates of music-syntactic processing. Brain Res 2015; 1626:232-46. [PMID: 25660849 DOI: 10.1016/j.brainres.2015.01.046]
Abstract
Humans process music even without conscious effort according to implicit knowledge about syntactic regularities. Whether such automatic and implicit processing is modulated by veridical knowledge has remained unknown in previous neurophysiological studies. This study investigates this issue by testing whether the acquisition of veridical knowledge of a music-syntactic irregularity (acquired through supervised learning) modulates early, partly automatic, music-syntactic processes (as reflected in the early right anterior negativity, ERAN), and/or late controlled processes (as reflected in the late positive component, LPC). Excerpts of piano sonatas with syntactically regular and less regular chords were presented repeatedly (10 times) to non-musicians and amateur musicians. Participants were informed by a cue as to whether the following excerpt contained a regular or less regular chord. Results showed that the repeated exposure to several presentations of regular and less regular excerpts did not influence the ERAN elicited by less regular chords. By contrast, amplitudes of the LPC (as well as of the P3a evoked by less regular chords) decreased systematically across learning trials. These results reveal that late controlled, but not early (partly automatic), neural mechanisms of music-syntactic processing are modulated by repeated exposure to a musical piece. This article is part of a Special Issue entitled SI: Prediction and Attention.
Affiliation(s)
- Shuang Guo
- Cluster Languages of Emotion, Freie Universität Berlin, Berlin, Germany
- Stefan Koelsch
- Cluster Languages of Emotion, Freie Universität Berlin, Berlin, Germany
21
Implicit and explicit statistical learning of tone sequences across spectral shifts. Neuropsychologia 2014; 63:194-204. [PMID: 25192632 DOI: 10.1016/j.neuropsychologia.2014.08.028]
Abstract
We investigated how the statistical learning of auditory sequences is reflected in neuromagnetic responses under implicit and explicit learning conditions. Complex tones with fundamental frequencies (F0s) in a five-tone equal temperament were generated by a formant synthesizer. The tones were ordered with the constraint that the probability of the forthcoming tone was statistically defined (80% for one tone; 5% for each of the other four) by the two preceding tones (second-order Markov chains). The tone sequence consisted of 500 tones followed by 250 tones whose F0s were relatively shifted, generated from the same Markov transition matrix. In explicit and implicit learning conditions, neuromagnetic responses to the tone sequence were recorded from fourteen right-handed participants, and the temporal profiles of the N1m responses to tones with higher versus lower transitional probabilities were compared. In the explicit learning condition, the N1m responses to tones with higher transitional probability were significantly decreased compared with responses to tones with lower transitional probability in the latter half of the 500-tone sequence. Furthermore, this difference was retained even after the F0s were relatively shifted. In the implicit learning condition, N1m responses to tones with higher transitional probability were significantly decreased only for the 250 tones following the relative shift of F0s. The delayed detection of learning effects across the sound-spectral shift in the implicit condition may imply that learning progresses earlier under explicit than under implicit learning conditions. The finding that the learning effects were retained across spectral shifts regardless of the learning modality indicates that relative pitch processing may be an essential human ability.
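A sequence with the second-order structure this abstract describes (each two-tone context making one continuation 80% likely and each of the remaining four tones 5% likely) can be generated like this. The tone set and the rule picking the likely continuation are invented for illustration; this is a sketch of the stimulus statistics, not the study's actual generation code.

```python
import random

def make_transition_table(tones, likely_map, p_likely=0.8):
    """Second-order table: each (a, b) context gives one tone p=0.8,
    with the remaining tones sharing the leftover probability equally."""
    table = {}
    for a in tones:
        for b in tones:
            likely = likely_map[(a, b)]
            others = [t for t in tones if t != likely]
            table[(a, b)] = ([likely] + others,
                             [p_likely] + [(1 - p_likely) / len(others)] * len(others))
    return table

def generate(table, length, seed=0):
    """Sample a tone sequence from the second-order transition table."""
    rng = random.Random(seed)
    tones = sorted({t for ctx in table for t in ctx})
    seq = [rng.choice(tones), rng.choice(tones)]
    while len(seq) < length:
        choices, probs = table[(seq[-2], seq[-1])]
        seq.append(rng.choices(choices, weights=probs, k=1)[0])
    return seq

tones = list(range(5))                                            # five scale degrees
likely_map = {(a, b): (a + b) % 5 for a in tones for b in tones}  # arbitrary illustrative rule
seq = generate(make_transition_table(tones, likely_map), 500)
print(len(seq))
```

About 80% of continuations in the generated sequence follow `likely_map`, matching the 80%/5% design; a relative F0 shift would then correspond to transposing every tone while keeping the same table.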
22
Bigand E, Delbé C, Poulin-Charronnat B, Leman M, Tillmann B. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory. Front Syst Neurosci 2014; 8:94. [PMID: 24936174 PMCID: PMC4047967 DOI: 10.3389/fnsys.2014.00094]
Abstract
During the last decade, it has been argued (1) that music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, it has clinical implications. The Western musical system, however, is rooted in psychoacoustic properties of sound, and this is not the case for linguistic syntax. Accordingly, musical syntax processing could be parsimoniously understood as an emergent property of auditory memory rather than as abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods, as well as with developmental and cross-cultural approaches, can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulations also raise methodological and theoretical challenges for studying musical syntax while disentangling confounded low-level sensory influences. To investigate syntactic abilities in music comparable to those in language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by psychoacoustic properties of sounds.
Affiliation(s)
- Emmanuel Bigand
- LEAD, CNRS-UMR 5022, Université de Bourgogne, Dijon, France; Institut Universitaire de France, Paris, France
- Charles Delbé
- LEAD, CNRS-UMR 5022, Université de Bourgogne, Dijon, France
- Marc Leman
- Department of Musicology, IPEM, Ghent University, Ghent, Belgium
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS-UMR 5292, INSERM-UMR 1028, Université Lyon 1, Lyon, France
23
Jentschke S, Friederici AD, Koelsch S. Neural correlates of music-syntactic processing in two-year old children. Dev Cogn Neurosci 2014; 9:200-8. [PMID: 24907450 PMCID: PMC6989737 DOI: 10.1016/j.dcn.2014.04.005]
Abstract
We observed neurophysiological correlates of music-syntactic processing in 30-month-olds. This indicates that children of that age process harmonic sequences according to complex syntactic regularities. These representations of music-syntactic regularities must have been acquired before and stored in long-term memory. Similar to syntax processing in language, these processes are highly automatic and do not require attention.
Music is a basic and ubiquitous socio-cognitive domain. However, our understanding of the time course of the development of music perception, particularly regarding implicit knowledge of music-syntactic regularities, remains contradictory and incomplete. Some authors assume that the acquisition of knowledge about these regularities lasts until late childhood, but there is also evidence for the presence of such knowledge in four- and five-year-olds. To explore whether such knowledge is already present in younger children, we tested whether 30-month-olds (N = 62) show neurophysiological responses to music-syntactically irregular harmonies. We observed an early right anterior negativity in response to both irregular in-key and out-of-key chords. The N5, a brain response usually present in older children and adults, was not observed, indicating that processes of harmonic integration (as reflected in the N5) are still in development in this age group. In conclusion, our results indicate that 30-month-olds already have acquired implicit knowledge of complex harmonic music-syntactic regularities and process musical information according to this knowledge.
Affiliation(s)
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany
- Stefan Koelsch
- Freie Universität Berlin, Cluster "Languages of Emotion", Berlin, Germany
24
Kim CH, Lee S, Kim JS, Seol J, Yi SW, Chung CK. Melody effects on ERANm elicited by harmonic irregularity in musical syntax. Brain Res 2014; 1560:36-45. [PMID: 24607297 DOI: 10.1016/j.brainres.2014.02.045]
Abstract
Recent studies have reported that the early right anterior negativity (ERAN) and its magnetic counterpart (ERANm) are evoked by harmonic irregularity in Western tonal music; however, those studies did not control for differences in melody. Because melody and harmony are interdependent, and because the melody (represented in this study by the highest voice part) in a chord sequence may dominate, it is controversial whether ERAN (or ERANm) changes arise from melody or harmony differences. To separate the effects of melody differences and harmonic irregularity on ERANm responses, we designed two magnetoencephalography experiments and a behavioral test. Participants were presented with three types of chord progression sequences (Expected, Intermediate, and Unexpected) with different harmonic regularities, in which melody differences either were or were not controlled. In the experiment with uncontrolled melody differences, the unexpected chord elicited a significantly larger ERANm than the other conditions, but in the experiment with controlled melody differences, the ERANm peak amplitude did not differ among the three conditions; the ERANm peak latency, however, was delayed relative to the uncontrolled experiment. The behavioral results also differed between the two experiments, even though harmonic irregularity was discriminated in the uncontrolled melody difference experiment. In conclusion, our analysis reveals a relationship between the effects of harmony and melody on the ERANm. We therefore suggest that melody differences in a chord progression are largely responsible for the observed changes in ERANm, reaffirming that melody plays an important role in the processing of musical syntax.
Affiliation(s)
- Chan Hee Kim
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Republic of Korea; MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
- Sojin Lee
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Republic of Korea; MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Sensory Organ Research Institute, Seoul National University, Seoul, Republic of Korea
- Jaeho Seol
- Imaging Language Group, Brain Research Unit, O. V. Lounasmaa Laboratory, Aalto University School of Science, FI-00076 Aalto, Finland
- Suk Won Yi
- Department of Music, The Graduate School, Seoul National University, Seoul, Republic of Korea; Western Music Research Institute, Seoul National University, Seoul, Republic of Korea
- Chun Kee Chung
- Interdisciplinary Program in Neuroscience, Seoul National University College of Natural Science, Seoul, Republic of Korea; MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Republic of Korea; Interdisciplinary Program in Cognitive Science, Seoul National University College of Humanities, Seoul, Republic of Korea; Department of Brain and Cognitive Science, Seoul National University College of Natural Science, Seoul, Republic of Korea
25
Seppänen M, Hämäläinen J, Pesonen AK, Tervaniemi M. Passive sound exposure induces rapid perceptual learning in musicians: event-related potential evidence. Biol Psychol 2013; 94:341-53. [PMID: 23886959 DOI: 10.1016/j.biopsycho.2013.07.004]
Abstract
Musicians show enhanced auditory processing compared to nonmusicians. However, the neural basis underlying the effects of musical training on rapid plasticity in auditory processing has not been systematically studied. Here, the rapid (one session) learning-related plastic changes in event-related potential (ERP) responses for pitch and duration deviants between passive blocks were compared between musicians and nonmusicians. Passive blocks were interleaved with an active discrimination task. In addition to musicians having faster and stronger overall source activation for deviating sounds, source analysis revealed rapid plastic changes in the left and right temporal and left frontal sources that were present only in musicians. Source activation decreased in these areas even without focused attention. Furthermore, deviant-related ERP responses above the parietal areas decreased after the active task in both musicians and nonmusicians. Taken together, the results indicate enhanced rapid plasticity in sound change discrimination and perceptual learning in musicians when compared with nonmusicians.
Affiliation(s)
- Miia Seppänen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Finland; Finnish Center of Excellence in Interdisciplinary Music Research, Department of Music, University of Jyväskylä, Finland.
26
Hoeren M, Kaller CP, Glauche V, Vry MS, Rijntjes M, Hamzei F, Weiller C. Action semantics and movement characteristics engage distinct processing streams during the observation of tool use. Exp Brain Res 2013; 229:243-60. [DOI: 10.1007/s00221-013-3610-5]
27
Fitzroy AB, Sanders LD. Musical expertise modulates early processing of syntactic violations in language. Front Psychol 2013; 3:603. [PMID: 23335905 PMCID: PMC3542524 DOI: 10.3389/fpsyg.2012.00603]
Abstract
Syntactic violations in speech and music have been shown to elicit an anterior negativity (AN) as early as 100 ms after violation onset and a posterior positivity that peaks at roughly 600 ms (P600/LPC). The language AN is typically reported as left-lateralized (LAN), whereas the music AN is typically reported as right-lateralized (RAN). However, several lines of evidence suggest that syntactic processing of language and music relies on overlapping neural systems. The current study tested the hypothesis that syntactic processing of speech and music shares neural resources by examining whether musical proficiency modulates ERP indices of linguistic syntactic processing. ERPs were measured in response to syntactic violations in sentences and chord progressions in musicians and non-musicians. Violations in speech were insertion errors in normal and semantically impoverished English sentences. Violations in music were out-of-key chord substitutions from distantly and closely related keys. Phrase-structure violations elicited an AN and P600 in both groups. Harmonic violations elicited an LPC in both groups; blatant harmonic violations also elicited a RAN in musicians only. Cross-domain effects of musical proficiency were similar to previously reported within-domain effects of linguistic proficiency on the distribution of the language AN: syntactic violations in normal English sentences elicited a LAN in musicians and a bilateral AN in non-musicians. The late positivities elicited by violations differed in latency and distribution between domains. These results suggest that initial processing of syntactic violations in language and music relies on shared neural resources in the general population, and that musical expertise yields more specialized cortical organization of syntactic processing in both domains.
Affiliation(s)
- Ahren B Fitzroy
- Neuroscience and Behavior Program, University of Massachusetts Amherst, MA, USA
28
Nemoto I. Evoked magnetoencephalographic responses to omission of a tone in a musical scale. J Acoust Soc Am 2012; 131:4770-4784. [PMID: 22712949 DOI: 10.1121/1.4714916]
Abstract
The musical scale is a basis for melodies and can be a simple melody by itself. The present study investigated magnetoencephalographic (MEG) responses to omissions of one tone out of the C major scale. The tone preceding the omitted "target" tone was either prolonged or repeated. In another series, the tone after the target tone was repeated. In "normal" oddball experiments, the complete C major scale was presented more frequently than an incomplete scale lacking one tone, and in "reverse" oddball experiments, the roles were exchanged. In the normal oddball experiments, omission of any tone produced a response significantly different in amplitude from the standard response in the group of non-musicians, although the responses differed depending on the type of omission. The leading tone (B in the C major scale) was shown to elicit a large response when omitted and also when its presence was emphasized. The reverse oddball experiments showed that repeated presentation of an incomplete scale lacking one tone temporarily reduced the influence of the complete scale but could not, even temporarily, replace it as the "standard." In addition, an auxiliary study examined the possible influence of rhythmic variations.
Affiliation(s)
- Iku Nemoto
- Department of Information Environment, Tokyo Denki University, 2-1200 Muzai-gakuendai, Inzai Chiba, 270-1382, Japan.
29
Herholz SC, Boh B, Pantev C. Musical training modulates encoding of higher-order regularities in the auditory cortex. Eur J Neurosci 2012; 34:524-9. [PMID: 21801242 DOI: 10.1111/j.1460-9568.2011.07775.x]
Abstract
We investigated the effect of long-term musical training on the time course of development of neuronal representations within the auditory cortex by means of magnetoencephalography. In musicians but not in nonmusicians, pre-attentive encoding of a complex regularity within a tone sequence was evident from a steady increase of the pattern mismatch negativity within less than 10 min. The group difference was more pronounced in the left hemisphere, indicating stronger plastic changes in the left-hemispheric structures supporting temporal analysis and sound pattern encoding. The results suggest an effect of long-term musical training on short-term auditory learning processes. This has implications not only for cognitive neuroscience, in showing how short-term and long-term neuronal plasticity can interact within the auditory cortex, but also for educational and clinical applications of implicit auditory learning, where beneficial effects of (musical) experience might be exploited.
Affiliation(s)
- Sibylle C Herholz
- Montreal Neurological Institute, McGill University, and International Laboratory for Brain, Music and Sound Research, BRAMS, Montreal, Quebec, Canada
30
Detecting scale violations in absence of mismatch requires music-syntactic analysis: a further look at the early right anterior negativity (ERAN). Brain Topogr 2011; 25:285-92. [PMID: 22080232 DOI: 10.1007/s10548-011-0208-8]
Abstract
The purpose of this study was to determine whether infrequent scale violations in a sequence of in-key notes are detected when the deviants are matched for frequency of occurrence and preceding intervals with the control notes. We further investigated whether the detectability of scale violations is modulated by the presence of melodic context and by the level of musical training. Event-related potentials were recorded from 14 musicians and 13 non-musicians. In non-musicians, the out-of-key notes elicited an early right anterior negativity (ERAN), which appeared prominently over right frontal sites only when presented within structured sequences; no effects were found when the out-of-key notes were presented within scrambled sequences. In musicians, the out-of-key notes elicited a similar bilateral ERAN in structured and scrambled sequences. Our findings suggest that scale information is processed at the level of music-syntactic analysis, and that the detection of deviants does not require activation of auditory sensory memory by mismatch effects. Scales are perceived as a broader context, not just as online interval relations. Additional melodic context information appears necessary to support the representation of scale deviants in non-musicians, but not in musically trained individuals, likely as a consequence of stronger pre-existing representations.
31
Ettlinger M, Margulis EH, Wong PCM. Implicit memory in music and language. Front Psychol 2011; 2:211. [PMID: 21927608 PMCID: PMC3170172 DOI: 10.3389/fpsyg.2011.00211]
Abstract
Research on music and language in recent decades has focused on their overlapping neurophysiological, perceptual, and cognitive underpinnings, ranging from the mechanism for encoding basic auditory cues to the mechanism for detecting violations in phrase structure. These overlaps have most often been identified in musicians with musical knowledge that was acquired explicitly, through formal training. In this paper, we review independent bodies of work in music and language that suggest an important role for implicitly acquired knowledge, implicit memory, and their associated neural structures in the acquisition of linguistic or musical grammar. These findings motivate potential new work that examines music and language comparatively in the context of the implicit memory system.
Affiliation(s)
- Marc Ettlinger
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Patrick C. M. Wong
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Department of Otolaryngology – Head and Neck Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
32
Maidhof C, Koelsch S. Effects of Selective Attention on Syntax Processing in Music and Language. J Cogn Neurosci 2011; 23:2252-67. [DOI: 10.1162/jocn.2010.21542]
Abstract
The present study investigated the effects of auditory selective attention on the processing of syntactic information in music and speech using event-related potentials. Spoken sentences or musical chord sequences were presented either in isolation or simultaneously. When presented simultaneously, participants had to focus their attention either on the speech or on the music. Final words of sentences and final harmonies of chord sequences were syntactically either correct or incorrect. Irregular chords elicited an early right anterior negativity (ERAN), whose amplitude was decreased when music was simultaneously presented with speech, compared to when only music was presented. However, the amplitude of the ERAN-like waveform elicited when music was ignored did not differ from the conditions in which participants attended the chord sequences. Irregular sentences elicited an early left anterior negativity (ELAN), regardless of whether speech was presented in isolation, was attended, or was to be ignored. These findings suggest that the neural mechanisms underlying the processing of syntactic structure of music and speech operate partially automatically, and, in the case of music, are influenced by different attentional conditions. Moreover, the ERAN was slightly reduced when irregular sentences were presented, but only when music was ignored. Therefore, these findings provide no clear support for an interaction of neural resources for syntactic processing already at these early stages.
Affiliation(s)
- Clemens Maidhof
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University of Helsinki, Helsinki, Finland
- University of Jyväskylä, Finland
33
Wehrum S, Degé F, Ott U, Walter B, Stippekohl B, Kagerer S, Schwarzer G, Vaitl D, Stark R. Can you hear a difference? Neuronal correlates of melodic deviance processing in children. Brain Res 2011; 1402:80-92. [DOI: 10.1016/j.brainres.2011.05.057]
34
Kim SG, Kim JS, Chung CK. The effect of conditional probability of chord progression on brain response: an MEG study. PLoS One 2011; 6:e17337. [PMID: 21364895 PMCID: PMC3045443 DOI: 10.1371/journal.pone.0017337]
Abstract
Background: Recent electrophysiological and neuroimaging studies have explored how and where musical syntax in Western music is processed in the human brain. An inappropriate chord progression elicits an event-related potential (ERP) component called an early right anterior negativity (ERAN) or simply an early anterior negativity (EAN) in an early stage of processing the musical syntax. Though the possible underlying mechanism of the EAN is assumed to be probabilistic learning, the effect of the probability of chord progressions on the EAN response has not been previously explored explicitly.

Methodology/Principal Findings: In the present study, the empirical conditional probabilities in a Western music corpus were employed as an approximation of the frequencies in previous exposure of participants. Three types of chord progression were presented to musicians and non-musicians in order to examine the correlation between the probability of chord progression and the neuromagnetic response using magnetoencephalography (MEG). Chord progressions were found to elicit early responses in a negatively correlating fashion with the conditional probability. Observed EANm (as a magnetic counterpart of the EAN component) responses were consistent with the previously reported EAN responses in terms of latency and location. The effect of conditional probability interacted with the effect of musical training. In addition, the neural response also correlated with the behavioral measures in the non-musicians.

Conclusions/Significance: Our study is the first to reveal the correlation between the probability of chord progression and the corresponding neuromagnetic response. The current results suggest that the physiological response is a reflection of the probabilistic representations of the musical syntax. Moreover, the results indicate that the probabilistic representation is related to the musical training as well as the sensitivity of an individual.
Affiliation(s)
- Seung-Goo Kim
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
35
Skoe E, Kraus N. Hearing it again and again: on-line subcortical plasticity in humans. PLoS One 2010; 5:e13645. [PMID: 21049035 PMCID: PMC2964325 DOI: 10.1371/journal.pone.0013645]
Abstract
Background: Human brainstem activity is sensitive to local sound statistics, as reflected in an enhanced response in repetitive compared to pseudo-random stimulus conditions [1]. Here we probed the short-term time course of this enhancement using a paradigm that assessed how the local sound statistics (i.e., repetition within a five-note melody) interact with more global statistics (i.e., repetition of the melody).

Methodology/Principal Findings: To test the hypothesis that subcortical repetition enhancement builds over time, we recorded auditory brainstem responses in young adults to a five-note melody containing a repeated note, and monitored how the response changed over the course of 1.5 hrs. By comparing response amplitudes over time, we found a robust time-dependent enhancement to the locally repeating note that was superimposed on a weaker enhancement of the globally repeating pattern.

Conclusions/Significance: We provide the first demonstration of on-line subcortical plasticity in humans. This complements previous findings that experience-dependent subcortical plasticity can occur on a number of time scales, including life-long experiences with music and language, and short-term auditory training. Our results suggest that the incoming stimulus stream is constantly being monitored, even when the stimulus is physically invariant and attention is directed elsewhere, to augment the neural response to the most statistically salient features of the ongoing stimulus stream. These real-time transformations, which may subserve humans' strong disposition for grouping auditory objects, likely reflect a mix of local processes and corticofugal modulation arising from statistical regularities and the influences of expectation. Our results contribute to our understanding of the biological basis of statistical learning and initiate a new investigational approach relating to the time-course of subcortical plasticity. Although the reported time-dependent enhancements are believed to reflect universal neurophysiological processes, future experiments utilizing a larger array of stimuli are needed to establish the generalizability of our findings.
Affiliation(s)
- Erika Skoe
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America.
36
Koelsch S, Jentschke S. Differences in Electric Brain Responses to Melodies and Chords. J Cogn Neurosci 2010; 22:2251-62. [DOI: 10.1162/jocn.2009.21338]
Abstract
The music we usually listen to in everyday life consists of either single melodies or harmonized melodies (i.e., of melodies “accompanied” by chords). However, differences in the neural mechanisms underlying melodic and harmonic processing have remained largely unknown. Using EEG, this study compared effects of music-syntactic processing between chords and melodies. In melody blocks, sequences consisted of five tones, the final tone being either regular or irregular (p = .5). Analogously, in chord blocks, sequences consisted of five chords, the final chord function being either regular or irregular. Melodies were derived from the top voice of chord sequences, allowing a proper comparison between melodic and harmonic processing. Music-syntactic incongruities elicited an early anterior negativity with a latency of approximately 125 msec in both the melody and the chord conditions. This effect was followed in the chord condition, but not in the melody condition, by an additional negative effect that was maximal at approximately 180 msec. Both effects were maximal at frontal electrodes, but the later effect was more broadly distributed over the scalp than the earlier effect. These findings indicate that melodic information (which is also contained in the top voice of chords) is processed earlier and with partly different neural mechanisms than harmonic information of chords.
Affiliation(s)
- Stefan Koelsch
- University of Sussex, Brighton, UK
- Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Sebastian Jentschke
- Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University College London, UK
37
Ueno M, Kluender R. On the processing of Japanese wh-questions: an ERP study. Brain Res 2009; 1290:63-90. [PMID: 19501576 DOI: 10.1016/j.brainres.2009.05.084]
Abstract
The processing of Japanese wh-questions was investigated using event-related brain potentials (ERPs). Unlike in English or German, a wh-element in Japanese need not be displaced from its canonical position, but instead needs a corresponding Q(uestion)-particle to indicate its interrogative scope. We tested whether there were any processing correlates specific to these features of Japanese wh-questions. Both mono-clausal and bi-clausal Japanese wh-questions elicited right-lateralized anterior negativity (RAN) between wh-words and corresponding Q-particles, relative to structurally equivalent yes/no-question control conditions. These results suggest a reliable neural processing correlate of the dependency between wh-elements and Q-particles in Japanese, similar to effects of (left) anterior negativity between wh-fillers and gaps in English and German, but with a right- rather than left-lateralized distribution. It is suggested that wh-in-situ questions in Japanese are processed by the incremental formation of a long-distance dependency between wh-elements and their Q-particles, resulting in a working memory load for keeping track of scopeless wh-elements.
Affiliation(s)
- Mieko Ueno
- Department of Linguistics, University of California, San Diego, 9500 Gilman Drive #108, La Jolla, CA 92093-0108, USA.
38
Ruiz MH, Koelsch S, Bhattacharya J. Decrease in early right alpha band phase synchronization and late gamma band oscillations in processing syntax in music. Hum Brain Mapp 2009; 30:1207-25. [PMID: 18571796 PMCID: PMC6871114 DOI: 10.1002/hbm.20584]
Abstract
The present study investigated the neural correlates associated with the processing of music-syntactical irregularities as compared with regular syntactic structures in music. Previous studies reported an early (approximately 200 ms) right anterior negative component (ERAN) by traditional event-related-potential analysis during music-syntactical irregularities, yet little is known about the underlying oscillatory and synchronization properties of brain responses which are supposed to play a crucial role in general cognition including music perception. First we showed that the ERAN was primarily represented by low frequency (<8 Hz) brain oscillations. Further, we found that music-syntactical irregularities as compared with music-syntactical regularities, were associated with (i) an early decrease in the alpha band (9-10 Hz) phase synchronization between right fronto-central and left temporal brain regions, and (ii) a late (approximately 500 ms) decrease in gamma band (38-50 Hz) oscillations over fronto-central brain regions. These results indicate a weaker degree of long-range integration when the musical expectancy is violated. In summary, our results reveal neural mechanisms of music-syntactic processing that operate at different levels of cortical integration, ranging from early decrease in long-range alpha phase synchronization to late local gamma oscillations.
Affiliation(s)
- María Herrojo Ruiz
- Departamento de Física Fundamental, Universidad Nacional de Educación a Distancia, Madrid, Spain
- Institute of Music Physiology and Musician's Medicine, Hanover University of Music and Drama, Hanover, Germany
- Stefan Koelsch
- Department of Psychology, University of Sussex, Sussex, Falmer, Brighton, United Kingdom
- Max‐Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Joydeep Bhattacharya
- Department of Psychology, Goldsmiths College, University of London, New Cross, London, United Kingdom
- Commission for Scientific Visualization, Austrian Academy of Sciences, Vienna 1220, Austria
39

40
Koelsch S. Music-syntactic processing and auditory memory: Similarities and differences between ERAN and MMN. Psychophysiology 2009; 46:179-90. [PMID: 19055508 DOI: 10.1111/j.1469-8986.2008.00752.x]
Affiliation(s)
- Stefan Koelsch
- Department of Psychology, University of Sussex, Brighton, UK.
41
Koelsch S, Kilches S, Steinbeis N, Schelinski S. Effects of unexpected chords and of performer's expression on brain responses and electrodermal activity. PLoS One 2008; 3:e2631. [PMID: 18612459 PMCID: PMC2435625 DOI: 10.1371/journal.pone.0002631]
Abstract
Background: There is a lack of neuroscientific studies investigating music processing with naturalistic stimuli, and brain responses to real music are, thus, largely unknown.

Methodology/Principal Findings: This study investigates event-related brain potentials (ERPs), skin conductance responses (SCRs) and heart rate (HR) elicited by unexpected chords of piano sonatas as they were originally arranged by composers, and as they were played by professional pianists. From the musical excerpts played by the pianists (with emotional expression), we also created versions without variations in tempo and loudness (without musical expression) to investigate effects of musical expression on ERPs and SCRs. Compared to expected chords, unexpected chords elicited an early right anterior negativity (ERAN, reflecting music-syntactic processing) and an N5 (reflecting processing of meaning information) in the ERPs, as well as clear changes in the SCRs (reflecting that unexpected chords also elicited emotional responses). The ERAN was not influenced by emotional expression, whereas N5 potentials elicited by chords in general (regardless of their chord function) differed between the expressive and the non-expressive condition.

Conclusions/Significance: These results show that the neural mechanisms of music-syntactic processing operate independently of the emotional qualities of a stimulus, justifying the use of stimuli without emotional expression to investigate the cognitive processing of musical structure. Moreover, the data indicate that musical expression affects the neural mechanisms underlying the processing of musical meaning. Our data are the first to reveal influences of musical performance on ERPs and SCRs, and to show physiological responses to unexpected chords in naturalistic music.
Affiliation(s)
- Stefan Koelsch
- Department of Psychology, University of Sussex, Brighton, United Kingdom.
42
Koelsch S, Jentschke S, Sammler D, Mietchen D. Untangling syntactic and sensory processing: an ERP study of music perception. Psychophysiology 2007; 44:476-90. [PMID: 17433099 DOI: 10.1111/j.1469-8986.2007.00517.x]
Abstract
The present study investigated music-syntactic processing with chord sequences that ended on either regular or irregular chord functions. Sequences were composed such that perceived differences in the cognitive processing between syntactically regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition, pitch commonality (the major component of "sensory dissonance"), or roughness. Three experiments with independent groups of subjects were conducted: a behavioral experiment and two experiments using electroencephalography. Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs) under both task-relevant and task-irrelevant conditions. Behaviorally, participants detected around 75% of the irregular chords, indicating that these chords were only moderately salient. Nevertheless, the irregular chords reliably elicited clear ERP effects. Amateur musicians were slightly more sensitive to musical irregularities than nonmusicians, supporting previous studies demonstrating effects of musical training on music-syntactic processing. The findings indicate that the ERAN is an index of music-syntactic processing and that the ERAN can be elicited even when irregular chords are not detectable based on acoustical factors such as pitch repetition, sensory dissonance, or roughness.
Affiliation(s)
- Stefan Koelsch
- Independent Junior Research Group Neurocognition of Music, Max-Planck-Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany.