1
Albury AW, Bianco R, Gold BP, Penhune VB. Context changes judgments of liking and predictability for melodies. Front Psychol 2023; 14:1175682. PMID: 38034280; PMCID: PMC10684779; DOI: 10.3389/fpsyg.2023.1175682.
Abstract
Predictability plays an important role in the experience of musical pleasure. By leveraging expectations, music induces pleasure through tension and surprise. However, musical predictions draw on both prior knowledge and immediate context. Similarly, musical pleasure, which has been shown to depend on predictability, may also vary relative to the individual and context. Although research has demonstrated that both long-term knowledge and stimulus features shape expectations, it is unclear how perceptions of a melody are influenced by comparisons to other music pieces heard in the same context. To examine the effects of context, we compared how listeners' judgments of two distinct sets of stimuli differed when they were presented alone or in combination. Stimuli were excerpts from a repertoire of Western music and a set of experimenter-created melodies. Separate groups of participants rated liking and predictability for each set of stimuli alone and in combination. We found that when heard together, the Repertoire stimuli were more liked and rated as less predictable than when they were heard alone, with the opposite pattern observed for the Experimental stimuli. This effect was driven by a change in ratings between the Alone and Combined conditions for each stimulus set. These findings demonstrate a context-based shift in predictability ratings and derived pleasure, suggesting that judgments stem not only from the physical properties of the stimulus, but also vary relative to other options available in the immediate context.
Affiliation(s)
- Alexander W. Albury
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Roberta Bianco
- Neuroscience of Perception and Action Laboratory, Italian Institute of Technology, Rome, Italy
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Virginia B. Penhune
- Department of Psychology, Concordia University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) and Center for Research in Brain, Language and Music (CRBLM), Montreal, QC, Canada
2
Gold BP, Pearce MT, McIntosh AR, Chang C, Dagher A, Zatorre RJ. Auditory and reward structures reflect the pleasure of musical expectancies during naturalistic listening. Front Neurosci 2023; 17:1209398. PMID: 37928727; PMCID: PMC10625409; DOI: 10.3389/fnins.2023.1209398.
Abstract
Enjoying music consistently engages key structures of the neural auditory and reward systems such as the right superior temporal gyrus (R STG) and ventral striatum (VS). Expectations seem to play a central role in this effect, as preferences reliably vary according to listeners' uncertainty about the musical future and surprise about the musical past. Accordingly, VS activity reflects the pleasure of musical surprise, and exhibits stronger correlations with R STG activity as pleasure grows. Yet the reward value of musical surprise, and thus the reason these surprises engage the reward system, remains an open question. Recent models of predictive neural processing and learning suggest that forming, testing, and updating hypotheses about one's environment may be intrinsically rewarding, and that the constantly evolving structure of musical patterns could provide ample opportunity for this procedure. Consistent with these accounts, our group previously found that listeners tend to prefer melodic excerpts taken from real music when the music either validates their uncertain melodic predictions (i.e., is high in uncertainty and low in surprise) or challenges their highly confident ones (i.e., is low in uncertainty and high in surprise). An independent research group (Cheung et al., 2019) replicated these results with musical chord sequences, and identified their fMRI correlates in the STG, amygdala, and hippocampus but not the VS, raising new questions about the neural mechanisms of musical pleasure that the present study seeks to address. Here, we assessed concurrent liking ratings and hemodynamic fMRI signals as 24 participants listened to 50 naturalistic, real-world musical excerpts that varied across wide spectra of computationally modeled uncertainty and surprise. As in previous studies, liking ratings exhibited an interaction between uncertainty and surprise, with the strongest preferences for high uncertainty/low surprise and low uncertainty/high surprise. fMRI results also replicated previous findings, with music liking effects in the R STG and VS. Furthermore, we identified interactions between uncertainty and surprise on the one hand, and liking and surprise on the other, in VS activity. Altogether, these results provide important support for the hypothesized role of the VS in deriving pleasure from learning about musical structure.
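As a rough illustration of the analysis logic (not the study's actual pipeline), the sketch below fits a liking model with an uncertainty-by-surprise interaction to simulated ratings; all variable names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-excerpt predictors, standing in for computationally modeled
# uncertainty (entropy) and surprise (information content) values.
n_excerpts = 50
uncertainty = rng.uniform(0.5, 3.0, n_excerpts)
surprise = rng.uniform(0.5, 6.0, n_excerpts)

# Simulated liking ratings with a negative uncertainty-by-surprise interaction:
# highest liking for high-uncertainty/low-surprise and low-uncertainty/high-surprise.
uc = uncertainty - uncertainty.mean()
sc = surprise - surprise.mean()
liking = 5.0 - 0.8 * uc * sc + rng.normal(0, 0.5, n_excerpts)

# Ordinary least squares with main effects and their interaction.
X = np.column_stack([np.ones(n_excerpts), uc, sc, uc * sc])
beta, *_ = np.linalg.lstsq(X, liking, rcond=None)
print(dict(zip(["intercept", "uncertainty", "surprise", "interaction"], beta.round(2))))
```

A negative interaction coefficient is the behavioral pattern described in the abstract; the neuroimaging analyses relate the same model-derived predictors to brain activity rather than to ratings.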
Affiliation(s)
- Benjamin P. Gold
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, United States
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT), Montreal, QC, Canada
- Marcus T. Pearce
- Cognitive Science Research Group, School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Anthony R. McIntosh
- Baycrest Centre, Rotman Research Institute, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Catie Chang
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN, United States
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, United States
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN, United States
- Alain Dagher
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Centre for Interdisciplinary Research in Music, Media, and Technology (CIRMMT), Montreal, QC, Canada
3
Loui P. New music system reveals spectral contribution to statistical learning. Cognition 2022; 224:105071. PMID: 35227982; DOI: 10.1016/j.cognition.2022.105071.
Abstract
Knowledge of speech and music depends upon the ability to perceive relationships between sounds in order to form a stable mental representation of statistical structure. Although evidence exists for the learning of musical scale structure from the statistical properties of sound events, little research has been able to observe how specific acoustic features contribute to statistical learning independent of the effects of long-term exposure. Here, using a new musical system, we show that spectral content is an important cue for acquiring musical scale structure. In two experiments, participants completed probe-tone ratings before and after a half-hour period of exposure to melodies in a novel musical scale with a predefined statistical structure. In Experiment 1, participants were randomly assigned to either a no-exposure control group, or to exposure groups who heard pure tone or complex tone sequences. In Experiment 2, participants were randomly assigned to exposure groups who heard complex tones constructed with odd harmonics or even harmonics. Learning outcome was assessed by correlating pre- and post-exposure ratings with the statistical structure of tones within the exposure period. Spectral information significantly affected sensitivity to statistical structure: participants were able to learn after exposure to all tested timbres, but learned best with timbres containing odd harmonics, which were congruent with the scale structure. Results show that spectral amplitude distribution is a useful cue for statistical learning, and suggest that musical scale structure might be acquired through exposure to the spectral distribution of sounds.
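A minimal sketch of the learning index described above: correlating probe-tone ratings with how often each tone occurred during exposure. The scale size, exposure sequence, and ratings below are invented for illustration.

```python
import numpy as np
from collections import Counter

# Hypothetical exposure melodies over an 8-tone novel scale (tones labelled 0-7).
exposure = [0, 2, 4, 2, 0, 5, 4, 7, 4, 2, 0, 4, 5, 4, 2, 0]
counts = Counter(exposure)
tone_freq = np.array([counts[t] for t in range(8)], dtype=float)
tone_prob = tone_freq / tone_freq.sum()

# Hypothetical probe-tone goodness-of-fit ratings (1-7) collected after exposure.
post_ratings = np.array([6.5, 2.0, 5.5, 1.5, 6.8, 4.0, 2.5, 3.0])

# Learning index: correlation between ratings and the exposure distribution.
r = np.corrcoef(post_ratings, tone_prob)[0, 1]
print(f"rating-structure correlation: r = {r:.2f}")
```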
4
Musical instrument familiarity affects statistical learning of tone sequences. Cognition 2021; 218:104949. PMID: 34768123; DOI: 10.1016/j.cognition.2021.104949.
Abstract
Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules. In the present experiment, participants listened to novel, statistically structured tone sequences composed of pitch intervals not typically found in Western music. Between participants, the tone sequences had the timbre of either artificial, computerized instruments or familiar instruments (piano or violin). Knowledge of the statistical regularities was measured by a two-alternative forced-choice recognition task requiring discrimination between novel sequences that followed versus violated the statistical structure, assessed at three time points (immediately post-training, as well as one day and one week post-training). Compared to artificial instruments, training on familiar instruments resulted in reduced accuracy. Moreover, sequences from familiar instruments, but not artificial instruments, were more likely to be judged as grammatical when they contained intervals that approximated those commonly used in Western music, even though this cue was non-informative. Overall, these results demonstrate that instrument familiarity can interfere with the learning of novel statistical regularities, presumably by biasing memory representations to align with Western musical structures. These results demonstrate that real-world experience influences statistical learning in a non-linguistic domain, supporting the view that statistical learning involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.
5
Mendoza JK, Fausey CM. Everyday music in infancy. Dev Sci 2021; 24:e13122. PMID: 34170059; PMCID: PMC8596421; DOI: 10.1111/desc.13122.
Abstract
Infants enculturate to their soundscape over the first year of life, yet theories of how they do so rarely make contact with details about the sounds available in everyday life. Here, we report on properties of a ubiquitous early ecology in which foundational skills get built: music. We captured daylong recordings from 35 infants ages 6–12 months at home and fully double‐coded 467 h of everyday sounds for music and its features, tunes, and voices. Analyses of this first‐of‐its‐kind corpus revealed two distributional properties of infants’ everyday musical ecology. First, infants encountered vocal music in over half, and instrumental in over three‐quarters, of everyday music. Live sources generated one‐third, and recorded sources three‐quarters, of everyday music. Second, infants did not encounter each individual tune and voice in their day equally often. Instead, the most available identity cumulated to many more seconds of the day than would be expected under a uniform distribution. These properties of everyday music in human infancy are different from what is discoverable in environments highly constrained by context (e.g., laboratories) and time (e.g., minutes rather than hours). Together with recent insights about the everyday motor, language, and visual ecologies of infancy, these findings reinforce an emerging priority to build theories of development that address the opportunities and challenges of real input encountered by real learners.
Affiliation(s)
- Caitlin M Fausey
- Department of Psychology, University of Oregon, Eugene, Oregon, USA
6
Miles SA, Rosen DS, Barry S, Grunberg D, Grzywacz N. What to Expect When the Unexpected Becomes Expected: Harmonic Surprise and Preference Over Time in Popular Music. Front Hum Neurosci 2021; 15:578644. PMID: 33994972; PMCID: PMC8121146; DOI: 10.3389/fnhum.2021.578644.
Abstract
Previous work demonstrates that music with more surprising chords tends to be perceived as more enjoyable than music with more conventional harmonic structures. In that work, harmonic surprise was computed based upon a static distribution of chords. This assumes that harmonic surprise is constant over time, and that the effect of harmonic surprise on music preference is similarly static. In this study, we assess that assumption and establish that the relationship between harmonic surprise (as measured relative to a specific time period) and music preference is not constant over time. Analyses of harmonic surprise and preference from 1958 to 1991 showed that harmonic surprise increased over time, and that this increase was significantly more pronounced in preferred songs. Separate analyses showed similar increases over the years from 2000 to 2019. As such, these findings provide evidence that the human perception of tonality is influenced by exposure. Baseline harmonic expectations that were developed through listening to the music of “yesterday” are violated in the music of “today,” leading to preference. Then, once the music of “today” provides the baseline expectations for the music of “tomorrow,” more pronounced violations (and with them, higher harmonic surprise values) become associated with preference formation. We call this phenomenon the “Inflationary-Surprise Hypothesis.” Support for this hypothesis could impact the understanding of how the perception of tonality, and of other statistical regularities, develops in the human brain.
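The point that surprise depends on the reference period can be made concrete with a small sketch; the chord counts below are invented, not taken from the Billboard corpus.

```python
import math

def surprise(chord, counts):
    """Information-theoretic surprise, -log2 P(chord), under a chord distribution."""
    total = sum(counts.values())
    return -math.log2(counts[chord] / total)

# Hypothetical chord counts for two eras of a popular-music corpus.
counts_1960s = {"I": 400, "IV": 300, "V": 250, "bVII": 50}
counts_2010s = {"I": 350, "IV": 250, "V": 200, "bVII": 200}

# The flat-seventh chord is far more surprising against the earlier baseline.
print(f"bVII surprise vs 1960s baseline: {surprise('bVII', counts_1960s):.2f} bits")
print(f"bVII surprise vs 2010s baseline: {surprise('bVII', counts_2010s):.2f} bits")
```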
Affiliation(s)
- Scott A Miles
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
- Secret Chord Laboratories, Norfolk, VA, United States
- David S Rosen
- Secret Chord Laboratories, Norfolk, VA, United States
- Music and Entertainment Technology Laboratory, Drexel University, Philadelphia, PA, United States
- Shaun Barry
- Secret Chord Laboratories, Norfolk, VA, United States
- Norberto Grzywacz
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
- Department of Psychology, Loyola University Chicago, Chicago, IL, United States
- Department of Molecular Pharmacology and Neuroscience, Loyola University Chicago, Chicago, IL, United States
7
Zioga I, Harrison PMC, Pearce MT, Bhattacharya J, Luft CDB. Auditory but Not Audiovisual Cues Lead to Higher Neural Sensitivity to the Statistical Regularities of an Unfamiliar Musical Style. J Cogn Neurosci 2020; 32:2241-2259. PMID: 32762519; DOI: 10.1162/jocn_a_01614.
Abstract
It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.
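For readers unfamiliar with artificial musical grammars, the sketch below generates melodies from a toy first-order transition matrix (not the grammar used in this study), so that a final note can be classified as high- or low-probability given its context.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy first-order grammar over five scale tones: rows are current tones,
# columns are transition probabilities to the next tone (each row sums to 1).
tones = ["A", "B", "C", "D", "E"]
P = np.array([
    [0.05, 0.60, 0.20, 0.10, 0.05],
    [0.10, 0.05, 0.60, 0.20, 0.05],
    [0.05, 0.10, 0.05, 0.60, 0.20],
    [0.20, 0.05, 0.10, 0.05, 0.60],
    [0.60, 0.20, 0.05, 0.10, 0.05],
])

def generate(length=8):
    """Sample a melody of the given length from the toy grammar."""
    seq = [rng.integers(len(tones))]
    for _ in range(length - 1):
        seq.append(rng.choice(len(tones), p=P[seq[-1]]))
    return [tones[i] for i in seq]

melody = generate()
# Probability of the final note given the penultimate one: high- vs. low-probability
# endings are what correctness/surprisal judgements (and the N100 contrast) compare.
p_final = P[tones.index(melody[-2]), tones.index(melody[-1])]
print(melody, f"P(final | previous) = {p_final:.2f}")
```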
8
Zioga I, Harrison PM, Pearce MT, Bhattacharya J, Di Bernardi Luft C. From learning to creativity: Identifying the behavioural and neural correlates of learning to predict human judgements of musical creativity. Neuroimage 2020; 206:116311. DOI: 10.1016/j.neuroimage.2019.116311.
9
Pearce MT. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation. Ann N Y Acad Sci 2018; 1423:378-395. PMID: 29749625; PMCID: PMC6849749; DOI: 10.1111/nyas.13654.
Abstract
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception (expectation, emotion, memory, similarity, segmentation, and meter) can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here.
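A minimal sketch of the two quantities at the heart of such a model, information content (surprise) and entropy (uncertainty), computed here from a toy bigram model with add-one smoothing rather than from IDyOM's multiple-viewpoint machinery; the corpus below is invented.

```python
import math
from collections import defaultdict

# Invented training corpus of pitch sequences (scale degrees).
corpus = [[0, 2, 4, 5, 4, 2, 0], [0, 2, 4, 2, 0], [0, 2, 4, 5, 4, 2, 0]]

alphabet = sorted({p for seq in corpus for p in seq})
counts = defaultdict(lambda: defaultdict(int))
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def predictive_distribution(context):
    """P(next | context) estimated with add-one smoothing over the alphabet."""
    c = counts[context]
    total = sum(c.values()) + len(alphabet)
    return {x: (c[x] + 1) / total for x in alphabet}

def information_content(context, event):
    """Surprise of an event: -log2 P(event | context)."""
    return -math.log2(predictive_distribution(context)[event])

def entropy(context):
    """Uncertainty of the prediction: Shannon entropy of the predictive distribution."""
    dist = predictive_distribution(context)
    return -sum(p * math.log2(p) for p in dist.values())

# After hearing scale degree 4: a frequent continuation vs. one never heard after 4.
print(f"IC(2 | 4) = {information_content(4, 2):.2f} bits")
print(f"IC(0 | 4) = {information_content(4, 0):.2f} bits")
print(f"H(. | 4)  = {entropy(4):.2f} bits")
```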
Affiliation(s)
- Marcus T. Pearce
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Centre for Music in the Brain, Aarhus University, Aarhus, Denmark
10
Heald SLM, Van Hedger SC, Nusbaum HC. Perceptual Plasticity for Auditory Object Recognition. Front Psychol 2017; 8:781. PMID: 28588524; PMCID: PMC5440584; DOI: 10.3389/fpsyg.2017.00781.
Abstract
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.
11
Miles SA, Rosen DS, Grzywacz NM. A Statistical Analysis of the Relationship between Harmonic Surprise and Preference in Popular Music. Front Hum Neurosci 2017; 11:263. PMID: 28572763; PMCID: PMC5435755; DOI: 10.3389/fnhum.2017.00263.
Abstract
Studies have shown that some musical pieces may preferentially activate reward centers in the brain. Less is known, however, about the structural aspects of music that are associated with this activation. Based on the music cognition literature, we propose two hypotheses for why some musical pieces are preferred over others. The first, the Absolute-Surprise Hypothesis, states that unexpected events in music directly lead to pleasure. The second, the Contrastive-Surprise Hypothesis, proposes that the juxtaposition of unexpected events and subsequent expected events leads to an overall rewarding response. We tested these hypotheses within the framework of information theory, using the measure of "surprise." This information-theoretic variable mathematically describes how improbable an event is given a known distribution. We performed a statistical investigation of surprise in the harmonic structure of songs within a representative corpus of Western popular music, namely, the McGill Billboard Project corpus. We found that chords of songs in the top quartile of the Billboard chart showed greater average surprise than those in the bottom quartile. We also found that the different sections within top-quartile songs varied more in their average surprise than the sections within bottom-quartile songs. The results of this study are consistent with both the Absolute- and Contrastive-Surprise Hypotheses. Although these hypotheses seem contradictory to one another, we cannot yet discard the possibility that both absolute and contrastive types of surprise play roles in the enjoyment of popular music. We call this possibility the Hybrid-Surprise Hypothesis. The results of this statistical investigation have implications for both music cognition and the human neural mechanisms of esthetic judgments.
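A minimal sketch of the two corpus statistics compared here, a song's average harmonic surprise and the variability of surprise across its sections; the chord counts and songs below are invented stand-ins for the McGill Billboard data.

```python
import math
from collections import Counter
from statistics import mean, pvariance

def surprises(chords, dist):
    """Per-chord surprise, -log2 P(chord), under a corpus-level chord distribution."""
    total = sum(dist.values())
    return [-math.log2(dist[c] / total) for c in chords]

# Invented corpus-level chord counts standing in for the full Billboard distribution.
corpus_dist = Counter({"I": 500, "IV": 300, "V": 280, "vi": 150, "bVII": 40, "bVI": 30})

# Two invented songs, each a list of sections (e.g., verse, chorus, bridge).
conventional_song = [["I", "IV", "V", "I"], ["I", "V", "vi", "IV"], ["IV", "V", "I", "I"]]
surprising_song = [["I", "IV", "V", "I"], ["bVII", "bVI", "V", "I"], ["I", "bVII", "IV", "I"]]

for name, song in [("conventional", conventional_song), ("surprising", surprising_song)]:
    section_means = [mean(surprises(section, corpus_dist)) for section in song]
    print(f"{name}: mean surprise = {mean(section_means):.2f} bits, "
          f"between-section variance = {pvariance(section_means):.2f}")
```

Higher mean surprise corresponds to the Absolute-Surprise Hypothesis; larger between-section variability corresponds to the Contrastive-Surprise Hypothesis.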
Affiliation(s)
- Scott A. Miles
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
- Department of Neuroscience, Georgetown University, Washington, DC, United States
- David S. Rosen
- Applied Cognitive and Brain Sciences, Drexel University, Philadelphia, PA, United States
- Norberto M. Grzywacz
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, United States
- Department of Neuroscience, Georgetown University, Washington, DC, United States
- Department of Physics, Georgetown University, Washington, DC, United States
- Graduate School of Arts and Sciences, Georgetown University, Washington, DC, United States
12
Agres K, Abdallah S, Pearce M. Information-Theoretic Properties of Auditory Sequences Dynamically Influence Expectation and Memory. Cogn Sci 2017; 42:43-76. DOI: 10.1111/cogs.12477.
Affiliation(s)
- Kat Agres
- School of Electronic Engineering and Computer Science, Queen Mary University of London
- Samer Abdallah
- Department of Computer Science, University College London
- Marcus Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London
13
Rohrmeier M, Widdess R. Incidental Learning of Melodic Structure of North Indian Music. Cogn Sci 2016; 41:1299-1327. PMID: 27859578; DOI: 10.1111/cogs.12404.
Abstract
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies have explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done with ecologically valid stimuli or with non-Western music. The present study investigated implicit learning of modal melodic features in North Indian classical music in a realistic and ecologically valid way. It employed a cross-grammar design, using melodic materials from two modes (rāgas) that use the same scale. Findings indicated that Western participants unfamiliar with Indian music incidentally learned to identify distinctive features of each mode. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware of whether they were right in their responses; that is, they possessed explicit judgment knowledge. Altogether, our findings show incidental learning in a realistic, ecologically valid context after only a very short exposure, and they provide evidence that incidental learning constitutes a powerful mechanism that plays a fundamental role in musical acquisition.
Affiliation(s)
- Martin Rohrmeier
- Department of Art and Musicology, Dresden University of Technology
- Department of Linguistics and Philosophy, MIT Intelligence Initiative, Massachusetts Institute of Technology
- Richard Widdess
- Department of Music, School of Oriental and African Studies, University of London
14
Rohrmeier MA, Cross I. Modelling unsupervised online-learning of artificial grammars: linking implicit and statistical learning. Conscious Cogn 2014; 27:155-67. PMID: 24905545; DOI: 10.1016/j.concog.2014.03.011.
Abstract
Humans rapidly learn complex structures in various domains. Findings of above-chance performance by some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking, and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments with subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie both statistical and implicit learning paradigms and carry implications for study design and testing methodologies.
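A minimal sketch of the judge-then-learn loop described above, using only a bigram chunk inventory (far simpler than the paper's n-gram, chunking, and recurrent-network models); the letter strings are invented.

```python
from collections import Counter

class BigramChunkModel:
    """Toy online learner: judges a string by mean bigram familiarity, then learns it."""

    def __init__(self):
        self.chunks = Counter()

    def familiarity(self, string):
        bigrams = [string[i:i + 2] for i in range(len(string) - 1)]
        return sum(self.chunks[b] for b in bigrams) / max(len(bigrams), 1)

    def judge_then_learn(self, string):
        score = self.familiarity(string)  # the forced judgment comes first...
        self.chunks.update(string[i:i + 2] for i in range(len(string) - 1))  # ...then learning
        return score

model = BigramChunkModel()
# Invented test items: "grammatical" strings share bigram chunks, "ungrammatical" ones fewer,
# so familiarity with grammatical chunks accumulates even during an untrained test phase.
test_items = ["MTVX", "MTVR", "VXRM", "TMXV", "MTXV", "MTVX"]
scores = [model.judge_then_learn(s) for s in test_items]
print([round(s, 2) for s in scores])  # familiarity scores grow across the test phase
```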
Affiliation(s)
- Martin A Rohrmeier
- Cluster Languages of Emotion, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany
- Centre for Music and Science, Faculty of Music, University of Cambridge, United Kingdom
- Ian Cross
- Centre for Music and Science, Faculty of Music, University of Cambridge, United Kingdom
15
Rohrmeier M, Cross I. Artificial grammar learning of melody is constrained by melodic inconsistency: Narmour's principles affect melodic learning. PLoS One 2013; 8:e66174. PMID: 23874388; PMCID: PMC3706544; DOI: 10.1371/journal.pone.0066174.
Abstract
Considerable evidence suggests that people acquire artificial grammars incidentally and implicitly, an indispensable capacity for the acquisition of music or language. However, less research has been devoted to exploring constraints affecting incidental learning. Within the domain of music, we experimentally explored the extent to which Narmour's (1990) melodic principles affect implicit learning of melodic structure. Extending previous research (Rohrmeier, Rebuschat & Cross, 2011), the identical finite-state grammar was employed, with its terminals (the alphabet) manipulated so that the generated melodies systematically violated Narmour's principles. Results indicate that Narmour-inconsistent melodic materials impede implicit learning. This further constitutes a case in which artificial grammar learning is affected by prior knowledge or processing constraints.
Affiliation(s)
- Martin Rohrmeier
- Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- Ian Cross
- Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
16
17
Loui P. Learning and liking of melody and harmony: further studies in artificial grammar learning. Top Cogn Sci 2012; 4:554-67. PMID: 22760940; DOI: 10.1111/j.1756-8765.2012.01208.x.
Abstract
Much of what we know and love about music is based on implicitly acquired mental representations of musical pitches and the relationships between them. While previous studies have shown that these mental representations of music can be acquired rapidly and can influence preference, it is still unclear which aspects of music influence learning and preference formation. This article reports two experiments that use an artificial musical system to examine two questions: (1) which aspects of music matter most for learning, and (2) which aspects of music matter most for preference formation. Two aspects of music are tested: melody and harmony. In Experiment 1, we tested the learning and liking of a new musical system that was manipulated melodically so that only some of the possible conditional probabilities between successive notes were presented. In Experiment 2, we administered the same tests for learning and liking, but we used a musical system that was manipulated harmonically to eliminate the property of harmonic whole-integer ratios between pitches. Results show that disrupting melody (Experiment 1) disabled learning of the music without disrupting preference formation, whereas disrupting harmony (Experiment 2) did not affect learning and memory but did disrupt preference formation. Results point to a possible dissociation between learning and preference in musical knowledge.
Affiliation(s)
- Psyche Loui
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, USA.
18
Abstract
Musical knowledge is ubiquitous, effortless, and implicitly acquired all over the world via exposure to musical materials in one's culture. In contrast, one group of individuals who show insensitivity to music, specifically the inability to discriminate pitches and melodies, is the tone-deaf. In this study, we asked whether difficulties in pitch and melody discrimination among the tone-deaf could be related to learning difficulties, and, if so, what processes of learning might be affected in the tone-deaf. We investigated the learning of frequency information in a new musical system in tone-deaf individuals and matched controls. Results showed significantly impaired learning abilities in frequency matching in the tone-deaf. This impairment was positively correlated with the severity of tone deafness as assessed by the Montreal Battery for Evaluation of Amusia. Taken together, the results suggest that tone deafness is characterized by an impaired ability to acquire frequency information from pitched materials in the sound environment.
Affiliation(s)
- Psyche Loui
- Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, Massachusetts 02215, USA.
19
Incidental and online learning of melodic structure. Conscious Cogn 2011; 20:214-22. DOI: 10.1016/j.concog.2010.07.004.
20
Kim SG, Kim JS, Chung CK. The effect of conditional probability of chord progression on brain response: an MEG study. PLoS One 2011; 6:e17337. PMID: 21364895; PMCID: PMC3045443; DOI: 10.1371/journal.pone.0017337.
Abstract
Background: Recent electrophysiological and neuroimaging studies have explored how and where musical syntax in Western music is processed in the human brain. An inappropriate chord progression elicits an event-related potential (ERP) component called an early right anterior negativity (ERAN) or simply an early anterior negativity (EAN) in an early stage of processing the musical syntax. Though the possible underlying mechanism of the EAN is assumed to be probabilistic learning, the effect of the probability of chord progressions on the EAN response has not previously been explored explicitly.
Methodology/Principal Findings: In the present study, the empirical conditional probabilities in a Western music corpus were employed as an approximation of the frequencies in participants' previous exposure. Three types of chord progression were presented to musicians and non-musicians in order to examine the correlation between the probability of chord progression and the neuromagnetic response using magnetoencephalography (MEG). Chord progressions elicited early responses that correlated negatively with the conditional probability. The observed EANm (the magnetic counterpart of the EAN component) responses were consistent with previously reported EAN responses in terms of latency and location. The effect of conditional probability interacted with the effect of musical training. In addition, the neural response also correlated with the behavioral measures in the non-musicians.
Conclusions/Significance: Our study is the first to reveal the correlation between the probability of chord progression and the corresponding neuromagnetic response. The current results suggest that the physiological response is a reflection of the probabilistic representations of the musical syntax. Moreover, the results indicate that the probabilistic representation is related to musical training as well as to the sensitivity of the individual.
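A minimal sketch of the empirical conditional probability that the neuromagnetic response was found to track; the toy progressions below stand in for the Western music corpus used in the study.

```python
from collections import Counter, defaultdict

# Toy corpus of chord progressions (Roman numerals), standing in for a Western corpus.
corpus = [
    ["I", "IV", "V", "I"],
    ["I", "vi", "IV", "V", "I"],
    ["I", "IV", "I", "V", "I"],
    ["I", "V", "vi", "IV", "I"],
]

transitions = defaultdict(Counter)
for progression in corpus:
    for prev, nxt in zip(progression, progression[1:]):
        transitions[prev][nxt] += 1

def conditional_probability(prev, nxt):
    """Empirical P(next chord | previous chord) from the corpus counts."""
    return transitions[prev][nxt] / sum(transitions[prev].values())

# A V -> I resolution is common (high probability); V -> vi is rarer in this toy corpus.
print(f"P(I | V)  = {conditional_probability('V', 'I'):.2f}")
print(f"P(vi | V) = {conditional_probability('V', 'vi'):.2f}")
```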
Affiliation(s)
- Seung-Goo Kim
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
21
White matter integrity in right hemisphere predicts pitch-related grammar learning. Neuroimage 2010; 55:500-7. PMID: 21168517; DOI: 10.1016/j.neuroimage.2010.12.022.
Abstract
White matter plays an important role in various domains of cognitive function. While disruptions in white matter are known to affect many domains of behavior and cognition, the ability to acquire grammatical regularities has been mostly linked to the left hemisphere, perhaps due to its dependence on linguistic stimuli. The role of white matter in the right hemisphere in grammar acquisition is as yet unknown. Here we show for the first time that in the domain of pitch, intact white matter connectivity in right-hemisphere analogs of language areas is important for grammar learning. A pitch-based artificial grammar learning task was conducted on subjects who also underwent diffusion tensor imaging. Probabilistic tractography using seed regions of interest in the right inferior frontal gyrus and right middle temporal gyrus showed positive correlations between tract volume and learning performance. Furthermore, significant correlations were observed between learning performance and fractional anisotropy (FA) in white matter underlying the supramarginal gyrus, corresponding to the right temporal-parietal junction of the arcuate fasciculus. The control task of recognition did not correlate with tract volume or FA, and control tracts in the left hemisphere did not correlate with behavioral performance. Results show that the right ventral arcuate fasciculus is important in pitch-based artificial grammar learning, and that brain structures subserving learning may be tied to the hemisphere that processes the stimulus more generally.
22
Tillmann B, Poulin-Charronnat B. Auditory expectations for newly acquired structures. Q J Exp Psychol (Hove) 2010; 63:1646-64. DOI: 10.1080/17470210903511228.
Abstract
Our study investigated whether newly acquired auditory structure knowledge allows listeners to develop perceptual expectations for future events. To that end, we introduced a new experimental approach that combines implicit learning and priming paradigms. Participants were first exposed to structured tone sequences without being told about the underlying artificial grammar. They then made speeded judgements on a perceptual feature of target tones in new sequences (i.e., in-tune/out-of-tune judgements). The target tones respected or violated the structure of the artificial grammar and were thus supposed to be expected or unexpected. In this priming task, grammatical tones were processed faster and more accurately than ungrammatical ones. This processing advantage was observed for an experimental group that performed a memory task during the exposure phase, but not for a control group that lacked the exposure phase (Experiment 1). It persisted when participants performed an in-tune/out-of-tune detection task during exposure (Experiment 2). This finding suggests that the acquisition of new structure knowledge not only influences grammaticality judgements on entire sequences (as previously shown in implicit learning research), but also allows listeners to develop perceptual expectations that influence the processing of single events. It further promotes the priming paradigm as providing implicit access to acquired artificial structure knowledge.
Affiliation(s)
- Bénédicte Poulin-Charronnat
- Université Claude Bernard Lyon 1, CNRS-UMR 5020, Lyon, France
- Université de Bourgogne, CNRS-UMR 5022, Dijon, France
23
Abstract
Surviving in a complex and changeable environment relies on the ability to extract probable recurring patterns. Here we report a neurophysiological mechanism for rapid probabilistic learning of a new system of music. Participants listened to different combinations of tones from a previously unheard system of pitches based on the Bohlen-Pierce scale, with chord progressions that form 3:1 frequency ratios, notably different from the 2:1 frequency ratios of existing musical systems. Event-related brain potentials elicited by improbable sounds in the new music system showed the emergence, over a 1-h period, of physiological signatures known to index sound expectation in standard Western music. These indices of expectation learning were eliminated when sound patterns were played equiprobably, and they covaried with individual behavioral differences in learning. These results demonstrate that humans use a generalized probability-based perceptual learning mechanism to process novel sound patterns in music.
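For orientation, the sketch below lays out the equal-tempered form of the Bohlen-Pierce scale (one common realization of the system referred to here), which divides a 3:1 frequency ratio, the "tritave", into 13 equal steps instead of dividing the 2:1 octave into 12; the 220 Hz reference frequency is an arbitrary choice for illustration.

```python
# Equal-tempered Bohlen-Pierce scale: 13 equal divisions of the 3:1 "tritave".
base_freq = 220.0  # assumed reference frequency in Hz, chosen arbitrarily

bp_scale = [base_freq * 3 ** (k / 13) for k in range(14)]
print([round(f, 1) for f in bp_scale])                 # step 13 is exactly 3 x the base
print(f"tritave ratio: {bp_scale[13] / bp_scale[0]:.1f}")

# For comparison, the familiar 12-tone equal-tempered scale divides a 2:1 octave.
octave_scale = [base_freq * 2 ** (k / 12) for k in range(13)]
print(f"octave ratio:  {octave_scale[12] / octave_scale[0]:.1f}")
```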