1. Ishida K, Nittono H. Statistical Learning of Chord-Transition Regularities in a Novel Equitempered Scale: An MMN Study. Neurosci Lett 2023; 815:137478. PMID: 37714286. DOI: 10.1016/j.neulet.2023.137478.
Abstract
In music and language domains, it has been suggested that patterned transitions of sounds can be acquired implicitly through statistical learning. Previous studies have investigated the statistical learning of auditory regularities by recording early neural responses to a sequence of tones presented at high or low transition probabilities. However, it remains unclear whether the statistical learning of musical chord transitions is reflected in endogenous, regularity-dependent components of the event-related potential (ERP). The present study aimed to record the mismatch negativity (MMN) elicited by chord transitions that deviated from newly learned transitional regularities. Chords were generated in a novel 18 equal temperament pitch class scale to avoid interference from the existing tonal representations of the 12 equal temperament pitch class system. Thirty-six adults without professional musical training listened to a sequence of randomly inverted chords in which certain chords were presented with high (standard) or low (deviant) transition probabilities. An irrelevant timbre change detection task was assigned to keep their attention on the sequence during the ERP recording. Afterwards, a familiarity test was administered in which the participants were asked to choose the more familiar chord sequence out of two successive sequences. The results showed that deviant transitions elicited the MMN, although the participants could not recognize the standard transition beyond the level of chance. These findings suggest that humans can statistically learn new transitional regularities of chords in a novel musical scale, even when they cannot recognize them explicitly. This study provides further evidence that music-syntactic regularities can be acquired implicitly through statistical learning.
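The 18 equal temperament scale and the high/low transition-probability design described above can be sketched as follows; the chord labels, probability values, and base frequency are illustrative assumptions, not the study's actual stimulus parameters:

```python
import random

def tet18_freq(step, base=440.0):
    """Frequency of a pitch `step` steps above `base` in 18 equal temperament,
    where each step multiplies frequency by 2**(1/18)."""
    return base * 2 ** (step / 18)

# A toy first-order transition table over three chord labels, mimicking the
# standard (high-probability) vs. deviant (low-probability) design.
TRANSITIONS = {
    "A": [("B", 0.9), ("C", 0.1)],  # A->B standard, A->C deviant
    "B": [("C", 0.9), ("A", 0.1)],
    "C": [("A", 0.9), ("B", 0.1)],
}

def next_chord(current, rng=random):
    labels, probs = zip(*TRANSITIONS[current])
    return rng.choices(labels, weights=probs, k=1)[0]

def generate_sequence(length, start="A", rng=random):
    """Generate a chord-label sequence governed by the transition table."""
    seq = [start]
    for _ in range(length - 1):
        seq.append(next_chord(seq[-1], rng))
    return seq
```

An MMN paradigm of this kind then tags each realized transition as standard or deviant according to which branch of the table produced it.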
Affiliation(s)
- Kai Ishida
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
- Hiroshi Nittono
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka 565-0871, Japan
2. Loui P. New music system reveals spectral contribution to statistical learning. Cognition 2022; 224:105071. PMID: 35227982. DOI: 10.1016/j.cognition.2022.105071.
Abstract
Knowledge of speech and music depends upon the ability to perceive relationships between sounds in order to form a stable mental representation of statistical structure. Although evidence exists for the learning of musical scale structure from the statistical properties of sound events, little research has been able to observe how specific acoustic features contribute to statistical learning independent of the effects of long-term exposure. Here, using a new musical system, we show that spectral content is an important cue for acquiring musical scale structure. In two experiments, participants completed probe-tone ratings before and after a half-hour period of exposure to melodies in a novel musical scale with a predefined statistical structure. In Experiment 1, participants were randomly assigned to either a no-exposure control group, or to exposure groups who heard pure tone or complex tone sequences. In Experiment 2, participants were randomly assigned to exposure groups who heard complex tones constructed with odd harmonics or even harmonics. Learning outcome was assessed by correlating pre/post-exposure ratings and the statistical structure of tones within the exposure period. Spectral information significantly affected sensitivity to statistical structure: participants were able to learn after exposure to all tested timbres, but did best at learning with timbres with odd harmonics, which were congruent with scale structure. Results show that spectral amplitude distribution is a useful cue for statistical learning, and suggest that musical scale structure might be acquired through exposure to spectral distribution in sounds.
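The odd- versus even-harmonic tone construction contrasted in the two experiments can be sketched with simple additive synthesis; the equal amplitudes, duration, and sample rate here are illustrative assumptions rather than the paper's actual synthesis parameters:

```python
import math

def complex_tone(freq, harmonics, sample_rate=44100, duration=0.5):
    """Additively synthesize a tone from the given harmonic numbers.

    `harmonics` is e.g. [1, 3, 5] for a tone with odd partials only, or
    [1, 2, 4, 6] for one whose upper partials are even multiples of the
    fundamental.
    """
    n_samples = int(sample_rate * duration)
    samples = []
    for i in range(n_samples):
        t = i / sample_rate
        # Equal-amplitude partials, scaled so the sum stays within [-1, 1].
        s = sum(math.sin(2 * math.pi * freq * h * t) for h in harmonics)
        samples.append(s / len(harmonics))
    return samples
```

Whether a timbre's partials line up with the scale's interval structure is what makes the spectrum "congruent" or "incongruent" with the scale in the sense discussed above.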
3. Musical instrument familiarity affects statistical learning of tone sequences. Cognition 2021; 218:104949. PMID: 34768123. DOI: 10.1016/j.cognition.2021.104949.
Abstract
Most listeners have an implicit understanding of the rules that govern how music unfolds over time. This knowledge is acquired in part through statistical learning, a robust learning mechanism that allows individuals to extract regularities from the environment. However, it is presently unclear how this prior musical knowledge might facilitate or interfere with the learning of novel tone sequences that do not conform to familiar musical rules. In the present experiment, participants listened to novel, statistically structured tone sequences composed of pitch intervals not typically found in Western music. Between participants, the tone sequences had the timbre of either artificial, computerized instruments or familiar instruments (piano or violin). Knowledge of the statistical regularities was measured by a two-alternative forced-choice recognition task requiring discrimination between novel sequences that followed versus violated the statistical structure, assessed at three time points (immediately post-training, as well as one day and one week post-training). Compared to artificial instruments, training on familiar instruments resulted in reduced accuracy. Moreover, sequences from familiar instruments - but not artificial instruments - were more likely to be judged as grammatical when they contained intervals that approximated those commonly used in Western music, even though this cue was non-informative. Overall, these results demonstrate that instrument familiarity can interfere with the learning of novel statistical regularities, presumably through biasing memory representations to be aligned with Western musical structures. These results demonstrate that real-world experience influences statistical learning in a non-linguistic domain, supporting the view that statistical learning involves the continuous updating of existing representations, rather than the establishment of entirely novel ones.
4. Zioga I, Harrison PMC, Pearce MT, Bhattacharya J, Luft CDB. Auditory but Not Audiovisual Cues Lead to Higher Neural Sensitivity to the Statistical Regularities of an Unfamiliar Musical Style. J Cogn Neurosci 2020; 32:2241-2259. PMID: 32762519. DOI: 10.1162/jocn_a_01614.
Abstract
It is still a matter of debate whether visual aids improve learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without aids (auditory-only: AO, audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and by asking participants to judge the correctness and surprisal of the final note, while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.
5. Zioga I, Harrison PM, Pearce MT, Bhattacharya J, Di Bernardi Luft C. From learning to creativity: Identifying the behavioural and neural correlates of learning to predict human judgements of musical creativity. Neuroimage 2020; 206:116311. DOI: 10.1016/j.neuroimage.2019.116311.
6. Smit EA, Milne AJ, Dean RT, Weidemann G. Perception of affect in unfamiliar musical chords. PLoS One 2019; 14:e0218570. PMID: 31226170. PMCID: PMC6588276. DOI: 10.1371/journal.pone.0218570.
Abstract
This study investigates the role of extrinsic and intrinsic predictors in the perception of affect in mostly unfamiliar musical chords from the Bohlen-Pierce microtonal tuning system. Extrinsic predictors are derived, in part, from long-term statistical regularities in music; for example, the prevalence of a chord in a corpus of music that is relevant to a participant. Conversely, intrinsic predictors make no use of long-term statistical regularities in music; for example, psychoacoustic features inherent in the music, such as roughness. Two types of affect were measured for each chord: pleasantness/unpleasantness and happiness/sadness. We modelled the data with a number of novel and well-established intrinsic predictors, namely roughness, harmonicity, spectral entropy and average pitch height; and a single extrinsic predictor, 12-TET Dissimilarity, which was estimated by the chord's smallest distance to any 12-tone equally tempered chord. Musical sophistication was modelled as a potential moderator of the above predictors. Two experiments were conducted, each using slightly different tunings of the Bohlen-Pierce musical system: a just intonation version and an equal-tempered version. It was found that, across both tunings and across both affective responses, all the tested intrinsic features and 12-TET Dissimilarity have consistent influences in the expected direction. These results contrast with much current music perception research, which tends to assume the dominance of extrinsic over intrinsic predictors. This study highlights the importance of both intrinsic characteristics of the acoustic signal itself, as well as extrinsic factors, such as 12-TET Dissimilarity, on perception of affect in music.
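As a rough numerical illustration (not the paper's actual measure), the equal-tempered Bohlen-Pierce scale divides the 3:1 "tritave" into 13 equal steps, and a per-pitch distance in cents to the nearest 12-TET pitch can serve as a simple proxy for the idea behind 12-TET Dissimilarity; the base and reference frequencies here are arbitrary choices:

```python
import math

def bp_freq(step, base=220.0):
    """Equal-tempered Bohlen-Pierce pitch: 13 steps per 3:1 'tritave',
    so each step multiplies frequency by 3**(1/13)."""
    return base * 3 ** (step / 13)

def cents_from_nearest_12tet(freq, ref=440.0):
    """Distance in cents from `freq` to the nearest 12-TET pitch (A4 = ref).

    A per-pitch simplification of the paper's chord-level 12-TET
    Dissimilarity, which takes a chord's smallest distance to any
    12-tone equally tempered chord.
    """
    cents = 1200 * math.log2(freq / ref)  # cents above/below the reference
    nearest = round(cents / 100) * 100    # snap to the nearest semitone grid
    return abs(cents - nearest)
```

Pitches that land far from every semitone grid point on this measure are the ones most dissimilar to familiar 12-TET material.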
Affiliation(s)
- Eline Adrianne Smit
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- Andrew J. Milne
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- Roger T. Dean
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- Gabrielle Weidemann
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Milperra, NSW, Australia
- School of Social Sciences and Psychology, Western Sydney University, Milperra, NSW, Australia
7. Lumaca M, Ravignani A, Baggio G. Music Evolution in the Laboratory: Cultural Transmission Meets Neurophysiology. Front Neurosci 2018; 12:246. PMID: 29713263. PMCID: PMC5911491. DOI: 10.3389/fnins.2018.00246.
Abstract
In recent years, there has been renewed interest in the biological and cultural evolution of music, and specifically in the role played by perceptual and cognitive factors in shaping core features of musical systems, such as melody, harmony, and rhythm. One proposal originates in the language sciences. It holds that aspects of musical systems evolve by adapting gradually, in the course of successive generations, to the structural and functional characteristics of the sensory and memory systems of learners and “users” of music. This hypothesis has found initial support in laboratory experiments on music transmission. In this article, we first review some of the most important theoretical and empirical contributions to the field of music evolution. Next, we identify a major current limitation of these studies, i.e., the lack of direct neural support for the hypothesis of cognitive adaptation. Finally, we discuss a recent experiment in which this issue was addressed by using event-related potentials (ERPs). We suggest that the introduction of neurophysiology in cultural transmission research may provide novel insights on the micro-evolutionary origins of forms of variation observed in cultural systems.
Affiliation(s)
- Massimo Lumaca
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Andrea Ravignani
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Giosuè Baggio
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
8. Rohrmeier M, Widdess R. Incidental Learning of Melodic Structure of North Indian Music. Cogn Sci 2016; 41:1299-1327. PMID: 27859578. DOI: 10.1111/cogs.12404.
Abstract
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies have explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done with respect to ecologically valid stimuli or non-Western music. The present study investigated implicit learning of modal melodic features in North Indian classical music in a realistic and ecologically valid way. It employed a cross-grammar design, using melodic materials from two modes (rāgas) that use the same scale. Findings indicated that Western participants unfamiliar with Indian music incidentally learned to identify distinctive features of each mode. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware of whether they were right in their responses; that is, they possessed explicit judgment knowledge. Altogether, our findings show incidental learning in a realistic, ecologically valid context after only a very short exposure, and they provide evidence that incidental learning constitutes a powerful mechanism that plays a fundamental role in musical acquisition.
Affiliation(s)
- Martin Rohrmeier
- Department of Art and Musicology, Dresden University of Technology
- Department of Linguistics and Philosophy, MIT Intelligence Initiative, Massachusetts Institute of Technology
- Richard Widdess
- Department of Music, School of Oriental and African Studies, University of London
9. Processing structure in language and music: a case for shared reliance on cognitive control. Psychon Bull Rev 2016; 22:637-52. PMID: 25092390. DOI: 10.3758/s13423-014-0712-4.
Abstract
The relationship between structural processing in music and language has received increasing interest in the past several years, spurred by the influential Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, Nature Neuroscience, 6, 674-681, 2003). According to this resource-sharing framework, music and language rely on separable syntactic representations but recruit shared cognitive resources to integrate these representations into evolving structures. The SSIRH is supported by findings of interactions between structural manipulations in music and language. However, other recent evidence suggests that such interactions also can arise with nonstructural manipulations, and some recent neuroimaging studies report largely nonoverlapping neural regions involved in processing musical and linguistic structure. These conflicting results raise the question of exactly what shared (and distinct) resources underlie musical and linguistic structural processing. This paper suggests that one shared resource is prefrontal cortical mechanisms of cognitive control, which are recruited to detect and resolve conflict that occurs when expectations are violated and interpretations must be revised. By this account, musical processing involves not just the incremental processing and integration of musical elements as they occur, but also the incremental generation of musical predictions and expectations, which must sometimes be overridden and revised in light of evolving musical input.
10. Predictions and the brain: how musical sounds become rewarding. Trends Cogn Sci 2015; 19:86-91. DOI: 10.1016/j.tics.2014.12.001.
11. Selchenkova T, Jones MR, Tillmann B. The influence of temporal regularities on the implicit learning of pitch structures. Q J Exp Psychol (Hove) 2014; 67:2360-80. DOI: 10.1080/17470218.2014.929155.
Abstract
Implicit learning is the acquisition of complex information without the intention to learn. The aim of this study was to investigate the influence of temporal regularities on the implicit learning of an artificial pitch grammar. According to the dynamic attending theory (DAT), external regularities can entrain internal oscillators that guide attention over time, inducing temporal expectations that influence perception of future events. In the present study, the presentation of the artificial pitch grammar in the exposure phase was temporally either regular or irregular for one of two participant groups. Based on the DAT, it was hypothesized that the regular temporal presentation would favour implicit learning of tone structures in comparison to the irregular temporal presentation. Results demonstrated learning of the artificial grammar for the group with the regular exposure phase and partial learning for the group with the irregular exposure phase. These findings suggest that the regular presentation helps listeners to develop perceptual expectations about the temporal occurrence of future tones and thus facilitates the learning of the artificial pitch grammar.
Affiliation(s)
- Tatiana Selchenkova
- CNRS, UMR5292; INSERM, U1028; Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France
- University Lyon 1, Villeurbanne, France
- Mari Riess Jones
- Department of Psychology, The Ohio State University, Columbus, OH, USA
- Department of Psychology, University of California, Santa Barbara, CA, USA
- Barbara Tillmann
- CNRS, UMR5292; INSERM, U1028; Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France
- University Lyon 1, Villeurbanne, France
12. Rohrmeier M, Cross I. Artificial grammar learning of melody is constrained by melodic inconsistency: Narmour's principles affect melodic learning. PLoS One 2013; 8:e66174. PMID: 23874388. PMCID: PMC3706544. DOI: 10.1371/journal.pone.0066174.
Abstract
Considerable evidence suggests that people acquire artificial grammars incidentally and implicitly, an indispensable capacity for the acquisition of music or language. However, less research has been devoted to exploring constraints affecting incidental learning. Within the domain of music, the extent to which Narmour's (1990) melodic principles affect implicit learning of melodic structure was experimentally explored. Extending previous research (Rohrmeier, Rebuschat & Cross, 2011), the same finite-state grammar was employed, with its terminals (the alphabet) manipulated so that the generated melodies systematically violated Narmour's principles. Results indicate that Narmour-inconsistent melodic materials impede implicit learning. This further constitutes a case in which artificial grammar learning is affected by prior knowledge or processing constraints.
Affiliation(s)
- Martin Rohrmeier
- Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- Ian Cross
- Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
13.
Abstract
Why should music be of interest to cognitive scientists, and what role does it play in human cognition? We review three factors that make music an important topic for cognitive scientific research. First, music is a universal human trait fulfilling crucial roles in everyday life. Second, music has an important part to play in ontogenetic development and human evolution. Third, appreciating and producing music simultaneously engage many complex perceptual, cognitive, and emotional processes, rendering music an ideal object for studying the mind. We propose an integrated status for music cognition in the Cognitive Sciences and conclude by reviewing challenges and big questions in the field and the way in which these reflect recent developments.
Affiliation(s)
- Marcus Pearce
- Music Cognition Lab, School of Electronic Engineering & Computer Science, Queen Mary, University of London
14.

15.