1. Reply to 'Towards a cross-cultural framework for predictive coding of music'. Nat Rev Neurosci 2022; 23:641-642. PMID: 35995945. DOI: 10.1038/s41583-022-00621-5.
2. Shin H, Fujioka T. Effects of Visual Predictive Information and Sequential Context on Neural Processing of Musical Syntax. Front Psychol 2019; 9:2528. PMID: 30618951. PMCID: PMC6300505. DOI: 10.3389/fpsyg.2018.02528.
Abstract
The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical and non-musical cues differ in their effects, and how these cues interact with sequential effects between trials, which may vary with the strength of the established sense of tonality. The EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli: two provided predictive information about the syntactic validity of the last chord, through either musical notation of the whole sequence or the word "regular" or "irregular," while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord, while the other half ended with the Tonic chord. A clear ERAN was observed at frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand-average response in the audio-only condition to separate the spatio-temporal dynamics of different scalp areas into principal components (PCs), which were then used to extract auditory-related neural activity in the visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and of a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the Tonic chord. The ERAN in PC2 was generally reduced in trials following Neapolitan endings. However, the extent of this reduction differed between cue styles: it was nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty. The results suggest that right frontal areas play the primary role in musical syntactic analysis and integration of the ongoing context, producing schematic expectations that, together with the veridical expectations incorporated by the temporal areas, inform musical syntactic processing in musicians.
Affiliation(s)
- Hana Shin: Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States
- Takako Fujioka: Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States; Stanford Neurosciences Institute, Stanford University, Stanford, CA, United States
3. Bedoin N, Brisseau L, Molinier P, Roch D, Tillmann B. Temporally Regular Musical Primes Facilitate Subsequent Syntax Processing in Children with Specific Language Impairment. Front Neurosci 2016; 10:245. PMID: 27378833. PMCID: PMC4913515. DOI: 10.3389/fnins.2016.00245.
Abstract
Children with developmental language disorders have also been shown to be impaired in rhythm and meter perception. Temporal processing and its link to language processing can be understood within the dynamic attending theory: an external stimulus can entrain internal oscillators, which orient attention over time and drive speech-signal segmentation, providing benefits for syntax processing, which is impaired in various patient populations. For children with Specific Language Impairment (SLI) and dyslexia, previous research has shown the influence of external rhythmic stimulation on subsequent language processing by comparing the influence of a temporally regular musical prime to that of a temporally irregular prime. Here we tested whether the observed rhythmic stimulation effect is indeed due to a benefit provided by the regular musical prime (rather than a cost incurred by the temporally irregular prime). Sixteen children with SLI and 16 age-matched controls listened to either a regular musical prime sequence or an environmental sound scene (without temporal regularities in event occurrence; referred to as the "baseline condition"), followed by grammatically correct and incorrect sentences. They were required to perform grammaticality judgments on each auditorily presented sentence. Results revealed that performance on the grammaticality judgments was better after the regular prime sequences than after the baseline sequences. Our findings are interpreted within the dynamic attending theory (Jones, 1976) and the temporal sampling (oscillatory) framework for developmental language disorders (Goswami, 2011). Furthermore, they encourage the use of rhythmic structures (even in non-verbal materials) to boost linguistic structure processing and outline perspectives for rehabilitation.
Affiliation(s)
- Nathalie Bedoin: Dynamique Du Langage Laboratory, Centre National de la Recherche Scientifique UMR 5596 and University Lyon 2, Lyon, France
- Lucie Brisseau: Institut Médico-Educatif Franchemont, Franchemont, France
- Didier Roch: Institut Médico-Educatif Franchemont, Franchemont, France
- Barbara Tillmann: Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Centre National de la Recherche Scientifique UMR 5292, INSERM U 1082, University Lyon 1, Lyon, France
4. Computational-Model-Based Analysis of Context Effects on Harmonic Expectancy. PLoS One 2016; 11:e0151374. PMID: 27003807. PMCID: PMC4803284. DOI: 10.1371/journal.pone.0151374.
Abstract
Expectancy for an upcoming musical chord, harmonic expectancy, is thought to be based on the automatic activation of tonal knowledge. Because previous studies implicitly relied on interpretations grounded in Western music theory, the computational processes underlying harmonic expectancy, and how they relate to tonality, need further clarification. In particular, short chord sequences that do not establish a unique key are difficult to interpret in music-theoretic terms. In this study, we examined the effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy and then compared the models' explanatory power. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.
5. Fogel AR, Rosenberg JC, Lehman FM, Kuperberg GR, Patel AD. Studying Musical and Linguistic Prediction in Comparable Ways: The Melodic Cloze Probability Method. Front Psychol 2015; 6:1718. PMID: 26617548. PMCID: PMC4641899. DOI: 10.3389/fpsyg.2015.01718.
Abstract
Prediction, or expectancy, is thought to play an important role in both music and language processing. However, prediction is currently studied independently in the two domains, limiting research on the relations between predictive mechanisms in music and language. One limitation is a difference in how expectancy is quantified. In language, expectancy is typically measured using the cloze probability task, in which listeners are asked to complete a sentence fragment with the first word that comes to mind. In contrast, previous production-based studies of melodic expectancy have asked participants to sing continuations following only one to two notes. We have developed a melodic cloze probability task in which listeners are presented with the beginning of a novel tonal melody (5-9 notes) and are asked to sing the note they expect to come next. Half of the melodies had an underlying harmonic structure designed to constrain expectations for the next note, based on an implied authentic cadence (AC) within the melody. Each such 'authentic cadence' melody was paired with a 'non-cadential' (NC) melody matched in length, rhythm, and melodic contour but differing in implied harmonic structure. On average, participants showed much greater consistency in the notes sung following AC than NC melodies. However, significant variation in the degree of consistency was observed within both AC and NC melodies. Analysis of individual melodies suggests that pitch prediction in tonal melodies depends on the interplay of local factors just prior to the target note (e.g., local pitch-interval patterns) and larger-scale structural relationships (e.g., melodic patterns and implied harmonic structure). We illustrate how the melodic cloze method can be used to test a computational model of melodic expectation. Future uses of the method include exploring the interplay of different factors shaping melodic expectation and designing experiments that compare the cognitive mechanisms of prediction in music and language.
Affiliation(s)
- Jason C Rosenberg: Department of Arts and Humanities, Yale-NUS College, Singapore
- Gina R Kuperberg: Department of Psychology, Tufts University, Medford, MA, USA; MGH/HST Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA; Department of Psychiatry, Massachusetts General Hospital, Charlestown, MA, USA
6. Selchenkova T, Jones MR, Tillmann B. The influence of temporal regularities on the implicit learning of pitch structures. Q J Exp Psychol (Hove) 2014; 67:2360-80. DOI: 10.1080/17470218.2014.929155.
Abstract
Implicit learning is the acquisition of complex information without the intention to learn. The aim of this study was to investigate the influence of temporal regularities on the implicit learning of an artificial pitch grammar. According to the dynamic attending theory (DAT), external regularities can entrain internal oscillators that guide attention over time, inducing temporal expectations that influence the perception of future events. In the present study, the artificial pitch grammar was presented in the exposure phase in a temporally regular manner for one participant group and in a temporally irregular manner for the other. Based on the DAT, it was hypothesized that the regular temporal presentation would favour implicit learning of tone structures in comparison to the irregular temporal presentation. Results demonstrated learning of the artificial grammar for the group with the regular exposure phase and partial learning for the group with the irregular exposure phase. These findings suggest that the regular presentation helps listeners develop perceptual expectations about the temporal occurrence of future tones and thus facilitates the learning of the artificial pitch grammar.
Affiliation(s)
- Tatiana Selchenkova: CNRS UMR5292, INSERM U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France; University Lyon 1, Villeurbanne, France
- Mari Riess Jones: Department of Psychology, The Ohio State University, Columbus, OH, USA; Department of Psychology, University of California, Santa Barbara, CA, USA
- Barbara Tillmann: CNRS UMR5292, INSERM U1028, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, Lyon, France; University Lyon 1, Villeurbanne, France
7. Vuvan DT, Podolak OM, Schmuckler MA. Memory for musical tones: the impact of tonality and the creation of false memories. Front Psychol 2014; 5:582. PMID: 24971071. PMCID: PMC4054327. DOI: 10.3389/fpsyg.2014.00582.
Abstract
Although the relation between tonality and musical memory has been fairly well studied, less is known about the contribution of tonal-schematic expectancies to this relation. Three experiments investigated the influence of tonal expectancies on memory for single tones in a tonal melodic context. In the first experiment, listener responses indicated better recognition of both expected and unexpected targets than of moderately expected targets in a major tonal context. Importantly, and in support of previous work on false memories, listener responses also revealed a higher false alarm rate for expected than for unexpected targets. These results indicate roles for both tonal-schematic congruency and distinctiveness in memory for melodic tones. The second experiment used minor melodies, which weaken tonal expectancies because the minor tonality can be represented in three forms simultaneously. Finally, tonal expectancies were abolished entirely in the third experiment through the use of atonal melodies. Accordingly, the expectancy-based results observed in the first experiment were disrupted in the second experiment and disappeared in the third. These results are discussed in light of schema theory, musical expectancy, and classic memory work on the availability and distinctiveness heuristics.
Affiliation(s)
- Dominique T Vuvan: Department of Psychology, International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, QC, Canada
- Olivia M Podolak: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Mark A Schmuckler: Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
8. Bigand E, Delbé C, Poulin-Charronnat B, Leman M, Tillmann B. Empirical evidence for musical syntax processing? Computer simulations reveal the contribution of auditory short-term memory. Front Syst Neurosci 2014; 8:94. PMID: 24936174. PMCID: PMC4047967. DOI: 10.3389/fnsys.2014.00094.
Abstract
During the last decade, it has been argued (1) that music processing involves syntactic representations similar to those observed in language, and (2) that music and language share similar syntactic-like processes and neural resources. This claim is important for understanding the origin of music and language abilities and, furthermore, has clinical implications. The Western musical system, however, is rooted in the psychoacoustic properties of sound, which is not the case for linguistic syntax. Accordingly, musical syntax processing could be understood more parsimoniously as an emergent property of auditory memory rather than as abstract processing similar to linguistic processing. To support this view, we simulated numerous empirical studies that investigated the processing of harmonic structures, using a model based on the accumulation of sensory information in auditory memory. The simulations revealed that most of the musical syntax manipulations used with behavioral and neurophysiological methods, as well as with developmental and cross-cultural approaches, can be accounted for by the auditory memory model. This led us to question whether current research on musical syntax can really be compared with linguistic processing. Our simulations also raise methodological and theoretical challenges for studying musical syntax while disentangling confounded low-level sensory influences. To investigate syntactic abilities in music comparable to those in language, research should preferentially use musical material with structures that circumvent the tonal effect exerted by the psychoacoustic properties of sounds.
Affiliation(s)
- Emmanuel Bigand: LEAD, CNRS UMR 5022, Université de Bourgogne, Dijon, France; Institut Universitaire de France, Paris, France
- Charles Delbé: LEAD, CNRS UMR 5022, Université de Bourgogne, Dijon, France
- Marc Leman: Department of Musicology, IPEM, Ghent University, Ghent, Belgium
- Barbara Tillmann: Lyon Neuroscience Research Center, CNRS UMR 5292, INSERM UMR 1028, Université Lyon 1, Lyon, France
9. Tillmann B. Music and language perception: expectations, structural integration, and cognitive sequencing. Top Cogn Sci 2012; 4:568-84. PMID: 22760955. DOI: 10.1111/j.1756-8765.2012.01209.x.
Abstract
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next, and when should they occur?). This paper focuses on similarities between music and language cognition research, showing that music cognition research provides insight into not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources for music and language processing, and of domain-general dynamic attention, has motivated the development of research testing music as a means to stimulate sensory, cognitive, and motor processes.
Affiliation(s)
- Barbara Tillmann: Lyon Neuroscience Research Center (CRNL), CNRS UMR5292, INSERM U1028, Université Lyon 1, Lyon Cedex, France
10. Hoch L, Tillmann B. Shared structural and temporal integration resources for music and arithmetic processing. Acta Psychol (Amst) 2012; 140:230-5. PMID: 22673068. DOI: 10.1016/j.actpsy.2012.03.008.
Abstract
While previous research has investigated the relationship either between language and music processing or between language and arithmetic processing, the present study investigated the relationship between music and arithmetic processing. Rule-governed number series, with the final number being a correct or incorrect series ending, were visually presented in synchrony with musical sequences, with the final chord functioning as the expected tonic or the less-expected subdominant chord (i.e., tonal function manipulation). Participants were asked to judge the correctness of the final number as quickly and accurately as possible. The results revealed an interaction between the processing of series ending and the processing of the task-irrelevant chords' tonal function, thus suggesting that music and arithmetic processing share cognitive resources. These findings are discussed in terms of general temporal and structural integration resources for linguistic and non-linguistic rule-governed sequences.
11. McMurray B, Dennhardt JL, Struck-Marcell A. Context effects on musical chord categorization: Different forms of top-down feedback in speech and music? Cogn Sci 2010; 32:893-920. PMID: 21490878. DOI: 10.1080/03640210802222021.
Abstract
A critical issue in perception is the manner in which top-down expectancies guide lower-level perceptual processes. In speech, a common paradigm is to construct continua ranging between two phonetic endpoints and to determine how higher level lexical context influences the perceived boundary. We applied this approach to music, presenting subjects with major/minor triad continua after brief musical contexts. Two experiments yielded results that differed from classic results in speech perception. In speech, context generally expands the category of the expected stimuli. We found the opposite in music: the major/minor boundary shifted toward the expected category, contracting it. Together, these experiments support the hypothesis that musical expectancy can feed back to affect lower-level perceptual processes. However, it may do so in a way that differs fundamentally from what has been seen in other domains.
12. Musical training modulates the development of syntax processing in children. Neuroimage 2009; 47:735-44. DOI: 10.1016/j.neuroimage.2009.04.090.
13. Tillmann B, Peretz I, Bigand E, Gosselin N. Harmonic priming in an amusic patient: the power of implicit tasks. Cogn Neuropsychol 2008; 24:603-22. PMID: 18416511. DOI: 10.1080/02643290701609527.
Abstract
Our study used an implicit method (the priming paradigm) to investigate whether I.R., a brain-damaged patient exhibiting severe amusia, implicitly processes musical structures. The task consisted of identifying one of two phonemes (Experiment 1) or timbres (Experiment 2) on the last chord (the target) of eight-chord sequences. The targets were harmonically related or less related to the prior chords. I.R. displayed harmonic priming effects: phoneme and timbre identification was faster for related than for less related targets (Experiments 1 and 2). However, I.R.'s explicit judgements of completion for the same sequences did not differ between related and less related contexts (Experiment 3). Her impaired performance on explicit judgements was not due to general difficulties with task demands, since she performed like controls on completion judgements for spoken sentences (Experiment 4). The findings indicate that implicit knowledge of musical structures might remain intact and accessible even when explicit judgements and overt recognition have been lost.
14.

15.
Abstract
By mere exposure to musical pieces in everyday life, Western listeners acquire sensitivity to the regularities of the tonal system and to the context dependency of musical sounds. This implicitly acquired tonal knowledge allows nonmusician listeners to perceive relationships among musical events and to develop expectations for future events that then influence the processing of those events. The musical priming paradigm is one method for the indirect investigation of listeners' tonal knowledge. It investigates the influence of a preceding context (with its musical structures and relationships) on the processing of a musical target event, without asking participants for direct evaluations. Behavioral priming data have provided evidence for facilitated processing of musically related events in comparison to unrelated and less related events. The sensitivity of implicit investigations is further shown by I.R., a patient with severe amusia who shows spared implicit knowledge of music. Finally, the priming paradigm allows us to investigate the neural correlates of musical structure processing: two fMRI studies reported the involvement of inferior frontal regions in musical priming, contrasting related and unrelated events as well as finer structural manipulations contrasting in-key events.
Affiliation(s)
- Barbara Tillmann: CNRS UMR 5020, Neurosciences et Systèmes Sensoriels, 50 Av. Tony Garnier, F-69366 Lyon Cedex 07, France
16. Tillmann B, Bigand E, Escoffier N, Lalitte P. The influence of musical relatedness on timbre discrimination. Eur J Cogn Psychol 2006. DOI: 10.1080/09541440500269548.
17. Tillmann B, Lebrun-Guillaud G. Influence of tonal and temporal expectations on chord processing and on completion judgments of chord sequences. Psychol Res 2005; 70:345-58. PMID: 16177925. DOI: 10.1007/s00426-005-0222-0.
Abstract
Pitch and time are two principal form-bearing dimensions in Western tonal music. Research on melody perception has shown that listeners develop expectations about "what" note is coming next and "when" in time it will occur. Our study used sequences of chords (i.e., simultaneously sounding notes) to investigate the influence of these expectations on chord processing (Experiments 1 and 4) and on subjective judgments of completion (Experiments 2 and 3). Both tasks showed an influence of tonal relations and temporal regularities: expected events occurring at the expected moment were processed faster and led to higher completion judgments. However, the pitch and time dimensions interacted only for completion judgments. The present outcome suggests that the influence of pitch and time on chord perception might depend on the required processing: a more global judgment favors interactive influences, in contrast to a task focusing on local chord processing.
Affiliation(s)
- Barbara Tillmann: CNRS UMR 5020, Neurosciences et Systèmes Sensoriels, Université Claude Bernard-Lyon I, 50 Av. Tony Garnier, 69366 Lyon Cedex 07, France
18.
Abstract
Perceiving the tonality of a musical passage is a fundamental aspect of the experience of hearing music, and models for determining tonality have thus occupied a central place in music cognition research. Three experiments investigated one well-known model of tonal determination: the Krumhansl-Schmuckler key-finding algorithm. In Experiment 1, listeners' percepts of tonality following short musical fragments drawn from preludes by Bach and Chopin were compared with the tonality predictions produced by the algorithm; these predictions were very accurate for the Bach preludes but considerably less so for the Chopin preludes. Experiment 2 explored a subset of the Chopin preludes, finding that the algorithm could predict tonal percepts on a measure-by-measure basis. In Experiment 3, the algorithm predicted listeners' percepts of tonal movement throughout a complete Chopin prelude. These studies support the viability of the Krumhansl-Schmuckler key-finding algorithm as a model of listeners' tonal perceptions of musical passages.
Affiliation(s)
- Mark A Schmuckler: Department of Life Sciences, University of Toronto at Scarborough, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada