1. Dalla Bella S, Janaqi S, Benoit CE, Farrugia N, Bégel V, Verga L, Harding EE, Kotz SA. Unravelling individual rhythmic abilities using machine learning. Sci Rep 2024; 14:1135. PMID: 38212632; PMCID: PMC10784578; DOI: 10.1038/s41598-024-51257-7.
Abstract
Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, as in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp with single tasks. To date, we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities, and its link with formal and informal music experience, can be successfully captured by profiles comprising a minimal set of behavioral measures. These findings highlight that machine learning can be employed to distill profiles of rhythmic abilities and, ultimately, to shed light on individual variability and its relationship with both formal musical training and informal musical experience.
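The idea of distilling a "minimal set of behavioral measures" can be illustrated with a generic sketch. This is not the authors' pipeline; the synthetic data, the measure names, and the correlation threshold are all invented for illustration. One common approach is to greedily drop measures that are highly redundant with (strongly correlated to) measures already kept:

```python
import numpy as np

def minimal_measure_set(data, names, r_thresh=0.8):
    """Greedily keep measures (columns of `data`) that are not strongly
    correlated (|r| >= r_thresh) with any measure already kept."""
    corr = np.corrcoef(data, rowvar=False)
    kept = []
    for j in range(data.shape[1]):
        if all(abs(corr[j, k]) < r_thresh for k in kept):
            kept.append(j)
    return [names[j] for j in kept]

rng = np.random.default_rng(0)
tapping = rng.normal(size=100)                      # synthetic tapping-variability scores
perception = rng.normal(size=100)                   # synthetic beat-perception scores
tapping_v2 = tapping + 0.05 * rng.normal(size=100)  # near-duplicate of `tapping`
X = np.column_stack([tapping, perception, tapping_v2])
print(minimal_measure_set(X, ["tapping", "perception", "tapping_v2"]))
# the near-duplicate "tapping_v2" is dropped
```

The near-duplicate measure correlates at r ≈ 0.999 with the first and is excluded, leaving a parsimonious two-measure profile.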
Affiliation(s)
- Simone Dalla Bella
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Canada.
- Department of Psychology, University of Montreal, Pavillon Marie-Victorin, CP 6128 Succursale Centre-Ville, Montréal, QC, H3C 3J7, Canada.
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada.
- University of Economics and Human Sciences in Warsaw, Warsaw, Poland.
- Stefan Janaqi
- EuroMov Digital Health in Motion, IMT Mines Ales and University of Montpellier, Ales and Montpellier, France
- Charles-Etienne Benoit
- Inter-University Laboratory of Human Movement Biology, EA 7424, University Claude Bernard Lyon 1, 69 622, Villeurbanne, France
- Laura Verga
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, P.O. 616, Maastricht, 6200 MD, The Netherlands
- Eleanor E Harding
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Sonja A Kotz
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, P.O. 616, Maastricht, 6200 MD, The Netherlands.
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
2. Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting Rhythm as Parsing: Syntactic-Processing Operations Predict the Migration of Visual Flashes as Perceived During Listening to Musical Rhythms. Cogn Sci 2023; 47:e13389. PMID: 38038624; DOI: 10.1111/cogs.13389.
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events; computationally, such interpretation is a combinatorial task analogous to syntactic processing in language. While this perspective has primarily been addressed in the domain of harmony, here we focus on rhythm in the Western tonal idiom and propose, for the first time, a framework for modeling the moment-by-moment execution of the processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of the structural-integration and memory-storage operations necessary for parsing hierarchical dependencies, derived from dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while they listened to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was a significant predictor of the observed displacement between the reported and objective locations of the flashes. Overall, this study presents a theoretical approach and a first empirical proof of concept for modeling the cognitive process underlying such interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. Results from this small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounds and may form the basis for further large-scale and ecological testing.
Affiliation(s)
- Gabriele Cecchetti
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Cédric A Tomasini
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
3. Weiss MW, Peretz I. Improvisation is a novel tool to study musicality. Sci Rep 2022; 12:12595. PMID: 35869086; PMCID: PMC9307610; DOI: 10.1038/s41598-022-15312-5.
Abstract
Humans spontaneously invent songs from an early age. Here, we exploit this natural inclination to probe implicit musical knowledge in 33 untrained singers, including poor singers with congenital amusia. Each sang 28 long improvisations, either in response to a verbal prompt or as a continuation of a melodic stem. To assess the extent to which each improvisation reflects tonality, which has been proposed as a core organizational principle of musicality and is present in most musical traditions, we developed a new algorithm that compares a sung excerpt to a probability density function representing the tonal hierarchy of Western music. The results show signatures of tonality in both nonmusicians and individuals with congenital amusia, who have notorious difficulty performing musical tasks that require explicit responses and memory. The findings are a proof of concept that improvisation can serve as a novel, even enjoyable, method for systematically measuring hidden aspects of musicality across the spectrum of musical ability.
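The authors' algorithm compares sung excerpts to a probability density function over the Western tonal hierarchy; their exact method is not reproduced here, but a classic Krumhansl-Schmuckler-style sketch conveys the idea: correlate the excerpt's pitch-class distribution with the Krumhansl-Kessler major-key profile across all twelve transpositions, and take the best fit as a tonality score. The example melodies below are invented.

```python
import numpy as np

# Krumhansl & Kessler (1982) major-key tonal-hierarchy profile,
# indexed by pitch class relative to the tonic.
KK_MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                     2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def tonality_score(midi_pitches):
    """Best correlation between the excerpt's pitch-class distribution
    and the major profile, over all 12 transpositions."""
    pcs = np.bincount(np.asarray(midi_pitches) % 12, minlength=12).astype(float)
    return max(np.corrcoef(pcs, np.roll(KK_MAJOR, k))[0, 1] for k in range(12))

arpeggio = [60, 64, 67, 72, 67, 64, 60]   # C-major arpeggio: strongly tonal
chromatic = [60, 61, 62, 63, 64, 65, 66]  # chromatic run: weakly tonal
print(tonality_score(arpeggio) > tonality_score(chromatic))  # True
```

A tonal arpeggio scores much higher (r ≈ 0.85) than a chromatic run, mirroring how a key-profile comparison can quantify tonality in a sung improvisation.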
4. Norman-Haignere SV, Feather J, Boebinger D, Brunner P, Ritaccio A, McDermott JH, Schalk G, Kanwisher N. A neural population selective for song in human auditory cortex. Curr Biol 2022; 32:1470-1484.e12. PMID: 35196507; PMCID: PMC9092957; DOI: 10.1016/j.cub.2022.01.069.
Abstract
How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Institute, Columbia University, New York, NY, USA; HHMI Fellow of the Life Sciences Research Foundation, Chevy Chase, MD, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, ENS, PSL University, CNRS, Paris, France; Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, USA; Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, USA; Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Jenelle Feather
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
- Dana Boebinger
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Peter Brunner
- Department of Neurology, Albany Medical College, Albany, NY, USA; National Center for Adaptive Neurotechnologies, Albany, NY, USA; Department of Neurosurgery, Washington University School of Medicine, St. Louis, MO, USA
- Anthony Ritaccio
- Department of Neurology, Albany Medical College, Albany, NY, USA; Department of Neurology, Mayo Clinic, Jacksonville, FL, USA
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
- Gerwin Schalk
- Department of Neurology, Albany Medical College, Albany, NY, USA
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds and Machines, Cambridge, MA, USA
5. Boebinger D, Norman-Haignere SV, McDermott JH, Kanwisher N. Music-selective neural populations arise without musical training. J Neurophysiol 2021; 125:2237-2263. PMID: 33596723; PMCID: PMC8285655; DOI: 10.1152/jn.00588.2020.
Abstract
Recent work has shown that human auditory cortex contains neural populations, anterior and posterior to primary auditory cortex, that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that music-selective neural populations are a fundamental and widespread property of the human brain.

NEW & NOTEWORTHY We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show that these populations respond strongly to music from unfamiliar genres as well as to music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.
Affiliation(s)
- Dana Boebinger
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Sam V Norman-Haignere
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL Research University, CNRS, Paris, France
- Zuckerman Institute for Brain Research, Columbia University, New York, New York
- Josh H McDermott
- Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, Massachusetts
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Kanwisher
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, Massachusetts
6. Popescu T, Widdess R, Rohrmeier M. Western listeners detect boundary hierarchy in Indian music: a segmentation study. Sci Rep 2021; 11:3112. PMID: 33542358; PMCID: PMC7862587; DOI: 10.1038/s41598-021-82629-y.
Abstract
How are listeners able to follow and enjoy complex pieces of music? Several theoretical frameworks suggest links between the process of listening and the formal structure of music, involving a division of the musical surface into structural units at multiple hierarchical levels. Whether boundaries between structural units are perceptible to listeners unfamiliar with the style, and are identified congruently by naïve listeners and experts, remains unclear. Here, we focused on the case of Indian music and asked 65 Western listeners (of mixed levels of musical training, most unfamiliar with Indian music) to intuitively segment into phrases recordings of sitar ālāp in two different rāga-modes. Each recording was also segmented by two experts, who identified boundary regions at the section and phrase levels. Participant- and region-wise scores were computed on the basis of "clicks" inside or outside boundary regions (hits/false alarms), inserted earlier or later within those regions (high/low "promptness"). We found substantial agreement, expressed as hit rates and click densities, among participants and between participants' and experts' segmentations. Agreement and promptness scores differed between participants, levels, and recordings. We found no effect of musical training, but detected real-time awareness of grouping completion and boundary hierarchy. The findings may be explained by underlying general bottom-up processes, implicit learning of structural relationships, cross-cultural musical similarities, or universal cognitive capacities.
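The hit/false-alarm/promptness scoring described in the abstract can be sketched as follows. This is a simplified illustration: the study's actual scoring details (region widths, the exact promptness definition, the example timestamps) are not reproduced and are assumed here.

```python
def score_clicks(clicks, boundary_regions):
    """Hits: clicks inside an expert-marked boundary region (start, end).
    False alarms: clicks outside every region.  Promptness: 1 at a
    region's start, 0 at its end (earlier clicks = more prompt)."""
    hits, false_alarms, promptness = 0, 0, []
    for t in clicks:
        region = next(((s, e) for s, e in boundary_regions if s <= t <= e), None)
        if region is None:
            false_alarms += 1
        else:
            hits += 1
            s, e = region
            promptness.append(1.0 - (t - s) / (e - s))
    return hits, false_alarms, promptness

# Two clicks land inside expert regions; one does not.
print(score_clicks([4.0, 12.5, 30.0], [(3.0, 5.0), (12.0, 14.0)]))
# (2, 1, [0.5, 0.75])
```

Aggregating such per-click scores per participant and per region yields the hit rates and promptness measures the abstract compares across listeners and recordings.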
Affiliation(s)
- Tudor Popescu
- Department of Behavioural and Cognitive Biology, Universität Wien, Althanstrasse 14, 1090, Vienna, Austria.
- Medizinische Universität Wien, Spitalgasse 23, 1090, Vienna, Austria.
- Richard Widdess
- Department of Music, School of Arts, SOAS University of London, London, UK
- Martin Rohrmeier
- Centre for Music and Science, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
7. Politimou N, Douglass-Kirk P, Pearce M, Stewart L, Franco F. Melodic expectations in 5- and 6-year-old children. J Exp Child Psychol 2020; 203:105020. PMID: 33271397; DOI: 10.1016/j.jecp.2020.105020.
Abstract
It has been argued that children implicitly acquire the rules relating to the structure of music in their environment using domain-general mechanisms such as statistical learning. Closely linked to statistical learning is the ability to form expectations about future events. Whether children as young as 5 years can make use of such internalized regularities to form expectations about the next note in a melody is still unclear. The possible effect of the home musical environment on the strength of musical expectations has also been under-explored. Using a newly developed melodic priming task that included melodies with either "expected" or "unexpected" endings according to rules of Western music theory, we tested 5- and 6-year-old children (N = 46). The stimuli in this task were constructed using the information dynamics of music (IDyOM) system, a probabilistic model estimating the level of "unexpectedness" of a note given the preceding context. Results showed that responses to expected versus unexpected tones were faster and more accurate, indicating that children have already formed robust melodic expectations at 5 years of age. Aspects of the home musical environment significantly predicted the strength of melodic expectations, suggesting that implicit musical learning may be influenced by the quantity of informal exposure to the surrounding musical environment.
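IDyOM itself is a variable-order probabilistic model; as a toy fixed-order analogue, the sketch below illustrates how the "unexpectedness" of a note given its context can be quantified as information content, -log2 P(note | context). The corpus, melodies, and add-one smoothing are invented for illustration and are much simpler than IDyOM's actual machinery.

```python
import math
from collections import Counter, defaultdict

def information_content(corpus, melody):
    """-log2 P(note | previous note) for each transition in `melody`,
    using bigram counts with add-one smoothing over the corpus alphabet."""
    bigrams = defaultdict(Counter)
    alphabet = set()
    for seq in corpus:
        alphabet.update(seq)
        for a, b in zip(seq, seq[1:]):
            bigrams[a][b] += 1
    V = len(alphabet)
    return [-math.log2((bigrams[a][b] + 1) / (sum(bigrams[a].values()) + V))
            for a, b in zip(melody, melody[1:])]

corpus = [[60, 62, 64, 65, 67], [60, 62, 64, 62, 60]]   # toy training melodies
expected = information_content(corpus, [60, 62])[-1]    # transition seen in corpus
unexpected = information_content(corpus, [60, 66])[-1]  # unseen transition
print(expected < unexpected)  # the trained continuation is less surprising
```

A statistically learned continuation carries less information content than an unseen one, which is the quantity the priming task's "expected" vs. "unexpected" endings manipulate.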
Affiliation(s)
- Nina Politimou
- Department of Psychology, Middlesex University London, The Burroughs, Hendon, London NW4 4BT, UK.
- Pedro Douglass-Kirk
- Department of Psychology, Goldsmiths University of London, New Cross, London SE14 6NW, UK
- Marcus Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Bethnal Green, London E1 4NS, UK; Center for Music in the Brain, Aarhus University, 8000 Aarhus, Denmark
- Lauren Stewart
- Department of Psychology, Goldsmiths University of London, New Cross, London SE14 6NW, UK
- Fabia Franco
- Department of Psychology, Middlesex University London, The Burroughs, Hendon, London NW4 4BT, UK
8. Shou Z, Li Z, Wang X, Chen M, Bai Y, Di H. Non-invasive brain intervention techniques used in patients with disorders of consciousness. Int J Neurosci 2020; 131:390-404. PMID: 32238043; DOI: 10.1080/00207454.2020.1744598.
Abstract
Aim of the study: With the development of emergency medicine and intensive-care technology, the number of people who survive with disorders of consciousness (DOC) has dramatically increased. The diagnosis and treatment of such patients have attracted much attention from the medical community. According to the latest evidence-based guidelines, non-invasive brain intervention (NIBI) techniques may be valuable and promising in the diagnosis and consciousness rehabilitation of DOC patients.

Methods: This work reviews studies on NIBI techniques for the assessment and treatment of DOC patients.

Results: A large number of studies have explored the application of NIBI techniques in DOC patients. These techniques include transcranial magnetic stimulation, transcranial electric stimulation, music stimulation, near-infrared laser stimulation, focused shockwave therapy, low-intensity focused ultrasound pulsation, and transcutaneous auricular vagus nerve stimulation.

Conclusions: NIBI techniques offer numerous advantages: they are painless, safe, and inexpensive; their parameters and targets are adjustable; and they have broad prospects for the treatment of DOC patients.
Affiliation(s)
- Zeyu Shou
- International Vegetative State and Consciousness Science Institute, Hangzhou Normal University, Hangzhou, China
- Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
- Zhilong Li
- International Vegetative State and Consciousness Science Institute, Hangzhou Normal University, Hangzhou, China
- Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
- Xueying Wang
- International Vegetative State and Consciousness Science Institute, Hangzhou Normal University, Hangzhou, China
- Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
- Miaoyang Chen
- International Vegetative State and Consciousness Science Institute, Hangzhou Normal University, Hangzhou, China
- Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
- Yang Bai
- International Vegetative State and Consciousness Science Institute, Hangzhou Normal University, Hangzhou, China
- Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
- Haibo Di
- International Vegetative State and Consciousness Science Institute, Hangzhou Normal University, Hangzhou, China
- Key Laboratory of Aging and Cancer Biology of Zhejiang Province, Hangzhou Normal University, Hangzhou, China
9. The pleasantness of sensory dissonance is mediated by musical style and expertise. Sci Rep 2019; 9:1070. PMID: 30705379; PMCID: PMC6355932; DOI: 10.1038/s41598-018-35873-8.
Abstract
Western musical styles use a large variety of chords and vertical sonorities. Based on objective acoustical properties, chords can be situated on a dissonant-consonant continuum. While this may to some extent converge with the unpleasant-pleasant continuum, subjective liking may diverge for chord forms drawn from music of different styles. Our study investigated how well appraisals of the roughness and pleasantness of isolated chords taken from real-world music are predicted by Parncutt's established model of sensory dissonance. Furthermore, we related these subjective ratings to the chords' style of origin and acoustical features, as well as to the musical sophistication of the raters. Ratings were obtained for chords deemed representative of the harmonic language of three musical styles (classical, jazz, and avant-garde music), plus randomly generated chords. Results indicate that pleasantness and roughness ratings were, on average, mirror opposites; however, their relative distributions differed greatly across styles, reflecting different underlying aesthetic ideals. Parncutt's model only weakly predicted ratings for all but the classical chords, suggesting that listeners' appraisals of the dissonance and pleasantness of chords reflect not only stimulus-side but also listener-side factors. Indeed, we found that levels of musical sophistication negatively predicted listeners' tendency to rate the consonance and pleasantness of any one chord as coupled measures, suggesting that musical education and expertise may individuate how these musical dimensions are apprehended.
10. Shin H, Fujioka T. Effects of Visual Predictive Information and Sequential Context on Neural Processing of Musical Syntax. Front Psychol 2019; 9:2528. PMID: 30618951; PMCID: PMC6300505; DOI: 10.3389/fpsyg.2018.02528.
Abstract
The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical and non-musical cues differ in their effects, and how they interact with sequential effects between trials, which may vary with the strength of the established sense of tonality. The EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli; two provided predictive information about the syntactic validity of the last chord, through either musical notation of the whole sequence or the word "regular" or "irregular," while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord; the other half ended with the tonic chord. A clear ERAN was observed at frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand-average response in the audio-only condition to separate the spatio-temporal dynamics of different scalp areas into principal components (PCs), which were then used to extract auditory-related neural activity in the visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and of a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the tonic chord. The ERAN in PC2 was reduced in trials following Neapolitan endings in general; however, the extent of this reduction differed between cue styles, being nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty. The results suggest that right frontal areas play the primary role in musical syntactic analysis and integration of the ongoing context, producing schematic expectations that, together with the veridical expectations incorporated by the temporal areas, inform musical syntactic processing in musicians.
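A PCA over a grand-average ERP of this kind can be sketched with a plain SVD. This is a generic illustration, not the study's exact preprocessing: the channel count, the Gaussian "frontocentral" topography, and the noise level below are all made up.

```python
import numpy as np

def erp_pca(grand_avg, n_components=2):
    """PCA of a grand-average ERP (channels x time): returns the spatial
    topography (loadings) and the time course of each component."""
    X = grand_avg - grand_avg.mean(axis=1, keepdims=True)  # de-mean each channel
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    maps = U[:, :n_components]                           # spatial topographies
    scores = s[:n_components, None] * Vt[:n_components]  # component time courses
    return maps, scores

rng = np.random.default_rng(1)
n_ch, n_t = 32, 200
t = np.linspace(0.0, 1.0, n_t)
topo = np.exp(-((np.arange(n_ch) - 10) ** 2) / 20.0)   # fake frontocentral focus
erp = np.outer(topo, np.sin(8 * np.pi * t)) + 0.01 * rng.normal(size=(n_ch, n_t))
maps, scores = erp_pca(erp, n_components=1)
print(maps.shape, scores.shape)  # (32, 1) (1, 200)
```

Projecting condition-specific ERPs onto such spatial maps is one way to isolate a component's (e.g., PC1's) contribution per condition, analogous to the analysis described above.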
Affiliation(s)
- Hana Shin
- Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States
- Takako Fujioka
- Department of Music, Center for Computer Research in Music and Acoustics, Stanford University, Stanford, CA, United States
- Stanford Neurosciences Institute, Stanford University, Stanford, CA, United States
11. Freitas C, Manzato E, Burini A, Taylor MJ, Lerch JP, Anagnostou E. Neural Correlates of Familiarity in Music Listening: A Systematic Review and a Neuroimaging Meta-Analysis. Front Neurosci 2018; 12:686. PMID: 30344470; PMCID: PMC6183416; DOI: 10.3389/fnins.2018.00686.
Abstract
Familiarity in music has been reported as an important factor modulating emotional and hedonic responses in the brain. Familiarity and repetition may increase the liking of a piece of music, thus inducing positive emotions. Neuroimaging studies have focused on identifying the brain regions involved in the processing of familiar and unfamiliar musical stimuli. However, the use of different modalities and experimental designs has led to discrepant results, and it is not clear which areas of the brain are most reliably engaged when listening to familiar and unfamiliar musical excerpts. In the present study, we conducted a systematic review across three databases (Medline, PsycINFO, and Embase) using the keywords (recognition OR familiar OR familiarity OR exposure effect OR repetition) AND (music OR song) AND (brain OR brains OR neuroimaging OR functional Magnetic Resonance Imaging OR Positron Emission Tomography OR Electroencephalography OR Event Related Potential OR Magnetoencephalography). Of the 704 titles identified, 23 neuroimaging studies met our inclusion criteria for the systematic review. After removing studies providing insufficient information or contrasts, 11 studies (involving 212 participants) qualified for the meta-analysis using the activation likelihood estimation (ALE) approach. Our results found no peak activations that were significant consistently across the included studies. Using a less conservative approach (p < 0.001, uncorrected for multiple comparisons), we found that the left superior frontal gyrus, the ventral lateral (VL) nucleus of the left thalamus, and the left medial surface of the superior frontal gyrus had the highest likelihood of being activated by familiar music. Conversely, the left insula and the right anterior cingulate cortex had the highest likelihood of being activated by unfamiliar music. We had expected limbic structures to emerge as top clusters for familiar music; instead, music familiarity showed a motor pattern of activation. This could reflect audio-motor synchronization to the rhythm, which is more engaging for familiar tunes, and/or a covert sing-along response anticipating the melodic, harmonic, rhythmic, timbral, and lyric events of familiar songs. These data highlight the need for larger neuroimaging studies to understand the neural correlates of music familiarity.
Affiliation(s)
- Carina Freitas
- Faculty of Medicine, Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Margot J. Taylor
- Faculty of Medicine, Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Department of Diagnostic Imaging, Hospital for Sick Children, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Neuroscience & Mental Health Program, Hospital for Sick Children Research Institute, Toronto, ON, Canada
| | - Jason P. Lerch
- Neuroscience & Mental Health Program, Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Mouse Imaging Centre, Hospital for Sick Children, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Evdokia Anagnostou
- Faculty of Medicine, Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Neuroscience & Mental Health Program, Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Department of Pediatrics, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
12
|
Stark EA, Vuust P, Kringelbach ML. Music, dance, and other art forms: New insights into the links between hedonia (pleasure) and eudaimonia (well-being). PROGRESS IN BRAIN RESEARCH 2018; 237:129-152. [PMID: 29779732 DOI: 10.1016/bs.pbr.2018.03.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
For Aristotle, the goal of human life was to live well, to flourish, and to ultimately have a good life. These goals can be conceptualized as "eudaimonia," a concept distinct from "hedonia" (pleasure). Many people would argue that the arts play a large role in their well-being and eudaimonia. Music in particular is a culturally ubiquitous phenomenon which brings joy and social bonding to listeners. Research has given insights into how the "sweet anticipation" of music and other art forms can lead to pleasure, but a full understanding of eudaimonia from the arts is still missing. What is clear is that anticipation and prediction are important for extracting meaning from our environment. In fleeting moments this may translate into pleasure, but over longer timescales, it can imbue life with meaning and purpose and lead to eudaimonia. Based on the existing evidence from neuroimaging, we hypothesize that a special network in the brain, the default-mode network, may play a central role in orchestrating eudaimonia, and propose future strategies for exploring these questions further.
Collapse
Affiliation(s)
- Eloise A Stark
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music, Aarhus/Aalborg, Denmark; Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
| | - Morten L Kringelbach
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & Royal Academy of Music, Aarhus/Aalborg, Denmark; Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Institut d'études avancées de Paris, Paris, France.
| |
Collapse
|
13
|
Clark CN, Golden HL, McCallion O, Nicholas JM, Cohen MH, Slattery CF, Paterson RW, Fletcher PD, Mummery CJ, Rohrer JD, Crutch SJ, Warren JD. Music models aberrant rule decoding and reward valuation in dementia. Soc Cogn Affect Neurosci 2018; 13:192-202. [PMID: 29186630 PMCID: PMC5827340 DOI: 10.1093/scan/nsx140] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2017] [Revised: 11/06/2017] [Accepted: 11/19/2017] [Indexed: 01/03/2023] Open
Abstract
Aberrant rule- and reward-based processes underpin abnormalities of socio-emotional behaviour in major dementias. However, these processes remain poorly characterized. Here we used music to probe rule decoding and reward valuation in patients with frontotemporal dementia (FTD) syndromes and Alzheimer's disease (AD) relative to healthy age-matched individuals. We created short melodies that were either harmonically resolved ('finished') or unresolved ('unfinished'); the task was to classify each melody as finished or unfinished (rule processing) and rate its subjective pleasantness (reward valuation). Results were adjusted for elementary pitch and executive processing; neuroanatomical correlates were assessed using voxel-based morphometry. Relative to healthy older controls, patients with behavioural variant FTD showed impairments of both musical rule decoding and reward valuation; patients with semantic dementia showed impaired reward valuation but intact rule decoding; patients with AD showed impaired rule decoding but intact reward valuation; and patients with progressive non-fluent aphasia performed comparably to healthy controls. Grey matter associations with task performance were identified in anterior temporal, medial and lateral orbitofrontal cortices, previously implicated in computing diverse biological and non-biological rules and rewards. The processing of musical rules and reward distils cognitive and neuroanatomical mechanisms relevant to complex socio-emotional dysfunction in major dementias.
Collapse
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Oliver McCallion
- Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK
| | - Jennifer M Nicholas
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
- London School of Hygiene and Tropical Medicine, University of London, London, UK
| | - Miriam H Cohen
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Catherine F Slattery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Ross W Paterson
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Phillip D Fletcher
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Catherine J Mummery
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Jonathan D Rohrer
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Sebastian J Crutch
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London, UK
| |
Collapse
|
14
|
Pearce M, Rohrmeier M. Musical Syntax II: Empirical Perspectives. SPRINGER HANDBOOK OF SYSTEMATIC MUSICOLOGY 2018. [DOI: 10.1007/978-3-662-55004-5_26] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
|
15
|
Podlipniak P. The Role of the Baldwin Effect in the Evolution of Human Musicality. Front Neurosci 2017; 11:542. [PMID: 29056895 PMCID: PMC5635050 DOI: 10.3389/fnins.2017.00542] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 09/19/2017] [Indexed: 12/17/2022] Open
Abstract
From a biological perspective, human musicality refers to a set of abilities which enable the recognition and production of music. Since music is a complex phenomenon which consists of features that represent different stages of the evolution of human auditory abilities, the question concerning the evolutionary origin of music must focus mainly on music-specific properties and their possible biological function or functions. What usually differentiates music from other forms of human sound expression is a syntactically organized structure based on pitch classes and rhythmic units measured in reference to musical pulse. This structure is an auditory (not acoustical) phenomenon, meaning that it is a human-specific interpretation of sounds achieved thanks to certain characteristics of the nervous system. The historical and cross-cultural diversity of this structure indicates that learning is an important part of the development of human musicality. However, the fact that there is no culture without music, the syntax of which is implicitly learned and easily recognizable, suggests that human musicality may be an adaptive phenomenon. If the use of syntactically organized structure as a communicative phenomenon were adaptive, it would be only in circumstances in which this structure is recognizable by more than one individual. It is therefore difficult to explain the adaptive value of an ability to recognize a syntactically organized structure that appeared accidentally, as the result of mutation or recombination, in an environment without such structure. A possible solution is offered by the Baldwin effect, in which a culturally invented trait is transformed into an instinctive trait by means of natural selection. It is proposed that in the beginning musical structure was invented and learned thanks to neural plasticity. Because structurally organized music proved adaptive (phenotypic adaptation), e.g., as a tool of social consolidation, our predecessors began to spend considerable time and energy on music. In such circumstances, an individual could by chance be born with the genetically controlled development of new neural circuitry that allowed him or her to learn music faster and with less energy expenditure.
Collapse
Affiliation(s)
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
| |
Collapse
|
16
|
Gorzelańczyk EJ, Podlipniak P, Walecki P, Karpiński M, Tarnowska E. Pitch Syntax Violations Are Linked to Greater Skin Conductance Changes, Relative to Timbral Violations - The Predictive Role of the Reward System in Perspective of Cortico-subcortical Loops. Front Psychol 2017; 8:586. [PMID: 28458648 PMCID: PMC5394172 DOI: 10.3389/fpsyg.2017.00586] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2016] [Accepted: 03/29/2017] [Indexed: 12/03/2022] Open
Abstract
According to contemporary opinion, emotional reactions to syntactic violations are due to surprise resulting from the general mechanism of prediction. The classic view is that the processing of musical syntax can be explained by activity of the cerebral cortex. However, some recent studies have indicated that subcortical brain structures, including those related to the processing of emotions, are also important during the processing of syntax. In order to check whether emotional reactions play a role in the processing of pitch syntax or are only the result of the general mechanism of prediction, skin conductance responses to three types of melodies were compared. In this study, 28 subjects listened to three types of short melodies prepared as Standard Musical Instrument Digital Interface (MIDI) files – tonally correct, tonally violated (with one out-of-key note, i.e., of high information content), and tonally correct but with one note played in a different timbre. The BioSemi ActiveTwo with two passive Nihon Kohden electrodes was used. Skin conductance levels were positively correlated with the presented stimuli (timbral changes and tonal violations). Although changes in skin conductance levels were also observed in response to the change in timbre, the reactions to tonal violations were significantly stronger. Therefore, despite the fact that a timbral change is at least as unexpected as an out-of-key note, the processing of pitch syntax mainly generates increased activation of the sympathetic part of the autonomic nervous system. These results suggest that cortico-subcortical loops (especially the anterior cingulate–limbic loop) may play an important role in the processing of musical syntax.
Collapse
Affiliation(s)
- Edward J Gorzelańczyk
- Department of Theoretical Basis of Bio-Medical Sciences and Medical Informatics, Nicolaus Copernicus University Collegium Medicum, Bydgoszcz, Poland; Non-Public Health Care Center Sue Ryder Home, Bydgoszcz, Poland; Medseven-Outpatient Addiction Treatment, Bydgoszcz, Poland; Institute of Philosophy, Kazimierz Wielki University, Bydgoszcz, Poland
| | - Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
| | - Piotr Walecki
- Department of Bioinformatics and Telemedicine, Jagiellonian University Collegium Medicum, Krakow, Poland
| | - Maciej Karpiński
- Institute of Linguistics, Adam Mickiewicz University in Poznań, Poznań, Poland
| | - Emilia Tarnowska
- Institute of Acoustics, Adam Mickiewicz University in Poznań, Poznań, Poland
| |
Collapse
|
17
|
Rohrmeier M, Widdess R. Incidental Learning of Melodic Structure of North Indian Music. Cogn Sci 2016; 41:1299-1327. [PMID: 27859578 DOI: 10.1111/cogs.12404] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2012] [Revised: 03/11/2016] [Accepted: 03/21/2016] [Indexed: 11/28/2022]
Abstract
Musical knowledge is largely implicit. It is acquired without awareness of its complex rules, through interaction with a large number of samples during musical enculturation. Whereas several studies have explored implicit learning of mostly abstract and less ecologically valid features of Western music, very little work has been done on ecologically valid stimuli or on non-Western music. The present study investigated implicit learning of modal melodic features in North Indian classical music in a realistic and ecologically valid way. It employed a cross-grammar design, using melodic materials from two modes (rāgas) that use the same scale. Findings indicated that Western participants unfamiliar with Indian music incidentally learned to identify distinctive features of each mode. Confidence ratings suggest that participants' performance was consistently correlated with confidence, indicating that they became aware of whether they were right in their responses; that is, they possessed explicit judgment knowledge. Altogether, our findings show incidental learning in a realistic, ecologically valid context after only a very short exposure, and they provide evidence that incidental learning constitutes a powerful mechanism that plays a fundamental role in musical acquisition.
Collapse
Affiliation(s)
- Martin Rohrmeier
- Department of Art and Musicology, Dresden University of Technology; Department of Linguistics and Philosophy, MIT Intelligence Initiative, Massachusetts Institute of Technology
| | - Richard Widdess
- Department of Music, School of Oriental and African Studies, University of London
| |
Collapse
|
18
|
Cui AX, Diercks C, Troje NF, Cuddy LL. Short and long term representation of an unfamiliar tone distribution. PeerJ 2016; 4:e2399. [PMID: 27635355 PMCID: PMC5012311 DOI: 10.7717/peerj.2399] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Accepted: 08/02/2016] [Indexed: 11/20/2022] Open
Abstract
We report on a study conducted to extend our knowledge about the process of gaining a mental representation of music. Several studies, inspired by research on the statistical learning of language, have investigated statistical learning of sequential rules underlying tone sequences. Given that the mental representation of music correlates with distributional properties of music, we tested whether participants are able to abstract distributional information contained in tone sequences to form a mental representation. For this purpose, we created an unfamiliar music genre defined by an underlying tone distribution, to which 40 participants were exposed. Our stimuli allowed us to differentiate between sensitivity to the distributional properties contained in test stimuli and long term representation of the distributional properties of the music genre overall. Using a probe tone paradigm and a two-alternative forced choice discrimination task, we show that listeners are able to abstract distributional properties of music through mere exposure into a long term representation of music. This lends support to the idea that statistical learning is involved in the process of gaining musical knowledge.
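The distributional account tested above can be illustrated computationally. Under the simplifying, purely illustrative assumption that listeners track relative tone frequencies during exposure (this sketch is not the authors' method; all names are our own), a probe tone's predicted goodness-of-fit is its relative frequency in the exposure corpus:

```python
from collections import Counter

def tone_distribution(sequences):
    """Relative frequency of each tone (e.g. MIDI pitch) across exposure sequences."""
    counts = Counter(tone for seq in sequences for tone in seq)
    total = sum(counts.values())
    return {tone: c / total for tone, c in counts.items()}

def probe_tone_fit(dist, probe):
    """Predicted fit of a probe tone under a distributional account:
    more frequent tones in exposure should be rated as fitting better."""
    return dist.get(probe, 0.0)
```

For example, a tone that made up half of the exposure material would receive a higher predicted fit than a tone heard once or never, mirroring the probe-tone rating profile the study compares against.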
Collapse
Affiliation(s)
- Anja X Cui
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
| | - Charlette Diercks
- Department of Psychology, Queen's University, Kingston, Ontario, Canada; Fachbereich Humanwissenschaften, Universität Osnabrück, Osnabrück, Germany
| | - Nikolaus F Troje
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
| | - Lola L Cuddy
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
| |
Collapse
|
19
|
Discrimination of tonal and atonal music in congenital amusia: The advantage of implicit tasks. Neuropsychologia 2016; 85:10-8. [DOI: 10.1016/j.neuropsychologia.2016.02.027] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2015] [Revised: 02/06/2016] [Accepted: 02/28/2016] [Indexed: 11/20/2022]
|
20
|
Riganello F, Cortese MD, Arcuri F, Quintieri M, Dolce G. How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness? Front Neurosci 2015; 9:461. [PMID: 26696818 PMCID: PMC4674557 DOI: 10.3389/fnins.2015.00461] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Accepted: 11/20/2015] [Indexed: 11/13/2022] Open
Abstract
Activations to pleasant and unpleasant musical stimuli have been observed within an extensive neuronal network and different brain structures, as well as in the processing of the syntactic and semantic aspects of music. Previous studies evidenced a correlation between autonomic activity and the emotion evoked by music listening in patients with Disorders of Consciousness (DoC). In this study, we retrospectively analyzed the autonomic response to musical stimuli by means of normalized units of Low Frequency power (nuLF) and Sample Entropy (SampEn) of Heart Rate Variability (HRV), and their possible correlation with the differing complexity of four musical samples (i.e., Mussorgsky, Tchaikovsky, Grieg, and Boccherini) in healthy subjects and Vegetative State/Unresponsive Wakefulness Syndrome (VS/UWS) patients. The complexity of the musical samples was based on the Formal Complexity and General Dynamics parameters defined by Imberty's semiology studies. The results showed a significant difference between the two groups for SampEn during the listening of Mussorgsky's music and for nuLF during the listening of Boccherini's and Mussorgsky's music. Moreover, the VS/UWS group showed a reduction of nuLF as well as SampEn when comparing music of increasing Formal Complexity and General Dynamics. These results highlight how the internal structure of the music can change the autonomic response in patients with DoC. Further investigations are required to better comprehend how musical stimulation can modify the autonomic response in DoC patients, in order to administer the stimuli in a more effective way.
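Sample Entropy, the HRV irregularity measure used in this study, can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code; the defaults m = 2 and r = 0.2 × SD follow common HRV convention:

```python
import math

def sample_entropy(series, m=2, r=None):
    """Sample Entropy (SampEn) of a time series, e.g. RR intervals in ms.

    SampEn = -ln(A / B), where B counts pairs of length-m templates matching
    within tolerance r (Chebyshev distance, self-matches excluded) and A
    counts the same for length m + 1. Lower values indicate a more regular
    (more predictable) signal.
    """
    n = len(series)
    if r is None:
        mean = sum(series) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
        r = 0.2 * sd  # common convention: 20% of the series SD

    def matches(length):
        templates = [series[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = matches(m)
    a = matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined for very short or extremely irregular data
    return -math.log(a / b)
```

A strictly alternating RR series yields a SampEn near zero, while a noisy series yields a higher value, which is the direction of the group differences reported above.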
Collapse
Affiliation(s)
| | - Maria D Cortese
- Research in Advanced Neurorehabilitation, Istituto S. Anna, Crotone, Italy
| | - Francesco Arcuri
- Research in Advanced Neurorehabilitation, Istituto S. Anna, Crotone, Italy
| | - Maria Quintieri
- Research in Advanced Neurorehabilitation, Istituto S. Anna, Crotone, Italy
| | - Giuliano Dolce
- Research in Advanced Neurorehabilitation, Istituto S. Anna, Crotone, Italy
| |
Collapse
|
21
|
Clark CN, Downey LE, Warren JD. Brain disorders and the biological role of music. Soc Cogn Affect Neurosci 2015; 10:444-52. [PMID: 24847111 PMCID: PMC4350491 DOI: 10.1093/scan/nsu079] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2013] [Revised: 03/07/2014] [Accepted: 05/14/2014] [Indexed: 12/16/2022] Open
Abstract
Despite its evident universality and high social value, the ultimate biological role of music and its connection to brain disorders remain poorly understood. Recent findings from basic neuroscience have shed fresh light on these old problems. New insights provided by clinical neuroscience concerning the effects of brain disorders promise to be particularly valuable in uncovering the underlying cognitive and neural architecture of music and for assessing candidate accounts of the biological role of music. Here we advance a new model of the biological role of music in human evolution and the link to brain disorders, drawing on diverse lines of evidence derived from comparative ethology, cognitive neuropsychology and neuroimaging studies in the normal and the disordered brain. We propose that music evolved from the call signals of our hominid ancestors as a means mentally to rehearse and predict potentially costly, affectively laden social routines in surrogate, coded, low-cost form: essentially, a mechanism for transforming emotional mental states efficiently and adaptively into social signals. This biological role of music has its legacy today in the disordered processing of music and mental states that characterizes certain developmental and acquired clinical syndromes of brain network disintegration.
Collapse
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
| | - Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
| |
Collapse
|
22
|
Guo S, Koelsch S. The effects of supervised learning on event-related potential correlates of music-syntactic processing. Brain Res 2015; 1626:232-46. [PMID: 25660849 DOI: 10.1016/j.brainres.2015.01.046] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Revised: 01/22/2015] [Accepted: 01/24/2015] [Indexed: 10/24/2022]
Abstract
Humans process music even without conscious effort according to implicit knowledge about syntactic regularities. Whether such automatic and implicit processing is modulated by veridical knowledge has remained unknown in previous neurophysiological studies. This study investigates this issue by testing whether the acquisition of veridical knowledge of a music-syntactic irregularity (acquired through supervised learning) modulates early, partly automatic, music-syntactic processes (as reflected in the early right anterior negativity, ERAN), and/or late controlled processes (as reflected in the late positive component, LPC). Excerpts of piano sonatas with syntactically regular and less regular chords were presented repeatedly (10 times) to non-musicians and amateur musicians. Participants were informed by a cue as to whether the following excerpt contained a regular or less regular chord. Results showed that the repeated exposure to several presentations of regular and less regular excerpts did not influence the ERAN elicited by less regular chords. By contrast, amplitudes of the LPC (as well as of the P3a evoked by less regular chords) decreased systematically across learning trials. These results reveal that late controlled, but not early (partly automatic), neural mechanisms of music-syntactic processing are modulated by repeated exposure to a musical piece. This article is part of a Special Issue entitled SI: Prediction and Attention.
Collapse
Affiliation(s)
- Shuang Guo
- Cluster Languages of Emotion, Freie Universität Berlin, Berlin, Germany
| | - Stefan Koelsch
- Cluster Languages of Emotion, Freie Universität Berlin, Berlin, Germany.
| |
Collapse
|
23
|
|
24
|
Abstract
Recent developments in the cognitive neuroscience of music suggest that a further review of the topic of amusia is timely. In this chapter, we first consider previous taxonomies of amusia and propose a fresh framework for understanding the amusias, essentially as disorders of cognitive information processing. We critically review current cognitive and neuroanatomic findings in the published literature on amusia. We assess the extent to which the clinical and neuropsychologic evidence in amusia can be reconciled, both with the information-processing framework we propose and with the picture of the brain organization of music and language processing emerging from cognitive neuroscience and functional neuroimaging studies. The balance of evidence suggests that the amusias can be understood as disorders of musical object cognition targeting separable levels of an information-processing hierarchy and underpinned by specific brain network dysfunction. The neuroanatomic associations of the amusias show substantial overlap with brain networks that process speech; however, this convergence leaves scope for separable brain mechanisms based on altered connectivity and dynamics across culprit networks. The study of the amusias contributes to an increasingly complex picture of the musical brain that transcends any simple dichotomy between music and speech or other complex sounds.
Collapse
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, Queen Square, London, United Kingdom
| | - Hannah L Golden
- Dementia Research Centre, UCL Institute of Neurology, University College London, Queen Square, London, United Kingdom
| | - Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, Queen Square, London, United Kingdom.
| |
Collapse
|
25
|
Spada D, Verga L, Iadanza A, Tettamanti M, Perani D. The auditory scene: An fMRI study on melody and accompaniment in professional pianists. Neuroimage 2014; 102 Pt 2:764-75. [DOI: 10.1016/j.neuroimage.2014.08.036] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2014] [Revised: 06/13/2014] [Accepted: 08/20/2014] [Indexed: 11/17/2022] Open
|
26
|
Hansen NC, Pearce MT. Predictive uncertainty in auditory sequence processing. Front Psychol 2014; 5:1052. [PMID: 25295018 PMCID: PMC4171990 DOI: 10.3389/fpsyg.2014.01052] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2014] [Accepted: 09/02/2014] [Indexed: 11/23/2022] Open
Abstract
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
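The entropy measure at the heart of this study can be illustrated with a toy model. As a simplification (a first-order Markov model rather than the variable-order model used in the paper; all function names are illustrative), predictive uncertainty before an event is the Shannon entropy of the model's next-note distribution:

```python
import math
from collections import defaultdict

def train_bigram(melodies):
    """Estimate P(next_note | current_note) from a corpus of melodies
    (first-order Markov approximation of a variable-order model)."""
    counts = defaultdict(lambda: defaultdict(int))
    for mel in melodies:
        for prev, nxt in zip(mel, mel[1:]):
            counts[prev][nxt] += 1
    model = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        model[prev] = {n: c / total for n, c in nxts.items()}
    return model

def predictive_entropy(model, context_note):
    """Shannon entropy (in bits) of the next-note distribution: high entropy
    means high predictive uncertainty prior to the onset of the next event."""
    dist = model[context_note]
    return -sum(p * math.log2(p) for p in dist.values())
```

A context whose continuation is fully determined has zero entropy; a context with several equally likely continuations has entropy log2(k) for k alternatives, matching the high-entropy versus low-entropy stimulus selection described above.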
Collapse
Affiliation(s)
- Niels Chr Hansen
- Music in the Brain, Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark; Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Aesthetics and Communication, Aarhus University, Aarhus, Denmark
| | - Marcus T Pearce
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| |
Collapse
|
27
|
Olsen KN, Stevens CJ, Dean RT, Bailes F. Continuous loudness response to acoustic intensity dynamics in melodies: effects of melodic contour, tempo, and tonality. Acta Psychol (Amst) 2014; 149:117-28. [PMID: 24809252 DOI: 10.1016/j.actpsy.2014.03.007] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2013] [Revised: 03/06/2014] [Accepted: 03/24/2014] [Indexed: 10/25/2022] Open
Abstract
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients >.89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change, derived from the difference in loudness at the beginning and end points of the continuous response, was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
|
28
|
Stevens CJ, Tardieu J, Dunbar-Hall P, Best CT, Tillmann B. Expectations in culturally unfamiliar music: influences of proximal and distal cues and timbral characteristics. Front Psychol 2013; 4:789. [PMID: 24223562 PMCID: PMC3819523 DOI: 10.3389/fpsyg.2013.00789] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2013] [Accepted: 10/07/2013] [Indexed: 11/13/2022] Open
Abstract
Listeners' musical perception is influenced by cues that can be stored in short-term memory (e.g., within the same musical piece) or long-term memory (e.g., based on one's own musical culture). The present study tested how these cues (referred to as, respectively, proximal and distal cues) influence the perception of music from an unfamiliar culture. Western listeners who were naïve to Gamelan music judged completeness and coherence for newly constructed melodies in the Balinese gamelan tradition. In these melodies, we manipulated the final tone with three possibilities: the original gong tone, an in-scale tone replacement or an out-of-scale tone replacement. We also manipulated the musical timbre employed in Gamelan pieces. We hypothesized that novice listeners are sensitive to out-of-scale changes, but not in-scale changes, and that this might be influenced by the more unfamiliar timbre created by Gamelan "sister" instruments whose harmonics beat with the harmonics of the other instrument, creating a timbrally "shimmering" sound. The results showed: (1) out-of-scale endings were judged less complete than original gong and in-scale endings; (2) for melodies played with "sister" instruments, in-scale endings were judged as less complete than original endings. Furthermore, melodies using the original scale tones were judged more coherent than melodies containing few or multiple tone replacements; melodies played on single instruments were judged more coherent than the same melodies played on sister instruments. Additionally, there is some indication of within-session statistical learning, with expectations for the initially-novel materials developing during the course of the experiment. The data suggest the influence of both distal cues (e.g., previously unfamiliar timbres) and proximal cues (within the same sequence and over the experimental session) on the perception of melodies from other cultural systems based on unfamiliar tunings and scale systems.
|
29
|
van den Bosch I, Salimpoor VN, Zatorre RJ. Familiarity mediates the relationship between emotional arousal and pleasure during music listening. Front Hum Neurosci 2013; 7:534. [PMID: 24046738 PMCID: PMC3763198 DOI: 10.3389/fnhum.2013.00534] [Citation(s) in RCA: 56] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2013] [Accepted: 08/16/2013] [Indexed: 11/30/2022] Open
Abstract
Emotional arousal appears to be a major contributing factor to the pleasure that listeners experience in response to music. Accordingly, a strong positive correlation between self-reported pleasure and electrodermal activity (EDA), an objective indicator of emotional arousal, has been demonstrated when individuals listen to familiar music. However, it is not yet known to what extent familiarity contributes to this relationship. In particular, as listening to familiar music involves expectations and predictions over time based on veridical knowledge of the piece, it could be that such memory factors play a major role. Here, we tested such a contribution by using musical stimuli entirely unfamiliar to listeners. In a second experiment, we repeated the novel music to experimentally establish a sense of familiarity. We aimed to determine whether (1) pleasure and emotional arousal would continue to correlate when listeners have no explicit knowledge of how the tones will unfold, and (2) this relationship could be enhanced by experimentally induced familiarity. In the first experiment, we presented 33 listeners with 70 unfamiliar musical excerpts in two sessions. There was no relationship between the degree of experienced pleasure and emotional arousal as measured by EDA. In the second experiment, 7 participants listened to 35 unfamiliar excerpts over two sessions separated by 30 min. Repeated exposure significantly increased EDA, even though individuals did not explicitly recall having heard all the pieces before. Furthermore, increases in self-reported familiarity significantly enhanced experienced pleasure, and there was a general, though not significant, increase in EDA. These results suggest that some level of expectation and predictability, mediated by prior exposure to a given piece of music, plays an important role in the experience of emotional arousal in response to music.
Affiliation(s)
- Iris van den Bosch
- Neuroscience and Cognition, Graduate School of Life Sciences, Utrecht University, Utrecht, Netherlands; Montreal Neurological Institute, McGill University, Montreal, QC, Canada
|
30
|
Rohrmeier M, Cross I. Artificial grammar learning of melody is constrained by melodic inconsistency: Narmour's principles affect melodic learning. PLoS One 2013; 8:e66174. [PMID: 23874388 PMCID: PMC3706544 DOI: 10.1371/journal.pone.0066174] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2012] [Accepted: 05/07/2013] [Indexed: 11/18/2022] Open
Abstract
Considerable evidence suggests that people acquire artificial grammars incidentally and implicitly, an indispensable capacity for the acquisition of music or language. However, less research has been devoted to exploring constraints affecting incidental learning. Within the domain of music, the extent to which Narmour's (1990) melodic principles affect implicit learning of melodic structure was experimentally explored. Extending previous research (Rohrmeier, Rebuschat & Cross, 2011), the identical finite-state grammar was employed, with its terminals (the alphabet) manipulated so that the generated melodies systematically violated Narmour's principles. Results indicate that Narmour-inconsistent melodic materials impede implicit learning. This further constitutes a case in which artificial grammar learning is affected by prior knowledge or processing constraints.
Affiliation(s)
- Martin Rohrmeier
- Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- Ian Cross
- Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
|
31
|
Abstract
Why should music be of interest to cognitive scientists, and what role does it play in human cognition? We review three factors that make music an important topic for cognitive scientific research. First, music is a universal human trait fulfilling crucial roles in everyday life. Second, music has an important part to play in ontogenetic development and human evolution. Third, appreciating and producing music simultaneously engage many complex perceptual, cognitive, and emotional processes, rendering music an ideal object for studying the mind. We propose an integrated status for music cognition in the Cognitive Sciences and conclude by reviewing challenges and big questions in the field and the way in which these reflect recent developments.
Affiliation(s)
- Marcus Pearce
- Music Cognition Lab, School of Electronic Engineering & Computer Science, Queen Mary, University of London
|
32
|
|
33
|
Regularity of unit length boosts statistical learning in verbal and nonverbal artificial languages. Psychon Bull Rev 2012; 20:142-7. [DOI: 10.3758/s13423-012-0309-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
34
|
Tillmann B. Music and language perception: expectations, structural integration, and cognitive sequencing. Top Cogn Sci 2012; 4:568-84. [PMID: 22760955 DOI: 10.1111/j.1756-8765.2012.01209.x] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Music can be described as sequences of events that are structured in pitch and time. Studying music processing provides insight into how complex event sequences are learned, perceived, and represented by the brain. Given the temporal nature of sound, expectations, structural integration, and cognitive sequencing are central in music perception (i.e., which sounds are most likely to come next and at what moment should they occur?). This paper focuses on similarities in music and language cognition research, showing that music cognition research provides insight into the understanding of not only music processing but also language processing and the processing of other structured stimuli. The hypothesis of shared resources between music and language processing and of domain-general dynamic attention has motivated the development of research to test music as a means to stimulate sensory, cognitive, and motor processes.
Affiliation(s)
- Barbara Tillmann
- Lyon Neuroscience Research Center - CRNL, CNRS UMR5292, INSERM U1028, Université Lyon 1, Lyon Cedex.
|
35
|
Hoch L, Tillmann B. Shared structural and temporal integration resources for music and arithmetic processing. Acta Psychol (Amst) 2012; 140:230-5. [PMID: 22673068 DOI: 10.1016/j.actpsy.2012.03.008] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2011] [Revised: 03/27/2012] [Accepted: 03/31/2012] [Indexed: 10/28/2022] Open
Abstract
While previous research has investigated the relationship either between language and music processing or between language and arithmetic processing, the present study investigated the relationship between music and arithmetic processing. Rule-governed number series, with the final number being a correct or incorrect series ending, were visually presented in synchrony with musical sequences, with the final chord functioning as the expected tonic or the less-expected subdominant chord (i.e., tonal function manipulation). Participants were asked to judge the correctness of the final number as quickly and accurately as possible. The results revealed an interaction between the processing of series ending and the processing of the task-irrelevant chords' tonal function, thus suggesting that music and arithmetic processing share cognitive resources. These findings are discussed in terms of general temporal and structural integration resources for linguistic and non-linguistic rule-governed sequences.
|
36
|
Quintin EM, Bhatara A, Poissant H, Fombonne E, Levitin DJ. Processing of musical structure by high-functioning adolescents with autism spectrum disorders. Child Neuropsychol 2012; 19:250-75. [PMID: 22397615 DOI: 10.1080/09297049.2011.653540] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Enhanced pitch perception and memory have been cited as evidence of a local processing bias in autism spectrum disorders (ASD). This bias is argued to account for enhanced perceptual functioning (Mottron & Burack, 2001; Mottron, Dawson, Soulières, Hubert, & Burack, 2006) and underpins central coherence theories of ASD (Frith, 1989; Happé & Frith, 2006). A local processing bias confers a different cognitive style on individuals with ASD (Happé, 1999), which accounts in part for their good visuospatial and visuoconstructive skills. Here, we present analogues in the auditory domain, audiotemporal or audioconstructive processing, which we assess using a novel experimental task: a musical puzzle. This task evaluates the ability of individuals with ASD to process temporal sequences of musical events as well as various elements of musical structure, and thus indexes their ability to employ a global processing style. Musical structures created and replicated by children and adolescents with ASD (10-19 years old) and typically developing children and adolescents (7-17 years old) were found to be similar in global coherence. Presenting a musical template for reference increased accuracy equally for both groups, with performance associated with performance IQ and short-term auditory memory. The overall pattern of performance was similar across groups: some puzzles were easier than others for both. Task performance was further found to be correlated with the ability to perceive musical emotions, more so for typically developing participants. Findings are discussed in light of the empathizing-systemizing theory of ASD (Baron-Cohen, 2009) and the importance of describing the strengths of individuals with ASD (Happé, 1999; Heaton, 2009).
Affiliation(s)
- Eve-Marie Quintin
- Center for Interdisciplinary Brain Sciences Research, Stanford University, School of Medicine, Stanford, California, USA
|
37
|
Predictive information processing in music cognition. A critical review. Int J Psychophysiol 2012; 83:164-75. [PMID: 22245599 DOI: 10.1016/j.ijpsycho.2011.12.010] [Citation(s) in RCA: 100] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2011] [Revised: 12/27/2011] [Accepted: 12/28/2011] [Indexed: 11/21/2022]
|
38
|
Wehrum S, Degé F, Ott U, Walter B, Stippekohl B, Kagerer S, Schwarzer G, Vaitl D, Stark R. Can you hear a difference? Neuronal correlates of melodic deviance processing in children. Brain Res 2011; 1402:80-92. [DOI: 10.1016/j.brainres.2011.05.057] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2010] [Revised: 05/21/2011] [Accepted: 05/24/2011] [Indexed: 11/24/2022]
|
39
|
Hoch L, Poulin-Charronnat B, Tillmann B. The influence of task-irrelevant music on language processing: syntactic and semantic structures. Front Psychol 2011; 2:112. [PMID: 21713122 PMCID: PMC3112335 DOI: 10.3389/fpsyg.2011.00112] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2011] [Accepted: 05/13/2011] [Indexed: 12/03/2022] Open
Abstract
Recent research has suggested that music and language processing share neural resources, leading to new hypotheses about interference in the simultaneous processing of these two structures. The present study investigated the effect of a musical chord's tonal function on syntactic processing (Experiment 1) and semantic processing (Experiment 2) using a cross-modal paradigm and controlling for acoustic differences. Participants read sentences and performed a lexical decision task on the last word, which was, syntactically or semantically, expected or unexpected. The simultaneously presented (task-irrelevant) musical sequences ended on either an expected tonic or a less-expected subdominant chord. Experiment 1 revealed interactive effects between music-syntactic and linguistic-syntactic processing. Experiment 2 showed only main effects of both music-syntactic and linguistic-semantic expectations. An additional analysis over the two experiments revealed that linguistic violations interacted with musical violations, though not differently as a function of the type of linguistic violations. The present findings were discussed in light of currently available data on the processing of music as well as of syntax and semantics in language, leading to the hypothesis that resources might be shared for structural integration processes and sequencing.
Affiliation(s)
- Lisianne Hoch
- Lyon Neuroscience Research Center, CNRS UMR5292, INSERM U1028, Université Lyon 1, Lyon, France
- Bénédicte Poulin-Charronnat
- Lyon Neuroscience Research Center, CNRS UMR5292, INSERM U1028, Université Lyon 1, Lyon, France
- Laboratoire d'Etude de l'Apprentissage et du Développement, CNRS-UMR 5022, Université de Bourgogne, Dijon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS UMR5292, INSERM U1028, Université Lyon 1, Lyon, France
|
40
|
Incidental and online learning of melodic structure. Conscious Cogn 2011; 20:214-22. [DOI: 10.1016/j.concog.2010.07.004] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2010] [Revised: 07/19/2010] [Accepted: 07/20/2010] [Indexed: 11/21/2022]
|
41
|
McDermott JH, Keebler MV, Micheyl C, Oxenham AJ. Musical intervals and relative pitch: frequency resolution, not interval resolution, is special. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 128:1943-1951. [PMID: 20968366 PMCID: PMC2981111 DOI: 10.1121/1.3478785] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2009] [Revised: 07/16/2010] [Accepted: 07/19/2010] [Indexed: 05/26/2023]
Abstract
Pitch intervals are central to most musical systems, which utilize pitch at the expense of other acoustic dimensions. It seemed plausible that pitch might uniquely permit precise perception of the interval separating two sounds, as this could help explain its importance in music. To explore this notion, a simple discrimination task was used to measure the precision of interval perception for the auditory dimensions of pitch, brightness, and loudness. Interval thresholds were then expressed in units of just-noticeable differences for each dimension, to enable comparison across dimensions. Contrary to expectation, when expressed in these common units, interval acuity was actually worse for pitch than for loudness or brightness. This likely indicates that the perceptual dimension of pitch is unusual not for interval perception per se, but rather for the basic frequency resolution it supports. The ubiquity of pitch in music may be due in part to this fine-grained basic resolution.
Affiliation(s)
- Josh H McDermott
- Center for Neural Science, New York University, 4 Washington Place, New York, New York 10003, USA.
|
42
|
Tillmann B, Poulin-Charronnat B. Auditory expectations for newly acquired structures. Q J Exp Psychol (Hove) 2010; 63:1646-64. [DOI: 10.1080/17470210903511228] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Our study investigated whether newly acquired auditory structure knowledge allows listeners to develop perceptual expectations for future events. To that aim, we introduced a new experimental approach that combines implicit learning and priming paradigms. Participants were first exposed to structured tone sequences without being told about the underlying artificial grammar. They then made speeded judgements on a perceptual feature of target tones in new sequences (i.e., in-tune/out-of-tune judgements). The target tones respected or violated the structure of the artificial grammar and were thus supposed to be expected or unexpected. In this priming task, grammatical tones were processed faster and more accurately than ungrammatical ones. This processing advantage was observed for an experimental group performing a memory task during the exposure phase, but was not observed for a control group lacking the exposure phase (Experiment 1). It persisted when participants performed an in-tune/out-of-tune detection task during exposure (Experiment 2). This finding suggests that the acquisition of new structure knowledge not only influences grammaticality judgements on entire sequences (as previously shown in implicit learning research), but also allows listeners to develop perceptual expectations that influence single-event processing. It further promotes the priming paradigm as an implicit access to acquired artificial structure knowledge.
Affiliation(s)
- Bénédicte Poulin-Charronnat
- Université Claude Bernard Lyon 1, CNRS-UMR 5020, Lyon, France; Université de Bourgogne, CNRS-UMR 5022, Dijon, France
|