51
Abstract
Functional brain imaging has revealed much about the neuroanatomical substrates of higher cognition, including music, language, learning, and memory. The technique lends itself to the study of groups of individuals. In contrast, the nature of expert performance is typically studied through the examination of exceptional individuals using behavioral case studies and retrospective biography. Here, we combined fMRI and the study of an individual who is a world-class expert musician and composer in order to better understand the neural underpinnings of his music perception and cognition, in particular, his mental representations for music. We used state-of-the-art multivoxel pattern analysis (MVPA) and representational dissimilarity analysis (RDA) in a fixed set of brain regions to test three exploratory hypotheses with the musician Sting: (1) composing would recruit neural structures that are both unique and distinguishable from other creative acts, such as composing prose or visual art; (2) listening and imagining music would recruit similar neural regions, indicating that musical memory shares anatomical substrates with music listening; (3) the MVPA and RDA results would help us to map the representational space for music, revealing which musical pieces and genres are perceived to be similar in the musician's mental models for music. Our hypotheses were confirmed. The act of composing, and even of imagining elements of the composed piece separately, such as melody and rhythm, activated a similar cluster of brain regions, distinct from those recruited by prose and visual art. Listened and imagined music showed high similarity, and in addition, notable similarity/dissimilarity patterns emerged among the various pieces used as stimuli: Muzak and Top 100/Pop songs were far from all other musical styles in Mahalanobis distance (Euclidean representational space), whereas jazz, R&B, tango and rock were comparatively close. Closer inspection revealed principled explanations for the similarity clusters found, based on key, tempo, motif, and orchestration.
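The representational analysis described above rests on pattern distances between conditions. As a point of reference only, here is a minimal Python sketch of a Mahalanobis-distance representational dissimilarity matrix (RDM); it is not the authors' pipeline, and the array shapes, variable names, and the Ledoit-Wolf shrinkage covariance are assumptions for illustration.

```python
# Minimal, illustrative RDA sketch: Mahalanobis-distance RDM between
# condition-wise voxel patterns. Not the authors' pipeline; shapes, names,
# and the Ledoit-Wolf shrinkage covariance are assumptions.
import numpy as np
from sklearn.covariance import LedoitWolf

def mahalanobis_rdm(patterns, residuals):
    """patterns: (n_conditions, n_voxels) mean response per musical piece.
    residuals: (n_samples, n_voxels) model residuals used to estimate noise covariance."""
    prec = np.linalg.inv(LedoitWolf().fit(residuals).covariance_)  # noise precision matrix
    n = patterns.shape[0]
    rdm = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = patterns[i] - patterns[j]
            rdm[i, j] = rdm[j, i] = np.sqrt(d @ prec @ d)  # Mahalanobis distance
    return rdm  # larger values = more dissimilar neural patterns
```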
Affiliation(s)
- Daniel J Levitin, Department of Psychology, McGill University, Montreal, Canada
- Scott T Grafton, Department of Psychological and Brain Sciences, University of California at Santa Barbara, Santa Barbara, CA, USA
52
Frid E, Bresin R, Alborno P, Elblaus L. Interactive Sonification of Spontaneous Movement of Children-Cross-Modal Mapping and the Perception of Body Movement Qualities through Sound. Front Neurosci 2016; 10:521. [PMID: 27891074 PMCID: PMC5104747 DOI: 10.3389/fnins.2016.00521]
Abstract
In this paper we present three studies focusing on the effect of different sound models in interactive sonification of bodily movement. We hypothesized that a sound model characterized by continuous smooth sounds would be associated with different movement characteristics than a model characterized by abrupt variation in amplitude, and that these associations could be reflected in spontaneous movement characteristics. Three subsequent studies were conducted to investigate the relationship between properties of bodily movement and sound: (1) a motion capture experiment involving interactive sonification of a group of children spontaneously moving in a room, (2) an experiment involving perceptual ratings of sonified movement data and (3) an experiment involving matching between sonified movements and their visualizations in the form of abstract drawings. In (1) we used a system consisting of 17 IR cameras tracking passive reflective markers. The head positions in the horizontal plane of 3–4 children were simultaneously tracked and sonified, producing 3–4 sound sources spatially displayed through an 8-channel loudspeaker system. We analyzed children's spontaneous movement in terms of energy-, smoothness- and directness-index. Despite large inter-participant variability and group-specific effects caused by interaction among children when engaging in the spontaneous movement task, we found a small but significant effect of sound model. Results from (2) indicate that different sound models can be rated differently on a set of motion-related perceptual scales (e.g., expressivity and fluidity). Also, results imply that audio-only stimuli can evoke stronger perceived properties of movement (e.g., energetic, impulsive) than stimuli involving both audio and video representations. Findings in (3) suggest that sounds portraying bodily movement can be represented using abstract drawings in a meaningful way. We argue that the results from these studies support the existence of a cross-modal mapping of body motion qualities from bodily movement to sounds. Sound can be translated and understood from bodily motion, conveyed through sound visualizations in the form of drawings, and translated back from sound visualizations to audio. The work underlines the potential of using interactive sonification to communicate high-level features of human movement data.
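Study (1) summarizes movement with energy-, smoothness-, and directness-indices computed from tracked head positions. The sketch below shows one plausible way to compute such indices from a 2-D trajectory; the exact definitions used in the paper are not given here, so these formulas (mean squared speed, a jerk-based smoothness measure, path straightness) are assumptions.

```python
# Plausible (assumed) definitions of energy-, smoothness- and directness-like
# indices for a 2-D head trajectory sampled at fs Hz; the paper's exact
# formulas may differ.
import numpy as np

def movement_indices(xy, fs):
    """xy: (n_samples, 2) horizontal head position in metres."""
    dt = 1.0 / fs
    vel = np.gradient(xy, dt, axis=0)                 # velocity (m/s)
    acc = np.gradient(vel, dt, axis=0)                # acceleration
    jerk = np.gradient(acc, dt, axis=0)               # jerk
    speed = np.linalg.norm(vel, axis=1)
    energy = np.mean(speed ** 2)                      # kinetic-energy-like index
    smoothness = -np.log(np.mean(np.linalg.norm(jerk, axis=1) ** 2) + 1e-12)  # higher = smoother
    path_len = np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))
    directness = np.linalg.norm(xy[-1] - xy[0]) / (path_len + 1e-12)          # 1 = straight line
    return energy, smoothness, directness
```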
Affiliation(s)
- Emma Frid, Sound and Music Computing, Media Technology and Interaction Design, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Roberto Bresin, Sound and Music Computing, Media Technology and Interaction Design, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Paolo Alborno, Casa Paganini - Infomus Research Centre, DIBRIS Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria, Università di Genova, Genova, Italy
- Ludvig Elblaus, Sound and Music Computing, Media Technology and Interaction Design, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
53
Lima CF, Krishnan S, Scott SK. Roles of Supplementary Motor Areas in Auditory Processing and Auditory Imagery. Trends Neurosci 2016; 39:527-542. [PMID: 27381836 PMCID: PMC5441995 DOI: 10.1016/j.tins.2016.06.003]
Abstract
Although the supplementary and pre-supplementary motor areas have been intensely investigated in relation to their motor functions, they are also consistently reported in studies of auditory processing and auditory imagery. This involvement is commonly overlooked, in contrast to lateral premotor and inferior prefrontal areas. We argue here for the engagement of supplementary motor areas across a variety of sound categories, including speech, vocalizations, and music, and we discuss how our understanding of auditory processes in these regions relates to findings and hypotheses from the motor literature. We suggest that supplementary and pre-supplementary motor areas play a role in facilitating spontaneous motor responses to sound, and in supporting a flexible engagement of sensorimotor processes to enable imagery and to guide auditory perception. Hearing and imagining sounds, including speech, vocalizations, and music, can recruit SMA and pre-SMA, which are normally discussed in relation to their motor functions. Emerging research indicates that individual differences in the structure and function of SMA and pre-SMA can predict performance in auditory perception and auditory imagery tasks. Responses during auditory processing primarily peak in pre-SMA and in the boundary area between pre-SMA and SMA. This boundary area is crucially involved in the control of speech and vocal production, suggesting that sounds engage this region in an effector-specific manner. Activating sound-related motor representations in SMA and pre-SMA might facilitate behavioral responses to sounds. This might also support a flexible generation of sensory predictions based on previous experience to enable imagery and guide perception.
Affiliation(s)
- César F Lima, Institute of Cognitive Neuroscience, University College London, London, UK
- Saloni Krishnan, Department of Experimental Psychology, University of Oxford, Oxford, UK
- Sophie K Scott, Institute of Cognitive Neuroscience, University College London, London, UK
54
Su YH. Visual tuning and metrical perception of realistic point-light dance movements. Sci Rep 2016; 6:22774. [PMID: 26947252 PMCID: PMC4780026 DOI: 10.1038/srep22774]
Abstract
Humans move to music spontaneously, and this sensorimotor coupling underlies musical rhythm perception. The present research proposed that, based on common action representation, different metrical levels as in auditory rhythms could emerge visually when observing structured dance movements. Participants watched a point-light figure performing basic steps of Swing dance cyclically in different tempi, whereby the trunk bounced vertically at every beat and the limbs moved laterally at every second beat, yielding two possible metrical periodicities. In Experiment 1, participants freely identified a tempo of the movement and tapped along. While some observers only tuned to the bounce and some only to the limbs, the majority tuned to one level or the other depending on the movement tempo, which was also associated with individuals' preferred tempo. In Experiment 2, participants reproduced the tempo of leg movements by four regular taps, and showed a slower perceived leg tempo with than without the trunk bouncing simultaneously in the stimuli. This mirrors previous findings of an auditory 'subdivision effect', suggesting the leg movements were perceived as beat while the bounce as subdivisions. Together these results support visual metrical perception of dance movements, which may employ similar action-based mechanisms to those underpinning auditory rhythm perception.
Affiliation(s)
- Yi-Huang Su, Department of Movement Science, Faculty of Sport and Health Sciences, Technical University of Munich, Munich, Germany
55
Neural correlates of binding lyrics and melodies for the encoding of new songs. Neuroimage 2016; 127:333-345. [DOI: 10.1016/j.neuroimage.2015.12.018]
56
Alamia A, Solopchuk O, D'Ausilio A, Van Bever V, Fadiga L, Olivier E, Zénon A. Disruption of Broca's Area Alters Higher-order Chunking Processing during Perceptual Sequence Learning. J Cogn Neurosci 2016; 28:402-17. [PMID: 26765778 DOI: 10.1162/jocn_a_00911]
Abstract
Because Broca's area is known to be involved in many cognitive functions, including language, music, and action processing, several attempts have been made to propose a unifying theory of its role that emphasizes a possible contribution to syntactic processing. Recently, we have postulated that Broca's area might be involved in higher-order chunk processing during implicit learning of a motor sequence. Chunking is an information-processing mechanism that consists of grouping consecutive items in a sequence and is likely to be involved in all of the aforementioned cognitive processes. Demonstrating a contribution of Broca's area to chunking during the learning of a nonmotor sequence that does not involve language could shed new light on its function. To address this issue, we used offline MRI-guided TMS in healthy volunteers to disrupt the activity of either the posterior part of Broca's area (left Brodmann's area [BA] 44) or a control site just before participants learned a perceptual sequence structured in distinct hierarchical levels. We found that disruption of the left BA 44 increased the processing time of stimuli representing the boundaries of higher-order chunks and modified the chunking strategy. The current results highlight the possible role of the left BA 44 in building up effector-independent representations of higher-order events in structured sequences. This might clarify the contribution of Broca's area in processing hierarchical structures, a key mechanism in many cognitive functions, such as language and composite actions.
Affiliation(s)
- Luciano Fadiga, Fondazione Istituto Italiano di Tecnologia, Genova, Italy; University of Ferrara
- Etienne Olivier, Université catholique de Louvain; Fondazione Istituto Italiano di Tecnologia, Genova, Italy
57
Matheson AMM, Sakata JT. Relationship between the Sequencing and Timing of Vocal Motor Elements in Birdsong. PLoS One 2015; 10:e0143203. [PMID: 26650933 PMCID: PMC4674110 DOI: 10.1371/journal.pone.0143203]
Abstract
Accurate coordination of the sequencing and timing of motor gestures is important for the performance of complex and evolutionarily relevant behaviors. However, the degree to which motor sequencing and timing are related remains largely unknown. Birdsong is a communicative behavior that consists of discrete vocal motor elements (‘syllables’) that are sequenced and timed in a precise manner. To reveal the relationship between syllable sequencing and timing, we analyzed how variation in the probability of syllable transitions at branch points, nodes in song with variable sequencing across renditions, correlated with variation in the duration of silent gaps between syllable transitions (‘gap durations’) for adult Bengalese finch song. We observed a significant negative relationship between transition probability and gap duration: more prevalent transitions were produced with shorter gap durations. We then assessed the degree to which long-term age-dependent changes and acute context-dependent changes to syllable sequencing and timing followed this inverse relationship. Age- but not context-dependent changes to syllable sequencing and timing were inversely related. On average, gap durations at branch points decreased with age, and the magnitude of this decrease was greater for transitions that increased in prevalence than for transitions that decreased in prevalence. In contrast, there was no systematic relationship between acute context-dependent changes to syllable sequencing and timing. Gap durations at branch points decreased when birds produced female-directed courtship song compared to when they produced undirected song, and the magnitude of this decrease was not related to the direction and magnitude of changes to transition probabilities. These analyses suggest that neural mechanisms that regulate syllable sequencing could similarly control syllable timing but also highlight mechanisms that can independently regulate syllable sequencing and timing.
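The core analysis relates transition probabilities at branch points to the durations of the silent gaps that follow. A hedged sketch of that computation for a single branch syllable is shown below; the data layout is assumed, not taken from the paper.

```python
# Hedged sketch: relate the probability of each transition at one branch
# syllable to the mean silent-gap duration for that transition. The data
# format is assumed, not taken from the paper.
from collections import defaultdict
import numpy as np
from scipy.stats import spearmanr

def branch_point_stats(transitions):
    """transitions: list of (next_syllable, gap_duration_ms) observed after a branch syllable."""
    gaps = defaultdict(list)
    for nxt, gap in transitions:
        gaps[nxt].append(gap)
    total = len(transitions)
    probs = {s: len(g) / total for s, g in gaps.items()}         # transition probability
    mean_gaps = {s: float(np.mean(g)) for s, g in gaps.items()}  # mean gap duration (ms)
    labels = list(probs)
    rho, pval = spearmanr([probs[s] for s in labels], [mean_gaps[s] for s in labels])
    return probs, mean_gaps, rho, pval  # expected: negative rho (prevalent transitions, shorter gaps)
```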
Affiliation(s)
- Jon T. Sakata, Department of Biology, McGill University, Montreal, Quebec, Canada
58
Schiavio A, Altenmüller E. Exploring Music-Based Rehabilitation for Parkinsonism through Embodied Cognitive Science. Front Neurol 2015; 6:217. [PMID: 26539155 PMCID: PMC4609849 DOI: 10.3389/fneur.2015.00217]
Abstract
Recent embodied approaches in cognitive sciences emphasize the constitutive roles of bodies and environment in driving cognitive processes. Cognition is thus seen as a distributed system based on the continuous interaction of bodies, brains, and environment. These categories, moreover, do not relate only causally, through a sequential input-output network of computations; rather, they are dynamically enfolded in each other, being mutually implemented by the concrete patterns of actions adopted by the cognitive system. However, while this claim has been widely discussed across various disciplines, its relevance and potential beneficial applications for music therapy remain largely unexplored. With this in mind, we provide here an overview of the embodied approaches to cognition, discussing their main tenets through the lenses of music therapy. In doing so, we question established methodological and theoretical paradigms and identify possible novel strategies for intervention. In particular, we refer to the music-based rehabilitative protocols adopted for Parkinson's disease patients. Indeed, in this context, it has recently been observed that music therapy not only affects movement-related skills but that it also contributes to stabilizing physiological functions and improving socio-affective behaviors. We argue that these phenomena involve previously unconsidered aspects of cognition and (motor) behavior, which are rooted in the action-perception cycle characterizing the whole living system.
Affiliation(s)
- Andrea Schiavio, School of Music, The Ohio State University, Columbus, OH, USA; Department of Music, The University of Sheffield, Sheffield, UK
- Eckart Altenmüller, Institute of Music Physiology and Musicians' Medicine, University of Music, Drama and Media Hannover, Hannover, Germany
59
Timing Rhythms: Perceived Duration Increases with a Predictable Temporal Structure of Short Interval Fillers. PLoS One 2015; 10:e0141018. [PMID: 26474047 PMCID: PMC4608791 DOI: 10.1371/journal.pone.0141018]
Abstract
Variations in the temporal structure of an interval can lead to remarkable differences in perceived duration. For example, it has previously been shown that isochronous intervals, that is, intervals filled with temporally regular stimuli, are perceived to last longer than intervals left empty or filled with randomly timed stimuli. Characterizing the extent of such distortions is crucial to understanding how duration perception works. One account to explain effects of temporal structure is a non-linear accumulator-counter mechanism reset at the beginning of every subinterval. An alternative explanation based on entrainment to regular stimulation posits that the neural response to each filler stimulus in an isochronous sequence is amplified and a higher neural response may lead to an overestimation of duration. If entrainment is the key that generates response amplification and the distortions in perceived duration, then any form of predictability in the temporal structure of interval fillers should lead to the perception of an interval that lasts longer than a randomly filled one. The present experiments confirm that intervals filled with fully predictable rhythmically grouped stimuli lead to longer perceived duration than anisochronous intervals. No general over- or underestimation is registered for rhythmically grouped compared to isochronous intervals. However, we find that the number of stimuli in each group composing the rhythm also influences perceived duration. Implications of these findings for a non-linear clock model as well as a neural response magnitude account of perceived duration are discussed.
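To make the non-linear accumulator-counter account more concrete, the toy model below (my own illustration, not the authors' model) shows why resetting a concave accumulator at every filler onset yields a larger total count, and hence a longer perceived duration, for a regularly filled interval than for an empty one.

```python
# Toy illustration (not the paper's model): a concave, non-linear accumulator
# that resets at every filler onset accumulates more "ticks" over many short
# subintervals than over one uninterrupted empty interval, so a regularly
# filled interval is judged longer.
import numpy as np

def accumulate(duration, k=1.0, exponent=0.8):
    """Concave accumulation over one uninterrupted subinterval (s)."""
    return k * duration ** exponent

def perceived_duration(onsets, total):
    """onsets: filler onset times (s) within an interval of length total (s)."""
    bounds = [0.0, *sorted(onsets), total]
    return sum(accumulate(d) for d in np.diff(bounds))  # accumulator resets at each onset

empty = perceived_duration([], 1.2)                      # empty interval
isochronous = perceived_duration([0.3, 0.6, 0.9], 1.2)   # regularly filled interval
print(isochronous > empty)                               # True
```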
60
Biomechanical metrics of aesthetic perception in dance. Exp Brain Res 2015; 233:3565-81. [PMID: 26319546 DOI: 10.1007/s00221-015-4424-4]
Abstract
The brain may be tuned to evaluate aesthetic perception through perceptual chunking when we observe the grace of the dancer. We modelled biomechanical metrics to explain biological determinants of aesthetic perception in dance. Eighteen expert (EXP) and intermediate (INT) dancers performed développé arabesque in three conditions: (1) slow tempo, (2) slow tempo with relevé, and (3) fast tempo. To compare biomechanical metrics of kinematic data, we calculated intra-excursion variability, principal component analysis (PCA), and dimensionless jerk for the gesture limb. Observers, all trained dancers, viewed motion capture stick figures of the trials and ranked each for aesthetic (1) proficiency and (2) movement smoothness. Statistical analyses included group by condition repeated-measures ANOVA for metric data; Mann-Whitney U rank and Friedman's rank tests for nonparametric rank data; Spearman's rho correlations to compare aesthetic rankings and metrics; and linear regression to examine which metric best quantified observers' aesthetic rankings, p < 0.05. The goodness of fit of the proposed models was determined using Akaike information criteria. Aesthetic proficiency and smoothness rankings of the dance movements revealed differences between groups and condition, p < 0.0001. EXP dancers were rated more aesthetically proficient than INT dancers. The slow and fast conditions were judged more aesthetically proficient than slow with relevé (p < 0.0001). Of the metrics, PCA best captured the differences due to group and condition. PCA also provided the most parsimonious model to explain aesthetic proficiency and smoothness rankings. By permitting organization of large data sets into simpler groupings, PCA may mirror the phenomenon of chunking in which the brain combines sensory motor elements into integrated units of behaviour. In this representation, the chunk of information which is remembered, and to which the observer reacts, is the elemental mode shape of the motion rather than physical displacements. This suggests that reduction in redundant information to a simplistic dimensionality is related to the experienced observer's aesthetic perception.
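Of the metrics listed, dimensionless jerk is the most compact to illustrate. The sketch below uses a common velocity-normalized formulation; it is not necessarily the exact variant computed in the study.

```python
# Sketch of a dimensionless-jerk smoothness metric for a limb trajectory,
# using a common velocity-normalized formulation; not necessarily the exact
# variant computed in the study.
import numpy as np

def dimensionless_jerk(position, fs):
    """position: (n_samples, 3) marker position of the gesture limb (m); fs in Hz."""
    dt = 1.0 / fs
    vel = np.gradient(position, dt, axis=0)
    jerk = np.gradient(np.gradient(vel, dt, axis=0), dt, axis=0)
    duration = (len(position) - 1) * dt
    v_peak = np.max(np.linalg.norm(vel, axis=1))
    integral = np.trapz(np.sum(jerk ** 2, axis=1), dx=dt)    # integrated squared jerk
    return -(duration ** 3 / v_peak ** 2) * integral          # closer to 0 = smoother
```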
61
Asano R, Boeckx C. Syntax in language and music: what is the right level of comparison? Front Psychol 2015; 6:942. [PMID: 26191034 PMCID: PMC4488597 DOI: 10.3389/fpsyg.2015.00942]
Abstract
It is often claimed that music and language share a process of hierarchical structure building, a mental “syntax.” Although several lines of research point to commonalities, and possibly a shared syntactic component, differences between “language syntax” and “music syntax” can also be found at several levels: conveyed meaning, and the atoms of combination, for example. To bring music and language closer to one another, some researchers have suggested a comparison between music and phonology (“phonological syntax”), but here too, one quickly arrives at a situation of intriguing similarities and obvious differences. In this paper, we suggest that a fruitful comparison between the two domains could benefit from taking the grammar of action into account. In particular, we suggest that what is called “syntax” can be investigated in terms of goal of action, action planning, motor control, and sensory-motor integration. At this level of comparison, we suggest that some of the differences between language and music could be explained in terms of different goals reflected in the hierarchical structures of action planning: the hierarchical structures of music arise to achieve goals with a strong relation to the affective-gestural system encoding tension-relaxation patterns as well as socio-intentional system, whereas hierarchical structures in language are embedded in a conceptual system that gives rise to compositional meaning. Similarities between music and language are most clear in the way several hierarchical plans for executing action are processed in time and sequentially integrated to achieve various goals.
Affiliation(s)
- Rie Asano, Department of Systematic Musicology, Institute of Musicology, University of Cologne, Cologne, Germany
- Cedric Boeckx, Catalan Institute for Research and Advanced Studies, Barcelona, Spain; Department of General Linguistics, Universitat de Barcelona, Barcelona, Spain
62
Andric M, Hasson U. Global features of functional brain networks change with contextual disorder. Neuroimage 2015; 117:103-13. [PMID: 25988223 PMCID: PMC4528071 DOI: 10.1016/j.neuroimage.2015.05.025]
Abstract
It is known that features of stimuli in the environment affect the strength of functional connectivity in the human brain. However, investigations to date have not converged in determining whether these also impact functional networks' global features, such as modularity strength, number of modules, partition structure, or degree distributions. We hypothesized that one environmental attribute that may strongly impact global features is the temporal regularity of the environment, as prior work indicates that differences in regularity impact regions involved in sensory, attentional and memory processes. We examined this with an fMRI study, in which participants passively listened to tonal series that had identical physical features and differed only in their regularity, as defined by the strength of transition structure between tones. We found that series-regularity induced systematic changes to global features of functional networks, including modularity strength, number of modules, partition structure, and degree distributions. In tandem, we used a novel node-level analysis to determine the extent to which brain regions maintained their within-module connectivity across experimental conditions. This analysis showed that primary sensory regions and those associated with default-mode processes are most likely to maintain their within-module connectivity across conditions, whereas prefrontal regions are least likely to do so. Our work documents a significant capacity for global-level brain network reorganization as a function of context. These findings suggest that modularity and other core, global features, while likely constrained by white-matter structural brain connections, are not completely determined by them. We examined global features of whole-brain functional connectivity to inputs with varying disorder. Modularity, module numbers, and partition similarity varied with input disorder. Default-mode and sensory brain regions were least impacted by the manipulation.
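The global features examined (modularity strength, number of modules, degree distributions) can be illustrated with a generic graph-analysis sketch like the one below; the thresholding step, function names, and use of networkx are assumptions, not the authors' pipeline.

```python
# Generic sketch (assumed workflow, not the authors' code): global graph
# features such as modularity strength, number of modules, and the degree
# distribution, from a thresholded functional-connectivity matrix.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

def global_features(fc, threshold=0.3):
    """fc: (n_regions, n_regions) correlation matrix of regional time series."""
    adj = (fc > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    communities = greedy_modularity_communities(g)
    q = modularity(g, communities)                    # modularity strength
    degrees = np.array([d for _, d in g.degree()])    # degree distribution
    return q, len(communities), degrees
```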
Affiliation(s)
- Michael Andric, Center for Mind/Brain Sciences (CIMeC), The University of Trento, Rovereto, TN, Italy
- Uri Hasson, Center for Mind/Brain Sciences (CIMeC), The University of Trento, Rovereto, TN, Italy; Department of Psychology and Cognitive Sciences, The University of Trento, Rovereto, TN, Italy
63
Hoeschele M, Merchant H, Kikuchi Y, Hattori Y, ten Cate C. Searching for the origins of musicality across species. Philos Trans R Soc Lond B Biol Sci 2015; 370:20140094. [PMID: 25646517 PMCID: PMC4321135 DOI: 10.1098/rstb.2014.0094]
Abstract
In the introduction to this theme issue, Honing et al. suggest that the origins of musicality--the capacity that makes it possible for us to perceive, appreciate and produce music--can be pursued productively by searching for components of musicality in other species. Recent studies have highlighted that the behavioural relevance of stimuli to animals and the relation of experimental procedures to their natural behaviour can have a large impact on the type of results that can be obtained for a given species. Through reviewing laboratory findings on animal auditory perception and behaviour, as well as relevant findings on natural behaviour, we provide evidence that both traditional laboratory studies and studies relating to natural behaviour are needed to answer the problem of musicality. Traditional laboratory studies use synthetic stimuli that provide more control than more naturalistic studies, and are in many ways suitable to test the perceptual abilities of animals. However, naturalistic studies are essential to inform us as to what might constitute relevant stimuli and parameters to test with laboratory studies, or why we may or may not expect certain stimulus manipulations to be relevant. These two approaches are both vital in the comparative study of musicality.
Affiliation(s)
- Hugo Merchant, Instituto de Neurobiologia, UNAM, Campus Juriquilla, Santiago de Querétaro, Mexico
- Yukiko Kikuchi, Institute of Neuroscience, Newcastle University Medical School, Newcastle upon Tyne, UK
- Yuko Hattori, Primate Research Institute, Kyoto University, Kyoto, Japan
- Carel ten Cate, Institute of Biology, Leiden University, Leiden, The Netherlands; Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
64
Merchant H, Pérez O, Bartolo R, Méndez JC, Mendoza G, Gámez J, Yc K, Prado L. Sensorimotor neural dynamics during isochronous tapping in the medial premotor cortex of the macaque. Eur J Neurosci 2015; 41:586-602. [DOI: 10.1111/ejn.12811]
Affiliation(s)
- Hugo Merchant, Oswaldo Pérez, Ramón Bartolo, Juan Carlos Méndez, Germán Mendoza, Jorge Gámez, Karyna Yc, Luis Prado: Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, México
65
66
Giordano BL, Egermann H, Bresin R. The production and perception of emotionally expressive walking sounds: similarities between musical performance and everyday motor activity. PLoS One 2014; 9:e115587. [PMID: 25551392 PMCID: PMC4281241 DOI: 10.1371/journal.pone.0115587]
Abstract
Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between musical expression of emotions and expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor origin hypothesis for the musical expression of emotions.
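The sketch below computes rough versions of the acoustical features compared across domains (sound intensity, tempo, and tempo regularity) from an audio signal and detected footstep onsets; the feature definitions are assumptions for illustration and may differ from those used in the paper.

```python
# Rough, assumed versions of the acoustical features compared across domains:
# sound intensity, tempo, and tempo regularity, computed from a mono signal
# and detected footstep onsets. Definitions may differ from the paper's.
import numpy as np

def expressive_features(audio, fs, onset_times):
    """audio: mono signal; fs: sample rate (Hz); onset_times: footstep onsets (s)."""
    intensity = np.sqrt(np.mean(audio ** 2))          # RMS sound intensity
    ioi = np.diff(np.sort(onset_times))               # inter-onset intervals (s)
    tempo = 60.0 / np.mean(ioi)                       # steps per minute
    regularity = 1.0 - np.std(ioi) / np.mean(ioi)     # 1 = perfectly regular timing
    return intensity, tempo, regularity
```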
Affiliation(s)
- Bruno L. Giordano, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Hauke Egermann, Audio Communication Group, Technische Universität Berlin, Berlin, Germany
- Roberto Bresin, Sound and Music Computing Group, KTH Royal Institute of Technology, Stockholm, Sweden
67
Cookson LJ. A desire for parsimony. Behav Sci (Basel) 2014; 3:576-586. [PMID: 25379257 PMCID: PMC4217607 DOI: 10.3390/bs3040576]
Abstract
An understanding of wildness is being developed as a quality of interactive processing that increases survival opportunities in nature. A link is made between the need to improve interactive quality for wildness, and cognitive desires and interests in art, music, religion and philosophy as these can also be seen as attempts to improve interactive quality internally and externally. Interactive quality can be improved through gains in parsimony, that is, simplifications in the organisation of skills. The importance of parsimony in evolution is discussed, along with indicators of an internal parsimony desire that experiences joy if achieved through processes such as insight and understanding. A mechanism for the production and measurement of the parsimony desire is proposed, based on the number of subcortical pleasure hotspots that can be stimulated at once within the ‘archipelago’ available in the limbic system.
Affiliation(s)
- Lawrence J Cookson, School of Biological Sciences, Monash University, Clayton, Vic. 3800, Australia
68
Tarr B, Launay J, Dunbar RIM. Music and social bonding: "self-other" merging and neurohormonal mechanisms. Front Psychol 2014; 5:1096. [PMID: 25324805 PMCID: PMC4179700 DOI: 10.3389/fpsyg.2014.01096]
Abstract
It has been suggested that a key function of music during its development and spread amongst human populations was its capacity to create and strengthen social bonds amongst interacting group members. However, the mechanisms by which this occurs have not been fully discussed. In this paper we review evidence supporting two thus far independently investigated mechanisms for this social bonding effect: self-other merging as a consequence of inter-personal synchrony, and the release of endorphins during exertive rhythmic activities including musical interaction. In general, self-other merging has been experimentally investigated using dyads, which provide limited insight into large-scale musical activities. Given that music can provide an external rhythmic framework that facilitates synchrony, explanations of social bonding during group musical activities should include reference to endorphins, which are released during synchronized exertive movements. Endorphins (and the endogenous opioid system (EOS) in general) are involved in social bonding across primate species, and are associated with a number of human social behaviors (e.g., laughter, synchronized sports), as well as musical activities (e.g., singing and dancing). Furthermore, passively listening to music engages the EOS, so here we suggest that both self-other merging and the EOS are important in the social bonding effects of music. In order to investigate possible interactions between these two mechanisms, future experiments should recreate ecologically valid examples of musical activities.
Affiliation(s)
- Bronwyn Tarr, Social and Evolutionary Neuroscience Research Group, Department of Experimental Psychology, University of Oxford, Oxford, UK
69
Herrojo Ruiz M, Hong SB, Hennig H, Altenmüller E, Kühn AA. Long-range correlation properties in timing of skilled piano performance: the influence of auditory feedback and deep brain stimulation. Front Psychol 2014; 5:1030. [PMID: 25309487 PMCID: PMC4174744 DOI: 10.3389/fpsyg.2014.01030]
Abstract
Unintentional timing deviations during musical performance can be conceived of as timing errors. However, recent research on humanizing computer-generated music has demonstrated that timing fluctuations that exhibit long-range temporal correlations (LRTC) are preferred by human listeners. This preference can be accounted for by the ubiquitous presence of LRTC in human tapping and rhythmic performances. Interestingly, the manifestation of LRTC in tapping behavior seems to be driven in a subject-specific manner by the LRTC properties of resting-state background cortical oscillatory activity. In this framework, the current study aimed to investigate whether propagation of timing deviations during the skilled, memorized piano performance (without metronome) of 17 professional pianists exhibits LRTC and whether the structure of the correlations is influenced by the presence or absence of auditory feedback. As an additional goal, we set out to investigate the influence of altering the dynamics along the cortico-basal-ganglia-thalamo-cortical network via deep brain stimulation (DBS) on the LRTC properties of musical performance. Specifically, we investigated temporal deviations during the skilled piano performance of a non-professional pianist who was treated with subthalamic-deep brain stimulation (STN-DBS) due to severe Parkinson's disease, with predominant tremor affecting his right upper extremity. In the tremor-affected right hand, the timing fluctuations of the performance exhibited random correlations with DBS OFF. By contrast, DBS restored long-range dependency in the temporal fluctuations, corresponding with the general motor improvement on DBS. Overall, the present investigations demonstrate the presence of LRTC in skilled piano performances, indicating that unintentional temporal deviations are correlated over a wide range of time scales. This phenomenon is stable after removal of the auditory feedback, but is altered by STN-DBS, which suggests that cortico-basal ganglia-thalamocortical circuits play a role in the modulation of the serial correlations of timing fluctuations exhibited in skilled musical performance.
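LRTC in timing series are commonly quantified with detrended fluctuation analysis (DFA), where a scaling exponent near 0.5 indicates uncorrelated fluctuations and values above 0.5 indicate long-range dependence. The generic sketch below illustrates that computation; it is not necessarily the estimator used in the paper.

```python
# Generic detrended fluctuation analysis (DFA) sketch for a timing series such
# as inter-keystroke intervals; not necessarily the estimator used in the
# paper. alpha near 0.5 = uncorrelated; alpha > 0.5 = long-range dependence.
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """x: 1-D series of timing deviations, longer than the largest scale."""
    y = np.cumsum(x - np.mean(x))                     # integrated profile
    flucts = []
    for s in scales:
        f2 = []
        for i in range(len(y) // s):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```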
Affiliation(s)
- María Herrojo Ruiz, Department of Neurology, Charité-University Medicine Berlin, Berlin, Germany
- Sang Bin Hong, Department of Neurology, Charité-University Medicine Berlin, Berlin, Germany
- Holger Hennig, Department of Physics, Harvard University, Cambridge, MA, USA; Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Eckart Altenmüller, Institute of Music Physiology and Musicians' Medicine, Hanover University of Music, Drama and Media, Hanover, Germany
- Andrea A Kühn, Department of Neurology, Charité-University Medicine Berlin, Berlin, Germany; Cluster of Excellence NeuroCure, Charité-University Medicine Berlin, Berlin, Germany
70
Jola C, Pollick FE, Calvo-Merino B. "Some like it hot": spectators who score high on the personality trait openness enjoy the excitement of hearing dancers breathing without music. Front Hum Neurosci 2014; 8:718. [PMID: 25309393 PMCID: PMC4161163 DOI: 10.3389/fnhum.2014.00718]
Abstract
Music is an integral part of dance. Over the last 10 years, however, dance stimuli (without music) have been repeatedly used to study action observation processes, increasing our understanding of the influence of observers' physical abilities on action perception. Moreover, beyond trained skills and empathy traits, very little is known about how other properties of the observer or spectator modulate action observation and action preference. Since strong correlations have been shown between music and personality traits, here we aim to investigate how personality traits shape the appreciation of dance when it is presented with three different music/sound scores. Therefore, we investigated the relationship between personality traits and the subjective esthetic experience of 52 spectators watching a 24-min contemporary dance performance projected on a big screen containing three movement phrases performed to three different sound scores: classical music (i.e., Bach), an electronic sound-score, and a section without music but where the breathing of the performers was audible. We found that first, spectators rated the experience of watching dance without music significantly differently from watching it with music. Second, we found that the higher spectators scored on the Big Five personality factor openness, the more they liked the no-music section. Third, spectators' physical experience with dance was not linked to their appreciation but was significantly related to high average extravert scores. For the first time, we showed that spectators' reported entrainment to watching dance movements without music is strongly related to their personality and thus may need to be considered when using dance as a means to investigate action observation processes and esthetic preferences.
Affiliation(s)
- Corinne Jola, Division of Psychology, University of Abertay Dundee, Dundee, UK; School of Psychology, University of Glasgow, Glasgow, UK
- Beatriz Calvo-Merino, Department of Psychology, City University London, London, UK; Department of Psychology, Universidad Complutense de Madrid, Madrid, Spain
71
Crowe DA, Zarco W, Bartolo R, Merchant H. Dynamic representation of the temporal and sequential structure of rhythmic movements in the primate medial premotor cortex. J Neurosci 2014; 34:11972-83. [PMID: 25186744 PMCID: PMC6608467 DOI: 10.1523/jneurosci.2177-14.2014]
Abstract
We determined the encoding properties of single cells and the decoding accuracy of cell populations in the medial premotor cortex (MPC) of Rhesus monkeys to represent in a time-varying fashion the duration and serial order of six intervals produced rhythmically during a synchronization-continuation tapping task. We found that MPC represented the temporal and sequential structure of rhythmic movements by activating small ensembles of neurons that encoded the duration or the serial order in rapid succession, so that the pattern of active neurons changed dramatically within each interval. Interestingly, the width of the encoding or decoding function for serial order increased as a function of duration. Finally, we found that the strength of correlation in spontaneous activity of the individual cells varied as a function of the timing of their recruitment. These results demonstrate the existence of dynamic representations in MPC for the duration and serial order of intervals produced rhythmically and suggest that this dynamic code depends on ensembles of interconnected neurons that provide a strong synaptic drive to the next ensemble in a consecutive chain of neural events.
Affiliation(s)
- David A Crowe, Department of Biology, Augsburg College, Minneapolis, Minnesota 55454; Brain Sciences Center, Department of Veterans Affairs Medical Center, Minneapolis, Minnesota 55417
- Wilbert Zarco, Instituto de Neurobiología, UNAM, Campus Juriquilla, 76230 México
- Ramon Bartolo, Instituto de Neurobiología, UNAM, Campus Juriquilla, 76230 México
- Hugo Merchant, Instituto de Neurobiología, UNAM, Campus Juriquilla, 76230 México
72
Valtonen J, Gregory E, Landau B, McCloskey M. New learning of music after bilateral medial temporal lobe damage: evidence from an amnesic patient. Front Hum Neurosci 2014; 8:694. [PMID: 25232312 PMCID: PMC4153029 DOI: 10.3389/fnhum.2014.00694]
Abstract
Damage to the hippocampus impairs the ability to acquire new declarative memories, but not the ability to learn simple motor tasks. An unresolved question is whether hippocampal damage affects learning for music performance, which requires motor processes, but in a cognitively complex context. We studied learning of novel musical pieces by sight-reading in a newly identified amnesic, LSJ, who was a skilled amateur violist prior to contracting herpes simplex encephalitis. LSJ has suffered virtually complete destruction of the hippocampus bilaterally, as well as extensive damage to other medial temporal lobe structures and the left anterior temporal lobe. Because of LSJ's rare combination of musical training and near-complete hippocampal destruction, her case provides a unique opportunity to investigate the role of the hippocampus for complex motor learning processes specifically related to music performance. Three novel pieces of viola music were composed and closely matched for factors contributing to a piece's musical complexity. LSJ practiced playing two of the pieces, one in each of the two sessions during the same day. Relative to a third unpracticed control piece, LSJ showed significant pre- to post-training improvement for the two practiced pieces. Learning effects were observed both with detailed analyses of correctly played notes, and with subjective whole-piece performance evaluations by string instrument players. The learning effects were evident immediately after practice and 14 days later. The observed learning stands in sharp contrast to LSJ's complete lack of awareness that the same pieces were being presented repeatedly, and to the profound impairments she exhibits in other learning tasks. Although learning in simple motor tasks has been previously observed in amnesic patients, our results demonstrate that non-hippocampal structures can support complex learning of novel musical sequences for music performance.
Affiliation(s)
- Jussi Valtonen, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Emma Gregory, Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Barbara Landau, Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Michael McCloskey, Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
73
Rauschecker JP. Is there a tape recorder in your head? How the brain stores and retrieves musical melodies. Front Syst Neurosci 2014; 8:149. [PMID: 25221479 PMCID: PMC4147715 DOI: 10.3389/fnsys.2014.00149]
Abstract
Music consists of strings of sound that vary over time. Technical devices, such as tape recorders, store musical melodies by transcribing event times of temporal sequences into consecutive locations on the storage medium. Playback occurs by reading out the stored information in the same sequence. However, it is unclear how the brain stores and retrieves auditory sequences. Neurons in the anterior lateral belt of auditory cortex are sensitive to the combination of sound features in time, but the integration time of these neurons is not sufficient to store longer sequences that stretch over several seconds, minutes or more. Functional imaging studies in humans provide evidence that music is stored instead within the auditory dorsal stream, including premotor and prefrontal areas. In monkeys, these areas are the substrate for learning of motor sequences. It appears, therefore, that the auditory dorsal stream transforms musical into motor sequence information and vice versa, realizing what are known as forward and inverse models. The basal ganglia and the cerebellum are involved in setting up the sensorimotor associations, translating timing information into spatial codes and back again.
Affiliation(s)
- Josef P Rauschecker, Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA; Institute for Advanced Studies, Technical University Munich, Garching, Germany
74
Proverbio AM, Calbi M, Manfredi M, Zani A. Audio-visuomotor processing in the musician's brain: an ERP study on professional violinists and clarinetists. Sci Rep 2014; 4:5866. [PMID: 25070060 PMCID: PMC5376193 DOI: 10.1038/srep05866]
Abstract
The temporal dynamics of brain activation during visual and auditory perception of congruent vs. incongruent musical video clips was investigated in 12 musicians from the Milan Conservatory of music and 12 controls. 368 videos of a clarinetist and a violinist playing the same score with their instruments were presented. The sounds were similar in pitch, intensity, rhythm and duration. To produce an audiovisual discrepancy, in half of the trials, the visual information was incongruent with the soundtrack in pitch. ERPs were recorded from 128 sites. Only in musicians for their own instruments was a N400-like negative deflection elicited due to the incongruent audiovisual information. SwLORETA applied to the N400 response identified the areas mediating multimodal motor processing: the prefrontal cortex, the right superior and middle temporal gyrus, the premotor cortex, the inferior frontal and inferior parietal areas, the EBA, somatosensory cortex, cerebellum and SMA. The data indicate the existence of audiomotor mirror neurons responding to incongruent visual and auditory information, thus suggesting that they may encode multimodal representations of musical gestures and sounds. These systems may underlie the ability to learn how to play a musical instrument.
Affiliation(s)
- Alice Mado Proverbio, Milan Center for Neuroscience, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy
- Marta Calbi, Milan Center for Neuroscience, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy; Department of Neuroscience, University of Parma, Italy
- Mirella Manfredi, Milan Center for Neuroscience, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, 20126 Milan, Italy; University of California San Diego, La Jolla, California
75
Gebauer L, Skewes J, Westphael G, Heaton P, Vuust P. Intact brain processing of musical emotions in autism spectrum disorder, but more cognitive load and arousal in happy vs. sad music. Front Neurosci 2014; 8:192. [PMID: 25076869 PMCID: PMC4098021 DOI: 10.3389/fnins.2014.00192]
Abstract
Music is a potent source for eliciting emotions, but not everybody experiences emotions in the same way. Individuals with autism spectrum disorder (ASD) show difficulties with social and emotional cognition. Impairments in emotion recognition are widely studied in ASD, and have been associated with atypical brain activation in response to emotional expressions in faces and speech. Whether these impairments and atypical brain responses generalize to other domains, such as emotional processing of music, is less clear. Using functional magnetic resonance imaging, we investigated neural correlates of emotion recognition in music in high-functioning adults with ASD and neurotypical adults. Both groups engaged similar neural networks during processing of emotional music, and individuals with ASD rated emotional music comparably to the group of neurotypical individuals. However, in the ASD group, increased activity in response to happy compared to sad music was observed in dorsolateral prefrontal regions and in the rolandic operculum/insula, and we propose that this reflects increased cognitive processing and physiological arousal in response to emotional musical stimuli in this group.
Affiliation(s)
- Line Gebauer, Music in the Brain, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Joshua Skewes, Music in the Brain, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Gitte Westphael, Music in the Brain, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Pamela Heaton, Department of Psychology, Goldsmiths, University of London, London, UK
- Peter Vuust, Music in the Brain, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; The Royal Academy of Music, Aarhus, Denmark
76
Donnet S, Bartolo R, Fernandes JM, Cunha JPS, Prado L, Merchant H. Monkeys time their pauses of movement and not their movement-kinematics during a synchronization-continuation rhythmic task. J Neurophysiol 2014; 111:2138-49. [DOI: 10.1152/jn.00802.2013]
Abstract
A critical question in tapping behavior is whether temporal control is exerted on the duration and trajectory of the downward-upward hand movement or on the pause between hand movements. In the present study, we determined the duration of both the movement execution and pauses of monkeys performing a synchronization-continuation task (SCT), using the speed profile of their tapping behavior. We found a linear increase in the variance of pause duration as a function of interval, while the variance of the motor implementation was relatively constant across intervals. In fact, 96% of the variability of the duration of a complete tapping cycle (pause + movement) was due to the variability of the pause duration. In addition, we performed a Bayesian model selection to determine the effect of interval duration (450–1,000 ms), serial order (1–6 produced intervals), task phase (sensory cued or internally driven), and marker modality (auditory or visual) on the duration of the movement-pause and tapping movement. The results showed that the most important parameter used to successfully perform the SCT was the control of the pause duration. We also found that the kinematics of the tapping movements was concordant with a stereotyped ballistic control of the hand pressing the push-button. The present findings support the idea that monkeys used an explicit timing strategy to perform the SCT, in which a dedicated timing mechanism controlled the duration of the pauses of movement while also triggering the execution of fixed movements across each interval of the rhythmic sequence.
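To make the variance decomposition concrete, the sketch below (illustrative only; the durations are invented, not the study's data) computes the share of tapping-cycle variability carried by the pause component, using the identity var(cycle) = var(pause) + var(movement) + 2*cov(pause, movement).

    import numpy as np

    # Hypothetical per-tap durations (ms) for one target interval; not the study's data.
    rng = np.random.default_rng(0)
    movement = rng.normal(250, 10, size=200)   # roughly constant motor implementation
    pause = rng.normal(550, 60, size=200)      # pause absorbs the timing variability
    cycle = pause + movement                   # complete tapping cycle = pause + movement

    var_pause, var_move, var_cycle = pause.var(), movement.var(), cycle.var()
    cov_term = 2 * np.cov(pause, movement)[0, 1]

    share = var_pause / var_cycle              # share of cycle variance carried by the pause
    print(f"var(cycle) = {var_cycle:.1f} = {var_pause:.1f} + {var_move:.1f} + {cov_term:.1f}")
    print(f"pause accounts for {100 * share:.1f}% of cycle variance")

With uncorrelated components, the pause share reduces to var(pause)/(var(pause) + var(movement)), which is the pattern behind the roughly 96% figure reported above.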
Affiliation(s)
- Sophie Donnet
- Unité Mathématiques et Informatique Appliquées, Institut National de la Recherche Agronomique, Paris, France
- Ramon Bartolo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
- José Maria Fernandes
- Instituto de Engenharia Electrónica e Telemática de Aveiro/Departamento de Electrónica, Telecomunicações e Informática, Universidade de Aveiro, Aveiro, Portugal
- João Paulo Silva Cunha
- Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto/Instituto de Engenharia de Sistemas e Computadores Tecnologia e Ciência, Porto, Portugal
- Luis Prado
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
- Hugo Merchant
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
77
Abstract
Sixty years ago, Karl Lashley suggested that complex action sequences, from simple motor acts to language and music, are a fundamental but neglected aspect of neural function. Lashley demonstrated the inadequacy of then-standard models of associative chaining, positing a more flexible and generalized "syntax of action" necessary to encompass key aspects of language and music. He suggested that hierarchy in language and music builds upon a more basic sequential action system, and provided several concrete hypotheses about the nature of this system. Here, we review a diverse set of modern data concerning musical, linguistic, and other action processing, finding them largely consistent with an updated neuroanatomical version of Lashley's hypotheses. In particular, the lateral premotor cortex, including Broca's area, plays important roles in hierarchical processing in language, music, and at least some action sequences. Although the precise computational function of the lateral prefrontal regions in action syntax remains debated, Lashley's notion-that this cortical region implements a working-memory buffer or stack scannable by posterior and subcortical brain regions-is consistent with considerable experimental data.
Affiliation(s)
- W Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
78
Fedorenko E. The role of domain-general cognitive control in language comprehension. Front Psychol 2014; 5:335. [PMID: 24803909 PMCID: PMC4009428 DOI: 10.3389/fpsyg.2014.00335]
Abstract
What role does domain-general cognitive control play in understanding linguistic input? Although much evidence has suggested that domain-general cognitive control and working memory resources are sometimes recruited during language comprehension, many aspects of this relationship remain elusive. For example, how frequently do cognitive control mechanisms get engaged when we understand language? And is this engagement necessary for successful comprehension? I here (a) review recent brain imaging evidence for the neural separability of the brain regions that support high-level linguistic processing vs. those that support domain-general cognitive control abilities; (b) define the space of possibilities for the relationship between these sets of brain regions; and (c) review the available evidence that constrains these possibilities to some extent. I argue that we should stop asking whether domain-general cognitive control mechanisms play a role in language comprehension, and instead focus on characterizing the division of labor between the cognitive control brain regions and the more functionally specialized language regions.
Affiliation(s)
- Evelina Fedorenko
- Psychiatry Department, Massachusetts General Hospital, Charlestown, MA, USA
79
Cerebral activations related to audition-driven performance imagery in professional musicians. PLoS One 2014; 9:e93681. [PMID: 24714661 PMCID: PMC3979724 DOI: 10.1371/journal.pone.0093681]
Abstract
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). The activation paradigm implied that subjects listened to two-part polyphonic music, while either critically appraising the performance or imagining they were performing themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted to 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were quite unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were less strong compared to musicians, this overlapping distribution indicated the recruitment of a general 'mirror-neuron' circuitry. These two levels of sensori-motor transformations point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.
80
Hegde S. Music-based cognitive remediation therapy for patients with traumatic brain injury. Front Neurol 2014; 5:34. [PMID: 24715887 PMCID: PMC3970008 DOI: 10.3389/fneur.2014.00034]
Abstract
Traumatic brain injury (TBI) is one of the common causes of disability in the physical, psychological, and social domains of functioning, leading to poor quality of life. TBI leads to impairment in sensory, motor, language, and emotional processing, and also in cognitive functions such as attention, information processing, executive functions, and memory. Cognitive impairment plays a central role in functional recovery in TBI. Innovative methods, such as music therapy, to alleviate cognitive impairments have been investigated recently. The role of music in cognitive rehabilitation is evolving, based on newer findings emerging from the fields of neuromusicology and music cognition. Research findings from these fields have contributed significantly to our understanding of music perception and cognition and their neural underpinnings. From a neuroscientific perspective, engaging with music is considered one of the best cognitive exercises. Given the brain's inherent “plasticity,” producing music engages an array of cognitive functions, and the product, the music, in turn permits restoration and alters brain function. With scientific findings as its basis, “neurologic music therapy” (NMT) has been developed as a systematic treatment method to improve sensorimotor, language, and cognitive domains of functioning via music. A preliminary study examining the effect of NMT in cognitive rehabilitation has reported promising results in improving executive functions, along with improvement in emotional adjustment and decreased depression and anxiety following TBI. The potential of music-based cognitive rehabilitation therapy in various clinical conditions, including TBI, is yet to be fully explored. Systematic research is needed to bridge the gap between the growing theoretical understanding of music in cognitive rehabilitation and its application in a heterogeneous condition such as TBI.
Affiliation(s)
- Shantala Hegde
- Cognitive Psychology and Cognitive Neurosciences Laboratory, Department of Clinical Psychology, Neurobiology Research Center, National Institute of Mental Health and Neuro Sciences (NIMHANS), Bangalore, India
81
Hayashi MJ, Kantele M, Walsh V, Carlson S, Kanai R. Dissociable neuroanatomical correlates of subsecond and suprasecond time perception. J Cogn Neurosci 2014; 26:1685-93. [PMID: 24456398 DOI: 10.1162/jocn_a_00580]
Abstract
The ability to estimate durations varies across individuals. Although previous studies have reported that individual differences in perceptual skills and cognitive capacities are reflected in brain structures, it remains unknown whether timing abilities are also reflected in the brain anatomy. Here, we show that individual differences in the ability to estimate subsecond and suprasecond durations correlate with gray matter (GM) volume in different parts of cortical and subcortical areas. Better ability to discriminate subsecond durations was associated with a larger GM volume in the bilateral anterior cerebellum, whereas better performance in estimating the suprasecond range was associated with a smaller GM volume in the inferior parietal lobule. These results indicate that regional GM volume is predictive of an individual's timing abilities. These morphological results support the notion that subsecond durations are processed in the motor system, whereas suprasecond durations are processed in the parietal cortex by utilizing the capacity of attention and working memory to keep track of time.
82
Merchant H, de Lafuente V. Introduction to the neurobiology of interval timing. Adv Exp Med Biol 2014; 829:1-13. [PMID: 25358702 DOI: 10.1007/978-1-4939-1782-2_1]
Abstract
Time is a fundamental variable that organisms must quantify in order to survive. In humans, for example, the gradual development of the sense of duration and rhythm is an essential skill in many facets of social behavior such as speaking, dancing to-, listening to- or playing music, performing a wide variety of sports, and driving a car (Merchant H, Harrington DL, Meck WH. Annu Rev Neurosci. 36:313-36, 2013). During the last 10 years there has been a rapid growth of research on the neural underpinnings of timing in the subsecond and suprasecond scales, using a variety of methodological approaches in the human being, as well as in varied animal and theoretical models. In this introductory chapter we attempt to give a conceptual framework that defines time processing as a family of different phenomena. The brain circuits and neural underpinnings of temporal quantification seem to largely depend on its time scale and the sensorimotor nature of specific behaviors. Therefore, we describe the main time scales and their associated behaviors and show how the perception and execution of timing events in the subsecond and second scales may depend on similar or different neural mechanisms.
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, 76230, Mexico
83
Merchant H, Bartolo R, Pérez O, Méndez JC, Mendoza G, Gámez J, Yc K, Prado L. Neurophysiology of timing in the hundreds of milliseconds: multiple layers of neuronal clocks in the medial premotor areas. Adv Exp Med Biol 2014; 829:143-54. [PMID: 25358709 DOI: 10.1007/978-1-4939-1782-2_8]
Abstract
The precise quantification of time in the subsecond scale is critical for many complex behaviors including music and dance appreciation/execution, speech comprehension/articulation, and the performance of many sports. Nevertheless, its neural underpinnings are largely unknown. Recent neurophysiological experiments from our laboratory have shown that the cell activity in the medial premotor areas (MPC) of macaques can represent different aspects of temporal processing during a synchronization-continuation tapping task (SCT). In this task the rhythmic behavior of monkeys was synchronized to a metronome of isochronous stimuli in the hundreds of milliseconds range (synchronization phase), followed by a period where animals internally temporalized their movements (continuation phase). Overall, we found that the time-keeping mechanism in MPC is governed by different layers of neural clocks. Close to the temporal control of movements are two separate populations of ramping cells that code for elapsed or remaining time for a tapping movement during the SCT. Thus, the sensorimotor loops engaged during the task may depend on the cyclic interplay between two neuronal chronometers that quantify in their instantaneous discharge rate the time passed and the remaining time for an action. In addition, we found MPC neurons that are tuned to the duration of produced intervals during the rhythmic task, showing an orderly variation in the average discharge rate as a function of duration. All the tested durations in the subsecond scale were represented in the preferred intervals of the cell population. Most of the interval-tuned cells were also tuned to the ordinal structure of the six intervals produced sequentially in the SCT. Hence, this next level of temporal processing may work as the notes of a musical score, providing information to the timing network about what duration and ordinal element of the sequence are being executed. Finally, we describe how the timing circuit can use a dynamic neural representation of the passage of time and the context in which the intervals are executed by integrating the time-varying activity of populations of cells. These neural population clocks can be defined as distinct trajectories in the multidimensional cell response-space. We provide a hypothesis of how these different levels of neural clocks can interact to constitute a coherent timing machine that controls the rhythmic behavior during the SCT.
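As an illustration of what interval tuning means operationally, the following sketch fits a bell-shaped tuning function to a cell's mean firing rate across produced intervals to recover a preferred interval. The firing rates and the Gaussian tuning model are assumptions for illustration, not the authors' data or analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_tuning(interval, rate_base, gain, preferred, width):
        """Bell-shaped dependence of firing rate on produced interval duration."""
        return rate_base + gain * np.exp(-0.5 * ((interval - preferred) / width) ** 2)

    # Hypothetical mean firing rates (spikes/s) for intervals in the SCT range (ms).
    intervals = np.array([450, 550, 650, 750, 850, 1000])
    rates = np.array([8.1, 11.5, 14.9, 13.2, 10.4, 8.7])

    params, _ = curve_fit(gaussian_tuning, intervals, rates, p0=[8.0, 7.0, 650.0, 150.0])
    print(f"estimated preferred interval: {params[2]:.0f} ms")

A population of such cells, each with a different preferred interval, is what allows all tested durations to be represented, as described above.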
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, 76230, Mexico
84
Ferreri L, Aucouturier JJ, Muthalib M, Bigand E, Bugaiska A. Music improves verbal memory encoding while decreasing prefrontal cortex activity: an fNIRS study. Front Hum Neurosci 2013; 7:779. [PMID: 24339807 PMCID: PMC3857524 DOI: 10.3389/fnhum.2013.00779]
Abstract
Listening to music engages the whole brain, thus stimulating cognitive performance in a range of non-purely musical activities such as language and memory tasks. This article addresses an ongoing debate on the link between music and memory for words. While evidence on healthy and clinical populations suggests that music listening can improve verbal memory in a variety of situations, it is still unclear what specific memory process is affected and how. This study was designed to explore the hypothesis that music specifically benefits the encoding part of verbal memory tasks, by providing a richer context for encoding and therefore less demand on the dorsolateral prefrontal cortex (DLPFC). Twenty-two healthy young adults were subjected to functional near-infrared spectroscopy (fNIRS) imaging of their bilateral DLPFC while encoding words in the presence of either a music or a silent background. Behavioral data confirmed the facilitating effect of music background during encoding on subsequent item recognition. fNIRS results revealed significantly greater activation of the left hemisphere during encoding (in line with the HERA model of memory lateralization) and a sustained, bilateral decrease of activity in the DLPFC in the music condition compared to silence. These findings suggest that music modulates the role played by the DLPFC during verbal encoding, and open perspectives for applications to clinical populations with prefrontal impairments, such as elderly adults or Alzheimer's patients.
Affiliation(s)
- Laura Ferreri
- Laboratory for the Study of Learning and Development, CNRS UMR 5022, Department of Psychology, University of Burgundy, Dijon, France
- Makii Muthalib
- Movement to Health, EUROMOV, Montpellier-1 University, Montpellier, France
- Emmanuel Bigand
- Laboratory for the Study of Learning and Development, CNRS UMR 5022, Department of Psychology, University of Burgundy, Dijon, France
- Aurelia Bugaiska
- Laboratory for the Study of Learning and Development, CNRS UMR 5022, Department of Psychology, University of Burgundy, Dijon, France
85
Slater J, Tierney A, Kraus N. At-risk elementary school children with one year of classroom music instruction are better at keeping a beat. PLoS One 2013; 8:e77250. [PMID: 24130865 PMCID: PMC3795075 DOI: 10.1371/journal.pone.0077250]
Abstract
Temporal processing underlies both music and language skills. There is increasing evidence that rhythm abilities track with reading performance and that language disorders such as dyslexia are associated with poor rhythm abilities. However, little is known about how basic time-keeping skills can be shaped by musical training, particularly during critical literacy development years. This study was carried out in collaboration with Harmony Project, a non-profit organization providing free music education to children in the gang reduction zones of Los Angeles. Our findings reveal that elementary school children with just one year of classroom music instruction perform more accurately in a basic finger-tapping task than their untrained peers, providing important evidence that fundamental time-keeping skills may be strengthened by short-term music training. This sets the stage for further examination of how music programs may be used to support the development of basic skills underlying learning and literacy, particularly in at-risk populations which may benefit the most.
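A common way to quantify performance in a basic finger-tapping task is via tap-to-metronome asynchronies. The sketch below uses invented onset times and summary measures; it is not the study's scoring procedure.

    import numpy as np

    # Hypothetical metronome and tap onset times in seconds (not the study's data).
    beat_times = np.arange(0, 20, 0.5)                       # 120 bpm metronome
    rng = np.random.default_rng(1)
    tap_times = beat_times + rng.normal(0.0, 0.03, beat_times.size)

    asynchrony = tap_times - beat_times                      # signed tap-to-beat error
    iti = np.diff(tap_times)                                 # inter-tap intervals

    print(f"mean absolute asynchrony: {np.mean(np.abs(asynchrony)) * 1000:.1f} ms")
    print(f"inter-tap-interval SD:    {np.std(iti) * 1000:.1f} ms")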
Affiliation(s)
- Jessica Slater
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Adam Tierney
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois, United States of America
- Department of Communication Sciences, Northwestern University, Evanston, Illinois, United States of America
- Institute for Neuroscience, Northwestern University, Chicago, Illinois, United States of America
- Department of Neurobiology and Physiology, Northwestern University, Evanston, Illinois, United States of America
- Department of Otolaryngology, Northwestern University, Chicago, Illinois, United States of America
86
Larsson M. Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities. Anim Cogn 2013; 17:1-14. [PMID: 23990063 PMCID: PMC3889703 DOI: 10.1007/s10071-013-0678-z]
Abstract
It has been suggested that the basic building blocks of music mimic sounds of moving humans, and because the brain was primed to exploit such sounds, they eventually became incorporated in human culture. However, that raises further questions. Why do genetically close, culturally well-developed apes lack musical abilities? Did our switch to bipedalism influence the origins of music? Four hypotheses are raised: (1) Human locomotion and ventilation can mask critical sounds in the environment. (2) Synchronization of locomotion reduces that problem. (3) Predictable sounds of locomotion may stimulate the evolution of synchronized behavior. (4) Bipedal gait and the associated sounds of locomotion influenced the evolution of human rhythmic abilities. Theoretical models and research data suggest that noise of locomotion and ventilation may mask critical auditory information. People often synchronize steps subconsciously. Human locomotion is likely to produce more predictable sounds than those of non-human primates. Predictable locomotion sounds may have improved our capacity of entrainment to external rhythms and to feel the beat in music. A sense of rhythm could aid the brain in distinguishing among sounds arising from discrete sources and also help individuals to synchronize their movements with one another. Synchronization of group movement may improve perception by providing periods of relative silence and by facilitating auditory processing. The adaptive value of such skills to early ancestors may have been keener detection of prey or stalkers and enhanced communication. Bipedal walking may have influenced the development of entrainment in humans and thereby the evolution of rhythmic abilities.
Affiliation(s)
- Matz Larsson
- The Cardiology Clinic, Örebro University Hospital, 701 85 Örebro, Sweden
87
Abstract
The precise quantification of time during motor performance is critical for many complex behaviors, including musical execution, speech articulation, and sports; however, its neural mechanisms are primarily unknown. We found that neurons in the medial premotor cortex (MPC) of behaving monkeys are tuned to the duration of produced intervals during rhythmic tapping tasks. Interval-tuned neurons showed similar preferred intervals across tapping behaviors that varied in the number of produced intervals and the modality used to drive temporal processing. In addition, we found that the same population of neurons is able to multiplex the ordinal structure of a sequence of rhythmic movements and a wide range of durations in the range of hundreds of milliseconds. Our results also revealed a possible gain mechanism for encoding the total number of intervals in a sequence of temporalized movements, where interval-tuned cells show a multiplicative effect of their activity for longer sequences of intervals. These data suggest that MPC is part of a core timing network that uses interval tuning as a signal to represent temporal processing in a variety of behavioral contexts where time is explicitly quantified.
88
Tian Y, Ma W, Tian C, Xu P, Yao D. Brain oscillations and electroencephalography scalp networks during tempo perception. Neurosci Bull 2013; 29:731-6. [PMID: 23852557 DOI: 10.1007/s12264-013-1352-9]
Abstract
In the current study we used electroencephalography (EEG) to investigate the relation between musical tempo perception and the oscillatory activity in specific brain regions, and the scalp EEG networks in the theta, alpha, and beta bands. The results showed that the theta power at the frontal midline decreased with increased arousal level related to tempo. The alpha power induced by original music at the bilateral occipital-parietal regions was stronger than that by tempo-transformed music. The beta power did not change with tempo. At the network level, the original music-related alpha network had high global efficiency and the optimal path length. This study was the first to use EEG to investigate multi-oscillatory activities and the data support the tempo-specific timing hypothesis.
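For readers unfamiliar with the two kinds of measures combined here, the sketch below computes band-limited power per channel and then a summary metric of a thresholded scalp network. The synthetic signals, channel count, band edges, and correlation-threshold network are assumptions for illustration, not the authors' pipeline.

    import numpy as np
    from scipy.signal import welch
    import networkx as nx

    fs, n_ch, n_samp = 250, 8, 250 * 60          # 8 channels, 60 s of synthetic "EEG"
    rng = np.random.default_rng(2)
    eeg = rng.standard_normal((n_ch, n_samp))

    # Alpha-band (8-13 Hz) power per channel from the Welch spectrum.
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    alpha = (freqs >= 8) & (freqs <= 13)
    alpha_power = psd[:, alpha].mean(axis=1)

    # Scalp "network": threshold the absolute channel-by-channel correlation matrix.
    corr = np.abs(np.corrcoef(eeg))
    np.fill_diagonal(corr, 0.0)
    adj = (corr > 0.1).astype(int)

    G = nx.from_numpy_array(adj)
    print("alpha power per channel:", np.round(alpha_power, 3))
    print("global efficiency of the scalp network:", round(nx.global_efficiency(G), 3))

Higher global efficiency and shorter path length indicate more integrated communication across the scalp network, which is the sense in which the original-music alpha network is described as efficient above.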
Affiliation(s)
- Yin Tian
- Bio-information College, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
89
Van Niekerk C, Page-Shipp R. Improving the quality of meetings using music. Total Quality Management & Business Excellence 2013. [DOI: 10.1080/14783363.2013.814292]
90
Phillips-Silver J, Toiviainen P, Gosselin N, Peretz I. Amusic does not mean unmusical: Beat perception and synchronization ability despite pitch deafness. Cogn Neuropsychol 2013; 30:311-31. [DOI: 10.1080/02643294.2013.863183]
91
Human melody singing by bullfinches (Pyrrhula pyrrula) gives hints about a cognitive note sequence processing. Anim Cogn 2013; 17:143-55. [PMID: 23783267 DOI: 10.1007/s10071-013-0647-6]
Abstract
We studied human melody perception and production in a songbird in the light of current concepts from the cognitive neuroscience of music. Bullfinches are the species best known for learning melodies from human teachers. The study is based on the historical data of 15 bullfinches, raised by 3 different human tutors and studied later by Jürgen Nicolai (JN) in the period 1967-1975. These hand-raised bullfinches learned human folk melodies (sequences of 20-50 notes) accurately. The tutoring was interactive and variable, starting before fledging, and JN continued it throughout the birds' lives. All 15 bullfinches learned to sing melody modules alternately with JN (alternate singing). We focus on note sequencing and timing, studying song variability when the birds sang the learned melody alone and the accuracy of listening-singing interactions during alternate singing with JN, by analyzing song recordings of 5 different males. The following results were obtained: (1) Sequencing: the note sequence variability when singing alone suggests that the bullfinches retrieve the note sequence from memory as different sets of note groups (=modules), as chunks (sensu Miller in Psychol Rev 63:81-87, 1956). (2) Auditory-motor interactions, the coupling of listening and singing the human melody: alternate singing provides insight into how the bird's brain processes the melody, from listening to the part of the human melody just whistled by JN to the bird's own accurate singing of the consecutive parts. We document how variably and how correctly bullfinches and JN alternated in singing the note sequences. Alternate singing demonstrates that melody-singing bullfinches not only attentively followed, via auditory feedback, the note contribution just whistled by the human, but could also anticipate and synchronously begin singing the consecutive part of the learned melody. These data suggest that both listening and singing may depend on a single learned human melody representation (=coupling between perception and production).
92
Zuk J, Andrade PE, Andrade OVCA, Gardiner M, Gaab N. Musical, language, and reading abilities in early Portuguese readers. Front Psychol 2013; 4:288. [PMID: 23785339 PMCID: PMC3684766 DOI: 10.3389/fpsyg.2013.00288]
Abstract
Early language and reading abilities have been shown to correlate with a variety of musical skills and elements of music perception in children. It has also been shown that reading-impaired children can show difficulties with music perception. However, it is still unclear to what extent different aspects of music perception are associated with language and reading abilities. Here we investigated the relationship between cognitive-linguistic abilities and a music discrimination task that preserves an ecologically valid musical experience. Forty-three Portuguese-speaking students from an elementary school in Brazil participated in this study. Children completed a comprehensive cognitive-linguistic battery of assessments. The music task was presented live in the music classroom, and children were asked to code sequences of four sounds played on the guitar. Results show a strong relationship between performance on the music task and a number of linguistic variables. A principal component analysis of the cognitive-linguistic battery revealed that the strongest component (Prin1) accounted for 33% of the variance, and Prin1 was significantly related to the music task. The highest loadings on Prin1 were found for reading measures such as Reading Speed and Reading Accuracy. Interestingly, 22 children recorded responses for more than four sounds within a trial on the music task, which was classified as Superfluous Responses (SR). SR was negatively correlated with a variety of linguistic variables and showed a negative correlation with Prin1. When analyzing children with and without SR separately, only children with SR showed a significant correlation between Prin1 and the music task. Our results have implications for the use of an ecologically valid music-based screening tool for the early identification of reading disabilities in a classroom setting.
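A minimal sketch of the analysis pattern described, a principal component analysis of a test battery followed by correlating the first component with a music-task score, is given below. The data are random placeholders and the variable names are assumptions, not the study's measures.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(3)
    n_children, n_measures = 43, 10
    battery = rng.standard_normal((n_children, n_measures))   # placeholder cognitive-linguistic scores
    music_task = rng.standard_normal(n_children)               # placeholder music coding-task scores

    # Principal component analysis via SVD of the z-scored battery.
    z = (battery - battery.mean(axis=0)) / battery.std(axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    prin1 = z @ vt[0]                                           # scores on the first component

    r, p = pearsonr(prin1, music_task)
    print(f"PC1 explains {100 * explained[0]:.0f}% of variance; r(PC1, music) = {r:.2f}, p = {p:.3f}")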
Affiliation(s)
- Jennifer Zuk
- Laboratories of Cognitive Neuroscience, Developmental Medicine Center, Boston Children's Hospital, Boston, MA, USA
93
Herrojo Ruiz M, Brücke C, Nikulin VV, Schneider GH, Kühn AA. Beta-band amplitude oscillations in the human internal globus pallidus support the encoding of sequence boundaries during initial sensorimotor sequence learning. Neuroimage 2013; 85 Pt 2:779-93. [PMID: 23711534 DOI: 10.1016/j.neuroimage.2013.05.085]
Abstract
Sequential behavior characterizes both simple everyday tasks, such as getting dressed, and complex skills, such as music performance. The basal ganglia (BG) play an important role in the learning of motor sequences. To study the contribution of the human BG to the initial encoding of sequence boundaries, we recorded local field potentials in the sensorimotor area of the internal globus pallidus (GPi) during the early acquisition of sensorimotor sequences in patients undergoing deep brain stimulation for dystonia. We demonstrated an anticipatory modulation of pallidal beta-band neuronal oscillations that was specific to sequence boundaries, as compared to within-sequence elements, and independent of both the movement parameters and the initiation/termination of ongoing movement. The modulation at sequence boundaries emerged with training, in parallel with skill learning, and correlated with the degree of long-range temporal correlations (LRTC) in the dynamics of ongoing beta-band amplitude oscillations. The implication is that LRTC of beta-band oscillations in the sensorimotor GPi might facilitate the emergence of beta power modulations by the sequence boundaries in parallel with sequence learning. Taken together, the results reveal the oscillatory mechanisms in the human BG that contribute at an initial learning phase to the hierarchical organization of sequential behavior as reflected in the formation of boundary-delimited representations of action sequences.
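Long-range temporal correlations (LRTC) in amplitude envelopes are commonly estimated with detrended fluctuation analysis (DFA). The sketch below is a generic DFA on a synthetic envelope, an assumption about the method class rather than the authors' exact implementation; an exponent near 0.5 indicates no temporal memory, while larger values indicate LRTC.

    import numpy as np

    def dfa_exponent(signal, window_sizes):
        """Detrended fluctuation analysis: slope of log fluctuation vs. log window size."""
        profile = np.cumsum(signal - np.mean(signal))
        fluctuations = []
        for win in window_sizes:
            n_windows = len(profile) // win
            rms = []
            for i in range(n_windows):
                segment = profile[i * win:(i + 1) * win]
                t = np.arange(win)
                trend = np.polyval(np.polyfit(t, segment, 1), t)   # linear detrend per window
                rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
            fluctuations.append(np.mean(rms))
        slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
        return slope

    rng = np.random.default_rng(4)
    envelope = np.abs(rng.standard_normal(10000))   # stand-in for a beta-band amplitude envelope
    windows = np.array([50, 100, 200, 400, 800, 1600])
    print(f"DFA exponent: {dfa_exponent(envelope, windows):.2f}  (white noise should be near 0.5)")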
Affiliation(s)
- María Herrojo Ruiz
- Department of Neurology, Campus Virchow, Charité-University Medicine Berlin, Berlin 13353, Germany
94
Margulis EH. Repetition and emotive communication in music versus speech. Front Psychol 2013; 4:167. [PMID: 23576998 PMCID: PMC3616255 DOI: 10.3389/fpsyg.2013.00167]
Abstract
Music and speech are often placed alongside one another as comparative cases. Their relative overlaps and disassociations have been well explored (e.g., Patel, 2008). But one key attribute distinguishing these two domains has often been overlooked: the greater preponderance of repetition in music in comparison to speech. Recent fMRI studies have shown that familiarity – achieved through repetition – is a critical component of emotional engagement with music (Pereira et al., 2011). If repetition is fundamental to emotional responses to music, and repetition is a key distinguisher between the domains of music and speech, then close examination of the phenomenon of repetition might help clarify the ways that music elicits emotion differently than speech.
95
Seger CA, Spiering BJ, Sares AG, Quraini SI, Alpeter C, David J, Thaut MH. Corticostriatal contributions to musical expectancy perception. J Cogn Neurosci 2013; 25:1062-77. [PMID: 23410032 DOI: 10.1162/jocn_a_00371]
Abstract
This study investigates the functional neuroanatomy of harmonic music perception with fMRI. We presented short pieces of Western classical music to nonmusicians. The ending of each piece was systematically manipulated in the following four ways: Standard Cadence (expected resolution), Deceptive Cadence (moderate deviation from expectation), Modulated Cadence (strong deviation from expectation but remaining within the harmonic structure of Western tonal music), and Atonal Cadence (strongest deviation from expectation by leaving the harmonic structure of Western tonal music). Music compared with baseline broadly recruited regions of the bilateral superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG). Parametric regressors scaled to the degree of deviation from harmonic expectancy identified regions sensitive to expectancy violation. Areas within the BG were significantly modulated by expectancy violation, indicating a previously unappreciated role in harmonic processing. Expectancy violation also recruited bilateral cortical regions in the IFG and anterior STG, previously associated with syntactic processing in other domains. The posterior STG was not significantly modulated by expectancy. Granger causality mapping found functional connectivity between IFG, anterior STG, posterior STG, and the BG during music perception. Our results imply the IFG, anterior STG, and the BG are recruited for higher-order harmonic processing, whereas the posterior STG is recruited for basic pitch and melodic processing.
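To illustrate what a parametric regressor scaled to the degree of expectancy violation looks like in a GLM, here is a rough sketch. The onsets, the simple gamma-difference HRF, and the ordinal 0-3 violation scale are all assumptions for illustration, not the study's actual design.

    import numpy as np
    from scipy.stats import gamma

    tr, n_scans = 2.0, 120
    frame_times = np.arange(n_scans) * tr

    def hrf(t):
        """Simple double-gamma hemodynamic response function."""
        return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

    # Hypothetical cadence onsets (s) and violation weights (0 = standard ... 3 = atonal).
    onsets = np.array([10, 40, 70, 100, 130, 160, 190, 220])
    violation = np.array([0, 2, 1, 3, 0, 3, 1, 2], dtype=float)

    dt = 0.1
    t_hi = np.arange(0, n_scans * tr, dt)
    stick_main = np.zeros_like(t_hi)
    stick_param = np.zeros_like(t_hi)
    idx = np.round(onsets / dt).astype(int)
    stick_main[idx] = 1.0
    stick_param[idx] = violation - violation.mean()        # mean-centred parametric modulator

    kernel = hrf(np.arange(0, 30, dt))
    main_reg = np.convolve(stick_main, kernel)[: t_hi.size]
    param_reg = np.convolve(stick_param, kernel)[: t_hi.size]

    # Downsample to scan times; these two columns would enter the GLM design matrix.
    design = np.column_stack([main_reg, param_reg])[np.round(frame_times / dt).astype(int)]
    print("design matrix shape:", design.shape)

Voxels whose time series load on the second column are the ones described above as "sensitive to expectancy violation."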
96
Music and movement share a dynamic structure that supports universal expressions of emotion. Proc Natl Acad Sci U S A 2012; 110:70-5. [PMID: 23248314 DOI: 10.1073/pnas.1209023110]
Abstract
Music moves us. Its kinetic power is the foundation of human behaviors as diverse as dance, romance, lullabies, and the military march. Despite its significance, the music-movement relationship is poorly understood. We present an empirical method for testing whether music and movement share a common structure that affords equivalent and universal emotional expressions. Our method uses a computer program that can generate matching examples of music and movement from a single set of features: rate, jitter (regularity of rate), direction, step size, and dissonance/visual spikiness. We applied our method in two experiments, one in the United States and another in an isolated tribal village in Cambodia. These experiments revealed three things: (i) each emotion was represented by a unique combination of features, (ii) each combination expressed the same emotion in both music and movement, and (iii) this common structure between music and movement was evident within and across cultures.
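The core idea, a single parameter vector driving both a melody generator and a movement generator, can be sketched roughly as follows. The feature names follow the abstract, but the mapping rules are invented for illustration and are not the authors' program.

    import numpy as np

    def generate_music(features, n_events=16, seed=0):
        """Map the shared features to a toy note sequence (MIDI pitch, inter-onset interval)."""
        rng = np.random.default_rng(seed)
        ioi = 60.0 / features["rate"]                                  # seconds between events
        iois = ioi * (1 + features["jitter"] * rng.uniform(-1, 1, n_events))
        steps = features["direction"] * features["step_size"] * np.ones(n_events)
        pitches = 60 + np.cumsum(steps + rng.integers(-1, 2, n_events))
        return [(int(p), round(float(i), 3)) for p, i in zip(pitches, iois)]

    def generate_movement(features, n_events=16, seed=0):
        """Map the same features to a toy vertical trajectory of an animated agent."""
        rng = np.random.default_rng(seed)
        dt = 60.0 / features["rate"]
        dy = features["direction"] * features["step_size"]
        spikiness = features["dissonance"]                             # sharper, more irregular motion
        y = np.cumsum(dy + spikiness * rng.uniform(-1, 1, n_events))
        times = np.arange(n_events) * dt
        return [(round(float(t), 2), round(float(v), 2)) for t, v in zip(times, y)]

    shared = {"rate": 96, "jitter": 0.2, "direction": +1, "step_size": 1.5, "dissonance": 0.4}
    print(generate_music(shared)[:4])
    print(generate_movement(shared)[:4])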
97
Fedorenko E, McDermott JH, Norman-Haignere S, Kanwisher N. Sensitivity to musical structure in the human brain. J Neurophysiol 2012; 108:3289-300. [PMID: 23019005 PMCID: PMC3544885 DOI: 10.1152/jn.00209.2012]
Abstract
Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure in music. In contrast, neuroimaging studies targeting the representation of music structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes.
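The scrambling manipulation can be pictured as independently shuffling the pitch sequence or the duration sequence of a melody while leaving the other dimension intact; a toy version (not the authors' stimulus-generation code) is sketched below.

    import random

    # A melody as (MIDI pitch, duration in beats) pairs; the values are illustrative.
    melody = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 1.0), (67, 2.0), (65, 0.5), (64, 0.5), (62, 1.0)]

    def scramble(melody, dimension, seed=0):
        """Shuffle one structural dimension ('pitch' or 'rhythm') and keep the other intact."""
        rng = random.Random(seed)
        pitches = [p for p, _ in melody]
        durations = [d for _, d in melody]
        if dimension == "pitch":
            rng.shuffle(pitches)
        elif dimension == "rhythm":
            rng.shuffle(durations)
        return list(zip(pitches, durations))

    print("pitch-scrambled: ", scramble(melody, "pitch"))
    print("rhythm-scrambled:", scramble(melody, "rhythm"))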
Affiliation(s)
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, USA
98
Brown RM, Chen JL, Hollinger A, Penhune VB, Palmer C, Zatorre RJ. Repetition suppression in auditory-motor regions to pitch and temporal structure in music. J Cogn Neurosci 2012; 25:313-28. [PMID: 23163413 DOI: 10.1162/jocn_a_00322]
Abstract
Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.
99
Dunbar RIM, Kaskatis K, MacDonald I, Barra V. Performance of music elevates pain threshold and positive affect: implications for the evolutionary function of music. Evolutionary Psychology 2012. [PMID: 23089077 DOI: 10.1177/147470491201000403]
Abstract
It is well known that music arouses emotional responses. In addition, it has long been thought to play an important role in creating a sense of community, especially in small scale societies. One mechanism by which it might do this is through the endorphin system, and there is evidence to support this claim. Using pain threshold as an assay for CNS endorphin release, we ask whether it is the auditory perception of music that triggers this effect or the active performance of music. We show that singing, dancing and drumming all trigger endorphin release (indexed by an increase in post-activity pain tolerance) in contexts where merely listening to music and low energy musical activities do not. We also confirm that music performance results in elevated positive (but not negative) affect. We conclude that it is the active performance of music that generates the endorphin high, not the music itself. We discuss the implications of this in the context of community bonding mechanisms that commonly involve dance and music-making.
Affiliation(s)
- R I M Dunbar
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
100
Uddén J, Bahlmann J. A rostro-caudal gradient of structured sequence processing in the left inferior frontal gyrus. Philos Trans R Soc Lond B Biol Sci 2012; 367:2023-32. [PMID: 22688637 DOI: 10.1098/rstb.2012.0009]
Abstract
In this paper, we present two novel perspectives on the function of the left inferior frontal gyrus (LIFG). First, a structured sequence processing perspective facilitates the search for functional segregation within the LIFG and provides a way to express common aspects across cognitive domains including language, music and action. Converging evidence from functional magnetic resonance imaging and transcranial magnetic stimulation studies suggests that the LIFG is engaged in sequential processing in artificial grammar learning, independently of particular stimulus features of the elements (whether letters, syllables or shapes are used to build up sequences). The LIFG has been repeatedly linked to processing of artificial grammars across all different grammars tested, whether they include non-adjacent dependencies or mere adjacent dependencies. Second, we apply the sequence processing perspective to understand how the functional segregation of semantics, syntax and phonology in the LIFG can be integrated in the general organization of the lateral prefrontal cortex (PFC). Recently, it was proposed that the functional organization of the lateral PFC follows a rostro-caudal gradient, such that more abstract processing in cognitive control is subserved by more rostral regions of the lateral PFC. We explore the literature from the viewpoint that functional segregation within the LIFG can be embedded in a general rostro-caudal abstraction gradient in the lateral PFC. If the lateral PFC follows a rostro-caudal abstraction gradient, then this predicts that the LIFG follows the same principles, but this prediction has not yet been tested or explored in the LIFG literature. Integration might provide further insights into the functional architecture of the LIFG and the lateral PFC.
Affiliation(s)
- Julia Uddén
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands