1
Shorey AE, King CJ, Whiteford KL, Stilp CE. Musical training is not associated with spectral context effects in instrument sound categorization. Atten Percept Psychophys 2024; 86:991-1007. PMID: 38216848. DOI: 10.3758/s13414-023-02839-6.
Abstract
Musicians display a variety of auditory perceptual benefits relative to people with little or no musical training; these benefits are collectively referred to as the "musician advantage." Importantly, musicians consistently outperform nonmusicians for tasks relating to pitch, but there are mixed reports as to musicians outperforming nonmusicians for timbre-related tasks. Due to their experience manipulating the timbre of their instrument or voice in performance, we hypothesized that musicians would be more sensitive to acoustic context effects stemming from the spectral changes in timbre across a musical context passage (played by a string quintet then filtered) and a target instrument sound (French horn or tenor saxophone; Experiment 1). Additionally, we investigated the role of a musician's primary instrument of instruction by recruiting French horn and tenor saxophone players to also complete this task (Experiment 2). Consistent with the musician advantage literature, musicians exhibited superior pitch discrimination to nonmusicians. Contrary to our main hypothesis, there was no difference between musicians and nonmusicians in how spectral context effects shaped instrument sound categorization. Thus, musicians may only outperform nonmusicians for some auditory skills relevant to music (e.g., pitch perception) but not others (e.g., timbre perception via spectral differences).
Affiliation(s)
- Anya E Shorey
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Caleb J King
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455, USA
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY 40292, USA
2
Hansen NC, Højlund A, Møller C, Pearce M, Vuust P. Musicians show more integrated neural processing of contextually relevant acoustic features. Front Neurosci 2022; 16:907540. PMID: 36312026. PMCID: PMC9612920. DOI: 10.3389/fnins.2022.907540.
Abstract
Little is known about expertise-related plasticity of neural mechanisms for auditory feature integration. Here, we contrast two diverging hypotheses: that musical expertise is associated with more independent, or with more integrated, predictive processing of acoustic features relevant to melody perception. The magnetic mismatch negativity (MMNm) was recorded with magnetoencephalography (MEG) from 25 musicians and 25 non-musicians exposed to interleaved blocks of a complex, melody-like multi-feature paradigm and a simple oddball control paradigm. In addition to single deviants differing in frequency (F), intensity (I), or perceived location (L), double and triple deviants were included, reflecting all possible feature combinations (FI, IL, LF, FIL). Following previous work, early neural processing overlap was approximated in terms of MMNm additivity by comparing empirical MMNms obtained with double and triple deviants to modeled MMNms corresponding to summed constituent single-deviant MMNms. Significantly greater subadditivity was found in musicians compared to non-musicians, specifically for frequency-related deviants in complex, melody-like stimuli. Although identical sounds were used, expertise effects were absent from the simple oddball paradigm. This novel finding supports the integrated processing hypothesis, whereby musicians recruit overlapping neural resources that facilitate more integrative representations of contextually relevant stimuli, such as frequency (perceived as pitch), during melody perception. More generally, these specialized refinements in predictive processing may enable experts to capitalize optimally on complex, domain-relevant acoustic cues.
Affiliation(s)
- Niels Chr. Hansen
- Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Department of Dramaturgy and Musicology, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Correspondence: Niels Chr. Hansen
- Andreas Højlund
- Department of Linguistics, Cognitive Science, and Semiotics, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Department of Clinical Medicine, Faculty of Health, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Cecilie Møller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
- Marcus Pearce
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- School of Electronic Engineering and Computer Science, Cognitive Science Research Group and Centre for Digital Music, Queen Mary University of London, London, United Kingdom
- Peter Vuust
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
3
Goupil L, Wolf T, Saint-Germier P, Aucouturier JJ, Canonne C. Emergent Shared Intentions Support Coordination During Collective Musical Improvisations. Cogn Sci 2021; 45:e12932. PMID: 33438231. DOI: 10.1111/cogs.12932.
Abstract
Human interactions are often improvised rather than scripted, which suggests that efficient coordination can emerge even when collective plans are largely underspecified. One possibility is that such forms of coordination primarily rely on mutual influences between interactive partners, and on perception-action couplings such as entrainment or mimicry. Yet some forms of improvised joint action appear difficult to explain solely by appealing to these emergent mechanisms. Here, we focus on collective free improvisation, a form of highly unplanned creative practice in which both the agents' subjective reports and the complexity of their interactions suggest that shared intentions may sometimes emerge to support coordination during the course of the improvisation, even in the absence of verbal communication. In four experiments, we show that shared intentions spontaneously emerge during collective musical improvisations, and that they foster coordination on multiple levels, over and above the mere influence of shared information. We also show that musicians deploy communicative strategies to manifest and propagate their intentions within the group, and that this predicts better coordination. Overall, our results suggest that improvised and scripted joint actions are more continuous with one another than they first seem, differing merely in the extent to which they rely on emergent or planned coordination mechanisms.
Affiliation(s)
- Louise Goupil
- Science and Technology of Music and Sound (UMR 9912, IRCAM/CNRS/Sorbonne University)
- School of Psychology, University of East London
- Thomas Wolf
- Department of Cognitive Science, Central European University
- Pierre Saint-Germier
- Science and Technology of Music and Sound (UMR 9912, IRCAM/CNRS/Sorbonne University)
- Clément Canonne
- Science and Technology of Music and Sound (UMR 9912, IRCAM/CNRS/Sorbonne University)
4
Srinivasan N, Bishop J, Yekovich R, Rosenfield DB, Helekar SA. Differential Activation and Functional Plasticity of Multimodal Areas Associated with Acquired Musical Skill. Neuroscience 2020; 446:294-303. PMID: 32818600. DOI: 10.1016/j.neuroscience.2020.08.013.
Abstract
Training of a musical skill is known to produce a distributed neural representation of the ability to perceive music and perform musical tasks. In the present study, we tested the hypothesis that the audiovisual perception of music involves a wider activation of multimodal sensory and sensorimotor structures in the brain, including those containing mirror neurons. We mapped the activation of brain areas during passive listening and viewing of the first 40 s of "Ode to Joy" being played on the piano by an expert pianist. To do this, we performed brain functional magnetic resonance imaging during the presentation of 6 different stimulus contrasts pertaining to that musical melody in a pseudo-randomized order. Group data analysis in musically trained and untrained adults showed robust activation in broadly distributed occipitotemporal, parietal, and frontal areas in trained subjects and much more restricted activation in untrained subjects. A visual stimulus contrast focusing on the visual motion percept of moving fingers on piano keys revealed selective bilateral activation of a locus corresponding to the V5/MT area, which was significantly more pronounced in trained subjects and showed partial linear dependence on the duration of training on the left side. Quantitative analysis of individual brain volumes confirmed a significantly greater and wider spread of activation in trained compared to untrained subjects. These findings support the view that audiovisual perception of music and musical gestures in trained musicians involves an expanded and widely distributed neural representation formed through experience-dependent plasticity.
Affiliation(s)
- N Srinivasan
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- J Bishop
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- R Yekovich
- Shepherd School of Music, Rice University, Houston, TX, United States
- D B Rosenfield
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
- Shepherd School of Music, Rice University, Houston, TX, United States
- S A Helekar
- Speech and Language Center, Stanley H. Appel Department of Neurology, Houston Methodist Neurological Institute, Houston, TX, United States
5
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. PMID: 32087206. DOI: 10.1016/j.neuropsychologia.2020.107396.
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy of audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to the changing and multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and rebounding in size again in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly flexible via perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
6
Wilbiks JMP, O'Brien C. Musical Training Improves Audiovisual Integration Capacity under Conditions of High Perceptual Load. Vision (Basel) 2020; 4:vision4010009. PMID: 31991670. PMCID: PMC7157434. DOI: 10.3390/vision4010009.
Abstract
In considering capacity measures of audiovisual integration, it has become apparent that there is a wide degree of variation both within (based on unimodal and multimodal stimulus characteristics) and between participants. Recent work has discussed performance on a number of cognitive tasks that can form a regression model accounting for nearly a quarter of the variation in audiovisual integration capacity. The current study involves an investigation of whether different elements of musicality in participants can contribute to additional variation in capacity. Participants were presented with a series of rapidly changing visual displays and asked to note which elements of that display changed in synchrony with a tone. Results were fitted to a previously used model to establish capacity estimates, and these estimates were included in correlational analyses with musical training, musical perceptual abilities, and active engagement in music. We found that audiovisual integration capacity was positively correlated with amount of musical training, and that this correlation was statistically significant under the most difficult perceptual conditions. Results are discussed in the context of the boosting of perceptual abilities due to musical training, even under conditions that have been previously found to be overly demanding for participants.
7
Vanden Bosch der Nederlanden CM, Zaragoza C, Rubio-Garcia A, Clarkson E, Snyder JS. Change detection in complex auditory scenes is predicted by auditory memory, pitch perception, and years of musical training. Psychol Res 2018; 84:585-601. PMID: 30120544. DOI: 10.1007/s00426-018-1072-x.
Abstract
Our world is a sonically busy place and we use both acoustic information and experience-based knowledge to make sense of the sounds arriving at our ears. The knowledge we gain through experience has the potential to shape what sounds are prioritized in a complex scene. There are many examples of how visual expertise influences how we perceive objects in visual scenes, but few studies examine how auditory expertise is associated with attentional biases toward familiar real-world sounds in complex scenes. In the current study, we investigated whether musical expertise is associated with the ability to detect changes to real-world sounds in complex auditory scenes, and whether any such benefit is specific to musical instrument sounds. We also examined whether change detection is better for human-generated sounds in general or only communicative human sounds. We found that musicians had less change deafness overall. All listeners were better at detecting human communicative sounds compared to human non-communicative sounds, but this benefit was driven by speech sounds and sounds that were vocally generated. Musical listening skill, speech-in-noise, and executive function abilities were used to predict rates of change deafness. Auditory memory, musical training, fine-grained pitch processing, and an interaction between training and pitch processing accounted for 45.8% of the variance in change deafness. To better understand perceptual and cognitive expertise, it may be more important to measure various auditory skills and relate them to each other, as opposed to comparing experts to non-experts.
Affiliation(s)
- Christina M Vanden Bosch der Nederlanden
- Department of Psychology, University of Nevada, Las Vegas, USA
- The Brain and Mind Institute, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Evan Clarkson
- Department of Psychology, University of Nevada, Las Vegas, USA
- Joel S Snyder
- Department of Psychology, University of Nevada, Las Vegas, USA
8
Bishop L. Collaborative Musical Creativity: How Ensembles Coordinate Spontaneity. Front Psychol 2018; 9:1285. PMID: 30087645. PMCID: PMC6066987. DOI: 10.3389/fpsyg.2018.01285.
Abstract
Music performance is inherently social. Most music is performed in groups, and even soloists are subject to influence from a (real or imagined) audience. It is also inherently creative. Performers are called upon to interpret notated music, improvise new musical material, adapt to unexpected playing conditions, and accommodate technical errors. The focus of this paper is how creativity is distributed across members of a music ensemble as they perform these tasks. Some aspects of ensemble performance have been investigated extensively in recent years as part of the broader literature on joint action (e.g., the processes underlying sensorimotor synchronization). Much of this research has been done under highly controlled conditions, using tasks that generate reliable results, but capture only a small part of ensemble performance as it occurs naturalistically. Still missing from this literature is an explanation of how ensemble musicians perform in conditions that require creative interpretation, improvisation, and/or adaptation: how do they coordinate the production of something new? Current theories of creativity endorse the idea that dynamic interaction between individuals, their actions, and their social and material environments underlies creative performance. This framework is much in line with the embodied music cognition paradigm and the dynamical systems perspective on ensemble coordination. This review begins by situating the concept of collaborative musical creativity in the context of embodiment. Progress that has been made toward identifying the mechanisms that underlie collaborative creativity in music performance is then assessed. The focus is on the possible role of musical imagination in facilitating performer flexibility, and on the forms of communication that are likely to support the coordination of creative musical output. Next, emergence and group flow, constructs that seem to characterize ensemble performance at its peak, are considered, and some of the conditions that may encourage periods of emergence or flow are identified. Finally, it is argued that further research is needed to (1) demystify the constructs of emergence and group flow, clarifying their effects on performer experience and listener response, (2) determine how constrained musical imagination is by perceptual experience and understand people's capacity to depart from familiar frameworks and imagine new sounds and sound structures, and (3) assess the technological developments that are supposed to facilitate or enhance musical creativity, and determine what effect they have on the processes underlying creative collaboration.
Affiliation(s)
- Laura Bishop
- Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria
9
Jicol C, Proulx MJ, Pollick FE, Petrini K. Long-term music training modulates the recalibration of audiovisual simultaneity. Exp Brain Res 2018; 236:1869-1880. PMID: 29687204. DOI: 10.1007/s00221-018-5269-4.
Abstract
To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked groups of drummers, non-drummer musicians, and non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation to two fixed audiovisual asynchronies. We found that recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed with both increased music training and increased perceptual accuracy (i.e., the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
Affiliation(s)
- Crescent Jicol
- Department of Psychology, University of Bath, Bath, UK
- Department of Computer Science, University of Bath, Claverton Down, Bath, BA2 7AY, UK
- Karin Petrini
- Department of Psychology, University of Bath, Bath, UK
10
Noppeney U, Lee HL. Causal inference and temporal predictions in audiovisual perception of speech and music. Ann N Y Acad Sci 2018; 1423:102-116. PMID: 29604082. DOI: 10.1111/nyas.13615.
Abstract
To form a coherent percept of the environment, the brain must integrate sensory signals emanating from a common source but segregate those from different sources. Temporal regularities are prominent cues for multisensory integration, particularly for speech and music perception. In line with models of predictive coding, we suggest that the brain adapts an internal model to the statistical regularities in its environment. This internal model enables cross-sensory and sensorimotor temporal predictions as a mechanism to arbitrate between integration and segregation of signals from different senses.
Affiliation(s)
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Hwee Ling Lee
- German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
11
Bishop L, Goebl W. Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo. Psychol Music 2018; 46:84-106. PMID: 29276332. PMCID: PMC5718341. DOI: 10.1177/0305735617702971.
Abstract
Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued their partner in nonverbally. Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.
Affiliation(s)
- Laura Bishop
- Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria
- Werner Goebl
- Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria
- Department of Music Acoustics (IWK), University of Music and Performing Arts Vienna, Austria
12
Cygańska A, Truszczyńska-Baszak A, Drzał-Grabiec J, Tarnowski A. Assessment of body parameters' symmetry in child violinists. J Back Musculoskelet Rehabil 2017; 30:1081-1086. PMID: 28505962. DOI: 10.3233/bmr-169700.
Abstract
BACKGROUND Playing the violin may lead to overload of the locomotor system. OBJECTIVE The aim of this study was to assess body parameters for trunk symmetry in child violinists and compare them with a control group. METHODS We analyzed the body posture of 101 children aged 7-12 years (mean age 11.09 ± 9.46): 49 child violinists and a control group of 52 children. RESULTS We found statistically significant differences in the depth of the lower corners of the scapulae and the upper posterior spina iliaca, though greater asymmetries were found in the clinical control group. The remaining parameter values were close to significance, which may suggest that the process of postural change in these children had only just begun and that the existing asymmetries were easy to correct. We found a positive correlation between body height and the difference in distance of the lower corners of the scapulae from the spine, OL (p = 0.029, correlation coefficient 0.167), and the Thales triangle height (p = 0.018, correlation coefficient 0.214). CONCLUSIONS The position maintained while playing the violin changed some parameters characterizing the curvature of the spine in the frontal plane. These results underscore the importance of detailed analysis and critical assessment of children's body posture.
Affiliation(s)
- Anna Cygańska
- Faculty of Rehabilitation, Józef Piłsudski University of Physical Education, Marymoncka, Warsaw, Poland
13
Hou J, Rajmohan R, Fang D, Kashfi K, Al-Khalil K, Yang J, Westney W, Grund CM, O'Boyle MW. Mirror neuron activation of musicians and non-musicians in response to motion captured piano performances. Brain Cogn 2017; 115:47-55. DOI: 10.1016/j.bandc.2017.04.001.
14
Waddell G, Williamon A. Eye of the Beholder: Stage Entrance Behavior and Facial Expression Affect Continuous Quality Ratings in Music Performance. Front Psychol 2017; 8:513. PMID: 28487662. PMCID: PMC5403894. DOI: 10.3389/fpsyg.2017.00513.
Abstract
Judgments of music performance quality are commonly employed in music practice, education, and research. However, previous studies have demonstrated the limited reliability of such judgments, and there is now evidence that extraneous visual, social, and other "non-musical" features can unduly influence them. The present study employed continuous measurement techniques to examine how the process of forming a music quality judgment is affected by the manipulation of temporally specific visual cues. Video footage comprising an appropriate stage entrance and error-free performance served as the standard condition (Video 1). This footage was manipulated to provide four additional conditions, each identical save for a single variation: an inappropriate stage entrance (Video 2); the presence of an aural performance error midway through the piece (Video 3); the same error accompanied by a negative facial reaction by the performer (Video 4); the facial reaction with no corresponding aural error (Video 5). The participants were 53 musicians and 52 non-musicians (N = 105) who individually assessed the performance quality of one of the five randomly assigned videos via a digital continuous measurement interface and headphones. The results showed that participants viewing the "inappropriate" stage entrance made judgments significantly more quickly than those viewing the "appropriate" entrance, and while the poor entrance caused significantly lower initial scores among those with musical training, the effect did not persist long into the performance. The aural error caused an immediate drop in quality judgments that persisted to a lower final score only when accompanied by the frustrated facial expression from the pianist; the performance error alone caused a temporary drop only in the musicians' ratings, and the negative facial reaction alone caused no reaction regardless of participants' musical experience. These findings demonstrate the importance of visual information in forming evaluative and aesthetic judgments in musical contexts and highlight how visual cues dynamically influence those judgments over time.
Affiliation(s)
- George Waddell
- Centre for Performance Science, Royal College of Music, London, UK
- Aaron Williamon
- Centre for Performance Science, Royal College of Music, London, UK
15
Unimodal and cross-modal prediction is enhanced in musicians. Sci Rep 2016; 6:25225. [PMID: 27142627 PMCID: PMC4855230 DOI: 10.1038/srep25225] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2015] [Accepted: 04/06/2016] [Indexed: 11/09/2022] Open
Abstract
Musical training involves exposure to complex auditory and visual stimuli, memorization of elaborate sequences, and extensive motor rehearsal. It has been hypothesized that such multifaceted training may be associated with differences in basic cognitive functions, such as prediction, potentially translating into facilitated performance in expert musicians. Moreover, such differences might generalize to non-auditory stimuli. This study was designed to test both hypotheses. We implemented a cross-modal attentional cueing task with auditory and visual stimuli, where a target was preceded by compatible or incompatible cues in mainly compatible (80% compatible, predictable) or random blocks (50% compatible, unpredictable). This allowed prediction skills to be tested in musicians and controls. Musicians showed increased sensitivity to the statistical structure of the block, expressed as an advantage for compatible trials (and a disadvantage for incompatible trials), but only in the mainly compatible (predictable) blocks. Controls did not show this pattern. The effect held within modalities (auditory, visual), across modalities, and when controlling for short-term memory capacity. These results reveal a striking enhancement in cross-modal prediction in musicians in a very basic cognitive task.
16
Moran N, Hadley LV, Bader M, Keller PE. Perception of 'Back-Channeling' Nonverbal Feedback in Musical Duo Improvisation. PLoS One 2015; 10:e0130070. [PMID: 26086593 PMCID: PMC4473276 DOI: 10.1371/journal.pone.0130070] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2014] [Accepted: 05/16/2015] [Indexed: 12/02/2022] Open
Abstract
In witnessing face-to-face conversation, observers perceive authentic communication according to the social contingency of nonverbal feedback cues (‘back-channeling’) by non-speaking interactors. The current study investigated the generality of this function by focusing on nonverbal communication in musical improvisation. A perceptual experiment was conducted to test whether observers can reliably identify genuine versus fake (mismatched) duos from musicians’ nonverbal cues, and how this judgement is affected by observers’ musical background and rhythm perception skill. Twenty-four musicians were recruited to perform duo improvisations, which included solo episodes, in two styles: standard jazz (where rhythm is based on a regular pulse) or free improvisation (where rhythm is non-pulsed). The improvisations were recorded using a motion capture system to generate 16 ten-second point-light displays (with audio) of the soloist and the silent non-soloing musician (‘back-channeler’). Sixteen further displays were created by splicing soloists with back-channelers from different duos. Participants (N = 60) with various musical backgrounds were asked to rate the point-light displays as either real or fake. Results indicated that participants were sensitive to the real/fake distinction in the free improvisation condition independently of musical experience. Individual differences in rhythm perception skill did not account for performance in the free condition, but were positively correlated with accuracy in the standard jazz condition. These findings suggest that the perception of back-channeling in free improvisation is not dependent on music-specific skills but is a general ability. The findings invite further study of the links between interpersonal dynamics in conversation and musical interaction.
Affiliation(s)
- Nikki Moran
- Institute for Music in Human and Social Development (IMHSD), Reid School of Music, University of Edinburgh, Edinburgh, United Kingdom
- Lauren V. Hadley
- Institute for Music in Human and Social Development (IMHSD), Reid School of Music, University of Edinburgh, Edinburgh, United Kingdom
- Psychology, School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Maria Bader
- Research Group: Music Cognition and Action, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Peter E. Keller
- Research Group: Music Cognition and Action, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Music Cognition and Action Group, The MARCS Institute, University of Western Sydney, Penrith, Australia
17
Proverbio AM, Attardo L, Cozzi M, Zani A. The effect of musical practice on gesture/sound pairing. Front Psychol 2015; 6:376. [PMID: 25883580 PMCID: PMC4382982 DOI: 10.3389/fpsyg.2015.00376] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Accepted: 03/16/2015] [Indexed: 11/13/2022] Open
Abstract
Learning to play a musical instrument is a demanding process requiring years of intense practice. Dramatic changes in brain connectivity, volume, and functionality have been shown in skilled musicians. It is thought that music learning involves the formation of novel audio-visuomotor associations, but not much is known about the gradual acquisition of this ability. In the present study, we investigated whether formal music training enhances audiovisual multisensory processing. To this end, pupils at different stages of education were examined based on the hypothesis that the strength of audio-visuomotor associations would be augmented as a function of the number of years of conservatory study (expertise). The study participants were violin and clarinet students at pre-academic and academic levels, differing in chronological age and age of acquisition. A violinist and a clarinetist each played the same score, and each participant viewed the video corresponding to his or her instrument. Pitch, intensity, rhythm, and sound duration were matched across instruments. In half of the trials, the soundtrack did not match (in pitch) the corresponding musical gestures. Data analysis indicated a correlation between the number of years of formal training (expertise) and the ability to detect an audiomotor incongruence in music performance (relative to the musical instrument practiced), thus suggesting a direct correlation between knowing how to play and perceptual sensitivity.
Affiliation(s)
- Alice M Proverbio
- NeuroMi - Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Lapo Attardo
- NeuroMi - Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Matteo Cozzi
- NeuroMi - Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Alberto Zani
- Institute of Bioimaging and Molecular Physiology, National Research Council, Milan, Italy
18
Bishop L, Goebl W. When they listen and when they watch: Pianists' use of nonverbal audio and visual cues during duet performance. MUSICAE SCIENTIAE : THE JOURNAL OF THE EUROPEAN SOCIETY FOR THE COGNITIVE SCIENCES OF MUSIC 2015; 19:84-110. [PMID: 26279610 PMCID: PMC4526249 DOI: 10.1177/1029864915570355] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Nonverbal auditory and visual communication helps ensemble musicians predict each other's intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers' intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano-piano and piano-violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos' access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos' own auditory feedback. Differences were observed in how successfully piano-piano and piano-violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists' success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used.
Affiliation(s)
- Laura Bishop
- Austrian Research Institute for Artificial Intelligence (OFAI), Austria
- Werner Goebl
- Austrian Research Institute for Artificial Intelligence (OFAI), Austria