1. Chen YA, Norgaard M. Important findings of a technology-assisted in-home music-based intervention for individuals with stroke: a small feasibility study. Disabil Rehabil Assist Technol 2024;19:2239-2249. [PMID: 37910042] [DOI: 10.1080/17483107.2023.2274397]
Abstract
PURPOSE To examine the feasibility of stroke survivors receiving music-based rehabilitation via a mobile app. MATERIALS AND METHODS We recruited ten community-dwelling chronic stroke survivors with mild-to-moderate upper extremity (UE) paresis. Participants were encouraged to exercise their paretic UE at home for three weeks with a commercial instrument training app, Yousician, and a piano keyboard. Feasibility was measured by (a) acceptance of using the app to receive in-home piano training (e.g., daily usage time, exit interview) and (b) the app's functionality as a rehabilitation tool (e.g., participants' motor improvements after training). RESULTS Our small sample of participants gave generally positive feedback and showed self-motivation (e.g., interest in extended training time) about using a mobile app to receive in-home, music-based UE training. Participants showed no declining trend in usage and practiced on average ∼33 min per day for 4-5 days per week during the 3-week participation. We also observed positive results on the Fugl-Meyer Assessment, Action Research Arm Test, and Nine Hole Peg Test after training. CONCLUSIONS This study provides insight into the feasibility of delivering music-based interventions through mobile health (mHealth) technology for stroke populations. Although the sample was small, participants' positive and negative comments and feedback provided useful information for future rehabilitation app development. We suggest four ways to improve the design of a patient-oriented mHealth app for delivering in-home music-based interventions to stroke survivors.
Affiliation(s)
- Yi-An Chen: Department of Occupational Therapy, Georgia State University, Atlanta, Georgia, USA
- Martin Norgaard: School of Music, Georgia State University, Atlanta, Georgia, USA
2. Michałko A, Di Stefano N, Campo A, Leman M. Enhancing human-human musical interaction through kinesthetic haptic feedback using wearable exoskeletons: theoretical foundations, validation scenarios, and limitations. Front Psychol 2024;15:1327992. [PMID: 38515976] [PMCID: PMC10954903] [DOI: 10.3389/fpsyg.2024.1327992]
Abstract
In this perspective paper, we explore the use of haptic feedback to enhance human-human interaction during musical tasks. We start by providing an overview of the theoretical foundation that underpins our approach, which is rooted in the embodied music cognition framework, and by briefly presenting the concepts of action-perception loop, sensorimotor coupling and entrainment. Thereafter, we focus on the role of haptic information in music playing and we discuss the use of wearable technologies, namely lightweight exoskeletons, for the exchange of haptic information between humans. We present two experimental scenarios in which the effectiveness of this technology for enhancing musical interaction and learning might be validated. Finally, we briefly discuss some of the theoretical and pedagogical implications of the use of technologies for haptic communication in musical contexts, while also addressing the potential barriers to the widespread adoption of exoskeletons in such contexts.
Affiliation(s)
- Aleksandra Michałko: Faculty of Arts and Philosophy, IPEM Institute of Psychoacoustics and Electronic Music, Ghent University, Ghent, Belgium
- Nicola Di Stefano: Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
- Adriaan Campo: Faculty of Arts and Philosophy, IPEM Institute of Psychoacoustics and Electronic Music, Ghent University, Ghent, Belgium
- Marc Leman: Faculty of Arts and Philosophy, IPEM Institute of Psychoacoustics and Electronic Music, Ghent University, Ghent, Belgium
3. Etani T, Miura A, Kawase S, Fujii S, Keller PE, Vuust P, Kudo K. A review of psychological and neuroscientific research on musical groove. Neurosci Biobehav Rev 2024;158:105522. [PMID: 38141692] [DOI: 10.1016/j.neubiorev.2023.105522]
Abstract
When listening to music, we naturally move our bodies rhythmically to the beat, which can be pleasurable and difficult to resist. This pleasurable sensation of wanting to move the body to music has been called "groove." Following pioneering humanities research, psychological and neuroscientific studies have provided insights on associated musical features, behavioral responses, phenomenological aspects, and brain structural and functional correlates of the groove experience. Groove research has advanced the field of music science and more generally informed our understanding of bidirectional links between perception and action, and the role of the motor system in prediction. Activity in motor and reward-related brain networks during music listening is associated with the groove experience, and this neural activity is linked to temporal prediction and learning. This article reviews research on groove as a psychological phenomenon with neurophysiological correlates that link musical rhythm perception, sensorimotor prediction, and reward processing. Promising future research directions range from elucidating specific neural mechanisms to exploring clinical applications and socio-cultural implications of groove.
Affiliation(s)
- Takahide Etani: School of Medicine, College of Medical, Pharmaceutical, and Health, Kanazawa University, Kanazawa, Japan; Graduate School of Media and Governance, Keio University, Fujisawa, Japan; Advanced Research Center for Human Sciences, Waseda University, Tokorozawa, Japan
- Akito Miura: Faculty of Human Sciences, Waseda University, Tokorozawa, Japan
- Satoshi Kawase: Faculty of Psychology, Kobe Gakuin University, Kobe, Japan
- Shinya Fujii: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Peter E Keller: Center for Music in the Brain, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- Peter Vuust: Center for Music in the Brain, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Kazutoshi Kudo: Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
4. Schiavio A, Witek MAG, Stupacher J. Meaning-making and creativity in musical entrainment. Front Psychol 2024;14:1326773. [PMID: 38235276] [PMCID: PMC10792053] [DOI: 10.3389/fpsyg.2023.1326773]
Abstract
In this paper we suggest that basic forms of musical entrainment may be considered as intrinsically creative, enabling further creative behaviors which may flourish at different levels and timescales. Rooted in an agent's capacity to form meaningful couplings with their sonic, social, and cultural environment, musical entrainment favors processes of adaptation and exploration, where innovative and functional aspects are cultivated via active, bodily experience. We explore these insights through a theoretical lens that integrates findings from enactive cognitive science and creative cognition research. We center our examination on the realms of groove experience and the communicative and emotional dimensions of music, aiming to present a novel preliminary perspective on musical entrainment, rooted in the fundamental concepts of meaning-making and creativity. To do so, we draw from a suite of approaches that place particular emphasis on the role of situated experience and review a range of recent empirical work on entrainment (in musical and non-musical settings), emphasizing the latter's biological and cognitive foundations. We conclude that musical entrainment may be regarded as a building block for different musical creativities that shape one's musical development, offering a concrete example for how this theory could be empirically tested in the future.
Affiliation(s)
- Andrea Schiavio: School of Arts and Creative Technologies, University of York, York, United Kingdom; Centre for Systematic Musicology, University of Graz, Graz, Austria
- Maria A. G. Witek: Department of Music, School of Languages, Cultures, Art History and Music, University of Birmingham, Birmingham, United Kingdom
- Jan Stupacher: Center for Music in the Brain, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
5. Körner A, Strack F. Articulation posture influences pitch during singing imagery. Psychon Bull Rev 2023;30:2187-2195. [PMID: 37221280] [PMCID: PMC10728233] [DOI: 10.3758/s13423-023-02306-1]
Abstract
Facial muscle activity contributes to singing and to articulation: in articulation, mouth shape can alter vowel identity; and in singing, facial movement correlates with pitch changes. Here, we examine whether mouth posture causally influences pitch during singing imagery. Based on perception-action theories and embodied cognition theories, we predict that mouth posture influences pitch judgments even when no overt utterances are produced. In two experiments (total N = 160), mouth posture was manipulated to resemble the articulation of either /i/ (as in English meet; retracted lips) or /o/ (as in French rose; protruded lips). Holding this mouth posture, participants were instructed to mentally "sing" given songs (which were all positive in valence) while listening with their inner ear and, afterwards, to assess the pitch of their mental chant. As predicted, compared to the o-posture, the i-posture led to higher pitch in mental singing. Thus, bodily states can shape experiential qualities, such as pitch, during imagery. This extends embodied music cognition and demonstrates a new link between language and music.
Affiliation(s)
- Anita Körner: Department of Psychology, University of Kassel, Holländische Straße 36-38, 34127 Kassel, Germany
- Fritz Strack: Department of Psychology, University of Würzburg, Würzburg, Germany
6. Honda S, Noda Y, Matsushita K, Tarumi R, Nomiyama N, Tsugawa S, Tobari Y, Hondo N, Saito K, Mimura M, Fujii S, Nakajima S. Glutamatergic neurometabolite levels in the caudate are associated with the ability of rhythm production. Front Neurosci 2023;17:1196805. [PMID: 37600001] [PMCID: PMC10436544] [DOI: 10.3389/fnins.2023.1196805]
Abstract
Introduction Glutamatergic neurometabolites play important roles in the basal ganglia, a hub of the brain networks involved in musical rhythm processing. We aimed to investigate the relationship between rhythm processing abilities and glutamatergic neurometabolites in the caudate. Methods We assessed glutamatergic function in healthy individuals using proton magnetic resonance spectroscopy, targeting the right caudate and the dorsal anterior cingulate cortex (dACC) as a control region. Rhythm processing ability was assessed with the Harvard Beat Assessment Test (H-BAT). Results We found a negative correlation between the production part of the Beat Saliency Test in the H-BAT and glutamate and glutamine levels in the caudate (r = -0.693, p = 0.002), whereas there was no such association in the dACC. Conclusion These results suggest that higher glutamatergic neurometabolite levels in the caudate may contribute to rhythm processing, especially the ability to precisely produce meter in music.
Affiliation(s)
- Shiori Honda: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Yoshihiro Noda: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Karin Matsushita: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Ryosuke Tarumi: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan; Seikeikai Komagino Hospital, Hachioji, Japan
- Natsumi Nomiyama: Faculty of Environment and Information Studies, Keio University, Kanagawa, Japan
- Sakiko Tsugawa: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Yui Tobari: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Nobuaki Hondo: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Keisuke Saito: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Masaru Mimura: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
- Shinya Fujii: Faculty of Environment and Information Studies, Keio University, Kanagawa, Japan
- Shinichiro Nakajima: Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan; Multimodal Imaging Group, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, ON, Canada
7. Embodiment and repeated exposure do not suffice for abstract concepts acquisition: evidence from tonal music cognition. Psychol Res 2023;87:43-58. [PMID: 35254462] [DOI: 10.1007/s00426-022-01662-2]
Abstract
Research on abstract concepts (AC) suggests that while some AC are enacted indirectly and occasionally, others are largely grounded in our sensory-motor and affective experience, and the opportunities to enact them are countless, which would allow us to acquire them without supervision. From this, the following question arises: do embodiment and repeated exposure suffice to dispense with supervision in abstract concepts acquisition (ACA)? In the present study, this question was addressed in the context of tonal music cognition, which demands a high level of abstraction, and via musical materials that participants had frequently heard and sung. Specifically, highly trained, moderately trained, and untrained participants (24 each) were given 12 well-known melodic fragments ending on tones instantiating 6 different scale degrees (2 times each) and asked to group (round 1) or pair (round 2) those fragments whose last tone conveyed the same (or a similar enough) level of stability or rest. If embodiment and repeated exposure suffice for ACA, then one would expect a scale degree-based grouping strategy regardless of participants' training level. Results showed that only highly trained participants systematically grouped stimuli ending on the same scale degree, particularly in round 2; moderately trained participants' performance was mixed, and tonality's influence on untrained participants was negligible. Further, moderately trained and untrained participants performed inconsistently, discarding in round 2 almost all of the pairs formed in round 1. These findings are integrated with previous findings on the effect of language, affect, and category type on conceptualization to account for why and when ACA requires supervision.
8. The rediscovered motor-related area 55b emerges as a core hub of music perception. Commun Biol 2022;5:1104. [PMID: 36257973] [PMCID: PMC9579133] [DOI: 10.1038/s42003-022-04009-0]
Abstract
Passive listening to music, without sound production or evident movement, has long been known to activate motor control regions. Nevertheless, the exact neuroanatomical correlates of the auditory-motor association and its underlying neural mechanisms have not been fully determined. Here, based on a NeuroSynth meta-analysis and three original fMRI paradigms of music perception, we show that the long-ignored pre-motor region, area 55b, an anatomically unique and functionally intriguing region, is a core hub of music perception. Moreover, results of a brain-behavior correlation analysis implicate neural entrainment as the underlying mechanism of area 55b's contribution to music perception. In view of the current results and prior literature, area 55b is proposed as a keystone of sensorimotor integration, a fundamental brain machinery underlying simple to hierarchically complex behaviors. Refining the neuroanatomical and physiological understanding of sensorimotor integration is expected to have a major impact on various fields, from brain disorders to artificial general intelligence. Functional magnetic resonance imaging data acquired during passive listening to music suggest that pre-motor area 55b acts as a core hub of music processing in humans.
9. Schiavio A, Maes PJ, van der Schyff D. The dynamics of musical participation. Musicae Scientiae 2022;26:604-626. [PMID: 36090466] [PMCID: PMC9449429] [DOI: 10.1177/1029864920988319]
Abstract
In this paper we argue that our comprehension of musical participation (the complex network of interactive dynamics involved in collaborative musical experience) can benefit from an analysis inspired by the existing frameworks of dynamical systems theory and coordination dynamics. These approaches can offer novel theoretical tools to help music researchers describe a number of central aspects of joint musical experience in greater detail, such as prediction, adaptivity, social cohesion, reciprocity, and reward. While most musicians involved in collective forms of musicking already have some familiarity with these terms and their associated experiences, we currently lack an analytical vocabulary to approach them in a more targeted way. To fill this gap, we adopt insights from these frameworks to suggest that musical participation may be advantageously characterized as an open, non-equilibrium, dynamical system. In particular, we suggest that research informed by dynamical systems theory might stimulate new interdisciplinary scholarship at the crossroads of musicology, psychology, philosophy, and cognitive (neuro)science, pointing toward new understandings of the core features of musical participation.
Affiliation(s)
- Andrea Schiavio: Centre for Systematic Musicology, University of Graz, Glacisstraße 27a, 8010 Graz, Austria
- Pieter-Jan Maes: IPEM, Department of Art, Music, and Theatre Sciences, Ghent University, Belgium
10. Reh J, Schmitz G, Hwang TH, Effenberg AO. Loudness affects motion: asymmetric volume of auditory feedback results in asymmetric gait in healthy young adults. BMC Musculoskelet Disord 2022;23:586. [PMID: 35715757] [PMCID: PMC9206330] [DOI: 10.1186/s12891-022-05503-6]
Abstract
Background The potential of auditory feedback for motor learning in the rehabilitation of various diseases has become apparent in recent years. However, since the volume of auditory feedback has so far played a minor role and its influence has hardly been considered, we investigated the effect of the volume of auditory feedback on gait pattern and gait direction and its interaction with pitch. Methods Thirty-two healthy young participants were randomly divided into two groups: group 1 (n = 16) received high-pitch (150-250 Hz) auditory feedback; group 2 (n = 16) received lower-pitch (95-112 Hz) auditory feedback. The feedback consisted of a real-time sonification of right and left foot ground contact. After an initial condition (no auditory feedback and full vision), both groups completed a 30-minute habituation period followed by a 30-minute asymmetry period. In each condition, participants were asked to walk blindfolded, with auditory feedback, towards a target 15 m away and were stopped 5 m before the target. Three volume conditions were applied in random order during the habituation period: loud, normal, and quiet. In the subsequent asymmetry period, three volume conditions (baseline, right quiet, and left quiet) were applied in random order. Results In the habituation phase, step width from the loud to the quiet condition showed a significant volume*pitch interaction, with a decrease at high pitch (group 1) and an increase at lower pitch (group 2) (group 1: loud 1.02 ± 0.310, quiet 0.98 ± 0.301; group 2: loud 0.95 ± 0.229, quiet 1.11 ± 0.298). In the asymmetry period, ground contact time was significantly increased on the side with reduced volume (right quiet: left foot 0.988 ± 0.033, right foot 1.003 ± 0.040; left quiet: left foot 1.004 ± 0.036, right foot 1.002 ± 0.033). Conclusions Our results suggest that modifying the volume of auditory feedback can be an effective way to improve gait symmetry. This could facilitate gait therapy and rehabilitation of hemiparetic and arthroplasty patients, particularly where gait improvement based on verbal corrections and conscious motor control is limited.
Affiliation(s)
- Julia Reh: Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
- Gerd Schmitz: Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
- Tong-Hun Hwang: Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
- Alfred O Effenberg: Institute of Sports Science, Leibniz University Hannover, Am Moritzwinkel 6, 30167 Hannover, Germany
11. Szewczyk AK, Mitosek-Szewczyk K, Dworzańska E. Where words are powerless to express: Use of music in paediatric neurology. J Pediatr Rehabil Med 2022;16:179-194. [PMID: 35599509] [DOI: 10.3233/prm-200802]
Abstract
Music is an art form that strongly affects people and can elicit many different emotions at the same time, including happiness, anxiety, sadness, and even ecstasy. What is it about music that causes such a strong reaction from each of us? Music engages many senses, which in turn can produce a multiplicity of responses and help create more extensive neuronal connections, as well as influence behaviour through structural and functional changes in the brain. Music-based interventions as a therapeutic tool in rehabilitation are becoming more common. It is said that the impact of music on the human body is positive. However, what impact does music have on the young nervous system, especially the affected one? This review presents the advantages and disadvantages of the use of music in paediatric neurology to treat dyslexia, cerebral palsy, and stroke, among others. Potential negative impacts such as musicogenic epilepsy and hallucinations will be discussed.
Affiliation(s)
- Anna K Szewczyk: Department of Neurology, Medical University of Lublin, Lublin, Poland; Doctoral School, Medical University of Lublin, Lublin, Poland
- Ewa Dworzańska: Department of Child Neurology, Medical University of Lublin, Lublin, Poland
12. Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022;23:287-305. [PMID: 35352057] [DOI: 10.1038/s41583-022-00578-5]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust: Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Ole A Heggli: Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston: Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach: Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
13. Kantan P, Spaich EG, Dahl S. An Embodied Sonification Model for Sit-to-Stand Transfers. Front Psychol 2022;13:806861. [PMID: 35250738] [PMCID: PMC8891127] [DOI: 10.3389/fpsyg.2022.806861]
Abstract
Interactive sonification of biomechanical quantities is gaining relevance as a motor learning aid in movement rehabilitation, as well as a monitoring tool. However, existing gaps in sonification research (issues related to meaning, aesthetics, and clinical effects) have prevented its widespread recognition and adoption in such applications. The incorporation of embodied principles and musical structures in sonification design has gradually become popular, particularly in applications related to human movement. In this study, we propose a general sonification model for the sit-to-stand (STS) transfer, an important activity of daily living. The model contains a fixed component independent of the use-case, which represents the rising motion of the body as an ascending melody using the physical model of a flute. In addition, a flexible component concurrently sonifies STS features of clinical interest in a particular rehabilitative/monitoring situation. Here, we chose to represent shank angular jerk and movement stoppages (freezes), through perceptually salient pitch modulations and bell sounds. We outline the details of our technical implementation of the model. We evaluated the model by means of a listening test experiment with 25 healthy participants, who were asked to identify six normal and simulated impaired STS patterns from sonified versions containing various combinations of the constituent mappings of the model. Overall, we found that the participants were able to classify the patterns accurately (86.67 ± 14.69% correct responses with the full model, 71.56% overall), confidently (64.95 ± 16.52% self-reported rating), and in a timely manner (response time: 4.28 ± 1.52 s). The amount of sonified kinematic information significantly impacted classification accuracy. The six STS patterns were also classified with significantly different accuracy depending on their kinematic characteristics. Learning effects were seen in the form of increased accuracy and confidence with repeated exposure to the sound sequences. We found no significant accuracy differences based on the participants' level of music training. Overall, we see our model as a concrete conceptual and technical starting point for STS sonification design catering to rehabilitative and clinical monitoring applications.
Affiliation(s)
- Prithvi Kantan: Department of Architecture, Design and Media Technology, Aalborg University, Copenhagen, Denmark
- Erika G Spaich: Neurorehabilitation Systems Group, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Sofia Dahl: Department of Architecture, Design and Media Technology, Aalborg University, Copenhagen, Denmark
14. Fiveash A, Burger B, Canette LH, Bedoin N, Tillmann B. When Visual Cues Do Not Help the Beat: Evidence for a Detrimental Effect of Moving Point-Light Figures on Rhythmic Priming. Front Psychol 2022;13:807987. [PMID: 35185727] [PMCID: PMC8855071] [DOI: 10.3389/fpsyg.2022.807987]
Abstract
Rhythm perception involves strong auditory-motor connections that can be enhanced with movement. However, it is unclear whether just seeing someone moving to a rhythm can enhance auditory-motor coupling, resulting in stronger entrainment. Rhythmic priming studies show that presenting regular rhythms before naturally spoken sentences can enhance grammaticality judgments compared to irregular rhythms or other baseline conditions. The current study investigated whether introducing a point-light figure moving in time with regular rhythms could enhance the rhythmic priming effect. Three experiments revealed that the addition of a visual cue did not benefit rhythmic priming in comparison to auditory conditions with a static image. In Experiment 1 (27 7–8-year-old children), grammaticality judgments were poorer after audio-visual regular rhythms (with a bouncing point-light figure) compared to auditory-only regular rhythms. In Experiments 2 (31 adults) and 3 (31 different adults), there was no difference in grammaticality judgments after audio-visual regular rhythms compared to auditory-only irregular rhythms for either a bouncing point-light figure (Experiment 2) or a swaying point-light figure (Experiment 3). Comparison of the observed performance with previous data suggested that the audio-visual component removed the regular prime benefit. These findings suggest that the visual cues used in this study do not enhance rhythmic priming and could hinder the effect by potentially creating a dual-task situation. In addition, individual differences in sensory-motor and social scales of music reward influenced the effect of the visual cue. Implications for future audio-visual experiments aiming to enhance beat processing, and the importance of individual differences will be discussed.
Affiliation(s)
- Anna Fiveash
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
- Birgitta Burger
- Institute for Systematic Musicology, University of Hamburg, Hamburg, Germany
- Laure-Hélène Canette
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
- LEAD-CNRS UMR 5022, University of Burgundy, F-21000 Dijon, France
- Nathalie Bedoin
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
- University of Lyon 2, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR 5292, INSERM, U1028, Lyon, France
- University of Lyon 1, Lyon, France
|
15
|
Bojner Horwitz E, Korošec K, Theorell T. Can Dance and Music Make the Transition to a Sustainable Society More Feasible? Behav Sci (Basel) 2022; 12:bs12010011. [PMID: 35049622] [PMCID: PMC8772942] [DOI: 10.3390/bs12010011]
Abstract
Transition to sustainability is a process that requires change on all levels of society from the physical to the psychological. This review takes an interdisciplinary view of the landscapes of research that contribute to the development of pro-social behaviors that align with sustainability goals, or what we call 'inner sustainability'. Engaging in musical and dance activities can make people feel trust and connectedness, promote prosocial behavior within a group, and also reduce prejudices between groups. Sustained engagement in these art forms brings change in a matter of seconds (such as hormonal changes and associated stress relief), months (such as improved emotional wellbeing and learning outcomes), and decades (such as structural changes to the brains of musicians and dancers and superior skills in expressing and understanding emotion). In this review, we bridge the often-separate domains of the arts and sciences by presenting evidence that suggests music and dance promote self-awareness, learning, care for others and wellbeing at individual and group levels. In doing so, we argue that artistic practices have a key role to play in leading the transformations necessary for a sustainable society. We call for a movement of action that provides dance and music within a constructive framework for stimulating social sustainability.
Affiliation(s)
- Eva Bojner Horwitz
- Department of Music, Pedagogy and Society, Royal College of Music, P.O. Box 277 11, SE-115 91 Stockholm, Sweden
- Center for Social Sustainability, Institution of Neurobiology, Care Sciences and Society, Karolinska Institute, SE-141 83 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institute, SE-171 77 Stockholm, Sweden
- Kaja Korošec
- Department of Music, Pedagogy and Society, Royal College of Music, P.O. Box 277 11, SE-115 91 Stockholm, Sweden
- Center for Social Sustainability, Institution of Neurobiology, Care Sciences and Society, Karolinska Institute, SE-141 83 Stockholm, Sweden
- Department of Clinical Neuroscience, Karolinska Institute, SE-171 77 Stockholm, Sweden
- Töres Theorell
- Department of Music, Pedagogy and Society, Royal College of Music, P.O. Box 277 11, SE-115 91 Stockholm, Sweden
- Center for Social Sustainability, Institution of Neurobiology, Care Sciences and Society, Karolinska Institute, SE-141 83 Stockholm, Sweden
- Stress Research Institute, Stockholm University, SE-106 91 Stockholm, Sweden
|
16
|
Godøy RI. Constraint-Based Sound-Motion Objects in Music Performance. Front Psychol 2022; 12:732729. [PMID: 34992562] [PMCID: PMC8725797] [DOI: 10.3389/fpsyg.2021.732729]
Abstract
The aim of this paper is to present principles of constraint-based sound-motion objects in music performance. Sound-motion objects are multimodal fragments of combined sound and sound-producing body motion, usually in the duration range of just a few seconds, and conceived, produced, and perceived as intrinsically coherent units. Sound-motion objects have a privileged role as building blocks in music because of their duration, coherence, and salient features and emerge from combined instrumental, biomechanical, and motor control constraints at work in performance. Exploring these constraints and the crucial role of the sound-motion objects can enhance our understanding of generative processes in music and have practical applications in performance, improvisation, and composition.
Affiliation(s)
- Rolf Inge Godøy
- Department of Musicology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
|
17
|
Raposo FA, Martins de Matos D, Ribeiro R. Learning Low-Dimensional Semantics for Music and Language via Multi-Subject fMRI. Neuroinformatics 2022; 20:451-461. [PMID: 34993852] [DOI: 10.1007/s12021-021-09560-5]
Abstract
Embodied Cognition (EC) states that semantics is encoded in the brain as firing patterns of neural circuits, which are learned according to the statistical structure of human multimodal experience. However, each human brain is idiosyncratically biased, according to its subjective experience, making this biological semantic machinery noisy with respect to semantics inherent to media, such as music and language. We propose to represent media semantics using low-dimensional vector embeddings by jointly modeling the functional Magnetic Resonance Imaging (fMRI) activity of several brains via Generalized Canonical Correlation Analysis (GCCA). We evaluate the semantic richness of the resulting latent space in appropriate semantic classification tasks: music genres and language topics. We show that the resulting unsupervised representations outperform the original high-dimensional fMRI voxel spaces in these downstream tasks while being more computationally efficient. Furthermore, we show that joint modeling of several subjects increases the semantic richness of the learned latent vector spaces as the number of subjects increases. Quantitative results and corresponding statistical significance testing demonstrate the instantiation of music and language semantics in the brain, thereby providing further evidence for multimodal embodied cognition as well as a method for extraction of media semantics from multi-subject brain dynamics.
Affiliation(s)
- Francisco Afonso Raposo
- INESC-ID Lisboa, R. Alves Redol 9, Lisboa, 1000-029, Portugal
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, Lisboa, 1049-001, Portugal
- David Martins de Matos
- INESC-ID Lisboa, R. Alves Redol 9, Lisboa, 1000-029, Portugal
- Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais, Lisboa, 1049-001, Portugal
- Ricardo Ribeiro
- INESC-ID Lisboa, R. Alves Redol 9, Lisboa, 1000-029, Portugal
- Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, Lisboa, 1649-026, Portugal
|
18
|
Time to imagine moving: Simulated motor activity affects time perception. Psychon Bull Rev 2021; 29:819-827. [PMID: 34918275] [PMCID: PMC9166842] [DOI: 10.3758/s13423-021-02028-2]
Abstract
Sensing the passage of time is important for countless daily tasks, yet time perception is easily influenced by perception, cognition, and emotion. Mechanistic accounts of time perception have traditionally regarded time perception as part of central cognition. Since proprioception, action execution, and sensorimotor contingencies also affect time perception, perception-action integration theories suggest motor processes are central to the experience of the passage of time. We investigated whether sensory information and motor activity may interactively affect the perception of the passage of time. Two prospective timing tasks involved timing a visual stimulus display conveying optical flow at increasing or decreasing velocity. While doing the timing tasks, participants were instructed to imagine themselves moving at increasing or decreasing speed, independently of the optical flow. In the direct-estimation task, the duration of the visual display was explicitly judged in seconds while in the motor-timing task, participants were asked to keep a constant pace of tapping. The direct-estimation task showed imagining accelerating movement resulted in relative overestimation of time, or time dilation, while decelerating movement elicited relative underestimation, or time compression. In the motor-timing task, imagined accelerating movement also accelerated tapping speed, replicating the time-dilation effect. The experiments show imagined movement affects time perception, suggesting a causal role of simulated motor activity. We argue that imagined movements and optical flow are integrated by temporal unfolding of sensorimotor contingencies. Consequently, as physical time is relative to spatial motion, so too is perception of time relative to imaginary motion.
|
19
|
Pando-Naude V, Patyczek A, Bonetti L, Vuust P. An ALE meta-analytic review of top-down and bottom-up processing of music in the brain. Sci Rep 2021; 11:20813. [PMID: 34675231] [PMCID: PMC8531391] [DOI: 10.1038/s41598-021-00139-3]
Abstract
A remarkable feature of the human brain is its ability to integrate information from the environment with internally generated content. The integration of top-down and bottom-up processes during complex multi-modal human activities, however, is yet to be fully understood. Music provides an excellent model for understanding this since music listening leads to the urge to move, and music making entails both playing and listening at the same time (i.e., audio-motor coupling). Here, we conducted activation likelihood estimation (ALE) meta-analyses of 130 neuroimaging studies of music perception, production and imagery, with 2660 foci, 139 experiments, and 2516 participants. We found that music perception and production rely on auditory cortices and sensorimotor cortices, while music imagery recruits distinct parietal regions. This indicates that the brain requires different structures to process similar information which is made available either by an interaction with the environment (i.e., bottom-up) or by internally generated content (i.e., top-down).
Affiliation(s)
- Victor Pando-Naude
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen, 3-0-17, 8000, Aarhus C, Denmark
- Agata Patyczek
- MR Center of Excellence, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen, 3-0-17, 8000, Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen, 3-0-17, 8000, Aarhus C, Denmark
|
20
|
Li S, Timmers R, Wang W. The Communication of Timbral Intentions Between Pianists and Listeners and Its Dependence on Auditory-Visual Conditions. Front Psychol 2021; 12:717842. [PMID: 34621217] [PMCID: PMC8491637] [DOI: 10.3389/fpsyg.2021.717842]
Abstract
The perceptual experiment reported in this article explored whether the communication of five pairs of timbral intentions (bright/dark, heavy/light, round/sharp, tense/relaxed, and dry/velvety) between pianists and listeners is reliable and the extent to which performers' gestures provide visual cues that influence the perceived timbre. Three pianists played three musical excerpts with 10 different timbral intentions (3 × 10 = 30 music stimuli) and 21 piano students were asked to rate perceived timbral qualities on both unipolar Likert scales and non-verbal sensory scales (shape, size, and brightness) under three modes (vision-alone, audio-alone, and audio-visual). The results revealed that nine of the timbral intentions were reliably communicated between the pianists and the listeners, the exception being the dark timbre. The communication of tense and relaxed timbres was improved by the visual conditions regardless of who was performing; for the remaining timbres, each pianist showed individual preferences in the use of visual cues. The results also revealed a strong cross-modal association between timbre and shape. This study implies that the communication of piano timbre is not based on acoustic cues alone but relates to a shared understanding of sensorimotor experiences between the performers and the listeners.
Affiliation(s)
- Shen Li
- School of Psychology, Central China Normal University, Wuhan, China
- Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan, China
- Renee Timmers
- Department of Music, The University of Sheffield, Sheffield, United Kingdom
- Weijun Wang
- School of Psychology, Central China Normal University, Wuhan, China
- Key Laboratory of Adolescent Cyberpsychology and Behavior, Ministry of Education, Wuhan, China
|
21
|
Doi H, Yamaguchi K, Sugisaki S. Timbral perception is influenced by unconscious presentation of hands playing musical instruments. Q J Exp Psychol (Hove) 2021; 75:1186-1191. [PMID: 34507501] [DOI: 10.1177/17470218211048032]
Abstract
Timbre is an integral dimension of musical sound quality, and people accumulate knowledge about timbre of sounds generated by various musical instruments throughout their life. Recent studies have proposed the possibility that musical sound is crossmodally integrated with visual information related to the sound. However, little is known about the influence of visual information on musical timbre perception. The present study investigated the automaticity of crossmodal integration between musical timbre and visual images of hands playing musical instruments. In the experiment, an image of hands playing piano or violin, or a control scrambled image, was presented to participants unconsciously. Simultaneously, participants heard intermediate sounds synthesised by morphing piano and violin sounds with the same note. The participants answered whether the musical tone sounded like piano or violin. The results revealed that participants were more likely to perceive the violin sound when an image of hands playing a violin was presented unconsciously than when an image of hands playing a piano was presented. This finding indicates that timbral perception of musical sound is influenced by visual information of musical performance without conscious awareness, supporting the automaticity of crossmodal integration in musical timbre perception.
Affiliation(s)
- Hirokazu Doi
- School of Science and Engineering, Kokushikan University, Tokyo, Japan
- Kazuki Yamaguchi
- School of Science and Engineering, Kokushikan University, Tokyo, Japan
- Shoma Sugisaki
- School of Science and Engineering, Kokushikan University, Tokyo, Japan
|
23
|
The effects of dual-task interference in predicting turn-ends in speech and music. Brain Res 2021; 1768:147571. [PMID: 34216579] [DOI: 10.1016/j.brainres.2021.147571]
Abstract
Determining when a partner's spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner's action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
|
24
|
Bishop L, Jensenius AR, Laeng B. Musical and Bodily Predictors of Mental Effort in String Quartet Music: An Ecological Pupillometry Study of Performers and Listeners. Front Psychol 2021; 12:653021. [PMID: 34262504] [PMCID: PMC8274478] [DOI: 10.3389/fpsyg.2021.653021]
Abstract
Music performance can be cognitively and physically demanding. These demands vary across the course of a performance as the content of the music changes. More demanding passages require performers to focus their attention more intensely, or expend greater “mental effort.” To date, it remains unclear what effect different cognitive-motor demands have on performers' mental effort. It is likewise unclear how fluctuations in mental effort compare between performers and perceivers of the same music. We used pupillometry to examine the effects of different cognitive-motor demands on the mental effort used by performers and perceivers of classical string quartet music. We collected pupillometry, motion capture, and audio-video recordings of a string quartet as they performed a rehearsal and concert (for live audience) in our lab. We then collected pupillometry data from a remote sample of musically-trained listeners, who heard the audio recordings (without video) that we captured during the concert. We used a modelling approach to assess the effects of performers' bodily effort (head and arm motion; sound level; performers' ratings of technical difficulty), musical complexity (performers' ratings of harmonic complexity; a score-based measure of harmonic tension), and expressive difficulty (performers' ratings of expressive difficulty) on performers' and listeners' pupil diameters. Our results show stimulating effects of bodily effort and expressive difficulty on performers' pupil diameters, and stimulating effects of expressive difficulty on listeners' pupil diameters. We also observed negative effects of musical complexity on both performers and listeners, and negative effects of performers' bodily effort on listeners, which we suggest may reflect the complex relationships that these features share with other aspects of musical structure. Looking across the concert, we found that both of the quartet violinists (who exchanged places halfway through the concert) showed more dilated pupils during their turns as 1st violinist than when playing as 2nd violinist, suggesting that they experienced greater arousal when “leading” the quartet in the 1st violin role. This study shows how eye tracking and motion capture technologies can be used in combination in an ecological setting to investigate cognitive processing in music performance.
Affiliation(s)
- Laura Bishop
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Alexander Refsum Jensenius
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Bruno Laeng
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
|
26
|
Sorati M, Behne DM. Considerations in Audio-Visual Interaction Models: An ERP Study of Music Perception by Musicians and Non-musicians. Front Psychol 2021; 11:594434. [PMID: 33551911] [PMCID: PMC7854916] [DOI: 10.3389/fpsyg.2020.594434]
Abstract
Previous research with speech and non-speech stimuli suggested that in audiovisual perception, visual information starting prior to the onset of corresponding sound can provide visual cues, and form a prediction about the upcoming auditory sound. This prediction leads to audiovisual (AV) interaction. Auditory and visual perception interact and induce suppression and speeding up of the early auditory event-related potentials (ERPs) such as N1 and P2. To investigate AV interaction, previous research examined N1 and P2 amplitudes and latencies in response to audio only (AO), video only (VO), audiovisual, and control (CO) stimuli, and compared AV with auditory perception based on four AV interaction models (AV vs. AO+VO, AV-VO vs. AO, AV-VO vs. AO-CO, AV vs. AO). The current study addresses how different models of AV interaction express N1 and P2 suppression in music perception. Furthermore, the current study took one step further and examined whether previous musical experience, which can potentially lead to higher N1 and P2 amplitudes in auditory perception, influenced AV interaction in different models. Musicians and non-musicians were presented the recordings (AO, AV, VO) of a keyboard /C4/ key being played, as well as CO stimuli. Results showed that AV interaction models differ in their expression of N1 and P2 amplitude and latency suppression. The calculation of model (AV-VO vs. AO) and (AV-VO vs. AO-CO) has consequences for the resulting N1 and P2 difference waves. Furthermore, while musicians, compared to non-musicians, showed higher N1 amplitude in auditory perception, suppression of amplitudes and latencies for N1 and P2 was similar for the two groups across the AV models. Collectively, these results suggest that when visual cues from finger and hand movements predict the upcoming sound in AV music perception, suppression of early ERPs is similar for musicians and non-musicians. Notably, the calculation differences across models do not lead to the same pattern of results for N1 and P2, demonstrating that the four models are not interchangeable and are not directly comparable.
Affiliation(s)
- Marzieh Sorati
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn M Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
|
27
|
Arroyo-Anlló EM, Sánchez JC, Gil R. Could Self-Consciousness Be Enhanced in Alzheimer’s Disease? An Approach from Emotional Sensorial Stimulation. J Alzheimers Dis 2020; 77:505-521. [DOI: 10.3233/JAD-200408]
Abstract
Alzheimer’s disease (AD) provides a valuable field of research into impairment of self-consciousness (SC), because AD patients have a reduced capacity to understand their mental world, to experience and relive previous personal events, as well as to interpret thoughts, feelings, and beliefs about themselves. Several studies observed that AD patients had an altered SC, but not a complete abolition of it. Emotions are an integral part of the construction of personal identity, and therefore of the Self. In general, most studies on emotion in AD patients have observed that emotion is not completely abolished and that it helps them better remember autobiographical events with greater emotional charge. The positive effect of autobiographical memories rich in emotional content, evoked directly/automatically by sensorial stimuli such as familiar odors or music, could be used to reestablish/reinforce the permanence and coherence of the Self in AD. We reviewed empirical evidence supporting the power of sensorial cues associated with emotion, which could be capable of enhancing SC in AD. We present studies of “emotional stimulations” using odor, music, or taste cues in AD. All of these studies showed a positive impact on SC in AD patients, including odor-evoked autobiographical memories, taste/odor-evoked autobiographical memories, emotional sensorial stimulation using musical cues, and multi-sensorial stimulations using healing gardens. We found research supporting the notion that emotional sensorial stimulations can, even if temporarily, enhance memory, affective state, and personal identity, that is, the SC in AD. Emotional sensory stimulations could be used as a tool to activate SC in AD and hence improve the quality of life of patients and caregivers.
Affiliation(s)
- Eva M. Arroyo-Anlló
- Department of Psychobiology, University of Salamanca, Neuroscience Institute of Castilla-León, Spain
- Roger Gil
- Emeritus Professor of Neurology, University Hospital, Poitiers, France
|
28
|
Toiviainen P, Burunat I, Brattico E, Vuust P. The chronnectome of musical beat. Neuroimage 2020; 216:116191. [DOI: 10.1016/j.neuroimage.2019.116191]
|
29
|
Sorati M, Behne DM. Audiovisual Modulation in Music Perception for Musicians and Non-musicians. Front Psychol 2020; 11:1094. [PMID: 32547458] [PMCID: PMC7273518] [DOI: 10.3389/fpsyg.2020.01094]
Abstract
In audiovisual music perception, visual information from a musical instrument being played is available prior to the onset of the corresponding musical sound and consequently allows a perceiver to form a prediction about the upcoming audio music. This prediction in audiovisual music perception, compared to auditory music perception, leads to lower N1 and P2 amplitudes and latencies. Although previous research suggests that audiovisual experience, such as previous musical experience, may enhance this prediction, a remaining question is to what extent musical experience modifies N1 and P2 amplitudes and latencies. Furthermore, corresponding event-related phase modulations quantified as inter-trial phase coherence (ITPC) have not previously been reported for audiovisual music perception. In the current study, audio-video recordings of a keyboard key being played were presented to musicians and non-musicians in audio only (AO), video only (VO), and audiovisual (AV) conditions. With predictive movements from playing the keyboard isolated from AV music perception (AV-VO), the current findings demonstrated that, compared to the AO condition, both groups had a similar decrease in N1 amplitude and latency, and P2 amplitude, along with correspondingly lower ITPC values in the delta, theta, and alpha frequency bands. However, while musicians showed lower ITPC values in the beta-band in AV-VO compared to the AO, non-musicians did not show this pattern. Findings indicate that AV perception may be broadly correlated with auditory perception, and differences between musicians and non-musicians further indicate musical experience to be a specific factor influencing AV perception. Predicting an upcoming sound in AV music perception may involve visual predictive processes, as well as beta-band oscillations, which may be influenced by years of musical training. This study highlights possible interconnectivity in AV perception as well as potential modulation with experience.
Affiliation(s)
- Marzieh Sorati
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
- Dawn Marie Behne
- Department of Psychology, Norwegian University of Science and Technology, Trondheim, Norway
|
30
|
Dell'Anna A, Buhmann J, Six J, Maes PJ, Leman M. Timing Markers of Interaction Quality During Semi-Hocket Singing. Front Neurosci 2020; 14:619. [PMID: 32625057] [PMCID: PMC7315043] [DOI: 10.3389/fnins.2020.00619]
Abstract
Music is believed to work as a bio-social tool enabling groups of people to establish joint action and group bonding experiences. However, little is known about the quality of the group members' interaction needed to bring about these effects. To investigate the role of interaction quality, and its effect on joint action and the bonding experience, we asked dyads (two singers) to perform music in the medieval "hocket" style, in order to engage their co-regulatory activity. The music contained three relative inter-onset-interval (IOI) classes: quarter note, dotted quarter note, and eighth note, marking time intervals between successive onsets (generated by both singers). We hypothesized that singers co-regulate their activity by minimizing prediction errors in view of stable IOI classes. Prediction errors were measured using a dynamic Bayesian inference approach that allows us to identify three types of error: fluctuation errors (micro-timing errors measured in milliseconds), narration errors (omission errors or misattribution of an IOI to the wrong IOI class), and collapse errors (macro-timing errors that cause the breakdown of a performance). These three types of errors were correlated with the singers' estimated quality of the performance and their experienced sense of joint agency. We had the singers perform either while moving or while standing still, under the hypothesis that the moving condition would reduce timing errors and increase We-agency as opposed to Shared-agency (the former portraying a condition in which the performers blend into one another, the latter a joint, but distinct, control of the performance). The results show that estimated quality correlates with fluctuation and narration errors, while agency correlates (to a lesser degree) with narration errors. Somewhat unexpectedly, movement had only a minor effect, and it was beneficial only for good performers. Joint agency resulted in a "shared," rather than a "we," sense of joint agency. The methodology and findings open up promising avenues for future research on social embodied music interaction.
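The error taxonomy above can be illustrated with a minimal sketch (not the authors' dynamic Bayesian model): each produced inter-onset interval is attributed to the nearest nominal IOI class, the residual is the micro-timing "fluctuation" error, and an attribution that differs from the intended class counts as a "narration" error. The beat period and class ratios here are illustrative assumptions.

```python
# Minimal sketch of IOI classification and two of the three error types
# described in the abstract. BEAT_MS and the class ratios are assumptions
# for illustration, not values from the study.

BEAT_MS = 600.0  # assumed quarter-note period in milliseconds
IOI_CLASSES = {"eighth": 0.5, "quarter": 1.0, "dotted_quarter": 1.5}

def classify_ioi(ioi_ms):
    """Return (class_name, fluctuation_ms): the nearest nominal IOI class
    and the signed micro-timing deviation from its ideal duration."""
    best = min(IOI_CLASSES, key=lambda c: abs(ioi_ms - IOI_CLASSES[c] * BEAT_MS))
    return best, ioi_ms - IOI_CLASSES[best] * BEAT_MS

def narration_errors(iois_ms, intended_classes):
    """Count IOIs attributed to a class other than the intended one."""
    return sum(classify_ioi(ioi)[0] != cls
               for ioi, cls in zip(iois_ms, intended_classes))
```

For example, a 612 ms interval is attributed to the quarter-note class with a +12 ms fluctuation error, while an interval drifting near 900 ms against an intended quarter note would register as a narration error.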
Affiliation(s)
- Alessandro Dell'Anna
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium; Department of Psychology, University of Turin, Turin, Italy
- Jeska Buhmann
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Joren Six
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Pieter-Jan Maes
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
- Marc Leman
- Department of Musicology - IPEM, Ghent University, Ghent, Belgium
31
Schiavio A, Küssner MB, Williamon A. Music Teachers' Perspectives and Experiences of Ensemble and Learning Skills. Front Psychol 2020; 11:291. [PMID: 32210875 PMCID: PMC7067978 DOI: 10.3389/fpsyg.2020.00291]
Abstract
In this article, we report data from two survey studies administered to expert music teachers. Both questionnaires aimed to explore teachers' pedagogical and performative practice and included open questions elucidating musical skills emerging in groups. The first study focuses on collective teaching settings offered to amateurs, jazz musicians, and university students with various levels of musical expertise. The second reports data from teachers based at the Royal College of Music, London, where the main emphasis is on Western classical repertoire. We integrate both studies and discuss overlapping findings. Despite intrinsic differences concerning the general goals of their teaching and the educational systems in which they operate, our data indicate the ability to "listen and respond to others" as the most important ensemble skill, whereas "time management," "comparing yourself to the class," and the "development of responsible ways of learning" emerged as main learning skills. We discuss results and suggestions for future research in teaching and learning music in different contexts in the light of recent theoretical research in the cognitive sciences, considering implications for educators interested in diverse skill levels.
Affiliation(s)
- Andrea Schiavio
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Mats B. Küssner
- Institut für Musikwissenschaft und Medienwissenschaft, Humboldt-Universität zu Berlin, Berlin, Germany
- Aaron Williamon
- Centre for Performance Science, Royal College of Music, London, United Kingdom
- Faculty of Medicine, Imperial College London, London, United Kingdom
32
Rajendran VG, Harper NS, Schnupp JWH. Auditory cortical representation of music favours the perceived beat. R Soc Open Sci 2020; 7:191194. [PMID: 32269783 PMCID: PMC7137933 DOI: 10.1098/rsos.191194]
Abstract
Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
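The core comparison in this study, firing rates on the beat versus off the beat, can be sketched as follows. This is an illustrative reduction, not the authors' analysis pipeline: spike times are counted in short windows centred on candidate beat and off-beat times, and the assumed window width and all data here are hypothetical.

```python
import numpy as np

# Minimal sketch of the on-beat vs. off-beat firing-rate comparison
# ("neural emphasis"). The half_width window and the spike/beat times
# are illustrative assumptions, not values from the study.

def mean_rate_around(spike_times, anchors, half_width=0.05):
    """Mean spike count in a +/- half_width (s) window around each anchor time."""
    spikes = np.asarray(spike_times)
    counts = [np.sum(np.abs(spikes - t) <= half_width) for t in anchors]
    return float(np.mean(counts))

def neural_emphasis(spike_times, beat_times, offbeat_times):
    """Positive when firing is on average higher on the beat than off it."""
    return (mean_rate_around(spike_times, beat_times)
            - mean_rate_around(spike_times, offbeat_times))
```

A positive value for the perceived beat positions, but not for alternative beat interpretations, would correspond to the "neural emphasis" the authors report.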
Affiliation(s)
- Vani G. Rajendran
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong
- Nicol S. Harper
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Jan W. H. Schnupp
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong
33
Ryan K, Schiavio A. Extended musicking, extended mind, extended agency. Notes on the third wave. New Ideas Psychol 2019. [DOI: 10.1016/j.newideapsych.2019.03.001]
34
González Sánchez V, Żelechowska A, Jensenius AR. Analysis of the Movement-Inducing Effects of Music through the Fractality of Head Sway during Standstill. J Mot Behav 2019; 52:734-749. [PMID: 31718527 DOI: 10.1080/00222895.2019.1689909]
Abstract
The links between music and human movement have been shown to provide insight into crucial aspects of human perception, cognition, and sensorimotor systems. In this study, we examined the influence of music on movement during standstill, aiming to further characterize the correspondences between movement, music, and perception by analyzing the fractality of head sway. Eighty-seven participants were asked to stand as still as possible for 500 seconds while being presented with alternating silence and audio stimuli. The audio stimuli were all rhythmic in nature, ranging from a metronome track to complex electronic dance music. The head position of each participant was captured with an optical motion capture system. Long-range correlations of head movement were estimated by detrended fluctuation analysis (DFA). Results agree with previous work on the movement-inducing effect of music, showing significantly greater head sway and lower head sway fractality during the music stimuli. In addition, patterns across stimuli suggest a two-way adaptation process, with musical stimuli influencing head sway while fractality at the same time modulated movement responses. Results indicate that fluctuations in head movement in both conditions exhibit long-range correlations, suggesting that the effects of music on head movement depended not only on the value of the most recent measured intervals, but also on the values of those intervals at distant times.
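Detrended fluctuation analysis, the method this study used to estimate long-range correlations in head sway, can be sketched compactly: integrate the demeaned signal, detrend it linearly within windows of several sizes, and read the scaling exponent off the log-log slope of fluctuation versus window size. The window sizes below are illustrative choices, not the study's parameters.

```python
import numpy as np

# Minimal sketch of detrended fluctuation analysis (DFA). An exponent
# alpha near 0.5 indicates uncorrelated noise; larger alpha indicates
# long-range (persistent) correlations. Scales are illustrative.

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Estimate the DFA scaling exponent alpha of a 1-D signal."""
    y = np.cumsum(x - np.mean(x))              # integrated (profile) series
    flucts = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))            # F(s): mean RMS fluctuation
    # alpha is the slope of log F(s) versus log s
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

Applied to white noise this returns roughly 0.5, and to an integrated random walk roughly 1.5, which is the contrast that makes "lower head sway fractality during music" interpretable as a change in long-range correlation structure.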
Affiliation(s)
- Victor González Sánchez
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology, University of Oslo, Oslo, Norway
- Agata Żelechowska
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology, University of Oslo, Oslo, Norway
- Alexander Refsum Jensenius
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology, University of Oslo, Oslo, Norway
35
London J, Thompson M, Burger B, Hildreth M, Toiviainen P. Tapping doesn't help: Synchronized self-motion and judgments of musical tempo. Atten Percept Psychophys 2019; 81:2461-2472. [PMID: 31062302 PMCID: PMC6848041 DOI: 10.3758/s13414-019-01722-7]
Abstract
For both musicians and music psychologists, beat rate (BPM) has often been regarded as a transparent measure of musical speed or tempo, yet recent research has shown that tempo is more than just BPM. In a previous study, London, Burger, Thompson, and Toiviainen (Acta Psychologica, 164, 70-80, 2016) presented participants with original as well as "time-stretched" versions of classic R&B songs; time stretching slows down or speeds up a recording without changing its pitch or timbre. In that study we discovered a tempo anchoring effect (TAE): Although relative tempo judgments (original vs. time-stretched versions of the same song) were correct, they were at odds with BPM rates of each stimulus. As previous studies have shown that synchronous movement enhances rhythm perception, we hypothesized that tapping along to the beat of these songs would reduce or eliminate the TAE and increase the salience of the beat rate of each stimulus. In the current study participants were presented with the London et al. (Acta Psychologica, 164, 70-80, 2016) stimuli in nonmovement and movement conditions. We found that although participants were able to make BPM-based tempo judgments of generic drumming patterns, and were able to tap along to the R&B stimuli at the correct beat rates, the TAE persisted in both movement and nonmovement conditions. Thus, contrary to our hypothesis that movement would reduce or eliminate the TAE, we found a disjunction between correctly synchronized motor behavior and tempo judgment. The implications of the tapping-TAE dissociation in the broader context of tempo and rhythm perception are discussed, and further approaches to studying the TAE-tapping dissociation are suggested.
Affiliation(s)
- Justin London
- Department of Music, Carleton College, Northfield, MN, 55057, USA
- Molly Hildreth
- Department of Music, Carleton College, Northfield, MN, 55057, USA
36
Hu B, Chomiak T. Wearable technological platform for multidomain diagnostic and exercise interventions in Parkinson's disease. Int Rev Neurobiol 2019; 147:75-93. [PMID: 31607363 DOI: 10.1016/bs.irn.2019.08.004]
Abstract
Physical activity and exercise have become a central component of the medical management of chronic illness, particularly for the elderly who suffer from neurodegenerative disorders that impair their cognition and mobility. This chapter summarizes our recent research showing that a new generation of wearable technology can be adopted as diagnostic and rehabilitation tools for people living with Parkinson's disease. For example, a wearable device-enabled 6-min walking test can be automated to eliminate human supervision and many other technical factors that confound the results of conventional testing. With reduced cost and increased test standardization, the technology can be adopted for population-based screening of cardiovascular fitness and gait rehabilitation training efficacy associated with many medical conditions. The Ambulosono platform for multidomain exercise intervention, in particular, has the potential to deliver lasting clinical benefits in slowing PD progression. By integrating brisk walking with behavioral shaping strategies such as contingency reinforcement, anticipatory motor control, and musical motivational stimulation, the platform creates a home exercise regime that can transform monotonous walking into a pleasurable daily activity and habit.
Affiliation(s)
- Bin Hu
- Division of Translational Neuroscience, Department of Clinical Neurosciences, Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Taylor Chomiak
- Division of Translational Neuroscience, Department of Clinical Neurosciences, Hotchkiss Brain Institute, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
37
Abstract
An important aspect of the perceived quality of vocal music is the degree to which the vocalist sings in tune. Although most listeners seem sensitive to vocal mistuning, little is known about the development of this perceptual ability or how it differs between listeners. Motivated by a lack of suitable preexisting measures, we introduce in this article an adaptive and ecologically valid test of mistuning perception ability. The stimulus material consisted of short excerpts (6 to 12 s in length) from pop music performances (obtained from MedleyDB; Bittner et al., 2014) for which the vocal track was pitch-shifted relative to the instrumental tracks. In a first experiment, 333 listeners were tested on a two-alternative forced choice task that tested discrimination between a pitch-shifted and an unaltered version of the same audio clip. Explanatory item response modeling was then used to calibrate an adaptive version of the test. A subsequent validation experiment applied this adaptive test to 66 participants with a broad range of musical expertise, producing evidence of the test's reliability, convergent validity, and divergent validity. The test is ready to be deployed as an experimental tool and should make an important contribution to our understanding of the human ability to judge mistuning.
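The building blocks of such a test, converting a mistuning magnitude in cents to a pitch-shift ratio, scoring a two-alternative forced-choice trial, and adjusting the shift adaptively, can be sketched as below. This is an illustrative reduction, not the published test: the staircase rule, step size, and floor are hypothetical parameters.

```python
# Minimal sketch of the components of an adaptive 2AFC mistuning test.
# The staircase rule (multiplicative 1-up/1-down), step, and floor are
# illustrative assumptions, not the calibrated procedure from the study.

def shift_ratio(cents):
    """Frequency ratio for a pitch shift given in cents (100 cents = 1 semitone)."""
    return 2.0 ** (cents / 1200.0)

def score_trial(shifted_position, response_position):
    """1 if the listener correctly picked the mistuned clip, else 0."""
    return int(shifted_position == response_position)

def next_level_cents(level, correct, step=0.8, floor=1.0):
    """Shrink the mistuning after a correct response, enlarge it after an
    error, never dropping below a floor (in cents)."""
    new = level * step if correct else level / step
    return max(new, floor)
```

In the published test, difficulty calibration came from explanatory item response modeling rather than a simple staircase; the sketch only shows the general shape of an adaptive trial loop.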
38
Lewandowska OP, Schmuckler MA. Tonal and textural influences on musical sight-reading. Psychol Res 2019; 84:1920-1945. [PMID: 31073771 DOI: 10.1007/s00426-019-01187-1]
Abstract
Two experiments investigated the impact of two structural factors-musical tonality and musical texture-on pianists' ability to play by sight without prior preparation, known as musical sight-reading. Tonality refers to the cognitive organization of tones around a central reference pitch, whereas texture refers to the organization of music in terms of the simultaneous versus successive onsets of tones as well as the number of hands (unimanual versus bimanual) involved in performance. Both experiments demonstrated that tonality and texture influenced sight-reading. For tonality, both studies found that errors in performance increased for passages with lesser perceived psychological stability (i.e., minor and atonal passages) relative to greater perceived stability (i.e., major passages). For texture, both studies found that errors in performance increased for passages that were more texturally complex, requiring two-handed versus one-handed performance, with some additional evidence that the relative simultaneity of note onsets (primarily simultaneous versus primarily successive) also influenced errors. These experiments are interpreted within a perception-action framework of music performance, highlighting influences of both top-down cognitive factors and bottom-up motoric processes on sight-reading behavior.
Affiliation(s)
- Olivia Podolak Lewandowska
- Department of Psychology, University of Toronto Scarborough, 1265 Military Trail Drive, Toronto, ON, M1C 1A4, USA
- Mark A Schmuckler
- Department of Psychology, University of Toronto Scarborough, 1265 Military Trail Drive, Toronto, ON, M1C 1A4, USA
39
Schiavio A, van der Schyff D, Biasutti M, Moran N, Parncutt R. Instrumental Technique, Expressivity, and Communication. A Qualitative Study on Learning Music in Individual and Collective Settings. Front Psychol 2019; 10:737. [PMID: 31001179 PMCID: PMC6457278 DOI: 10.3389/fpsyg.2019.00737]
Abstract
In this paper, we present a qualitative study comparing individual and collective music pedagogies from the point of view of the learner. In doing so, we discuss how the theoretical tools of embodied cognitive science (ECS) can provide adequate resources to capture the main properties of both contexts. We begin by outlining the core principles of ECS, describing how it emerged in response to the information-processing approach to mind, which dominated the cognitive sciences for the latter half of the 20th century. We then consider the orientation offered by ECS and its relevance for music education. We do this by identifying overlapping principles between three tenets of ECS, and three aspects of pedagogical practice. This results in the categories of "instrumental technique," "expressivity," and "communication," which we adopted to examine and categorize the data emerging from our study. In conclusion, we consider the results of our study in light of ECS, discussing what implications can emerge for concrete pedagogical practices in both individual and collective settings.
Affiliation(s)
- Andrea Schiavio
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Michele Biasutti
- Department of Philosophy, Sociology, Education and Applied Psychology, School of Human and Social Sciences and Cultural Heritage, University of Padova, Padova, Italy
- Nikki Moran
- Reid School of Music, The University of Edinburgh, Edinburgh, United Kingdom
- Richard Parncutt
- Centre for Systematic Musicology, University of Graz, Graz, Austria
40
Unterhofer C, Buchberger AMS, Jeleff-Wölfler O, Mansour N, Graf S. Laryngeal and Pharyngeal Movements During Inner Singing: A Cross-Sectional Study. J Voice 2019; 34:807.e1-807.e9. [PMID: 30876720 DOI: 10.1016/j.jvoice.2019.02.011]
Abstract
INTRODUCTION Laryngeal and pharyngeal activity during inner singing is discussed in the context of vocal hygiene. Inner singing is defined as imagined singing, reading music silently, and listening to vocal music. When vocal rest is prescribed, doctors, speech therapists, and voice pedagogues recommend avoiding listening to music or reading music silently, since it is suggested that inner singing unconsciously influences the glottis, and thus moves the vocal folds involuntarily. The aim of this study was to compare the degree to which involuntary laryngeal and/or pharyngeal activity occur during inner singing, inner speech, and at rest, and to evaluate if current recommendations concerning vocal hygiene are still reasonable. MATERIAL AND METHOD Thirty vocally healthy participants were examined transnasally with a flexible videoendoscope. The sample consisted of 10 nonsingers, 10 lay singers, and 10 professional singers. Participants were examined during five tasks including rest, silent reading, imagining a melody, listening to music, and reading music. Two medical doctors specializing in phoniatrics analyzed the videos both qualitatively and quantitatively. RESULTS During the endoscopic examination, the raters identified movements at the base of the tongue, the posterior and lateral pharynx wall, the arytenoid cartilage, and the vocal folds. The inner singing tasks showed significantly more laryngeal movements as well as significantly more glottal closures than the control tasks (at rest, silent reading). Pharyngeal structures did not show an increase in activity during inner singing. These findings were independent of the level of proficiency in singing. CONCLUSION When total vocal rest is prescribed, patients should also be advised to avoid music imagination. Still, further research is needed to survey in detail the actual effects of these involuntary movements during inner singing on the regeneration process of vocal fold healing.
Affiliation(s)
- Carmen Unterhofer
- Department of Otorhinolaryngology/Phoniatrics, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Anna Maria Stefanie Buchberger
- Department of Otorhinolaryngology/Phoniatrics, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Olivia Jeleff-Wölfler
- Department of Otorhinolaryngology/Phoniatrics, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Naglaa Mansour
- Department of Otorhinolaryngology, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
- Simone Graf
- Department of Otorhinolaryngology/Phoniatrics, Klinikum rechts der Isar, Technical University Munich, Munich, Germany
41
Monier F, Droit-Volet S, Coull JT. The beneficial effect of synchronized action on motor and perceptual timing in children. Dev Sci 2019; 22:e12821. [PMID: 30803107 DOI: 10.1111/desc.12821]
Abstract
We examined the role of action in motor and perceptual timing across development. Adults and children aged 5 or 8 years learned the duration of a rhythmic interval with or without concurrent action. We compared the effects of sensorimotor versus visual learning on subsequent timing behaviour in three different tasks: rhythm reproduction (Experiment 1), rhythm discrimination (Experiment 2), and interval discrimination (Experiment 3). Sensorimotor learning consisted of sensorimotor synchronization (tapping) to an isochronous visual rhythmic stimulus (ISI = 800 ms), whereas visual learning consisted of simply observing this rhythmic stimulus. Results confirmed our hypothesis that synchronized action during learning systematically benefitted subsequent timing performance, particularly for younger children. Action-related improvements in accuracy were observed for both motor and perceptual timing in 5-year-olds and for perceptual timing in the two older age groups. Benefits on perceptual timing tasks indicate that action shapes the cognitive representation of interval duration. Moreover, correlations with neuropsychological scores indicated that while timing performance in the visual learning condition depended on motor and memory capacity, sensorimotor learning facilitated an accurate representation of time independently of individual differences in motor and memory skill. Overall, our findings support the idea that action helps children to construct an independent and flexible representation of time, which leads to coupled sensorimotor coding for action and time.
Affiliation(s)
- Florie Monier
- CNRS UMR 6024, Université Clermont Auvergne, Clermont-Ferrand, France
- Jennifer T Coull
- Laboratoire de Neurosciences Cognitives (LNC) UMR 7291, Aix-Marseille Université & CNRS, Marseille, France
42
Schaffert N, Janzen TB, Mattes K, Thaut MH. A Review on the Relationship Between Sound and Movement in Sports and Rehabilitation. Front Psychol 2019; 10:244. [PMID: 30809175 PMCID: PMC6379478 DOI: 10.3389/fpsyg.2019.00244]
Abstract
The role of auditory information in perceptual-motor processes has gained increased interest in sports and psychology research in recent years. Numerous neurobiological and behavioral studies have demonstrated the close interaction between auditory and motor areas of the brain, and the importance of auditory information for movement execution, control, and learning. In applied research, artificially produced acoustic information and real-time auditory information have been implemented in sports and rehabilitation to improve motor performance in athletes, healthy individuals, and patients affected by neurological or movement disorders. However, this research is scattered both across time and scientific disciplines. The aim of this paper is to provide an overview of the interaction between movement and sound and review the current literature regarding the effect of natural movement sounds, movement sonification, and rhythmic auditory information in sports and motor rehabilitation. The focus here is threefold: firstly, we provide an overview of empirical studies using natural movement sounds and movement sonification in sports. Secondly, we review recent clinical and applied studies using rhythmic auditory information and sonification in rehabilitation, addressing in particular studies on Parkinson's disease and stroke. Thirdly, we summarize current evidence regarding the cognitive mechanisms and neural correlates underlying the processing of auditory information during movement execution and its mental representation. The current state of knowledge reviewed here provides evidence of the feasibility and effectiveness of applying auditory information to improve movement execution, control, and (re)learning in sports and motor rehabilitation. Findings also corroborate the critical role of auditory information in auditory-motor coupling during motor (re)learning and performance, suggesting that this area of clinical and applied research has a large potential that is yet to be fully explored.
Affiliation(s)
- Nina Schaffert
- Department of Movement and Training Science, Institute for Human Movement Science, University of Hamburg, Hamburg, Germany
- Thenille Braun Janzen
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
- Klaus Mattes
- Department of Movement and Training Science, Institute for Human Movement Science, University of Hamburg, Hamburg, Germany
- Michael H. Thaut
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
43
Mathias B, Gehring WJ, Palmer C. Electrical Brain Responses Reveal Sequential Constraints on Planning during Music Performance. Brain Sci 2019; 9:E25. [PMID: 30696038 PMCID: PMC6406892 DOI: 10.3390/brainsci9020025]
Abstract
Elements in speech and music unfold sequentially over time. To produce sentences and melodies quickly and accurately, individuals must plan upcoming sequence events, as well as monitor outcomes via auditory feedback. We investigated the neural correlates of sequential planning and monitoring processes by manipulating auditory feedback during music performance. Pianists performed isochronous melodies from memory at an initially cued rate while their electroencephalogram was recorded. Pitch feedback was occasionally altered to match either an immediately upcoming Near-Future pitch (next sequence event) or a more distant Far-Future pitch (two events ahead of the current event). Near-Future, but not Far-Future, altered feedback perturbed the timing of pianists' performances, suggesting greater interference of Near-Future sequential events with current planning processes. Near-Future feedback triggered a greater reduction in auditory sensory suppression (enhanced response) than Far-Future feedback, reflected in the P2 component elicited by the pitch event following the unexpected pitch change. Greater timing perturbations were associated with enhanced cortical sensory processing of the pitch event following the Near-Future altered feedback. Both types of feedback alterations elicited feedback-related negativity (FRN) and P3a potentials and amplified spectral power in the theta frequency range. These findings suggest similar constraints on producers' sequential planning to those reported in speech production.
Affiliation(s)
- Brian Mathias
- Department of Psychology, McGill University, Montreal, QC H3A 1B1, Canada
- Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany
- William J Gehring
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Caroline Palmer
- Department of Psychology, McGill University, Montreal, QC H3A 1B1, Canada
44
Gonzalez-Sanchez VE, Zelechowska A, Jensenius AR. Correspondences Between Music and Involuntary Human Micromotion During Standstill. Front Psychol 2018; 9:1382. [PMID: 30131742 PMCID: PMC6090462 DOI: 10.3389/fpsyg.2018.01382]
Abstract
The relationships between human body motion and music have been the focus of several studies characterizing the correspondence between voluntary motion and various sound features. The study of involuntary movement to music, however, is still scarce. Insight into crucial aspects of music cognition, as well as characterization of the vestibular and sensorimotor systems could be largely improved through a description of the underlying links between music and involuntary movement. This study presents an analysis aimed at quantifying involuntary body motion of a small magnitude (micromotion) during standstill, as well as assessing the correspondences between such micromotion and different sound features of the musical stimuli: pulse clarity, amplitude, and spectral centroid. A total of 71 participants were asked to stand as still as possible for 6 min while being presented with alternating silence and music stimuli: Electronic Dance Music (EDM), Classical Indian music, and Norwegian fiddle music (Telespringar). The motion of each participant's head was captured with a marker-based, infrared optical system. Differences in instantaneous position data were computed for each participant and the resulting time series were analyzed through cross-correlation to evaluate the delay between motion and musical features. The mean quantity of motion (QoM) was found to be highest across participants during the EDM condition. This musical genre is based on a clear pulse and rhythmic pattern, and it was also shown that pulse clarity was the metric that had the most significant effect in induced vertical motion across conditions. Correspondences were also found between motion and both brightness and loudness, providing some evidence of anticipation and reaction to the music. Overall, the proposed analysis techniques provide quantitative data and metrics on the correspondences between micromotion and music, with the EDM stimulus producing the clearest music-induced motion patterns. 
The analysis and results from this study are compatible with embodied music cognition and sensorimotor synchronization theories, and provide further evidence of the movement-inducing effects of groove-related music features and of human response to sound stimuli. Further work with larger data sets and a wider range of stimuli is necessary to produce conclusive findings on the subject.
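The cross-correlation step described in this abstract (evaluating the delay between a motion time series and a sound-feature time series) can be illustrated with a minimal sketch. This is not the authors' code: the function name, sampling rate, and synthetic signals are assumptions for illustration only.

```python
import numpy as np

def lag_of_max_xcorr(motion, feature, fs):
    """Lag (in seconds) at which the cross-correlation between a motion
    time series and a sound-feature time series peaks. A positive lag
    means the motion follows the feature (reaction); a negative lag
    means the motion precedes it (anticipation)."""
    m = motion - motion.mean()
    f = feature - feature.mean()
    xcorr = np.correlate(m, f, mode="full")
    lags = np.arange(-len(f) + 1, len(m))  # sample lags for 'full' mode
    return lags[np.argmax(xcorr)] / fs

# Synthetic check: a feature time series and a copy of it delayed by
# 25 samples at fs = 100 Hz, i.e., a 0.25 s "reaction" delay.
fs = 100
rng = np.random.default_rng(0)
feature = rng.standard_normal(1000)
motion = np.concatenate([np.zeros(25), feature[:-25]])
print(lag_of_max_xcorr(motion, feature, fs))  # 0.25
```

The sign convention matches the abstract's distinction between anticipation and reaction: motion that lags the sound feature yields a positive delay.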
Affiliation(s)
- Victor E Gonzalez-Sanchez
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology, University of Oslo, Oslo, Norway
- Agata Zelechowska
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology, University of Oslo, Oslo, Norway
- Alexander Refsum Jensenius
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, Department of Musicology, University of Oslo, Oslo, Norway
45
Bishop L. Collaborative Musical Creativity: How Ensembles Coordinate Spontaneity. Front Psychol 2018; 9:1285. [PMID: 30087645 PMCID: PMC6066987 DOI: 10.3389/fpsyg.2018.01285] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Accepted: 07/04/2018] [Indexed: 11/24/2022] Open
Abstract
Music performance is inherently social. Most music is performed in groups, and even soloists are subject to influence from a (real or imagined) audience. It is also inherently creative. Performers are called upon to interpret notated music, improvise new musical material, adapt to unexpected playing conditions, and accommodate technical errors. The focus of this paper is how creativity is distributed across members of a music ensemble as they perform these tasks. Some aspects of ensemble performance have been investigated extensively in recent years as part of the broader literature on joint action (e.g., the processes underlying sensorimotor synchronization). Much of this research has been done under highly controlled conditions, using tasks that generate reliable results, but capture only a small part of ensemble performance as it occurs naturalistically. Still missing from this literature is an explanation of how ensemble musicians perform in conditions that require creative interpretation, improvisation, and/or adaptation: how do they coordinate the production of something new? Current theories of creativity endorse the idea that dynamic interaction between individuals, their actions, and their social and material environments underlies creative performance. This framework is much in line with the embodied music cognition paradigm and the dynamical systems perspective on ensemble coordination. This review begins by situating the concept of collaborative musical creativity in the context of embodiment. Progress that has been made toward identifying the mechanisms that underlie collaborative creativity in music performance is then assessed. The focus is on the possible role of musical imagination in facilitating performer flexibility, and on the forms of communication that are likely to support the coordination of creative musical output. 
Next, emergence and group flow, constructs that seem to characterize ensemble performance at its peak, are considered, and some of the conditions that may encourage periods of emergence or flow are identified. Finally, it is argued that further research is needed to (1) demystify the constructs of emergence and group flow, clarifying their effects on performer experience and listener response, (2) determine how constrained musical imagination is by perceptual experience, and understand people's capacity to depart from familiar frameworks and imagine new sounds and sound structures, and (3) assess the technological developments that are supposed to facilitate or enhance musical creativity, and determine what effect they have on the processes underlying creative collaboration.
Affiliation(s)
- Laura Bishop
- Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria
46
Gelding RW, Sun Y. Commentary: Sound-making actions lead to immediate plastic changes of neuromagnetic evoked responses and induced β-band oscillations during perception. Front Neurosci 2018; 12:50. [PMID: 29467612 PMCID: PMC5808282 DOI: 10.3389/fnins.2018.00050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Accepted: 01/22/2018] [Indexed: 11/13/2022] Open
Affiliation(s)
- Rebecca W. Gelding
- Centre of Excellence in Cognition and its Disorders (ARC), Sydney, Australia
- Department of Cognitive Science, Macquarie University, Sydney, Australia
- Correspondence: Rebecca W. Gelding
- Yanan Sun
- Centre of Excellence in Cognition and its Disorders (ARC), Sydney, Australia
- Department of Psychology, Macquarie University, Sydney, Australia
47
Your move or mine? Music training and kinematic compatibility modulate synchronization with self- versus other-generated dance movement. PSYCHOLOGICAL RESEARCH 2018; 84:62-80. [PMID: 29380047 DOI: 10.1007/s00426-018-0987-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2017] [Accepted: 01/18/2018] [Indexed: 01/25/2023]
Abstract
Motor simulation has been implicated in how musicians anticipate the rhythm of another musician's action to achieve interpersonal synchronization. Here, we investigated whether similar mechanisms govern a related form of rhythmic action: dance. We examined (1) whether synchronization with visual dance stimuli was influenced by movement agency, (2) whether music training modulated simulation efficiency, and (3) which cues were relevant for simulating the dance rhythm. Participants were first recorded dancing the basic Charleston steps paced by a metronome; in a later synchronization task they tapped to the rhythm of their own point-light dance stimuli, stimuli of another participant matched in physique or in movement kinematics, and a quantitative average across individuals. Results indicated that, while there was no overall "self advantage" and synchronization was generally most stable with the least variable (averaged) stimuli, motor simulation was driven by familiar movement kinematics rather than by morphological features, as indicated by high tap-beat variability correlations. Furthermore, music training facilitated simulation, such that musicians outperformed non-musicians when synchronizing with others' movements but not with their own. These findings support action simulation as underlying synchronization in dance, linking action observation and rhythm processing in a common motor framework.
48
Barrett NF, Schulkin J. A Neurodynamic Perspective on Musical Enjoyment: The Role of Emotional Granularity. Front Psychol 2018; 8:2187. [PMID: 29321756 PMCID: PMC5733545 DOI: 10.3389/fpsyg.2017.02187] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 11/30/2017] [Indexed: 12/21/2022] Open
Affiliation(s)
- Jay Schulkin
- Department of Neuroscience, Georgetown University, Washington, DC, United States
49
Affiliation(s)
- Amee Baird
- Australian Research Council Centre of Excellence in Cognition and its Disorders and Psychology Department, Macquarie University, Sydney, Australia
- William Forde Thompson
- Australian Research Council Centre of Excellence in Cognition and its Disorders and Psychology Department, Macquarie University, Sydney, Australia
50
Colley ID, Keller PE, Halpern AR. Working memory and auditory imagery predict sensorimotor synchronisation with expressively timed music. Q J Exp Psychol (Hove) 2018; 71:1781-1796. [PMID: 28797209 DOI: 10.1080/17470218.2017.1366531] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Sensorimotor synchronisation (SMS) is prevalent and readily studied in musical settings, as most people are able to perceive and synchronise with a beat (e.g., by finger tapping). We took an individual differences approach to understanding SMS with real music characterised by expressive timing (i.e., fluctuating beat regularity). Given the dynamic nature of SMS, we hypothesised that individual differences in working memory and auditory imagery, both fluid cognitive processes, would predict SMS at two levels: (1) mean absolute asynchrony (a measure of synchronisation error) and (2) anticipatory timing (i.e., predicting, rather than reacting to, beat intervals). In Experiment 1, participants completed two working memory tasks, four auditory imagery tasks, and an SMS tapping task. Hierarchical regression models were used to predict SMS performance; results showed dissociations among imagery types in relation to mean absolute asynchrony, and evidence of a role for working memory in anticipatory timing. In Experiment 2, a new sample of participants completed an expressive timing perception task to examine the role of imagery in perception without action. Results suggest that imagery vividness is important for perceiving, and imagery control for synchronising with, irregular but ecologically valid musical time series, and that working memory is implicated in synchronising by anticipating events in the series.
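The synchronisation-error measure named in this abstract, mean absolute asynchrony, can be illustrated with a short sketch. The helper name and the toy tap/beat times below are assumptions for illustration, not taken from the study.

```python
import numpy as np

def mean_absolute_asynchrony(tap_times, beat_times):
    """Mean absolute difference (in seconds) between each tap and its
    nearest beat; lower values indicate more accurate synchronisation."""
    taps = np.asarray(tap_times)
    beats = np.asarray(beat_times)
    # For each tap, find the nearest beat, then average the offsets.
    nearest = beats[np.abs(taps[:, None] - beats[None, :]).argmin(axis=1)]
    return float(np.mean(np.abs(taps - nearest)))

# Expressively timed beats (fluctuating inter-beat intervals) and taps
# that consistently anticipate each beat by 30 ms.
beats = np.cumsum([0.50, 0.48, 0.55, 0.52, 0.47])
taps = beats - 0.03
print(mean_absolute_asynchrony(taps, beats))  # ≈ 0.03
```

Note that this measure alone cannot distinguish anticipation from reaction, which is why the study treats anticipatory timing as a separate level of analysis.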
Affiliation(s)
- Ian D Colley
- Department of Psychology, Bucknell University, Lewisburg, PA, USA; The MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Peter E Keller
- The MARCS Institute, Western Sydney University, Penrith, NSW, Australia
- Andrea R Halpern
- Department of Psychology, Bucknell University, Lewisburg, PA, USA