1
Dalla Bella S, Janaqi S, Benoit CE, Farrugia N, Bégel V, Verga L, Harding EE, Kotz SA. Unravelling individual rhythmic abilities using machine learning. Sci Rep 2024; 14:1135. PMID: 38212632; PMCID: PMC10784578; DOI: 10.1038/s41598-024-51257-7.
Abstract
Humans can easily extract the rhythm of a complex sound, like music, and move to its regular beat, like in dance. These abilities are modulated by musical training and vary significantly in untrained individuals. The causes of this variability are multidimensional and typically hard to grasp in single tasks. To date we lack a comprehensive model capturing the rhythmic fingerprints of both musicians and non-musicians. Here we harnessed machine learning to extract a parsimonious model of rhythmic abilities, based on behavioral testing (with perceptual and motor tasks) of individuals with and without formal musical training (n = 79). We demonstrate that variability in rhythmic abilities and their link with formal and informal music experience can be successfully captured by profiles including a minimal set of behavioral measures. These findings highlight that machine learning techniques can be employed successfully to distill profiles of rhythmic abilities, and ultimately shed light on individual variability and its relationship with both formal musical training and informal musical experiences.
Affiliation(s)
- Simone Dalla Bella
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montreal, Canada.
- Department of Psychology, University of Montreal, Pavillon Marie-Victorin, CP 6128 Succursale Centre-Ville, Montréal, QC, H3C 3J7, Canada.
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada.
- University of Economics and Human Sciences in Warsaw, Warsaw, Poland.
- Stefan Janaqi
- EuroMov Digital Health in Motion, IMT Mines Ales and University of Montpellier, Ales and Montpellier, France
- Charles-Etienne Benoit
- Inter-University Laboratory of Human Movement Biology, EA 7424, University Claude Bernard Lyon 1, 69 622, Villeurbanne, France
- Laura Verga
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, P.O. 616, Maastricht, 6200 MD, The Netherlands
- Eleanor E Harding
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Sonja A Kotz
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, P.O. 616, Maastricht, 6200 MD, The Netherlands.
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
2
Kirsch J, Köberlein M, Tur B, Hermann LA, Kniesburges S, Echternach M. Boys Choirs in the Pandemic: Effects of Distance and Other Factors on Spectral and Temporal Accuracy. J Voice 2023:S0892-1997(23)00292-8. PMID: 37914657; DOI: 10.1016/j.jvoice.2023.09.017.
Abstract
During the Covid-19 pandemic, choral singing was either completely prohibited or regulated with safety measures due to increased transmission risks. However, the impact of larger inter-singer spacings on performance and the educational process in boys' choirs is unclear. This study analyzed recordings of six groups of five singers each, drawn from two boys' choirs (singers aged 7-16), who sang Beethoven's Ode to Joy while standing on an arc with a 4 m radius and an inter-subject spacing of 0.5-3 m. The effects of singers' masks, distance, group age, and relative position on the timing of articulation and fundamental frequency were investigated, along with the amount, rate, and sign of pitch drift and loudness. The ANOCOVA results showed that onsets were robust to the tested factors, while errors in fundamental frequency tended to decrease with increasing age/experience. Loudness was affected by distance, mask, and relative position, increasing as spacing decreased. Understanding these influencing factors can inform recommendations for choral singing and education.
Affiliation(s)
- Jonas Kirsch
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, University Hospital, LMU Munich, Pettenkoferstraße 4a, Munich 80333, Bavaria, Germany.
- Marie Köberlein
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, University Hospital, LMU Munich, Pettenkoferstraße 4a, Munich 80333, Bavaria, Germany
- Bogac Tur
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Waldstraße 1, Erlangen 91054, Bavaria, Germany
- Laila Ava Hermann
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, University Hospital, LMU Munich, Pettenkoferstraße 4a, Munich 80333, Bavaria, Germany
- Stefan Kniesburges
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head & Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Waldstraße 1, Erlangen 91054, Bavaria, Germany
- Matthias Echternach
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, University Hospital, LMU Munich, Pettenkoferstraße 4a, Munich 80333, Bavaria, Germany
3
Mangalam M, Kelty-Stephen DG, Sommerfeld JH, Stergiou N, Likens AD. Temporal organization of stride-to-stride variations contradicts predictive models for sensorimotor control of footfalls during walking. PLoS One 2023; 18:e0290324. PMID: 37616227; PMCID: PMC10449478; DOI: 10.1371/journal.pone.0290324.
Abstract
Walking exhibits stride-to-stride variations. Given ongoing perturbations, these variations critically support continuous adaptations between the goal-directed organism and its surroundings. Here, we report that stride-to-stride variations during self-paced overground walking show cascade-like intermittency: stride intervals become uneven because stride intervals of different sizes interact and do not simply balance each other. Moreover, even when synchronizing footfalls with visual cues with variable timing of presentation, asynchrony in the timings of the cue and footfall shows cascade-like intermittency. This evidence conflicts with theories about the sensorimotor control of walking, according to which internal predictive models correct asynchrony in the timings of the cue and footfall from one stride to the next on crossing thresholds leading to the risk of falling. Hence, models of the sensorimotor control of walking must account for stride-to-stride variations beyond the constraints of threshold-dependent predictive internal models.
Affiliation(s)
- Madhur Mangalam
- Division of Biomechanics and Research Development, Department of Biomechanics, and Center for Research in Human Movement Variability, University of Nebraska at Omaha, Omaha, NE, United States of America
- Damian G. Kelty-Stephen
- Department of Psychology, State University of New York at New Paltz, New Paltz, NY, United States of America
- Joel H. Sommerfeld
- Division of Biomechanics and Research Development, Department of Biomechanics, and Center for Research in Human Movement Variability, University of Nebraska at Omaha, Omaha, NE, United States of America
- Nick Stergiou
- Division of Biomechanics and Research Development, Department of Biomechanics, and Center for Research in Human Movement Variability, University of Nebraska at Omaha, Omaha, NE, United States of America
- Department of Physical Education and Sport Science, Aristotle University, Thessaloniki, Greece
- Aaron D. Likens
- Division of Biomechanics and Research Development, Department of Biomechanics, and Center for Research in Human Movement Variability, University of Nebraska at Omaha, Omaha, NE, United States of America
4
Li Y, Ye B, Bao Y. The same phase creates a unique visual rhythm unifying moving elements in time. Psych J 2023; 12:500-506. PMID: 36916772; DOI: 10.1002/pchj.636.
Abstract
Attention can be selectively tuned to particular features at different spatial locations or objects. The deployment of attention can be guided by properties, such as color, orientation, and so forth, as guiding features. What might be such guiding features for visual stimuli under dynamic rhythmic conditions? We asked specifically what might be the parameters that attract attention when perceiving a visual rhythm. We used a visual search paradigm, in which a dynamic search display consisted of vertically "bouncing balls" with regular rhythms. The search target was defined by a unique visual rhythm (i.e., with either a shorter or longer period) among rhythmic distractors sharing an identical period. We modulated amplitudes and phases of the distractor balls systematically. The results showed that the crucial factor was the phase, not the amplitude. If the phase is violated, the target suddenly "pops out" as an "oddball," yielding an efficient parallel search. The findings indicate the essential role of phase, in conjunction with amplitude and period, in visual rhythm perception. Furthermore, moving objects with a higher frequency component were found to be more salient.
Affiliation(s)
- Yao Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Biyi Ye
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Yan Bao
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Institute of Medical Psychology, Ludwig Maximilian University Munich, Munich, Germany
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
5
Musso M, Altenmüller E, Reisert M, Hosp J, Schwarzwald R, Blank B, Horn J, Glauche V, Kaller C, Weiller C, Schumacher M. Speaking in gestures: Left dorsal and ventral frontotemporal brain systems underlie communication in conducting. Eur J Neurosci 2023; 57:324-350. PMID: 36509461; DOI: 10.1111/ejn.15883.
Abstract
Conducting constitutes a well-structured system of signs anticipating information concerning the rhythm and dynamics of a musical piece. Conductors communicate the musical tempo to the orchestra, unifying the individual instrumental voices to form an expressive musical Gestalt. In a functional magnetic resonance imaging (fMRI) experiment, 12 professional conductors and 16 instrumentalists conducted, in real time, novel pieces of diverse orchestral and rhythmic complexity. As control conditions, participants either listened to the stimuli or performed beat patterns, beating the time of a metronome or complex rhythms played by a drum. Activation of the left superior temporal gyrus (STG), supplementary and premotor cortex and Broca's pars opercularis (F3op) was shared in both musician groups and separated conducting from the other conditions. Compared to instrumentalists, conductors activated Broca's pars triangularis (F3tri) and the STG, which differentiated conducting from time beating and reflected the increase in complexity during conducting. In comparison to conductors, instrumentalists activated F3op and F3tri when distinguishing complex rhythm processing from simple rhythm processing. Fibre selection from a normative human connectome database, constructed using a global tractography approach, showed that the F3op and STG are connected via the arcuate fasciculus, whereas the F3tri and STG are connected via the extreme capsule. Like language, the anatomical framework characterising conducting gestures is located in the left dorsal system centred on F3op. This system reflected the sensorimotor mapping for structuring gestures to musical tempo. The ventral system centred on F3tri may reflect the art of conductors in setting this musical tempo for the individual orchestra's voices in a global, holistic way.
Affiliation(s)
- Mariacristina Musso
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Eckart Altenmüller
- Institute of Music Physiology and Musician's Medicine, Hannover University of Music Drama and Media, Hannover, Germany
- Marco Reisert
- Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Jonas Hosp
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Ralf Schwarzwald
- Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Bettina Blank
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Julian Horn
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Volkmar Glauche
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Christoph Kaller
- Department of Medical Physics, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Cornelius Weiller
- Department of Neurology and Clinical Neuroscience, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Martin Schumacher
- Department of Neuroradiology, Medical Center, Faculty of Medicine, University of Freiburg, Freiburg, Germany
6
Klein L, Wood EA, Bosnyak D, Trainor LJ. Follow the sound of my violin: Granger causality reflects information flow in sound. Front Hum Neurosci 2022; 16:982177. DOI: 10.3389/fnhum.2022.982177.
Abstract
Recent research into how musicians coordinate their expressive timing, phrasing, articulation, dynamics, and other stylistic characteristics during performances has highlighted the role of predictive processes, as musicians must anticipate how their partners will play in order to be together. Several studies have used information flow techniques such as Granger causality to show that upcoming movements of a musician can be predicted from immediate past movements of fellow musicians. Although musicians must move to play their instruments, a major goal of music making is to create a joint interpretation through the sounds they produce. Yet, information flow techniques have not been applied previously to examine the role that fellow musicians' sound output plays in these predictive processes and whether this changes as they learn to play together. In the present experiment, we asked professional violinists to play along with recordings of two folk pieces, each eight times in succession, and compared the amplitude envelopes of their performances with those of the recordings using Granger causality to measure information flow and cross-correlation to measure similarity and synchronization. In line with our hypotheses, our measure of information flow was higher from the recordings to the performances than vice versa, and decreased as the violinists became more familiar with the recordings over trials. This decline in information flow is consistent with a gradual shift from relying on auditory cues to predict the recording to relying on an internally-based (learned) model built through repetition. There was also evidence that violinists became more synchronized with the recordings over trials. These results shed light on the planning and learning processes involved in the aligning of expressive intentions in group music performance and lay the groundwork for the application of Granger causality to investigate information flow through sound in more complex musical interactions.
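The Granger-causal logic this abstract relies on, past values of one signal improving prediction of another, can be sketched with a minimal lag-1 variance-reduction measure. This is a didactic simplification, not the authors' pipeline; the function name, single lag, and log-ratio statistic are illustrative choices:

```python
import numpy as np

def granger_gain(x, y):
    """Lag-1 Granger-style gain from x to y: log ratio of residual
    variances when predicting y[t] from y[t-1] alone (restricted model)
    versus y[t-1] plus x[t-1] (full model). Positive values mean the
    past of x carries information about the future of y."""
    Y = y[1:]
    ones = np.ones_like(Y)
    Xr = np.column_stack([ones, y[:-1]])          # restricted: own past only
    Xf = np.column_stack([ones, y[:-1], x[:-1]])  # full: add partner's past
    res_r = Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]
    res_f = Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))
```

Applied to amplitude envelopes, an asymmetry such as `granger_gain(recording, performance) > granger_gain(performance, recording)` would correspond to the direction of information flow the study reports.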
7
The Generation of Piano Music Using Deep Learning Aided by Robotic Technology. Comput Intell Neurosci 2022; 2022:8336616. PMID: 36262599; PMCID: PMC9576368; DOI: 10.1155/2022/8336616.
Abstract
In order to improve the accuracy and precision of robot-assisted music generation, this study analyzes the application of deep learning to piano music generation. Firstly, based on the basic concepts of robotics and deep learning, the advantages of long short-term memory (LSTM) networks are introduced and applied to piano music generation, and dropout coefficients are used to optimize the LSTM. Secondly, various parameters of the algorithm are determined, including the effects of the number of iterations and of neurons in the hidden layers on the generated piano music. Finally, the spectrograms of the generated music sequences are analyzed to illustrate the accuracy and rationality of the algorithm, and are compared with the spectrograms produced by the traditional restricted Boltzmann machine (RBM) music generation algorithm. The results show that (1) with a dropout coefficient of 0.7, the function converges faster and the experimental results are better; (2) with 6000 iterations, the error between the generated music sequence and the original music is smallest; (3) the network trains best with 4 hidden layers of 1024 neurons each; (4) compared with the traditional RBM algorithm, the sampling frequency distribution of the LSTM-generated music is more consistent with the original sample. These results show that the network performs well in music generation and can serve as a reference for automatic music generation.
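The two building blocks the abstract names, an LSTM cell and dropout, can be sketched in plain NumPy. This is a didactic single-cell forward step, not the paper's implementation, and whether "0.7" denotes the keep or the drop probability depends on the framework's convention (it is treated as the keep probability here):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step. Input gate i, forget gate f, output gate o,
    and candidate g are computed from the current input x and previous
    hidden state h; c is the persistent cell memory."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o, g = np.split(W @ x + U @ h + b, 4)   # stacked pre-activations
    c_new = sig(f) * c + sig(i) * np.tanh(g)      # forget old memory, admit new
    h_new = sig(o) * np.tanh(c_new)               # gated output
    return h_new, c_new

def dropout(h, keep=0.7, rng=None):
    """Inverted dropout: keep each unit with probability `keep` and
    rescale so the expected activation is unchanged."""
    rng = rng or np.random.default_rng()
    mask = rng.random(h.shape) < keep
    return h * mask / keep
```

During training, `dropout` would be applied to `h_new` between layers; at generation time it is simply omitted.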
8
Miyata K, Yamamoto T, Fukunaga M, Sugawara S, Sadato N. Neural correlates with individual differences in temporal prediction during auditory-motor synchronization. Cereb Cortex Commun 2022; 3:tgac014. PMID: 35529518; PMCID: PMC9070830; DOI: 10.1093/texcom/tgac014.
Abstract
Temporal prediction ability is vital for movement synchronization with external rhythmic stimuli (sensorimotor synchronization); however, little is known regarding individual variations in temporal prediction ability and its neural correlates. We determined the underlying neural correlates of temporal prediction and individual variations during auditory-motor synchronization. We hypothesized that the non-primary motor cortices, such as the premotor cortex and supplementary motor area, are the key brain regions that correlate with individual variation in prediction ability. Functional magnetic resonance imaging (7T) was performed for 18 healthy volunteers who tapped to 3 types of auditory metronome beats: isochronous, tempo change, and random. The prediction ability was evaluated using prediction/tracking ratios that were computed based on cross-correlations between tap timing and pacing events. Participants with a higher prediction/tracking ratio (i.e. stronger predictive tendency) tapped to metronome beats more accurately and precisely. The prediction/tracking ratio was positively correlated with the activity in the bilateral dorsal premotor cortex (PMd), suggesting that the bilateral PMd explains the individual variation in prediction ability. These results indicate that the PMd is involved in generating a model for temporal prediction of auditory rhythm patterns and its activity would reflect model accuracy, which is critical for accurate and precise sensorimotor synchronization.
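One common formulation of the prediction/tracking ratio in the tempo-changing tapping literature divides the lag-0 cross-correlation between inter-tap intervals and pacing inter-onset intervals by the lag-1 cross-correlation. Whether this matches the authors' exact computation is an assumption; the logic can be sketched as:

```python
import numpy as np

def lagged_corr(a, b, lag):
    """Pearson correlation between a[t] and b[t - lag]."""
    if lag > 0:
        a, b = a[lag:], b[:-lag]
    elif lag < 0:
        a, b = a[:lag], b[-lag:]
    return float(np.corrcoef(a, b)[0, 1])

def prediction_tracking_ratio(iti, ioi):
    """Lag-0 over lag-1 cross-correlation of inter-tap intervals (iti)
    with pacing inter-onset intervals (ioi). Ratios above 1 suggest a
    predictive tendency (taps co-vary with the current stimulus
    interval); below 1, a tracking tendency (taps echo the previous
    interval)."""
    return lagged_corr(iti, ioi, 0) / lagged_corr(iti, ioi, 1)
```

A tapper who anticipates a sinusoidally changing tempo yields a ratio above 1; one who merely reproduces the previous interval yields a ratio below 1.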
Affiliation(s)
- Kohei Miyata
- Graduate School of Arts and Sciences, The University of Tokyo
- Department of System Neuroscience, National Institute for Physiological Sciences
- Tetsuya Yamamoto
- Department of System Neuroscience, National Institute for Physiological Sciences
- Masaki Fukunaga
- Department of System Neuroscience, National Institute for Physiological Sciences
- Sho Sugawara
- Department of System Neuroscience, National Institute for Physiological Sciences
- Norihiro Sadato
- Department of System Neuroscience, National Institute for Physiological Sciences
9
Huang Y, Zhong S, Zhan L, Sun M, Wu X. Sustained visual attention improves visuomotor timing. Psychol Res 2022; 86:2059-2066. PMID: 35048198; DOI: 10.1007/s00426-021-01629-9.
Abstract
Relative to audition, vision is considered much less trustworthy in sensorimotor timing, such as synchronizing finger movements with a temporally regular sequence. Visuomotor timing requires maintaining attention over time, yet sustained visual attention may not be well maintained in conventional visuomotor timing tasks, in which the visual stimuli consist of a briefly presented flash followed by a long blank period. In the present study, the potential attentional lapses due to the disappearance of the flash were carefully controlled: in Experiment 1 by changing the color of the flash instead of having it disappear, and in Experiment 2 by adding a continuously presented fixation point serving as an external attentional cue when the flash disappeared. Visuomotor timing performance improved in both experiments. The finding suggests a role of enhanced sustained visual attention in improving visuomotor timing, by which vision too could be a trustworthy modality for processing temporal information in sensorimotor interactions.
Affiliation(s)
- Yingyu Huang
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China
- Shengqi Zhong
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China
- Liying Zhan
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China
- Mi Sun
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China
- Xiang Wu
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China.
10
Lenc T, Merchant H, Keller PE, Honing H, Varlet M, Nozaradan S. Mapping between sound, brain and behaviour: four-level framework for understanding rhythm processing in humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200325. PMID: 34420381; PMCID: PMC8380981; DOI: 10.1098/rstb.2020.0325.
Abstract
Humans perceive and spontaneously move to one or several levels of periodic pulses (a meter, for short) when listening to musical rhythm, even when the sensory input does not provide prominent periodic cues to their temporal location. Here, we review a multi-levelled framework for understanding how external rhythmic inputs are mapped onto internally represented metric pulses. This mapping is studied using an approach to quantify and directly compare representations of metric pulses in signals corresponding to sensory inputs, neural activity and behaviour (typically body movement). Based on this approach, recent empirical evidence can be drawn together into a conceptual framework that unpacks the phenomenon of meter into four levels. Each level highlights specific functional processes that critically enable and shape the mapping from sensory input to internal meter. We discuss the nature, constraints and neural substrates of these processes, starting with fundamental mechanisms investigated in macaque monkeys that enable basic forms of mapping between simple rhythmic stimuli and internally represented metric pulse. We propose that human evolution has gradually built a robust and flexible system upon these fundamental processes, allowing more complex levels of mapping to emerge in musical behaviours. This approach opens promising avenues to understand the many facets of rhythmic behaviours across individuals and species. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Tomas Lenc
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
- Hugo Merchant
- Instituto de Neurobiologia, UNAM, Campus Juriquilla, Querétaro 76230, Mexico
- Peter E. Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Henkjan Honing
- Amsterdam Brain and Cognition (ABC), Institute for Logic, Language and Computation (ILLC), University of Amsterdam, Amsterdam 1090 GE, The Netherlands
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- School of Psychology, Western Sydney University, Penrith, New South Wales 2751, Australia
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
11
Colley I, Varlet M, MacRitchie J, Keller PE. The influence of a conductor and co-performer on auditory-motor synchronisation, temporal prediction, and ancillary entrainment in a musical drumming task. Hum Mov Sci 2020; 72:102653. PMID: 32721371; DOI: 10.1016/j.humov.2020.102653.
Abstract
Interpersonal coordination is exemplified in ensemble musicians, who coordinate their actions deliberately in order to achieve temporal synchronisation in their performances. However, musicians also move parts of their bodies unintentionally or spontaneously, sometimes in ways that do not directly produce sound from their instruments. Musicians' movements-intentional or otherwise-provide visual signals to co-performers, which might facilitate temporal synchronisation. In large ensembles, a conductor also provides a visual cue, which has been shown to enhance synchronisation. In the present study, we tested how visual cues from a co-performer and a conductor affect processes of temporal anticipation, synchronisation, and ancillary movements in a sample of primarily non-musicians. We used a dyadic synchronisation drumming task, in which paired participants drummed to the beat of tempo-changing music. We manipulated visual access between partners and a virtual conductor. Results showed that the conductor improved synchronisation with the music, but synchrony with the music did not improve when partners could see each other. Temporal prediction was improved when partners saw the conductor, but not each other. Ancillary movements of the head were more synchronised between partners when they could see each other, and greater ancillary synchrony at beat-related frequencies of movement was associated with greater drumming synchrony. These results suggest that compatible audio-visual cues can improve intentional synchronisation, that ancillary movements are affected by seeing a partner, and that attended vs. incidental visual cues thus have partially dissociable effects on temporal coordination during joint action.
Affiliation(s)
- Ian Colley
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia.
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia; School of Psychology, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
- Jennifer MacRitchie
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia; School of Humanities and Communication Arts, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
- Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW 2751, Australia
12
Simonet M, Meziane HB, Runswick OR, North JS, Williams AM, Barral J, Roca A. The modulation of event-related alpha rhythm during the time course of anticipation. Sci Rep 2019; 9:18226. PMID: 31796879; PMCID: PMC6890640; DOI: 10.1038/s41598-019-54763-1.
Abstract
Anticipation is the ability to accurately predict future actions or events ahead of the act itself. When attempting to anticipate, researchers have identified that at least two broad sources of information are used: contextual information relating to the situation in question; and biological motion from postural cues. However, the neural correlates associated with the processing of these different sources of information across groups varying in expertise have yet to be examined empirically. We compared anticipation performance and electrophysiological activity in groups of expert (n = 12) and novice (n = 15) performers using a video-based task. Participants made anticipation judgements after being presented information under three conditions: contextual information only; kinematic information only; and both sources of information combined. The experts responded more accurately across all three conditions. Stronger alpha event-related desynchronization over occipital and frontocentral sites occurred in experts compared to the novices when anticipating. The experts relied on stronger preparatory attentional mechanisms when they processed contextual information. When kinematic information was available, the domain-specific motor representations built up over many years of practice likely underpinned expertise. Our findings have implications for those interested in identifying and subsequently, enhancing the neural mechanisms involved in anticipation.
Collapse
Affiliation(s)
- Marie Simonet
- Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland.
- Hadj Boumediene Meziane
- Institute of Psychology, Faculty of Social and Political Sciences, University of Lausanne, Lausanne, Switzerland.
- Jamie Stephen North
- Expert Performance and Skill Acquisition Research Group, Faculty of Sport, Health, and Applied Science, St Mary's University, Twickenham, London, UK.
- Andrew Mark Williams
- Department of Health, Kinesiology, and Recreation, University of Utah, Salt Lake City, UT, USA.
- Jérôme Barral
- Institute of Sport Sciences, University of Lausanne, Lausanne, Switzerland.
- André Roca
- Expert Performance and Skill Acquisition Research Group, Faculty of Sport, Health, and Applied Science, St Mary's University, Twickenham, London, UK.
Collapse
|
13
|
Takehana A, Uehara T, Sakaguchi Y. Audiovisual synchrony perception in observing human motion to music. PLoS One 2019; 14:e0221584. [PMID: 31454393 PMCID: PMC6711538 DOI: 10.1371/journal.pone.0221584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Accepted: 08/09/2019] [Indexed: 11/18/2022] Open
Abstract
To examine how individuals perceive synchrony between music and body motion, we investigated the characteristics of synchrony perception during observation of a Japanese Radio Calisthenics routine. We used the method of constant stimuli to present video clips of an individual performing an exercise routine. We generated stimuli with a range of temporal shifts between the visual and auditory streams, and asked participants to make synchrony judgments. We then examined which movement-feature points agreed with music beats when the participants perceived synchrony. We found that extremities (e.g., hands and feet) reached the movement endpoint or moved through the lowest position at music beats associated with synchrony. Movement onsets never agreed with music beats. To investigate whether visual information about the feature points was necessary for synchrony perception, we conducted a second experiment in which only limited portions of the video clips were presented to the participants. Participants consistently judged synchrony even when the video image did not contain the critical feature points, suggesting that a prediction mechanism contributes to synchrony perception. To clarify the meaning of these feature points with respect to synchrony perception, we examined the temporal relationship between the motion of body parts and the ground reaction force (GRF) of exercise performers, which reflects the total force acting on the performer. Interestingly, vertical GRF showed local peaks consistently synchronized with music beats for most exercises, with timing closely correlated with that of the movement feature points. This result suggests that synchrony perception in humans is based on some global variable anticipated from visual information, rather than on the feature points found in the motion of individual body parts. In summary, the present results indicate that synchrony perception during observation of human motion to music depends largely on spatiotemporal prediction of the performer's motion.
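Synchrony judgments collected with the method of constant stimuli at a range of audiovisual lags are typically summarized by fitting a tuning curve whose peak gives the point of subjective simultaneity and whose width gives the synchrony window. A minimal sketch under that convention; the Gaussian model, function names, and lag values are assumptions, not the authors' analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lag, peak, mu, sigma):
    """Proportion of 'synchronous' judgments as a function of A/V lag."""
    return peak * np.exp(-(lag - mu) ** 2 / (2 * sigma ** 2))

def fit_synchrony_window(lags_ms, p_sync):
    """Fit a Gaussian to synchrony-judgment proportions obtained with
    the method of constant stimuli. Returns the point of subjective
    simultaneity (mu, in ms) and the synchrony-window width (sigma)."""
    # Initialize near the empirical peak to help convergence
    p0 = [p_sync.max(), lags_ms[np.argmax(p_sync)], 100.0]
    (peak, mu, sigma), _ = curve_fit(gaussian, lags_ms, p_sync, p0=p0)
    return mu, abs(sigma)
```

A positive fitted mu would indicate that participants tolerate (or prefer) the audio stream lagging the video, a common finding in audiovisual simultaneity work.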
Collapse
Affiliation(s)
- Akira Takehana
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan.
- Tsukasa Uehara
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan.
- Yutaka Sakaguchi
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan.
- Research Center for Performance Art Science, University of Electro-Communications, Chofu, Tokyo, Japan.
Collapse
|
14
|
Gu L, Huang Y, Wu X. Advantage of audition over vision in a perceptual timing task but not in a sensorimotor timing task. Psychol Res 2019; 84:2046-2056. [PMID: 31190091 DOI: 10.1007/s00426-019-01204-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2019] [Accepted: 05/24/2019] [Indexed: 12/28/2022]
Abstract
Timing is essential for various behaviors, and relative to vision, audition is considered to be specialized for temporal processing. The present study employed a sensorimotor timing task that required tapping in synchrony with a temporally regular sequence and a perceptual timing task that required detecting a timing deviation within a temporally regular sequence. The sequence was composed of auditory tones, visual flashes, or a visual bouncing ball. In the sensorimotor task, timing performance (synchronization stability) for the bouncing ball was much greater than that for flashes and comparable to that for tones. In the perceptual task, perceptual timing performance for the bouncing ball was greater than that for flashes but poorer than that for tones. These results suggest that the bouncing ball facilitates both perceptual and sensorimotor processing of temporal information. Despite this facilitation, however, audition remains superior to vision in the perceptual detection of timing.
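Synchronization stability in paced tapping tasks is often quantified with circular statistics: each tap-tone asynchrony is mapped to a phase angle, and the resultant vector length summarizes phase locking. A minimal sketch under that assumption; the function name and interface are illustrative and not taken from this study:

```python
import numpy as np

def sync_stability(tap_times, tone_times, ioi):
    """Synchronization stability as the resultant vector length R of
    the circular distribution of tap-tone asynchronies.
    R near 1 = highly stable (phase-locked) tapping; R near 0 = no
    consistent phase relationship. `ioi` is the inter-onset interval
    of the pacing sequence, in the same units as the time stamps."""
    asyn = np.asarray(tap_times) - np.asarray(tone_times)
    phases = 2 * np.pi * asyn / ioi   # express each asynchrony as a phase angle
    return np.abs(np.mean(np.exp(1j * phases)))
```

A constant negative asynchrony (the typical anticipatory tapping bias) still yields R = 1, which is why R measures stability rather than accuracy.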
Collapse
Affiliation(s)
- Li Gu
- State Key Laboratory of Ophthalmology, Guangdong Provincial Key Lab of Ophthalmology and Visual Science, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, China.
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China.
- Yingyu Huang
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China.
- Xiang Wu
- Department of Psychology, Sun Yat-Sen University, 132 Waihuan East Road, Higher Education Mega Center, Guangzhou, 510006, Guangdong, China.
Collapse
|
15
|
Multi-layer adaptation of group coordination in musical ensembles. Sci Rep 2019; 9:5854. [PMID: 30971783 PMCID: PMC6458170 DOI: 10.1038/s41598-019-42395-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2018] [Accepted: 03/22/2019] [Indexed: 11/21/2022] Open
Abstract
Group coordination relies on the efficient integration of multimodal sources of information. This study examines complex non-verbal communication by recording movement kinematics from the conductors and two sections of violinists of an orchestra adapting to a perturbation of their normal pattern of sensorimotor communication (a half-turn rotation of the first violinists’ section). We show that different coordination signals are channeled through ancillary movements (head kinematics) and instrumental movements (bow kinematics). Each of these affects coordination at either the inter-group or the intra-group level, thereby tapping into different modes of cooperation: complementary versus imitative coordination. Our study suggests that the co-regulation of group behavior is based on the exchange of information across several layers, each tuned to carry specific coordinative signals. Multi-layer sensorimotor communication may be the key that musicians, and humans more generally, use to communicate flexibly with each other in interactive sensorimotor tasks.
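One common way to index the kind of kinematic coupling described above is lagged cross-correlation between two players' movement time series, where the lag of the correlation peak hints at who leads. This is an illustrative sketch only; the study's actual analyses are not reproduced here, and the function name, sampling rate, and lag range are assumptions:

```python
import numpy as np

def max_lagged_corr(x, y, fs, max_lag_s=0.5):
    """Peak cross-correlation between two z-scored kinematic time
    series within +/- max_lag_s seconds. Returns (lag_s, r); a
    positive lag means x leads y (y is a delayed copy of x)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    n = len(x)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # For lag l, correlate x[:n-l] with y[l:] (and symmetrically for l < 0)
    corrs = [np.corrcoef(x[max(0, -l): n - max(0, l)],
                         y[max(0, l): n - max(0, -l)])[0, 1] for l in lags]
    i = int(np.argmax(corrs))
    return lags[i] / fs, corrs[i]
```

In ensemble settings this is usually computed over short sliding windows, since leader-follower relations can reverse over the course of a piece.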
Collapse
|