1. Háden GP, Bouwer FL, Honing H, Winkler I. Beat processing in newborn infants cannot be explained by statistical learning based on transition probabilities. Cognition 2024; 243:105670. PMID: 38016227. DOI: 10.1016/j.cognition.2023.105670.
Abstract
Newborn infants have been shown to extract temporal regularities from sound sequences, both by learning regular sequential properties and by extracting periodicity in the input, commonly referred to as a regular pulse or the 'beat'. However, these two types of regularities are often indistinguishable in isochronous sequences, as both statistical learning and beat perception can be elicited by the regular alternation of accented and unaccented sounds. Here, we manipulated the isochrony of sound sequences in order to disentangle statistical learning from beat perception in sleeping newborn infants in an EEG experiment, as previously done in adults and macaque monkeys. We used a binary accented sequence that induces a beat when presented with isochronous timing, but not when presented with randomly jittered timing. We compared mismatch responses to infrequent deviants falling on either accented or unaccented (i.e., odd and even) positions. Results showed a clear difference between metrical positions in the isochronous sequence, but not in the equivalent jittered sequence. This suggests that beat processing is present in newborns. Despite previous evidence for statistical learning in newborns, the effects of this ability were not detected in the jittered condition. These results show that statistical learning by itself does not fully explain beat processing in newborn infants.
Affiliation(s)
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Magyar tudósok körútja 2, H-1117 Budapest, Hungary; Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Magyar tudósok körútja 2, 1117 Budapest, Hungary.
- Fleur L Bouwer
- Music Cognition Group, Institute for Logic, Language, and Computation, University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, the Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands; Department of Psychology, Brain & Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands; Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, 2333 AK Leiden, the Netherlands.
- Henkjan Honing
- Music Cognition Group, Institute for Logic, Language, and Computation, University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, the Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands.
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Magyar tudósok körútja 2, H-1117 Budapest, Hungary.
2. Bouwer FL, Háden GP, Honing H. Probing Beat Perception with Event-Related Potentials (ERPs) in Human Adults, Newborns, and Nonhuman Primates. Adv Exp Med Biol 2024; 1455:227-256. PMID: 38918355. DOI: 10.1007/978-3-031-60183-5_13.
Abstract
The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.
Affiliation(s)
- Fleur L Bouwer
- Cognitive Psychology Unit, Institute of Psychology, Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands.
- Department of Psychology, Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands.
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, Budapest, Hungary.
- Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary.
- Henkjan Honing
- Music Cognition group (MCG), Institute for Logic, Language and Computation (ILLC), Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, The Netherlands.
3. Fram NR, Berger J. Syncopation as Probabilistic Expectation: Conceptual, Computational, and Experimental Evidence. Cogn Sci 2023; 47:e13390. PMID: 38043104. DOI: 10.1111/cogs.13390.
Abstract
Definitions of syncopation share two characteristics: the presence of a meter or analogous hierarchical rhythmic structure and a displacement or contradiction of that structure. These attributes are translated in terms of a Bayesian theory of syncopation, where the syncopation of a rhythm is inferred based on a hierarchical structure that is, in turn, learned from the ongoing musical stimulus. Several experiments tested its simplest possible implementation, which assumes equally weighted priors across meters and independence of auditory events, and which can be decomposed into two terms representing note density and deviation from a metric hierarchy. A computational simulation demonstrated that extant measures of syncopation fall into two distinct factors analogous to the terms in the simple Bayesian model. Next, a series of behavioral experiments found that perceived syncopation is significantly related to both terms, offering support for the general Bayesian construction of syncopation. However, we also found that the prior expectations associated with different metric structures are not equal across meters and that there is an interaction between density and hierarchical deviation, implying that auditory events are not independent from each other. Together, these findings provide evidence that syncopation is a manifestation of a form of temporal expectation that can be directly represented in Bayesian terms and offer a complementary, feature-driven approach to recent Bayesian models of temporal prediction.
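The two-term decomposition described in this abstract can be illustrated numerically. The sketch below is not the authors' fitted model: the 4/4 hierarchy weights, the normalization, and the equal weighting of the density and deviation terms are all illustrative assumptions.

```python
# Toy syncopation score built from two terms: note density and deviation
# from a metric hierarchy. All weights below are illustrative assumptions.

# Salience of each 16th-note position in one 4/4 bar (higher = metrically stronger).
METRIC_WEIGHTS = [4, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

def toy_syncopation(onsets, weights=METRIC_WEIGHTS):
    """onsets: list of 0/1 flags, one per grid position (1 = note onset)."""
    assert len(onsets) == len(weights)
    density = sum(onsets) / len(onsets)
    max_w = max(weights)
    # Deviation term: onsets on weak (low-salience) positions count more.
    deviation = sum(max_w - w for o, w in zip(onsets, weights) if o)
    deviation /= max_w * len(onsets)  # normalize
    return density + deviation  # equal weighting of terms is an arbitrary choice

on_beat = [1 if w >= 3 else 0 for w in METRIC_WEIGHTS]   # onsets on strong positions
off_beat = [0 if w >= 3 else 1 for w in METRIC_WEIGHTS]  # onsets on weak positions
```

With this construction, a pattern whose onsets avoid the strong positions scores higher than one aligned with them, mirroring the intuition that syncopation contradicts the metric hierarchy.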
Affiliation(s)
- Noah R Fram
- Center for Computer Research in Music and Acoustics, Department of Music, Stanford University
- Department of Otolaryngology, Vanderbilt University Medical Center
- Jonathan Berger
- Center for Computer Research in Music and Acoustics, Department of Music, Stanford University
4. Whitton SA, Jiang F. Sensorimotor synchronization with visual, auditory, and tactile modalities. Psychol Res 2023; 87:2204-2217. PMID: 36773102. PMCID: PMC10567517. DOI: 10.1007/s00426-023-01801-3.
Abstract
While it is well known that humans are highly responsive to rhythm, the factors that influence our ability to synchronize remain unclear. In the current study, we examined how stimulus modality and rhythmic deviation, along with the synchronizer's level of musicality, impacted sensorimotor synchronization (SMS). Utilizing a finger-tapping task and three sensory modalities (visual, auditory, and tactile), we manipulated rhythmic deviation by varying the temporal position, intensity, and availability of cues across four deviation levels. Additionally, to determine our participants' musical familiarity and aptitude, we administered the Goldsmiths Musical Sophistication Index (Gold-MSI) questionnaire. We found that SMS to external rhythmic stimuli was significantly more precise for auditory and tactile than for visual sequences. Further, we found SMS consistency significantly decreased in all modalities with increased rhythmic deviation, suggesting rhythmic deviation directly relates to SMS difficulty. Moreover, a significant correlation was found between Gold-MSI scores and SMS consistency in the most rhythmically deviant level, such that the higher one's general musical sophistication score, the greater one's SMS ability. This held for all three modalities. Combined, these findings suggest that rhythmic synchronization performance is affected not only by the modality and rhythmic deviation of the stimuli but also by the general musical sophistication of the synchronizer.
Affiliation(s)
- Fang Jiang
- Department of Psychology, University of Nevada, Reno, USA
5. Middleton J, Hakulinen J, Tiitinen K, Hella J, Keskinen T, Huuskonen P, Culver J, Linna J, Turunen M, Ziat M, Raisamo R. Data-to-music sonification and user engagement. Front Big Data 2023; 6:1206081. PMID: 37636320. PMCID: PMC10448511. DOI: 10.3389/fdata.2023.1206081.
Abstract
The process of transforming data into sounds for auditory display provides unique user experiences and new perspectives for analyzing and interpreting data. A study of transforming data into sounds based on musical elements, known as data-to-music sonification, reveals how musical characteristics can serve analytical purposes with enhanced user engagement. An existing user engagement scale was applied to measure engagement levels in three conditions: melodic, rhythmic, and chordal contexts. This article reports findings from a user engagement study with musical traits and outlines the benefits and challenges of using musical characteristics in sonifications. The results can guide the design of future sonifications of multivariable data.
Affiliation(s)
- Jonathan Middleton
- Department of Fine and Performing Arts, Eastern Washington University, Cheney, WA, United States
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Jaakko Hakulinen
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Katariina Tiitinen
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Juho Hella
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Tuuli Keskinen
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Pertti Huuskonen
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Jeffrey Culver
- School of Business, Eastern Washington University, Spokane, WA, United States
- Juhani Linna
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Markku Turunen
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
- Mounia Ziat
- Department of Information Design and Corporate Communication, Bentley University, Waltham, MA, United States
- Roope Raisamo
- Tampere Unit for Computer-Human Interaction (TAUCHI), Tampere University, Tampere, Finland
6. Yu CY, Cabildo A, Grahn JA, Vanden Bosch der Nederlanden CM. Perceived rhythmic regularity is greater for song than speech: examining acoustic correlates of rhythmic regularity in speech and song. Front Psychol 2023; 14:1167003. PMID: 37303916. PMCID: PMC10250601. DOI: 10.3389/fpsyg.2023.1167003.
Abstract
Rhythm is a key feature of music and language, but the way rhythm unfolds within each domain differs. Music induces perception of a beat, a regular repeating pulse spaced by roughly equal durations, whereas speech does not have the same isochronous framework. Although rhythmic regularity is a defining feature of music and language, it is difficult to derive acoustic indices of the differences in rhythmic regularity between domains. The current study examined whether participants could provide subjective ratings of rhythmic regularity for acoustically matched (syllable-, tempo-, and contour-matched) and acoustically unmatched (varying in tempo, syllable number, semantics, and contour) exemplars of speech and song. We used subjective ratings to index the presence or absence of an underlying beat and correlated ratings with stimulus features to identify acoustic metrics of regularity. Experiment 1 highlighted that ratings based on the term "rhythmic regularity" did not result in consistent definitions of regularity across participants, with opposite ratings for participants who adopted a beat-based definition (song greater than speech), a normal-prosody definition (speech greater than song), or an unclear definition (no difference). Experiment 2 defined rhythmic regularity as how easy it would be to tap or clap to the utterances. Participants rated song as easier to clap or tap to than speech for both acoustically matched and unmatched datasets. Subjective regularity ratings from Experiment 2 illustrated that stimuli with longer syllable durations and with less spectral flux were rated as more rhythmically regular across domains. Our findings demonstrate that rhythmic regularity distinguishes speech from song and several key acoustic features can be used to predict listeners' perception of rhythmic regularity within and across domains as well.
Affiliation(s)
- Chu Yi Yu
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Anne Cabildo
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
- Jessica A. Grahn
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Christina M. Vanden Bosch der Nederlanden
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
7. Wen XQ, Zhang J, Ren J. Sustained effect of auditory entrainment on sequential tapping: The role of movement path complexity. Hum Mov Sci 2023; 89:103099. PMID: 37209521. DOI: 10.1016/j.humov.2023.103099.
Abstract
The effects of auditory-motor entrainment have generally been investigated with periodic movements. Previous research has focused on how auditory-motor entrainment is influenced by the temporal structure of rhythms. The present study aimed to investigate whether auditory entrainment improved timing performance of sequential movements with varied path structures, and whether path complexity would affect any possible sustained effect of auditory entrainment. We also investigated whether the sustained effect was moderated by hearing single- vs. multiple-pitch audio prompts. Thirty participants were enrolled to perform a sequential finger-tapping task with discrete targets, in which the algebraic ratio relation of path lengths was manipulated as path complexity. Participants completed three stages per trial: initiation (to introduce the path sequence), entrainment (tapping along with the auditory and visual cues), and timekeeping (repeating the sequence without cues). We found that timing improved after auditory entrainment, with smaller mean asynchronies and absolute interval errors. Only interval accuracy performance during timekeeping and entrainment was affected by path complexity. Moreover, no clear difference was observed between the rhythm sets in terms of single vs. multiple pitches. In conclusion, we found that phase and interval duration accuracy of predefined isochronous sequential movements with varied path complexity can be improved by auditory entrainment, and that auditory entrainment affects our performance beyond the actual presence of the auditory cue.
Affiliation(s)
- Xiao-Qian Wen
- School of Psychology, Shanghai University of Sport, Shanghai 200438, China; Heilongjiang Shooting, Cycling and Archery Sports Management Center, Harbin, Heilongjiang 150049, China
- Jun Zhang
- School of Kinesiology, Shanghai University of Sport, Shanghai 200438, China; School of Sport Communication and Information Technology, Shandong Sport University, Jinan, Shandong, China.
- Jie Ren
- China Table Tennis College, Shanghai University of Sport, Shanghai 200438, China.
8. Gustavson DE, Coleman PL, Wang Y, Nitin R, Petty LE, Bush CT, Mosing MA, Wesseldijk LW, Ullén F, Below JE, Cox NJ, Gordon RL. Exploring the genetics of rhythmic perception and musical engagement in the Vanderbilt Online Musicality Study. Ann N Y Acad Sci 2023; 1521:140-154. PMID: 36718543. PMCID: PMC10038917. DOI: 10.1111/nyas.14964.
Abstract
Uncovering the genetic underpinnings of musical ability and engagement is a foundational step for exploring their wide-ranging associations with cognition, health, and neurodevelopment. Prior studies have focused on using twin and family designs, demonstrating moderate heritability of musical phenotypes. The current study used genome-wide complex trait analysis and polygenic score (PGS) approaches utilizing genotype data to examine genetic influences on two musicality traits (rhythmic perception and music engagement) in N = 1792 unrelated adults in the Vanderbilt Online Musicality Study. Meta-analyzed heritability estimates (including a replication sample of Swedish individuals) were 31% for rhythmic perception and 12% for self-reported music engagement. A PGS derived from a recent study on beat synchronization ability predicted both rhythmic perception (β = 0.11) and music engagement (β = 0.19) in our sample, suggesting that genetic influences underlying self-reported beat synchronization ability also influence individuals' rhythmic discrimination aptitude and the degree to which they engage in music. Cross-trait analyses revealed a modest contribution of PGSs from several nonmusical traits (from the cognitive, personality, and circadian chronotype domains) to individual differences in musicality (β = -0.06 to 0.07). This work sheds light on the complex relationship between the genetic architecture of musical rhythm processing, beat synchronization, music engagement, and other nonmusical traits.
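The polygenic score (PGS) analyses described above amount to regressing a standardized trait on a standardized score, so the slope is the reported standardized beta. A minimal sketch on simulated data follows; the effect size 0.11 is borrowed from the abstract purely as an illustration, and nothing here uses the study's data or genotype pipeline.

```python
# Toy sketch: standardized beta from regressing a simulated trait on a
# simulated polygenic score. With both variables z-scored, the OLS slope
# equals the Pearson correlation.
import random

def standardize(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - m) / sd for x in xs]

def standardized_beta(pgs, trait):
    """OLS slope for trait ~ pgs after z-scoring both variables."""
    z_p, z_t = standardize(pgs), standardize(trait)
    return sum(p * t for p, t in zip(z_p, z_t)) / len(z_p)

random.seed(1)
pgs = [random.gauss(0, 1) for _ in range(2000)]
# Simulate a trait with a small genetic signal (true beta = 0.11, illustrative).
trait = [0.11 * p + random.gauss(0, 1) for p in pgs]
beta = standardized_beta(pgs, trait)
```

In a real analysis the PGS itself is built from genome-wide association weights before this regression step; the sketch starts after that point.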
Affiliation(s)
- Daniel E Gustavson
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, Colorado, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Peyton L Coleman
- School of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Youjia Wang
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- College of Medicine, University of Illinois at Chicago, Chicago, Illinois, USA
- Rachana Nitin
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Vanderbilt Brain Institute, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Lauren E Petty
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Catherine T Bush
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Miriam A Mosing
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Melbourne School of Psychological Sciences, Faculty of Medicine, Dentistry, and Health Sciences, University of Melbourne, Melbourne, Victoria, Australia
- Laura W Wesseldijk
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Melbourne School of Psychological Sciences, Faculty of Medicine, Dentistry, and Health Sciences, University of Melbourne, Melbourne, Victoria, Australia
- Department of Psychiatry, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Fredrik Ullén
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Jennifer E Below
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Nancy J Cox
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Reyna L Gordon
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Psychology, Vanderbilt University, Nashville, Tennessee, USA
9. Adiasto K, van Hooff MLM, Beckers DGJ, Geurts SAE. The sound of stress recovery: an exploratory study of self-selected music listening after stress. BMC Psychol 2023; 11:40. PMID: 36765393. PMCID: PMC9912599. DOI: 10.1186/s40359-023-01066-w.
Abstract
BACKGROUND Empirical support for the notion that music listening is beneficial for stress recovery is inconclusive, potentially due to the methodological diversity with which the effects of music on stress recovery have been investigated. Little is presently known about which recovery activities are chosen by individuals for the purpose of stress recovery, and whether audio feature commonalities exist between different songs that are selected by individuals for the purpose of stress recovery. The current pre-registered study investigated whether audio feature commonalities can be extracted from self-selected songs for the purpose of stress recovery. Furthermore, the present study exploratorily examined the relationship between audio features and participants' desired recovery-related emotions while listening and after listening to self-selected music. METHODS Participants (N = 470) completed an online survey in which they described what music they would listen to unwind from a hypothetical stressful event. Data analysis was conducted using a split-sample procedure. A k-medoid cluster analysis was conducted to identify audio feature commonalities between self-selected songs. Multiple regression analyses were conducted to examine the relationship between audio features and desired recovery emotions. RESULTS Participants valued music listening as a recovery activity to a similar extent as watching TV, sleeping, or talking to a significant other. Cluster analyses revealed that self-selected songs for the purpose of stress recovery can be grouped into two distinct categories. The two categories of songs shared similarities in key, loudness, speechiness, acousticness, instrumentalness, liveness, musical valence, tempo, duration, and time signature, and were distinguished by danceability, energy, and mode. No audio features were significantly associated with participants' desired recovery emotions. 
CONCLUSIONS Although a comprehensive portrait of the relationship between audio features and stress recovery still warrants further research, the present study provides a starting point for future enquiries into the nuanced effects of musical audio features on stress recovery.
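The k-medoid grouping described in the Methods can be illustrated on toy data. The feature names echo the audio features listed above, but the values, the two-feature space, and the exhaustive medoid search are assumptions made for the sake of a small runnable example, not the study's analysis.

```python
# Toy sketch: grouping songs into two clusters by audio features with a
# k-medoid-style clustering. All values below are made up.
from itertools import combinations

songs = {
    "calm_1":   (0.30, 0.20),  # (danceability, energy)
    "calm_2":   (0.35, 0.25),
    "calm_3":   (0.28, 0.18),
    "upbeat_1": (0.80, 0.85),
    "upbeat_2": (0.75, 0.90),
    "upbeat_3": (0.82, 0.80),
}

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def k_medoids(points, k):
    """Exhaustive PAM for tiny data: pick the k medoids minimizing total cost."""
    names = list(points)
    def cost(medoids):
        return sum(min(dist(points[n], points[m]) for m in medoids) for n in names)
    return min(combinations(names, k), key=cost)

medoids = k_medoids(songs, 2)
clusters = {m: [n for n in songs
                if min(medoids, key=lambda m2: dist(songs[n], songs[m2])) == m]
            for m in medoids}
```

Unlike k-means, the medoids are actual songs, which is convenient when each cluster should be summarized by a representative track; real k-medoid implementations use iterative swap heuristics rather than this brute-force search.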
Affiliation(s)
- Krisna Adiasto
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands.
- Madelon L. M. van Hooff
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands; Faculty of Psychology, Open Universiteit, Heerlen, The Netherlands
- Debby G. J. Beckers
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
- Sabine A. E. Geurts
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
10. Perceptual grouping in complex rhythmic patterns. Psychol Res 2022; 87:1293-1305. PMID: 35972580. DOI: 10.1007/s00426-022-01717-4.
Abstract
Perception of simple temporal patterns has been shown to rely on accentuations in terms of intensity, pitch, or timbre, but also on grouping according to runs of the same events (intervals between successive sounds or light flashes) or significant gaps between them (Garner in The processing of information and structure. Lawrence Erlbaum, 1974; Preusser et al. in Am J Psychol 83(2):151-170, 1970; Royer and Garner in Percept Psychophys 1(1):41-47, 1966; Royer and Garner in Percept Psychophys 7(2):115-120, 1970; Yu et al. in Atten Percept Psychophys 77(8):2728-2739, 2015). Here we investigate whether the run and gap principles can also account for participants' perceived start of complex rhythmic patterns. We also investigated the role of participants' musical training. Sixteen novices and 16 amateur musicians listened to rhythmic patterns and indicated perceived starting points by a single tap with a drumstick on electronic pads. Auditory patterns contained prominent gaps, runs, or a combination of the two for target intervals. We systematically varied task complexity in terms of the target durations of intervals constituting the patterns and overall tempos. Overall, run and gap principles proved to be useful grouping principles accounting for a large proportion (59.2%) of the selected starting positions, underlining the universal relevance of these principles. However, the grouping principles were less successful in predicting the perceived start of a rhythmic pattern than in previous studies, indicating that additional grouping principles must be at play. Predictive power of the grouping principles varied depending on the structure of rhythmic patterns. For rhythmic patterns including longer intervals (i.e., longer gaps) the gap principle alone or in combination with the run principle showed the strongest predictive power. Novices and amateur musicians were similar in their usage of grouping principles, suggesting that the underlying principles might be equally at the disposition of performers and listeners.
11. Mårup SH, Møller C, Vuust P. Coordination of voice, hands and feet in rhythm and beat performance. Sci Rep 2022; 12:8046. PMID: 35577815. PMCID: PMC9110414. DOI: 10.1038/s41598-022-11783-8.
Abstract
Interlimb coordination is critical to the successful performance of simple activities in everyday life and it depends on precisely timed perception–action coupling. This is particularly true in music-making, where performers often use body-movements to keep the beat while playing more complex rhythmic patterns. In the current study, we used a musical rhythmic paradigm of simultaneous rhythm/beat performance to examine how interlimb coordination between voice, hands and feet is influenced by the inherent figure-ground relationship between rhythm and beat. Sixty right-handed participants—professional musicians, amateur musicians and non-musicians—performed three short rhythmic patterns while keeping the underlying beat, using 12 different combinations of voice, hands and feet. Results revealed a bodily hierarchy with five levels (1) left foot, (2) right foot, (3) left hand, (4) right hand, (5) voice, i.e., more precise task execution was observed when the rhythm was performed with an effector occupying a higher level in the hierarchy than the effector keeping the beat. The notion of a bodily hierarchy implies that the role assigned to the different effectors is key to successful interlimb coordination: the performance level of a specific effector combination differs considerably, depending on which effector holds the supporting role of the beat and which effector holds the conducting role of the rhythm. Although performance generally increased with expertise, the evidence of the hierarchy was consistent in all three expertise groups. The effects of expertise further highlight how perception influences action. We discuss the possibility that musicians’ more robust metrical prediction models make it easier for musicians to attenuate prediction errors than non-musicians. Overall, the study suggests a comprehensive bodily hierarchy, showing how interlimb coordination is influenced by hierarchical principles in both perception and action.
Collapse
|
12
|
Abstract
This brief statement revisits earlier observations on what makes web-based experiments, and especially citizen science using engaging games, an attractive alternative to laboratory-based setups. It argues that web-based experimentation is a full-grown alternative to traditional laboratory-based experiments, especially in the field of music cognition, where sampling bias is a common problem and large amounts of empirical data are needed to characterize individual variability.
Collapse
Affiliation(s)
- Henkjan Honing
- Music Cognition Group, Institute for Logic, Language and Computation, University of Amsterdam, The Netherlands
| |
Collapse
|
13
|
Bouwer FL, Nityananda V, Rouse AA, ten Cate C. Rhythmic abilities in humans and non-human animals: a review and recommendations from a methodological perspective. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200335. [PMID: 34420380 PMCID: PMC8380979 DOI: 10.1098/rstb.2020.0335] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/27/2021] [Indexed: 12/15/2022] Open
Abstract
Rhythmic behaviour is ubiquitous in both human and non-human animals, but it is unclear whether the cognitive mechanisms underlying the specific rhythmic behaviours observed in different species are related. Laboratory experiments combined with highly controlled stimuli and tasks can be very effective in probing the cognitive architecture underlying rhythmic abilities. Rhythmic abilities have been examined in the laboratory with explicit and implicit perception tasks, and with production tasks, such as sensorimotor synchronization, with stimuli ranging from isochronous sequences of artificial sounds to human music. Here, we provide an overview of experimental findings on rhythmic abilities in human and non-human animals, while critically considering the wide variety of paradigms used. We identify several gaps in what is known about rhythmic abilities. Many bird species have been tested on rhythm perception, but research on rhythm production abilities in the same birds is lacking. By contrast, research in mammals has primarily focused on rhythm production rather than perception. Many experiments also do not differentiate between possible components of rhythmic abilities, such as processing of single temporal intervals, rhythmic patterns, a regular beat or hierarchical metrical structures. For future research, we suggest a careful choice of paradigm to aid cross-species comparisons, and a critical consideration of the multifaceted abilities that underlie rhythmic behaviour. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Collapse
Affiliation(s)
- Fleur L. Bouwer
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT Amsterdam, The Netherlands
- Institute for Logic, Language and Computation (ILLC), University of Amsterdam, PO Box 94242, 1090 CE Amsterdam, The Netherlands
- Department of Psychology, University of Amsterdam, PO Box 15900, 1001 NK Amsterdam, The Netherlands
| | - Vivek Nityananda
- Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Henry Wellcome Building, Framlington Place, Newcastle upon Tyne NE2 4HH, UK
| | - Andrew A. Rouse
- Department of Psychology, Tufts University, Medford, MA 02155, USA
| | - Carel ten Cate
- Institute of Biology Leiden (IBL), Leiden Institute for Brain and Cognition (LIBC), Leiden University, PO Box 9505, 2300 RA Leiden, The Netherlands
| |
Collapse
|
14
|
Guo S, Peng K, Ding R, Zhou J, Liu Y, He Y, Liu Y, Li K, Liu P, Luo C, Lu J, Yao D. Chinese and Western Musical Training Impacts the Circuit in Auditory and Reward Systems. Front Neurosci 2021; 15:663015. [PMID: 34366771 PMCID: PMC8334552 DOI: 10.3389/fnins.2021.663015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 06/29/2021] [Indexed: 11/13/2022] Open
Abstract
Previous studies have provided evidence for brain plasticity effects of musical training; however, how expertise in the music style induced by Chinese or Western musical training affects neuroplasticity and reward responses has received little attention, especially for participants of Chinese origin. In this work, 16 musicians trained in the Western music style (Western-trained musicians) and 18 musicians trained in the Chinese music style (Chinese-trained musicians) were recruited as the musician groups, while 15 non-musicians were recruited as the control group. Using a paradigm that consisted of listening to Chinese and Western music during functional magnetic resonance imaging (fMRI), we found that Chinese-trained musicians activated the bilateral superior temporal gyrus (STG) when listening to music, while Western-trained musicians activated the left STG. In addition, when listening to Chinese-style music, Chinese-trained musicians showed stronger functional connectivity in the circuit linking the auditory and reward systems than Western-trained musicians; the pattern was reversed when listening to Western-style music. Interestingly, this circuit appears lateralized toward the right STG in Chinese-trained musicians and toward the left STG in Western-trained musicians. The influence of different music styles on experienced musicians is thus reflected in the functional activity of, and connectivity between, the auditory and reward systems. This outcome indicates that training in the Chinese or Western music style shapes the strategies musicians use when listening to music; musical characteristics such as rhythm, melody and cultural attributes play an important role in this process. These findings, which provide evidence for functional neuroplasticity based on musical training, can enrich our insights into the musical brain.
Collapse
Affiliation(s)
- Sijia Guo
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China.,Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Ke Peng
- School of Music Education, Xinghai Conservatory of Music, Guangzhou, China
| | - Rui Ding
- Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Junchen Zhou
- Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Yan Liu
- Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Yao He
- Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Yuhong Liu
- Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Ke Li
- Department of Imaging, The 306th Hospital of the People's Liberation Army, Beijing, China
| | - Pei Liu
- Department of Music Education, China Conservatory of Music, Beijing, China
| | - Cheng Luo
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China.,Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Jing Lu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China.,Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| | - Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China.,Center for Information in Medicine, School of Life Sciences and Technology, University of Electronic Science and Technology of China, Chengdu, China
| |
Collapse
|
15
|
Benedetto A, Baud-Bovy G. Tapping Force Encodes Metrical Aspects of Rhythm. Front Hum Neurosci 2021; 15:633956. [PMID: 33986651 PMCID: PMC8111927 DOI: 10.3389/fnhum.2021.633956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2020] [Accepted: 03/26/2021] [Indexed: 11/13/2022] Open
Abstract
Humans possess the ability to extract highly organized perceptual structures from sequences of temporal stimuli. For instance, we can organize specific rhythmical patterns into hierarchical, or metrical, systems. Despite the evidence of a fundamental influence of the motor system in achieving this skill, few studies have attempted to investigate the organization of our motor representation of rhythm. To this aim, we studied, in musicians and non-musicians, the ability to perceive and reproduce different rhythms. In a first experiment, participants performed a temporal order-judgment task for rhythmical sequences presented via the auditory or tactile modality. In a second experiment, they were asked to reproduce the same rhythmic sequences while their tapping force and timing were recorded. We demonstrate that tapping force encodes the metrical aspect of the rhythm, and that the strength of this coding correlates with the individual's perceptual accuracy. We suggest that the similarity between the perceptual and tapping-force organization indicates a common representation of rhythm, shared between the perceptual and motor systems.
Collapse
Affiliation(s)
| | - Gabriel Baud-Bovy
- Robotics, Brain and Cognitive Science Unit, Italian Institute of Technology, Genoa, Italy
- Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy
| |
Collapse
|
16
|
Samuels B, Grahn J, Henry MJ, MacDougall-Shackleton SA. European starlings (Sturnus vulgaris) discriminate rhythms by rate, not temporal patterns. J Acoust Soc Am 2021; 149:2546. [PMID: 33940875 DOI: 10.1121/10.0004215] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 03/17/2021] [Indexed: 06/12/2023]
Abstract
Humans can perceive a regular psychological pulse in music known as the beat. The evolutionary origins and neural mechanisms underlying this ability are hypothetically linked to imitative vocal learning, a rare trait found only in some species of mammals and birds. Beat perception has been demonstrated in vocal-learning parrots but not in songbirds. We trained European starlings (Sturnus vulgaris) on two sound discriminations to investigate their perception of the beat and temporal structure in rhythmic patterns. First, we trained birds on a two-choice discrimination between rhythmic patterns of tones that contain or lack a regular beat. Despite receiving extensive feedback, the starlings were unable to distinguish these two pattern types. Next, we probed the temporal cues that starlings use for discriminating rhythms in general. We trained birds to discriminate a baseline set of isochronous and triplet tone sequences. On occasional probe trials, we presented transformations of the baseline patterns. The starlings' responses to the probes suggest that they relied on absolute temporal features to sort the sounds into "fast" and "slow", and otherwise ignored the rhythmic patterns that were present. Our results suggest that starlings attend to local features in rhythms and are less sensitive to the global temporal organization.
Collapse
Affiliation(s)
- Brendon Samuels
- Brain and Mind Institute, Department of Psychology, University of Western Ontario, 1151 Richmond Street, London, Ontario, N6A 5K7, Canada
| | - Jessica Grahn
- Brain and Mind Institute, Department of Psychology, University of Western Ontario, 1151 Richmond Street, London, Ontario, N6A 5K7, Canada
| | - Molly J Henry
- Brain and Mind Institute, Department of Psychology, University of Western Ontario, 1151 Richmond Street, London, Ontario, N6A 5K7, Canada
| | | |
Collapse
|
17
|
Morris PO, Hope E, Foulsham T, Mills JP. Dance, rhythm, and autism spectrum disorder: An explorative study. Arts Psychother 2021. [DOI: 10.1016/j.aip.2020.101755] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
18
|
Bouvet CJ, Bardy BG, Keller PE, Dalla Bella S, Nozaradan S, Varlet M. Accent-induced Modulation of Neural and Movement Patterns during Spontaneous Synchronization to Auditory Rhythms. J Cogn Neurosci 2020; 32:2260-2271. [DOI: 10.1162/jocn_a_01605] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Human rhythmic movements spontaneously synchronize with auditory rhythms at various frequency ratios. The emergence of more complex relationships—for instance, frequency ratios of 1:2 and 1:3—is enhanced by adding a congruent accentuation pattern (binary for 1:2 and ternary for 1:3), resulting in a 1:1 movement–accentuation relationship. However, this benefit of accentuation on movement synchronization appears to be stronger for the ternary pattern than for the binary pattern. Here, we investigated whether this difference in accent-induced movement synchronization may be related to a difference in the neural tracking of these accentuation profiles. Accented and control unaccented auditory sequences were presented to participants who concurrently produced finger taps at their preferred frequency, and spontaneous movement synchronization was measured. EEG was recorded during passive listening to each auditory sequence. The results revealed that enhanced movement synchronization with ternary accentuation was accompanied by enhanced neural tracking of this pattern. Larger EEG responses at the accentuation frequency were found for the ternary pattern compared with the binary pattern. Moreover, the amplitude of accent-induced EEG responses was positively correlated with the magnitude of accent-induced movement synchronization across participants. Altogether, these findings show that the dynamics of spontaneous auditory–motor synchronization is strongly driven by the multi-time-scale sensory processing of auditory rhythms, highlighting the importance of considering neural responses to rhythmic sequences for understanding and enhancing synchronization performance.
Collapse
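The frequency-tagging logic in this abstract (larger EEG responses at the accentuation frequency for the ternary pattern) can be illustrated with a toy spectral analysis. This is a minimal sketch, not the study's analysis pipeline; the 2.4 Hz event rate, sampling rate, and accent weighting below are illustrative assumptions.

```python
import numpy as np

def amplitude_at(signal, fs, freq):
    """Amplitude-spectrum value of `signal` at `freq` (assumes freq lands on an FFT bin)."""
    amps = np.abs(np.fft.rfft(signal)) * 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return amps[np.argmin(np.abs(freqs - freq))]

# Toy "neural tracking" signal: an impulse train at a 2.4 Hz event rate,
# with every third event accented (ternary pattern -> 0.8 Hz accent rate).
fs, period, n_events = 240, 100, 72     # 100 samples = 1/2.4 s at 240 Hz
accented = np.zeros(n_events * period)
accented[::period] = 1.0                # energy at every event
accented[::3 * period] += 0.5           # extra energy on accented events
unaccented = np.zeros_like(accented)
unaccented[::period] = 1.0              # control: no accentuation

print(amplitude_at(accented, fs, 2.4))    # peak at the event rate
print(amplitude_at(accented, fs, 0.8))    # peak at the accentuation rate
print(amplitude_at(unaccented, fs, 0.8))  # no accents -> no 0.8 Hz response
```

The accent-induced spectral peak at 0.8 Hz, absent in the unaccented control, is the kind of response the EEG analysis quantifies at the accentuation frequency.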
Affiliation(s)
| | | | | | - Simone Dalla Bella
- Université Montpellier
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- University of Montreal
- University of Economics and Human Sciences in Warsaw
| | - Sylvie Nozaradan
- Western Sydney University
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Université Catholique de Louvain
| | | |
Collapse
|
19
|
Pesnot Lerousseau J, Hidalgo C, Schön D. Musical Training for Auditory Rehabilitation in Hearing Loss. J Clin Med 2020; 9:jcm9041058. [PMID: 32276390 PMCID: PMC7230165 DOI: 10.3390/jcm9041058] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 04/02/2020] [Accepted: 04/06/2020] [Indexed: 01/17/2023] Open
Abstract
Despite the overall success of cochlear implantation, language outcomes remain suboptimal and subject to large inter-individual variability. Early auditory rehabilitation techniques have mostly focused on low-level sensory abilities. However, a new body of literature suggests that cognitive operations are critical for auditory perception remediation. We argue in this paper that musical training is a particularly appealing candidate for such therapies, as it involves highly relevant cognitive abilities, such as temporal predictions, hierarchical processing, and auditory-motor interactions. We review recent studies demonstrating that music can enhance both language perception and production at multiple levels, from syllable processing to turn-taking in natural conversation.
Collapse
|
20
|
Bouwer FL, Honing H, Slagter HA. Beat-based and Memory-based Temporal Expectations in Rhythm: Similar Perceptual Effects, Different Underlying Mechanisms. J Cogn Neurosci 2020; 32:1221-1241. [PMID: 31933432 DOI: 10.1162/jocn_a_01529] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is still known about how different types of temporal expectations are neurally implemented and affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Behaviorally and using EEG, we looked at the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.
Collapse
|
21
|
Task-set control, chunking, and hierarchical timing in rhythm production. Psychol Res 2019; 83:1685-1702. [PMID: 29909429 DOI: 10.1007/s00426-018-1038-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2017] [Accepted: 06/12/2018] [Indexed: 10/14/2022]
Abstract
We investigated task-set control processes and chunking in 16 novices and 16 amateur musicians, who produced unimanual rhythms in three experimental conditions: low-level timing tasks required isochronous tapping at constant target durations; sequencing tasks consisted of individual rhythmic patterns comprising multiple target durations; and the task-set control condition required alternations between two rhythmic patterns. According to our hierarchical timing control model, the conditions differed in the task-set control demands necessary to provide rhythm programs for the sequencing of individual intervals. Transitions at predicted chunk boundaries were marked by increased frequencies of sequence errors, relative lengthening of intervals preceding the switch to a new rhythm chunk, and increased variability in intervals immediately following a switch. Amateur musicians showed superior timing (less variability) in complex rhythm tasks. Moreover, they made fewer sequence errors than novices at set-switch points, with error patterns suggesting that they relied on larger chunks than novices did. Our findings elucidate the time course of task-reconfiguration processes in rhythm production and the role of chunking in the context of musical skill.
Collapse
|
22
|
Bouvet CJ, Varlet M, Dalla Bella S, Keller PE, Bardy BG. Accent-induced stabilization of spontaneous auditory-motor synchronization. Psychol Res 2019; 84:2196-2209. [PMID: 31203454 DOI: 10.1007/s00426-019-01208-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Accepted: 06/03/2019] [Indexed: 01/12/2023]
Abstract
Humans spontaneously synchronize their movements with external auditory rhythms such as a metronome or music. Although such synchronization preferentially occurs toward a simple 1:1 movement-sound frequency ratio, the parameters facilitating spontaneous synchronization to more complex frequency ratios remain largely unclear. The present study investigates the dynamics of spontaneous auditory-motor synchronization at a range of frequency ratios between movement and sound, and examines the benefit of simple accentuation patterns for the emergence and stability of synchronization. Participants performed index finger oscillations at their preferred tempo while listening to a metronome presented at either their preferred tempo, or twice or three times faster (frequency ratios of 1:1, 1:2 or 1:3) with different patterns of accentuation (unaccented, binary or ternary accented), and no instruction to synchronize. Participants' movements were spontaneously entrained to the auditory stimuli in the three different frequency ratio conditions. Moreover, the emergence and stability of the modes of coordination were influenced by the interaction between frequency ratio and pattern of accentuation. Coherent patterns, such as a 1:3 frequency ratio supported by ternary accentuation, facilitated the emergence and stability of the corresponding mode of coordination. Furthermore, ternary accentuation induced a greater gain in stability for the corresponding mode of coordination than was observed with binary accentuation. Together, these findings demonstrate the importance of matching accentuation pattern and movement tempo for enhanced synchronization, opening new perspectives for stabilizing complex rhythmic motor behaviors, such as running.
Collapse
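A standard way to quantify the stability of the 1:2 or 1:3 coordination modes described above is an n:m phase-locking index, the mean resultant length of the generalized relative phase between the two oscillators. The sketch below is illustrative only; the tempi, noise level, and function names are assumptions, not parameters from the study.

```python
import numpy as np

def phase_locking(theta_slow, theta_fast, ratio):
    """n:m phase-locking value for a slow oscillator entrained at 1:`ratio`
    to a fast one (e.g. one finger oscillation per `ratio` metronome clicks).
    Returns the mean resultant length of ratio*theta_slow - theta_fast, in [0, 1]."""
    rel = ratio * theta_slow - theta_fast
    return float(np.abs(np.mean(np.exp(1j * rel))))

# Toy demo: finger movement at 0.6 Hz against a 1.8 Hz metronome (a 1:3
# ratio), with small phase noise, vs. an uncoupled, slightly detuned mover.
rng = np.random.default_rng(2)
t = np.linspace(0, 60, 6000)
theta_sound = 2 * np.pi * 1.8 * t
theta_locked = 2 * np.pi * 0.6 * t + 0.1 * rng.normal(size=t.size)
theta_drift = 2 * np.pi * 0.57 * t      # detuned, no coupling: phase drifts

print(phase_locking(theta_locked, theta_sound, 3))  # near 1: stable 1:3 mode
print(phase_locking(theta_drift, theta_sound, 3))   # low: no stable mode
```

A stable coordination mode keeps the generalized relative phase concentrated, so the index stays near 1; drifting, uncoupled movement spreads it around the circle.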
Affiliation(s)
- Cécile J Bouvet
- EuroMov, Univ. Montpellier, Montpellier, France.
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia.
| | - Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- School of Social Sciences and Psychology, Western Sydney University, Penrith, Australia
| | - Simone Dalla Bella
- EuroMov, Univ. Montpellier, Montpellier, France
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Department of Cognitive Psychology, WSFiZ in Warsaw, Warsaw, Poland
| | - Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
| | | |
Collapse
|
23
|
Cameron DJ, Zioga I, Lindsen JP, Pearce MT, Wiggins GA, Potter K, Bhattacharya J. Neural entrainment is associated with subjective groove and complexity for performed but not mechanical musical rhythms. Exp Brain Res 2019; 237:1981-1991. [PMID: 31152188 PMCID: PMC6647194 DOI: 10.1007/s00221-019-05557-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2018] [Accepted: 05/07/2019] [Indexed: 11/29/2022]
Abstract
Both movement and neural activity in humans can be entrained by the regularities of an external stimulus, such as the beat of musical rhythms. Neural entrainment to auditory rhythms supports temporal perception, and is enhanced by selective attention and by hierarchical temporal structure imposed on rhythms. However, it is not known how neural entrainment to rhythms is related to the subjective experience of groove (the desire to move along with music or rhythm), the perception of a regular beat, the perception of complexity, and the experience of pleasure. In two experiments, we used musical rhythms (from Steve Reich’s Clapping Music) to investigate whether rhythms that are performed by humans (with naturally variable timing) and rhythms that are mechanical (with precise timing) elicit differences in (1) neural entrainment, as measured by inter-trial phase coherence, and (2) subjective ratings of the complexity, preference, groove, and beat strength of rhythms. We also combined results from the two experiments to investigate relationships between neural entrainment and subjective perception of musical rhythms. We found that mechanical rhythms elicited a greater degree of neural entrainment than performed rhythms, likely due to the greater temporal precision in the stimulus, and the two types elicited different ratings only for some individual rhythms. Neural entrainment to performed rhythms, but not to mechanical ones, correlated with subjective desire to move and subjective complexity. These data, therefore, suggest multiple interacting influences on neural entrainment to rhythms, from low-level stimulus properties to high-level cognition and perception.
Collapse
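Inter-trial phase coherence (ITPC), the entrainment measure named in this abstract, is the resultant length of per-trial phase estimates at a frequency of interest: 1 when every trial carries the same phase, near 0 when phases are random. A minimal sketch on made-up data; the beat frequency, noise level, and trial count are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def inter_trial_phase_coherence(trials, fs, freq):
    """ITPC at a single frequency for single-channel epochs.

    trials: array of shape (n_trials, n_samples). Returns a value in [0, 1].
    """
    n_trials, n_samples = trials.shape
    t = np.arange(n_samples) / fs
    # Phase of each trial at `freq` via projection onto a complex sinusoid
    # (one DFT bin when freq is a multiple of fs / n_samples).
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coeffs / np.abs(coeffs)        # unit phasors, one per trial
    return float(np.abs(phases.mean()))     # resultant vector length

# Toy demo: trials phase-locked to a 2 Hz "beat" vs. random-phase trials.
rng = np.random.default_rng(0)
fs, dur, n = 250, 4.0, 40
t = np.arange(int(fs * dur)) / fs
locked = np.array([np.sin(2 * np.pi * 2 * t) + rng.normal(0, 1, t.size)
                   for _ in range(n)])
jittered = np.array([np.sin(2 * np.pi * 2 * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(0, 1, t.size) for _ in range(n)])
print(inter_trial_phase_coherence(locked, fs, 2.0))    # close to 1
print(inter_trial_phase_coherence(jittered, fs, 2.0))  # low (random phases)
```

Temporal imprecision in a stimulus smears the per-trial phases, which is why the mechanically precise rhythms in the study can yield higher ITPC than the performed ones.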
Affiliation(s)
- Daniel J Cameron
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada.
| | - Ioanna Zioga
- School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
| | - Job P Lindsen
- Department of Psychology, Goldsmiths, University of London, London, UK
| | - Marcus T Pearce
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Geraint A Wiggins
- AI Lab, Vrije Universiteit Brussel, Brussels, Belgium
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Keith Potter
- Department of Music, Goldsmiths, University of London, London, UK
| | | |
Collapse
|
24
|
Music synchronizes brainwaves across listeners with strong effects of repetition, familiarity and training. Sci Rep 2019; 9:3576. [PMID: 30837633 PMCID: PMC6401073 DOI: 10.1038/s41598-019-40254-w] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2018] [Accepted: 02/12/2019] [Indexed: 11/08/2022] Open
Abstract
Music tends to be highly repetitive, both in terms of musical structure and in terms of listening behavior, yet little is known about how engagement changes with repeated exposure. Here we postulate that engagement with music affects the inter-subject correlation of brain responses during listening. We predict that repeated exposure to music will affect engagement and thus inter-subject correlation. Across repeated exposures to instrumental music, inter-subject correlation decreased for music written in a familiar style. Participants with formal musical training showed more inter-subject correlation, and sustained it across exposures to music in an unfamiliar style. This distinguishes music from other domains, where repetition has consistently been shown to decrease inter-subject correlation. Overall, the study suggests that listener engagement tends to decrease across repeated exposures of familiar music, but that unfamiliar musical styles can sustain an audience's interest, in particular in individuals with some musical training. Future work needs to validate the link proposed here between music engagement and inter-subject correlation of brain responses during listening.
Collapse
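Inter-subject correlation of brain responses, the engagement proxy used in this study, reduces in its simplest form to the mean pairwise Pearson correlation of listeners' response time courses (the published analysis uses a correlated-components variant of the same idea). A toy sketch on synthetic data; the component weights and sample counts are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def inter_subject_correlation(responses):
    """Mean pairwise Pearson correlation across listeners.

    responses: (n_subjects, n_samples) array, one response time course each.
    """
    corrs = [np.corrcoef(responses[i], responses[j])[0, 1]
             for i, j in combinations(range(len(responses)), 2)]
    return float(np.mean(corrs))

# Toy demo: a shared stimulus-driven component plus individual noise.
rng = np.random.default_rng(1)
shared = rng.normal(size=2000)
engaged = np.array([shared + 0.5 * rng.normal(size=2000)
                    for _ in range(10)])
disengaged = np.array([0.2 * shared + rng.normal(size=2000)
                       for _ in range(10)])
print(inter_subject_correlation(engaged))     # high: responses track stimulus
print(inter_subject_correlation(disengaged))  # low: mostly idiosyncratic
```

Only the stimulus-driven component is common across listeners, so a drop in engagement (a weaker shared component) shows up directly as a drop in inter-subject correlation, as across repeated exposures in the study.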
|