1
Uemura M, Katagiri Y, Imai E, Kawahara Y, Otani Y, Ichinose T, Kondo K, Kowa H. Dorsal Anterior Cingulate Cortex Coordinates Contextual Mental Imagery for Single-Beat Manipulation during Rhythmic Sensorimotor Synchronization. Brain Sci 2024; 14:757. [PMID: 39199452; PMCID: PMC11352649; DOI: 10.3390/brainsci14080757]
Abstract
Flexible pulse-by-pulse regulation of sensorimotor synchronization is crucial for voluntarily showing rhythmic behaviors synchronously with external cueing; however, the underpinning neurophysiological mechanisms remain unclear. We hypothesized that the dorsal anterior cingulate cortex (dACC) plays a key role by coordinating both proactive and reactive motor outcomes based on contextual mental imagery. To test our hypothesis, a missing-oddball task in finger-tapping paradigms was conducted in 33 healthy young volunteers. The dynamic properties of the dACC were evaluated by event-related deep-brain activity (ER-DBA), supported by event-related potential (ERP) analysis and behavioral evaluation based on signal detection theory. We found that ER-DBA activation/deactivation reflected a strategic choice of motor control modality in accordance with mental imagery. Reverse ERP traces, as omission responses, confirmed that the imagery was contextual. We found that mental imagery was updated only by environmental changes via perceptual evidence and response-based abductive reasoning. Moreover, stable on-pulse tapping was achievable by maintaining proactive control while creating an imagery of syncopated rhythms from simple beat trains, whereas accuracy was degraded with frequent erroneous tapping for missing pulses. We conclude that the dACC voluntarily regulates rhythmic sensorimotor synchronization by utilizing contextual mental imagery based on experience and by creating novel rhythms.
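The behavioral evaluation in this study relies on signal detection theory. As a minimal sketch (not the authors' analysis code; the hit and false-alarm rates below are illustrative), the standard sensitivity index d' separates detection sensitivity from response bias:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Example: 84% hits with 16% false alarms gives d' of about 1.99.
print(round(d_prime(0.84, 0.16), 2))
```

Rates of exactly 0 or 1 make the inverse CDF diverge, so applied work typically applies a small correction to extreme proportions first.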
Affiliation(s)
- Maho Uemura
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Yoshitada Katagiri
- Department of Bioengineering, School of Engineering, The University of Tokyo, Tokyo 113-8655, Japan
- Emiko Imai
- Department of Biophysics, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Yasuhiro Kawahara
- Department of Human Life and Health Sciences, Division of Arts and Sciences, The Open University of Japan, Chiba 261-8586, Japan
- Yoshitaka Otani
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
- Faculty of Rehabilitation, Kobe International University, Kobe 658-0032, Japan
- Tomoko Ichinose
- School of Music, Mukogawa Women’s University, Nishinomiya 663-8558, Japan
- Hisatomo Kowa
- Department of Rehabilitation Science, Kobe University Graduate School of Health Sciences, Kobe 654-0142, Japan
2
Aharoni M, Breska A, Müller MM, Schröger E. Mechanisms of sustained perceptual entrainment after stimulus offset. Eur J Neurosci 2024; 59:1047-1060. [PMID: 37150801; DOI: 10.1111/ejn.16032]
Abstract
Temporal alignment of neural activity to rhythmic stimulation has been suggested to result from a resonating internal neural oscillator, but it can also be explained by interval-based temporal prediction. Here, we investigate behavioural and brain responses in the post-stimulation period to compare an oscillatory versus an interval-based account. Hickok et al.'s (2015) behavioural paradigm yielded results consistent with a neural oscillatory entrainment mechanism. We adapted that paradigm to a design suitable for event-related potentials (ERPs): a periodic sequence was followed, in half of the trials, by near-threshold targets embedded in noise. The targets were played at various phases relative to the period of the preceding sequence. Participants had to detect whether targets were played or not, and their EEG was recorded. Both the behavioural results and the P300 component of the ERP were partially consistent with an oscillatory mechanism and partially consistent with an interval-based attentional gain mechanism. Data obtained in the post-entrainment period are therefore best explained by a combination of both mechanisms.
Affiliation(s)
- Moran Aharoni
- Edmund and Lilly Safra Center for Brain Science, The Hebrew University of Jerusalem, Jerusalem, Israel
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Assaf Breska
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Matthias M Müller
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
- Erich Schröger
- Wilhelm Wundt Institute for Psychology, Leipzig University, Leipzig, Germany
3
Fiorin G, Delfitto D. Syncopation as structure bootstrapping: the role of asymmetry in rhythm and language. Front Psychol 2024; 15:1304485. [PMID: 38440243; PMCID: PMC10911290; DOI: 10.3389/fpsyg.2024.1304485]
Abstract
Syncopation - the occurrence of a musical event on a metrically weak position preceding a rest on a metrically strong position - represents an important challenge in the study of the mapping between rhythm and meter. In this contribution, we present the hypothesis that syncopation is an effective strategy to elicit the bootstrapping of a multi-layered, hierarchically organized metric structure from a linear rhythmic surface. The hypothesis is inspired by a parallel with the problem of linearization in natural language syntax, which is the problem of how hierarchically organized phrase-structure markers are mapped onto linear sequences of words. The hypothesis has important consequences for the role of meter in music perception and cognition and, more particularly, for its role in the relationship between rhythm and bodily entrainment.
Affiliation(s)
- Gaetano Fiorin
- Department of Humanities, University of Trieste, Trieste, Italy
- Denis Delfitto
- Department of Cultures and Civilizations, University of Verona, Verona, Italy
4
Barchet AV, Henry MJ, Pelofi C, Rimmele JM. Auditory-motor synchronization and perception suggest partially distinct time scales in speech and music. Commun Psychol 2024; 2:2. [PMID: 39242963; PMCID: PMC11332030; DOI: 10.1038/s44271-023-00053-6]
Abstract
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in the dominant rhythmic structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception and a synchronization task involving syllable and piano tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping) were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences. Finger-tapping outperformed whispering at slow but not at faster rates, with synchronization being effector-dependent at slow rates but highly correlated across effectors at faster rates. Perception of speech and music was better at different rates and predicted by a fast general and a slow finger-tapping synchronization component. Our data suggest partially independent rhythmic timing mechanisms for speech and music, possibly related to a differential recruitment of cortical motor circuitry.
Affiliation(s)
- Alice Vivien Barchet
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Molly J Henry
- Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Psychology, Toronto Metropolitan University, Toronto, Canada
- Claire Pelofi
- Music and Audio Research Laboratory, New York University, New York, NY, USA
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
- Johanna M Rimmele
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck NYU Center for Language, Music, and Emotion, New York, NY, USA
5
Bouwer FL, Háden GP, Honing H. Probing Beat Perception with Event-Related Potentials (ERPs) in Human Adults, Newborns, and Nonhuman Primates. Adv Exp Med Biol 2024; 1455:227-256. [PMID: 38918355; DOI: 10.1007/978-3-031-60183-5_13]
Abstract
The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.
Affiliation(s)
- Fleur L Bouwer
- Cognitive Psychology Unit, Institute of Psychology, Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Department of Psychology, Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, Budapest, Hungary
- Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary
- Henkjan Honing
- Music Cognition Group (MCG), Institute for Logic, Language and Computation (ILLC), Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, The Netherlands
6
Beker S, Molholm S. Do we all synch alike? Brain-body-environment interactions in ASD. Front Neural Circuits 2023; 17:1275896. [PMID: 38186630; PMCID: PMC10769494; DOI: 10.3389/fncir.2023.1275896]
Abstract
Autism Spectrum Disorder (ASD) is characterized by rigidity of routines and restricted interests, and atypical social communication and interaction. Recent evidence for altered synchronization of neuro-oscillatory brain activity with regularities in the environment and of altered peripheral nervous system function in ASD presents promising novel directions for studying pathophysiology and its relationship to ASD clinical phenotype. Human cognition and action are significantly influenced by physiological rhythmic processes that are generated by both the central nervous system (CNS) and the autonomic nervous system (ANS). Normally, perception occurs in a dynamic context, where brain oscillations and autonomic signals synchronize with external events to optimally receive temporally predictable rhythmic information, leading to improved performance. The recent findings on time-sensitive coupling between the brain and the periphery in effective perception and successful social interactions in typically developing individuals highlight the interactions within the brain-body-environment triad as a critical direction in the study of ASD. Here we offer a novel perspective of autism as a case where the temporal dynamics of brain-body-environment coupling is impaired. We present evidence from the literature to support the idea that in autism the nervous system fails to operate in an adaptive manner to synchronize with temporally predictable events in the environment to optimize perception and behavior. This framework could potentially lead to novel biomarkers of hallmark deficits in ASD such as cognitive rigidity and altered social interaction.
Affiliation(s)
- Shlomit Beker
- Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
7
Nguyen T, Flaten E, Trainor LJ, Novembre G. Early social communication through music: State of the art and future perspectives. Dev Cogn Neurosci 2023; 63:101279. [PMID: 37515832; PMCID: PMC10407289; DOI: 10.1016/j.dcn.2023.101279]
Abstract
A growing body of research shows that the universal capacity for music perception and production emerges early in development. Possibly building on this predisposition, caregivers around the world often communicate with infants using songs or speech entailing song-like characteristics. This suggests that music might be one of the earliest developing and most accessible forms of interpersonal communication, providing a platform for studying early communicative behavior. However, little research has examined music in truly communicative contexts. The current work aims to facilitate the development of experimental approaches that rely on dynamic and naturalistic social interactions. We first review two longstanding lines of research that examine musical interactions by focusing either on the caregiver or the infant. These include defining the acoustic and non-acoustic features that characterize infant-directed (ID) music, as well as behavioral and neurophysiological research examining infants' processing of musical timing and pitch. Next, we review recent studies looking at early musical interactions holistically. This research focuses on how caregivers and infants interact using music to achieve co-regulation, mutual engagement, and increase affiliation and prosocial behavior. We conclude by discussing methodological, technological, and analytical advances that might empower a comprehensive study of musical communication in early childhood.
Affiliation(s)
- Trinh Nguyen
- Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
- Erica Flaten
- Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, Canada
- Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
8
Matthews TE, Stupacher J, Vuust P. The Pleasurable Urge to Move to Music Through the Lens of Learning Progress. J Cogn 2023; 6:55. [PMID: 37720891; PMCID: PMC10503533; DOI: 10.5334/joc.320]
Abstract
Interacting with music is a uniquely pleasurable activity that is ubiquitous across human cultures. Current theories suggest that a prominent driver of musical pleasure responses is the violation and confirmation of temporal predictions. For example, the pleasurable urge to move to music (PLUMM), which is associated with the broader concept of groove, is higher for moderately complex rhythms than for very simple or highly complex rhythms. This inverted U-shaped relation between PLUMM and rhythmic complexity is thought to result from a balance between predictability and uncertainty. That is, moderately complex rhythms lead to strongly weighted prediction errors which elicit an urge to move to reinforce the predictive model (i.e., the meter). However, the details of these processes and how they bring about positive affective responses are currently underspecified. We propose that the intrinsic motivation for learning progress drives PLUMM and informs the music humans choose to listen to, dance to, and create. Here, learning progress reflects the rate of prediction error minimization over time. Accordingly, reducible prediction errors signal the potential for learning progress, producing a pleasurable, curious state characterized by the mobilization of attentional and memory resources. We discuss this hypothesis in the context of current psychological and neuroscientific research on musical pleasure and PLUMM. We propose a theoretical neuroscientific model focusing on the roles of dopamine and norepinephrine within a feedback loop linking prediction-based learning, curiosity, and memory. This perspective provides testable predictions that will motivate future research to further illuminate the fundamental relation between predictions, movement, and reward.
Affiliation(s)
- Tomas E. Matthews
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Hospital, Nørrebrogade 44, Building 1A, 8000 Aarhus C, Denmark
- Royal Academy of Music, Skovgaardsgade 2C, DK-8000 Aarhus C, Denmark
- Jan Stupacher
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Hospital, Nørrebrogade 44, Building 1A, 8000 Aarhus C, Denmark
- Royal Academy of Music, Skovgaardsgade 2C, DK-8000 Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University Hospital, Nørrebrogade 44, Building 1A, 8000 Aarhus C, Denmark
- Royal Academy of Music, Skovgaardsgade 2C, DK-8000 Aarhus C, Denmark
9
Correa JP. Cross-Modal Musical Expectancy in Complex Sound Music: A Grounded Theory. J Cogn 2023; 6:33. [PMID: 37426063; PMCID: PMC10327858; DOI: 10.5334/joc.281]
Abstract
Expectancy is a core mechanism for constructing affective and cognitive experiences of music. However, research on musical expectations has been largely founded upon the perception of tonal music. Therefore, it is still to be determined how this mechanism explains the cognition of sound-based acoustic and electroacoustic music, such as complex sound music (CSM). Additionally, the dominant methodologies have consisted of well-controlled experimental designs with low ecological validity that have overlooked the listening experience as described by the listeners. This paper presents results concerning musical expectancy from a qualitative research project that investigated the listening experiences of 15 participants accustomed to CSM listening. Corbin and Strauss' (2015) grounded theory was used to triangulate data from interviews along with musical analyses of the pieces chosen by the participants to describe their listening experiences. Cross-modal musical expectancy (CMME) emerged from the data as a subcategory that explained prediction through the interaction of multimodal elements beyond just the acoustic properties of music. The results led to the hypothesis that multimodal information coming from sounds, performance gestures, and indexical, iconic, and conceptual associations re-enacts cross-modal schemata and episodic memories where real and imagined sounds, objects, actions, and narratives interrelate to give rise to CMME processes. This construct emphasises the effect of CSM's subversive acoustic features and performance practices on the listening experience. Further, it reveals the multiplicity of factors involved in musical expectancy, such as cultural values, subjective musical and non-musical experiences, music structure, listening situation, and psychological mechanisms. Following these ideas, CMME is conceived as a grounded cognition process.
10
Large EW, Roman I, Kim JC, Cannon J, Pazdera JK, Trainor LJ, Rinzel J, Bose A. Dynamic models for musical rhythm perception and coordination. Front Comput Neurosci 2023; 17:1151895. [PMID: 37265781; PMCID: PMC10229831; DOI: 10.3389/fncom.2023.1151895]
Abstract
Rhythmicity permeates large parts of human experience. Humans generate various motor and brain rhythms spanning a range of frequencies. We also experience and synchronize to externally imposed rhythmicity, for example from music and song or from the 24-h light-dark cycles of the sun. In the context of music, humans have the ability to perceive, generate, and anticipate rhythmic structures, for example, "the beat." Experimental and behavioral studies offer clues about the biophysical and neural mechanisms that underlie our rhythmic abilities and about the different brain areas involved, but many open questions remain. In this paper, we review several theoretical and computational approaches, each centered at different levels of description, that address specific aspects of musical rhythmic generation, perception, attention, perception-action coordination, and learning. We survey methods and results from applications of dynamical systems theory, neuro-mechanistic modeling, and Bayesian inference. Some frameworks rely on synchronization of intrinsic brain rhythms that span the relevant frequency range; some formulations involve real-time adaptation schemes for error-correction to align the phase and frequency of a dedicated circuit; others involve learning and dynamically adjusting expectations to make rhythm tracking predictions. Each of the approaches, while initially designed to answer specific questions, offers the possibility of being integrated into a larger framework that provides insights into our ability to perceive and generate rhythmic patterns.
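The error-correction formulations surveyed here have a textbook minimal form: the linear phase-correction model of sensorimotor synchronization, in which each tap is scheduled one period after the last, minus a fraction of the previous tap's asynchrony. As a noise-free sketch (the period, gain alpha, and initial asynchrony below are illustrative, not from any specific model in the review):

```python
def phase_correction_asynchronies(period: float, n_taps: int,
                                  alpha: float = 0.5,
                                  initial_asynchrony: float = 50.0) -> list[float]:
    """Noise-free linear phase correction: tap[n+1] = tap[n] + period - alpha * asynchrony[n].
    Returns each tap's asynchrony (tap time minus stimulus onset), in the same
    units as `period` (e.g., ms)."""
    stimulus = [k * period for k in range(n_taps)]  # perfectly periodic pacing signal
    taps = [stimulus[0] + initial_asynchrony]
    for k in range(1, n_taps):
        asynchrony = taps[-1] - stimulus[k - 1]
        taps.append(taps[-1] + period - alpha * asynchrony)
    return [t - s for t, s in zip(taps, stimulus)]

# With alpha = 0.5 the asynchrony halves on every tap: 50, 25, 12.5, 6.25 ms.
print(phase_correction_asynchronies(500.0, 4))
```

In this deterministic form the asynchrony obeys a[n+1] = (1 - alpha) * a[n], so it decays to zero for any correction gain between 0 and 2; stochastic versions add timekeeper and motor noise on top of this skeleton.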
Affiliation(s)
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Department of Physics, University of Connecticut, Mansfield, CT, United States
- Iran Roman
- Music and Audio Research Laboratory, New York University, New York, NY, United States
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Jonathan Cannon
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Jesse K. Pazdera
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Laurel J. Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- John Rinzel
- Center for Neural Science, New York University, New York, NY, United States
- Courant Institute of Mathematical Sciences, New York University, New York, NY, United States
- Amitabha Bose
- Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, NJ, United States
11
Beck J, Konieczny L. What a difference a syllable makes-Rhythmic reading of poetry. Front Psychol 2023; 14:1043651. [PMID: 36865353; PMCID: PMC9973453; DOI: 10.3389/fpsyg.2023.1043651]
Abstract
In reading conventional poems aloud, the rhythmic experience is coupled with the projection of meter, enabling the prediction of subsequent input. However, it is unclear how top-down and bottom-up processes interact. If the rhythmicity in reading aloud is governed by the top-down prediction of metric patterns of weak and strong stress, these should be projected also onto a randomly included, lexically meaningless syllable. If bottom-up information such as the phonetic quality of consecutive syllables plays a functional role in establishing a structured rhythm, the occurrence of the lexically meaningless syllable should affect reading, and the number of these syllables in a metrical line should modulate this effect. To investigate this, we manipulated poems by replacing regular syllables at random positions with the syllable "tack". Participants were instructed to read the poems aloud and their voice was recorded during the reading. At the syllable level, we calculated the syllable onset interval (SOI) as a measure of articulation duration, as well as the mean syllable intensity. Both measures were supposed to operationalize how strongly a syllable was stressed. Results show that the average articulation duration of metrically strong regular syllables was longer than for weak syllables. This effect disappeared for "tacks". Syllable intensities, on the other hand, captured metrical stress of "tacks" as well, but only for musically active participants. Additionally, we calculated the normalized pairwise variability index (nPVI) for each line as an indicator for rhythmic contrast, i.e., the alternation between long and short, as well as louder and quieter syllables, to estimate the influence of "tacks" on reading rhythm. For SOI the nPVI revealed a clear negative effect: when "tacks" occurred, lines were read with less alternation, and this effect was proportional to the number of tacks per line. For intensity, however, the nPVI did not capture significant effects.
The results suggest that top-down prediction does not always suffice to maintain a rhythmic gestalt across a series of syllables that carry little bottom-up prosodic information. Instead, the constant integration of sufficiently varying bottom-up information appears necessary to maintain a stable metrical pattern prediction.
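The nPVI used in this study is a standard contrast measure: the average absolute difference of each adjacent pair of durations, normalized by the pair's mean. As a sketch (the durations below are illustrative, not data from the study):

```python
def npvi(durations: list[float]) -> float:
    """Normalized pairwise variability index:
    nPVI = 100 / (m - 1) * sum over adjacent pairs of
           |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2),
    for a sequence of m durations (here, syllable onset intervals)."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

# Isochronous syllables give 0; strict long-short alternation gives ~66.7.
print(npvi([200, 200, 200, 200]), round(npvi([300, 150, 300, 150]), 1))
```

Higher values mean stronger alternation between long and short intervals, which is why the drop in SOI-based nPVI on "tack" lines indicates flattened reading rhythm.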
Affiliation(s)
- Judith Beck
- Center for Cognitive Science, Institute of Psychology, University of Freiburg, Freiburg, Germany
12
Park KS, Williams DM, Etnier JL. Exploring the use of music to promote physical activity: From the viewpoint of psychological hedonism. Front Psychol 2023; 14:1021825. [PMID: 36760458; PMCID: PMC9905642; DOI: 10.3389/fpsyg.2023.1021825]
Abstract
Despite the global efforts to encourage people to regularly participate in physical activity (PA) at moderate-to-vigorous intensity, an inadequate number of adults and adolescents worldwide meet the recommended dose of PA. A major challenge to promoting PA is that sedentary or low-active people experience negative shifts in affective valence (feeling bad versus good) in response to moderate-to-vigorous intensity PA. Interestingly, empirical data indicate that listening to music during acute bouts of PA positively alters affective valence (feeling good versus bad), reduces perceived exertion, and improves physical performance and oxygen utilization efficiency. From the viewpoint of the ancient principle of psychological hedonism - humans have ultimate desires to obtain pleasure and avoid displeasure - we elaborate on three putative mechanisms underlying the affective and ergogenic effects of music on acute bouts of PA: (1) musical pleasure and reward, (2) rhythmic entrainment, and (3) sensory distraction from physical exertion. Given that a positive shift in affective valence during an acute bout of PA is associated with more PA in the future, an important question arises as to whether the affective effect of music on acute PA can be carried over to promote long-term PA. Although this research question seems intuitive, to our knowledge, it has been scarcely investigated. We propose a theoretical model of Music as an Affective Stimulant to Physical Activity (MASPA) to further explain the putative mechanisms underlying the use of music to promote long-term PA. We believe there have been important gaps in music-based interventions in terms of the rationale supporting various components of the intervention and the efficacy of these interventions to promote long-term PA. 
Our specification of relevant mechanisms and proposal of a new theoretical model may advance our understanding of the optimal use of music as an affective, ergogenic, and sensory stimulant for PA promotion. Future directions are suggested to address the gaps in the literature.
Affiliation(s)
- Kyoung Shin Park
- Department of Kinesiology, University of North Carolina at Greensboro, Greensboro, NC, United States
- David M. Williams
- Center for Health Promotion and Health Equity, Brown University, Providence, RI, United States
- Jennifer L. Etnier
- Department of Kinesiology, University of North Carolina at Greensboro, Greensboro, NC, United States
13
Ladányi E, Novakovic M, Boorom OA, Aaron AS, Scartozzi AC, Gustavson DE, Nitin R, Bamikole PO, Vaughan C, Fromboluti EK, Schuele CM, Camarata SM, McAuley JD, Gordon RL. Using Motor Tempi to Understand Rhythm and Grammatical Skills in Developmental Language Disorder and Typical Language Development. Neurobiol Lang 2023; 4:1-28. [PMID: 36875176; PMCID: PMC9979588; DOI: 10.1162/nol_a_00082]
Abstract
Children with developmental language disorder (DLD) show relative weaknesses on rhythm tasks beyond their characteristic linguistic impairments. The current study compares preferred tempo and the width of an entrainment region for 5- to 7-year-old typically developing (TD) children and children with DLD and considers the associations with rhythm aptitude and expressive grammar skills in the two populations. Preferred tempo was measured with a spontaneous motor tempo task (tapping tempo at a comfortable speed), and the width (range) of an entrainment region was measured by the difference between the upper (slow) and lower (fast) limits of tapping a rhythm normalized by an individual's spontaneous motor tempo. Data from N = 16 children with DLD and N = 114 TD children showed that whereas entrainment-region width did not differ across the two groups, slowest motor tempo, the determinant of the upper (slow) limit of the entrainment region, was at a faster tempo in children with DLD vs. TD. In other words, the DLD group could not pace their slow tapping as slowly as the TD group. Entrainment-region width was positively associated with rhythm aptitude and receptive grammar even after taking into account potential confounding factors, whereas expressive grammar did not show an association with any of the tapping measures. Preferred tempo was not associated with any study variables after including covariates in the analyses. These results motivate future neuroscientific studies of low-frequency neural oscillatory mechanisms as the potential neural correlates of entrainment-region width and their associations with musical rhythm and spoken language processing in children with typical and atypical language development.
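The entrainment-region width described above is a one-line computation; a minimal sketch, with illustrative function and argument names (not taken from the study):

```python
def entrainment_region_width(fast_limit_ms, slow_limit_ms, smt_ms):
    """Width of the entrainment region, normalized by spontaneous motor tempo (SMT).

    fast_limit_ms: shortest inter-tap interval the child can pace (lower/fast limit)
    slow_limit_ms: longest inter-tap interval the child can pace (upper/slow limit)
    smt_ms: the child's spontaneous motor tempo (comfortable inter-tap interval)
    """
    return (slow_limit_ms - fast_limit_ms) / smt_ms

# Example: a child tapping comfortably at 600 ms who can entrain between
# 300 ms (fast limit) and 1200 ms (slow limit) intervals:
width = entrainment_region_width(300, 1200, 600)  # (1200 - 300) / 600 = 1.5
```

On this definition, a faster (i.e., shorter) slowest-tempo limit in the DLD group shrinks the numerator and thus the width, which is why the slow limit is the quantity of interest above.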
Affiliation(s)
- Enikő Ladányi
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Department of Linguistics, University of Potsdam, Potsdam, Germany
- Michaela Novakovic
- Department of Pharmacology, Northwestern University Feinberg School of Medicine, Chicago, IL
- Olivia A. Boorom
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, KS
- Allison S. Aaron
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA
- Alyssa C. Scartozzi
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN
- Daniel E. Gustavson
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO
- Rachana Nitin
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN
- Peter O. Bamikole
- Department of Anesthesiology and Perioperative Medicine, Oregon Health & Science University, Portland, OR
- Chloe Vaughan
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- C. Melanie Schuele
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- Stephen M. Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- J. Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI
- Reyna L. Gordon
- Department of Otolaryngology—Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, TN
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN

14
Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:brainsci12121676. [PMID: 36552136 PMCID: PMC9775503 DOI: 10.3390/brainsci12121676] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Revised: 11/21/2022] [Accepted: 12/01/2022] [Indexed: 12/12/2022] Open
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
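The PLV quantifying entrainment above is the magnitude of the mean unit phasor of the EEG-pulse phase differences; a minimal sketch (illustrative, not the authors' pipeline):

```python
import cmath
import math

def phase_locking_value(phases):
    """Phase-locking value: magnitude of the mean unit phasor.

    phases: instantaneous phase differences (radians) between the EEG signal
    and the musical pulse, one per time sample. Returns 1.0 for perfect
    locking (constant lag) and ~0 for uniformly random phase differences.
    """
    n = len(phases)
    mean_vector = sum(cmath.exp(1j * p) for p in phases) / n
    return abs(mean_vector)

# Perfectly locked: a constant phase lag gives PLV = 1.
locked = phase_locking_value([0.7] * 100)
# Uniformly spread phases cancel out, giving PLV ~ 0.
spread = phase_locking_value([2 * math.pi * k / 100 for k in range(100)])
```

Note the PLV is invariant to the size of the lag itself, only its consistency matters, which is why it suits comparisons across recordings with different pulse frequencies.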
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA 02115, USA

15
Theta Band (4-8 Hz) Oscillations Reflect Online Processing of Rhythm in Speech Production. Brain Sci 2022; 12:brainsci12121593. [PMID: 36552053 PMCID: PMC9775388 DOI: 10.3390/brainsci12121593] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2022] [Revised: 11/08/2022] [Accepted: 11/14/2022] [Indexed: 11/24/2022] Open
Abstract
How speech prosody is processed in the brain during language production remains an unsolved issue. The present work used the phrase-recall paradigm to analyze the brain oscillations underpinning rhythmic processing in speech production. Participants were asked to recall aloud target speeches consisting of verb-noun pairings with a common (e.g., [2+2]; the numbers in brackets represent the number of syllables) or uncommon (e.g., [1+3]) rhythmic pattern. Target speeches were preceded by rhythmic musical patterns, either congruent or incongruent, created using pure tones at various temporal intervals. Electroencephalogram signals were recorded throughout the experiment. Behavioral results for 2+2 target speeches showed a rhythmic priming effect when comparing congruent and incongruent conditions. Cerebral-acoustic coherence analysis showed that neural activities synchronized with the rhythmic patterns of the primes. Furthermore, target phrases whose rhythmic patterns were congruent with the prime rhythm were associated with increased theta-band (4-8 Hz) activity in the time window of 400-800 ms in both the 2+2 and 1+3 target conditions. These findings suggest that rhythmic patterns can be processed online. Neural activities synchronize with the rhythmic input, and speakers create an abstract rhythmic pattern before and during articulation in speech production.
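Cerebral-acoustic coherence, as used above, measures how consistently neural activity phase-locks to the stimulus at a given frequency; a rough Welch-style sketch assuming segment-averaged cross-spectra (function names and parameters are illustrative, not the study's actual pipeline):

```python
import cmath
import math

def fourier_coeff(seg, freq, fs):
    """Complex Fourier coefficient of one segment at `freq` Hz."""
    n = len(seg)
    return sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
               for k, x in enumerate(seg)) / n

def coherence(x, y, freq, fs, seg_len):
    """Magnitude-squared coherence at one frequency, averaged over segments
    (the flavor of cerebral-acoustic coherence: x = EEG, y = stimulus rhythm).
    1.0 means a fixed phase/amplitude relation across segments; ~0 means none.
    """
    starts = range(0, min(len(x), len(y)) - seg_len + 1, seg_len)
    xs = [fourier_coeff(x[s:s + seg_len], freq, fs) for s in starts]
    ys = [fourier_coeff(y[s:s + seg_len], freq, fs) for s in starts]
    sxy = sum(a * b.conjugate() for a, b in zip(xs, ys)) / len(xs)
    sxx = sum(abs(a) ** 2 for a in xs) / len(xs)
    syy = sum(abs(b) ** 2 for b in ys) / len(ys)
    return abs(sxy) ** 2 / (sxx * syy)

fs, seg = 100.0, 100  # 100 Hz sampling, 1 s segments
x = [math.sin(2 * math.pi * 2.0 * k / fs) for k in range(1000)]            # "EEG" at 2 Hz
y_locked = [math.sin(2 * math.pi * 2.0 * k / fs + 0.5) for k in range(1000)]  # fixed lag
y_jitter = [math.sin(2 * math.pi * 2.0 * (k % seg) / fs + 2.4 * (k // seg))
            for k in range(1000)]                                          # lag jumps per segment
coh_hi = coherence(x, y_locked, 2.0, fs, seg)   # near 1: consistent phase relation
coh_lo = coherence(x, y_jitter, 2.0, fs, seg)   # near 0: phase relation varies
```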
16
Cantiani C, Dondena C, Molteni M, Riva V, Piazza C. Synchronizing with the rhythm: Infant neural entrainment to complex musical and speech stimuli. Front Psychol 2022; 13:944670. [PMID: 36337544 PMCID: PMC9635850 DOI: 10.3389/fpsyg.2022.944670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 09/22/2022] [Indexed: 11/14/2022] Open
Abstract
Neural entrainment is defined as the process whereby brain activity, and more specifically neuronal oscillations measured by EEG, synchronize with exogenous stimulus rhythms. Despite the importance that neural oscillations have assumed in recent years in the field of auditory neuroscience and speech perception, in human infants the oscillatory brain rhythms and their synchronization with complex auditory exogenous rhythms are still relatively unexplored. In the present study, we investigate infant neural entrainment to complex non-speech (musical) and speech rhythmic stimuli; we provide a developmental analysis to explore potential similarities and differences between infants' and adults' ability to entrain to the stimuli; and we analyze the associations between infants' neural entrainment measures and the concurrent level of development. Twenty-five 8-month-old infants were included in the study. Their EEG signals were recorded while they passively listened to non-speech and speech rhythmic stimuli modulated at different rates. In addition, Bayley Scales were administered to all infants to assess their cognitive, language, and social-emotional development. Neural entrainment to the incoming rhythms was measured in the form of peaks emerging from the EEG spectrum at frequencies corresponding to the rhythm envelope. Analyses of the EEG spectrum revealed clear responses above the noise floor at frequencies corresponding to the rhythm envelope, suggesting that - similarly to adults - infants at 8 months of age were capable of entraining to the incoming complex auditory rhythms. Infants' measures of neural entrainment were associated with concurrent measures of cognitive and social-emotional development.
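The entrainment measure described above, a spectral peak above the noise floor at the stimulus rate, can be sketched with a single-bin DFT; everything here (function names, frequencies, the choice of neighboring bins as the noise floor) is illustrative rather than the study's code:

```python
import math

def dft_amplitude(signal, freq, fs):
    """Amplitude of one DFT component at `freq` Hz for a signal sampled at `fs` Hz."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * freq * k / fs) for k, x in enumerate(signal))
    return 2 * math.sqrt(re * re + im * im) / n

def peak_above_noise(signal, target_hz, fs, neighbor_hz):
    """Entrainment-style measure: amplitude at the stimulus rate minus the
    mean amplitude at nearby (noise-floor) frequencies."""
    noise = sum(dft_amplitude(signal, f, fs) for f in neighbor_hz) / len(neighbor_hz)
    return dft_amplitude(signal, target_hz, fs) - noise

fs = 100.0
t = [k / fs for k in range(1000)]                      # 10 s of toy "EEG"
sig = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]   # response at a 2 Hz rhythm rate
snr = peak_above_noise(sig, 2.0, fs, [1.7, 1.8, 2.2, 2.3])
```

For a clean response at the stimulus rate, the target-bin amplitude dominates the neighboring bins, so `snr` is large; for noise alone it hovers near zero.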
Affiliation(s)
- Chiara Cantiani
- Child Psychopathology Unit, Scientific Institute, IRCCS Eugenio Medea, Lecco, Italy
- Chiara Dondena
- Child Psychopathology Unit, Scientific Institute, IRCCS Eugenio Medea, Lecco, Italy
- Massimo Molteni
- Child Psychopathology Unit, Scientific Institute, IRCCS Eugenio Medea, Lecco, Italy
- Valentina Riva
- Child Psychopathology Unit, Scientific Institute, IRCCS Eugenio Medea, Lecco, Italy
- Caterina Piazza
- Bioengineering Lab, Scientific Institute, IRCCS Eugenio Medea, Lecco, Italy

17
Testing beat perception without sensory cues to the beat: the Beat-Drop Alignment Test (BDAT). Atten Percept Psychophys 2022; 84:2702-2714. [PMID: 36261763 PMCID: PMC9630205 DOI: 10.3758/s13414-022-02592-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/29/2022] [Indexed: 11/22/2022]
Abstract
Beat perception can serve as a window into internal time-keeping mechanisms, auditory–motor interactions, and aspects of cognition. One aspect of beat perception is the covert continuation of an internal pulse. Of the several popular tests of beat perception, none provide a satisfying test of this faculty of covert continuation. The current study proposes a new beat-perception test focused on covert pulse continuation: The Beat-Drop Alignment Test (BDAT). In this test, participants must identify the beat in musical excerpts and then judge whether a single probe falls on or off the beat. The probe occurs during a short break in the rhythmic components of the music when no rhythmic events are present, forcing participants to judge beat alignment relative to an internal pulse maintained in the absence of local acoustic timing cues. Here, we present two large (N > 100) tests of the BDAT. In the first, we explore the effect of test item parameters (e.g., probe displacement) on performance. In the second, we correlate scores on an adaptive version of the BDAT with the computerized adaptive Beat Alignment Test (CA-BAT) scores and indices of musical experience. Musical experience indices outperform CA-BAT score as a predictor of BDAT score, suggesting that the BDAT measures a distinct aspect of beat perception that is more experience-dependent and may draw on cognitive resources such as working memory and musical imagery differently than the BAT. The BDAT may prove useful in future behavioral and neural research on beat perception, and all stimuli and code are freely available for download.
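Scoring a BDAT-style trial reduces to the probe's phase relative to a pulse extrapolated past the last heard beat; a toy sketch (the tolerance threshold and all names are invented for illustration):

```python
def probe_alignment(probe_time, last_beat, period, tolerance=0.15):
    """Phase (0..1) of a probe relative to a pulse extrapolated past the last
    heard beat, plus an on/off-beat judgment within +/- `tolerance` of a beat
    (tolerance expressed as a fraction of the beat period)."""
    phase = ((probe_time - last_beat) % period) / period
    on_beat = phase <= tolerance or phase >= 1 - tolerance
    return phase, on_beat

# Beat every 0.5 s; last heard beat at 10.0 s, after which the rhythm drops out.
aligned = probe_alignment(11.5, 10.0, 0.5)    # phase 0.0 -> on the beat
shifted = probe_alignment(11.72, 10.0, 0.5)   # phase 0.44 -> off the beat
```

The point of the test design is that the listener, not the stimulus, must supply `period` and its continuation during the silent gap; the code only shows what a correct internal extrapolation would imply.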
18
The rediscovered motor-related area 55b emerges as a core hub of music perception. Commun Biol 2022; 5:1104. [PMID: 36257973 PMCID: PMC9579133 DOI: 10.1038/s42003-022-04009-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 09/19/2022] [Indexed: 12/03/2022] Open
Abstract
Passive listening to music, without sound production or evident movement, has long been known to activate motor control regions. Nevertheless, the exact neuroanatomical correlates of the auditory-motor association and its underlying neural mechanisms have not been fully determined. Here, based on a NeuroSynth meta-analysis and three original fMRI paradigms of music perception, we show that the long-ignored pre-motor region, area 55b, an anatomically unique and functionally intriguing region, is a core hub of music perception. Moreover, results of a brain-behavior correlation analysis implicate neural entrainment as the underlying mechanism of area 55b's contribution to music perception. In view of the current results and prior literature, area 55b is proposed as a keystone of sensorimotor integration, fundamental brain machinery underlying simple to hierarchically complex behaviors. Refining the neuroanatomical and physiological understanding of sensorimotor integration is expected to have a major impact on various fields, from brain disorders to artificial general intelligence. Functional magnetic resonance imaging data acquired during passive listening to music suggest that pre-motor area 55b acts as a core hub of music processing in humans.
19
Daikoku T, Goswami U. Hierarchical amplitude modulation structures and rhythm patterns: Comparing Western musical genres, song, and nature sounds to Babytalk. PLoS One 2022; 17:e0275631. [PMID: 36240225 PMCID: PMC9565671 DOI: 10.1371/journal.pone.0275631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 09/20/2022] [Indexed: 11/19/2022] Open
Abstract
Statistical learning of physical stimulus characteristics is important for the development of cognitive systems like language and music. Rhythm patterns are a core component of both systems, and rhythm is key to language acquisition by infants. Accordingly, the physical stimulus characteristics that yield speech rhythm in "Babytalk" may also describe the hierarchical rhythmic relationships that characterize human music and song. Computational modelling of the amplitude envelope of "Babytalk" (infant-directed speech, IDS) using a demodulation approach (Spectral-Amplitude Modulation Phase Hierarchy model, S-AMPH) can describe these characteristics. S-AMPH modelling of Babytalk has shown previously that bands of amplitude modulations (AMs) at different temporal rates and their phase relations help to create its structured inherent rhythms. Additionally, S-AMPH modelling of children's nursery rhymes shows that different rhythm patterns (trochaic, iambic, dactylic) depend on the phase relations between AM bands centred on ~2 Hz and ~5 Hz. The importance of these AM phase relations was confirmed via a second demodulation approach (PAD, Probabilistic Amplitude Demodulation). Here we apply both S-AMPH and PAD to demodulate the amplitude envelopes of Western musical genres and songs. Quasi-rhythmic and non-human sounds found in nature (birdsong, rain, wind) were utilized for control analyses. We expected that the physical stimulus characteristics in human music and song from an AM perspective would match those of IDS. Given prior speech-based analyses, we also expected that AM cycles derived from the modelling may identify musical units like crotchets, quavers and demi-quavers. Both models revealed a hierarchically nested AM structure for music and song, but not for nature sounds. This AM structure for music and song matched that of IDS. Both models also generated systematic AM cycles yielding musical units like crotchets and quavers.
Both music and language are created by humans and shaped by culture. Acoustic rhythm in IDS and music appears to depend on many of the same physical characteristics, facilitating learning.
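The amplitude-envelope demodulation at the heart of S-AMPH and PAD can be caricatured with rectify-and-smooth; this crude stand-in (the window size and all names are illustrative, not the models' actual filter banks) recovers a 2 Hz "syllable-rate" envelope from an amplitude-modulated carrier:

```python
import math

def envelope(signal, fs, win_s=0.05):
    """Crude amplitude-envelope demodulation: full-wave rectify, then smooth
    with a moving average (a stand-in for the band filtering that S-AMPH/PAD
    use to isolate AM bands at different temporal rates)."""
    rect = [abs(x) for x in signal]
    w = max(1, int(win_s * fs))
    out = []
    for i in range(len(rect)):
        lo, hi = max(0, i - w), min(len(rect), i + w + 1)
        out.append(sum(rect[lo:hi]) / (hi - lo))
    return out

fs = 1000.0
t = [k / fs for k in range(2000)]  # 2 s
# A 200 Hz "carrier" amplitude-modulated at a 2 Hz "syllable" rate:
am = [(1 + math.cos(2 * math.pi * 2 * ti)) * math.sin(2 * math.pi * 200 * ti)
      for ti in t]
env = envelope(am, fs)
# env peaks where the 2 Hz modulator peaks (t = 0.5 s) and dips where it
# vanishes (t = 0.25 s), i.e., the slow AM band has been recovered.
```

Comparing the phase of such envelopes extracted in a slow (~2 Hz) and a faster (~5 Hz) band is what yields the AM phase relations the abstract describes.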
Affiliation(s)
- Tatsuya Daikoku
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
- International Research Center for Neurointelligence, The University of Tokyo, Bunkyo City, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom

20
Otero M, Lea-Carnall C, Prado P, Escobar MJ, El-Deredy W. Modelling neural entrainment and its persistence: influence of frequency of stimulation and phase at the stimulus offset. Biomed Phys Eng Express 2022; 8. [PMID: 35320793 DOI: 10.1088/2057-1976/ac605a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Accepted: 03/23/2022] [Indexed: 11/12/2022]
Abstract
Neural entrainment, the synchronization of brain oscillations to the frequency of external stimuli, is a key mechanism that shapes perceptual and cognitive processes. Objective: Using simulations, we investigated the dynamics of neural entrainment, particularly the period following the end of the stimulation, since the persistence (reverberation) of neural entrainment may condition future sensory representations based on predictions about stimulus rhythmicity. Methods: Neural entrainment was assessed using a modified Jansen-Rit neural mass model (NMM) of coupled cortical columns, in which the spectral features of the output resembled those of the electroencephalogram (EEG). We evaluated spectro-temporal features of entrainment as a function of the stimulation frequency, the resonant frequency of the neural populations comprising the NMM, and the coupling strength between cortical columns. Furthermore, we tested whether the persistence of entrainment depended on the phase of the EEG-like oscillation at the time the stimulus ended. Main results: The entrainment of the column that received the stimulation was maximal when the frequency of the entrainer was within a narrow range around the resonant frequency of the column. When this occurred, entrainment persisted for several cycles after the stimulus terminated, and the propagation of the entrainment to other columns was facilitated. Propagation also depended on the resonant frequency of the second column and the coupling strength between columns. The duration of the persistence of the entrainment depended on the phase of the neural oscillation at the time the entrainer terminated, such that falling phases (from π/2 to 3π/2 in a sine function) led to longer persistence than rising phases (from 0 to π/2 and from 3π/2 to 2π). Significance: The study bridges models of neural oscillations and empirical electrophysiology, providing insights into the mechanisms underlying neural entrainment and the use of rhythmic sensory stimulation for neuroenhancement.
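The Jansen-Rit model named in the Methods is a standard six-state neural mass model; below is a minimal single-column Euler simulation with textbook parameter values (the paper's modified, coupled-column version differs, so this is only a sketch of the base model):

```python
import math

def sigm(v, e0=2.5, v0=6.0, r=0.56):
    """Sigmoid converting mean membrane potential (mV) to firing rate (Hz)."""
    return 2 * e0 / (1 + math.exp(r * (v0 - v)))

def jansen_rit(p=200.0, dt=1e-4, t_end=2.0,
               A=3.25, B=22.0, a=100.0, b=50.0, C=135.0):
    """Single-column Jansen-Rit simulation (forward Euler).

    Returns the EEG-like output y1 - y2 (the pyramidal cells' excitatory
    minus inhibitory postsynaptic potential). `p` is the external input
    pulse density; replacing the constant `p` with a sinusoid is the kind
    of rhythmic drive used to study entrainment and its persistence.
    """
    C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
    y = [0.0] * 6  # y0..y2 are PSPs, y3..y5 their derivatives
    out = []
    for _ in range(int(t_end / dt)):
        y0, y1, y2, y3, y4, y5 = y
        dy = [
            y3,
            y4,
            y5,
            A * a * sigm(y1 - y2) - 2 * a * y3 - a * a * y0,
            A * a * (p + C2 * sigm(C1 * y0)) - 2 * a * y4 - a * a * y1,
            B * b * C4 * sigm(C3 * y0) - 2 * b * y5 - b * b * y2,
        ]
        y = [yi + dt * di for yi, di in zip(y, dy)]
        out.append(y[1] - y[2])
    return out

# With constant input in the ~120-320 range, the model is known to produce
# alpha-band (~10 Hz) EEG-like oscillations of a few mV.
eeg = jansen_rit()
```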
Affiliation(s)
- Mónica Otero
- Escuela de Ingeniería Biomédica, Universidad de Valparaíso, Chile
- Advanced Center for Electric and Electronic Engineering, Valparaíso, Chile
- Caroline Lea-Carnall
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Pavel Prado
- Latin-American Brain Health Institute (BrainLat), Universidad Adolfo Ibañez, Chile
- Wael El-Deredy
- Escuela de Ingeniería Biomédica, Universidad de Valparaíso, Chile
- Advanced Center for Electric and Electronic Engineering, Valparaíso, Chile
- Division of Neuroscience and Experimental Psychology, School of Biological Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom

21
Cariani P, Baker JM. Time Is of the Essence: Neural Codes, Synchronies, Oscillations, Architectures. Front Comput Neurosci 2022; 16:898829. [PMID: 35814343 PMCID: PMC9262106 DOI: 10.3389/fncom.2022.898829] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 05/04/2022] [Indexed: 11/25/2022] Open
Abstract
Time is of the essence in how neural codes, synchronies, and oscillations might function in encoding, representation, transmission, integration, storage, and retrieval of information in brains. This Hypothesis and Theory article examines observed and possible relations between codes, synchronies, oscillations, and types of neural networks they require. Toward reverse-engineering informational functions in brains, prospective, alternative neural architectures incorporating principles from radio modulation and demodulation, active reverberant circuits, distributed content-addressable memory, signal-signal time-domain correlation and convolution operations, spike-correlation-based holography, and self-organizing, autoencoding anticipatory systems are outlined. Synchronies and oscillations are thought to subserve many possible functions: sensation, perception, action, cognition, motivation, affect, memory, attention, anticipation, and imagination. These include direct involvement in coding attributes of events and objects through phase-locking as well as characteristic patterns of spike latency and oscillatory response. They are thought to be involved in segmentation and binding, working memory, attention, gating and routing of signals, temporal reset mechanisms, inter-regional coordination, time discretization, time-warping transformations, and support for temporal wave-interference based operations. A high level, partial taxonomy of neural codes consists of channel, temporal pattern, and spike latency codes. The functional roles of synchronies and oscillations in candidate neural codes, including oscillatory phase-offset codes, are outlined. Various forms of multiplexing neural signals are considered: time-division, frequency-division, code-division, oscillatory-phase, synchronized channels, oscillatory hierarchies, polychronous ensembles. 
An expandable, annotative neural spike train framework for encoding low- and high-level attributes of events and objects is proposed. Coding schemes require appropriate neural architectures for their interpretation. Time-delay, oscillatory, wave-interference, synfire chain, polychronous, and neural timing networks are discussed. Some novel concepts for formulating an alternative, more time-centric theory of brain function are discussed. As in radio communication systems, brains can be regarded as networks of dynamic, adaptive transceivers that broadcast and selectively receive multiplexed temporally-patterned pulse signals. These signals enable complex signal interactions that select, reinforce, and bind common subpatterns and create emergent lower dimensional signals that propagate through spreading activation interference networks. If memory traces share the same kind of temporal pattern forms as do active neuronal representations, then distributed, holograph-like content-addressable memories are made possible via temporal pattern resonances.
Affiliation(s)
- Peter Cariani
- Hearing Research Center, Boston University, Boston, MA, United States
- Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA, United States

22
Tichko P, Kim JC, Large E, Loui P. Integrating music-based interventions with Gamma-frequency stimulation: Implications for healthy ageing. Eur J Neurosci 2022; 55:3303-3323. [PMID: 33236353 PMCID: PMC9899516 DOI: 10.1111/ejn.15059] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2020] [Revised: 11/18/2020] [Accepted: 11/18/2020] [Indexed: 02/07/2023]
Abstract
In recent years, music-based interventions (MBIs) have risen in popularity as a non-invasive, sustainable form of care for treating dementia-related disorders, such as Mild Cognitive Impairment (MCI) and Alzheimer's disease (AD). Despite their clinical potential, evidence regarding the efficacy of MBIs on patient outcomes is mixed. Recently, a line of related research has begun to investigate the clinical impact of non-invasive Gamma-frequency (e.g., 40 Hz) sensory stimulation on dementia. Current work, using non-human-animal models of AD, suggests that non-invasive Gamma-frequency stimulation can remediate multiple pathophysiologies of dementia at the molecular, cellular and neural-systems scales, and, importantly, improve cognitive functioning. These findings suggest that the efficacy of MBIs could, in theory, be enhanced by incorporating Gamma-frequency stimulation into current MBI protocols. In the current review, we propose a novel clinical framework for non-invasively treating dementia-related disorders that combines previous MBIs with current approaches employing Gamma-frequency sensory stimulation. We theorize that combining MBIs with Gamma-frequency stimulation could increase the therapeutic power of MBIs by simultaneously targeting multiple biomarkers of dementia, restoring neural activity that underlies learning and memory (e.g., Gamma-frequency neural activity, Theta-Gamma coupling), and actively engaging auditory and reward networks in the brain to promote behavioural change.
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA, USA
- Ji Chul Kim
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Edward Large
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Center for the Ecological Study of Perception & Action (CESPA), Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Department of Physics, University of Connecticut, Storrs, CT, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA, USA

23
Wei Y, Hancock R, Mozeiko J, Large EW. The relationship between entrainment dynamics and reading fluency assessed by sensorimotor perturbation. Exp Brain Res 2022; 240:1775-1790. [PMID: 35507069 DOI: 10.1007/s00221-022-06369-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 04/06/2022] [Indexed: 11/25/2022]
Abstract
A consistent relationship has been found between rhythmic processing and reading skills. Impairment of the ability to entrain movements to an auditory rhythm in clinical populations with language-related deficits, such as children with developmental dyslexia, has been found in both behavioral and neural studies. In this study, we explored the relationship between rhythmic entrainment, behavioral synchronization, reading fluency, and reading comprehension in neurotypical English- and Mandarin-speaking adults. First, we examined entrainment stability by asking participants to coordinate taps with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Next, we assessed behavioral synchronization by asking participants to coordinate taps with the syllables they produced while reading sentences as naturally as possible (tap to syllable task). Finally, we measured reading fluency and reading comprehension for native English and native Mandarin speakers. Stability of entrainment correlated strongly with tap to syllable task performance and with reading fluency, and both findings generalized across English and Mandarin speakers.
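Recovery from the metronome perturbations used here is commonly modeled with linear phase correction; below is a toy sketch of that standard model (not necessarily the authors' analysis; `alpha`, the correction gain, and all other names are illustrative):

```python
def simulate_phase_correction(period=0.5, alpha=0.5, n_taps=40,
                              shift_at=10, shift=0.1):
    """Linear phase-correction model of synchronized tapping: each inter-tap
    interval equals the metronome period minus a fraction `alpha` of the last
    tap-to-tone asynchrony. A phase shift of `shift` seconds is injected into
    the metronome after tap `shift_at`; larger alpha gives faster recovery,
    which is one way to operationalize "stability of entrainment"."""
    tone = 0.0   # time of the current metronome tone
    tap = 0.0    # time of the current tap
    asyncs = []  # tap-minus-tone asynchronies, one per tap
    for i in range(n_taps):
        asyn = tap - tone
        asyncs.append(asyn)
        tap = tap + period - alpha * asyn  # correct a fraction of the error
        tone += period
        if i == shift_at:
            tone += shift  # the perturbation: metronome jumps 100 ms late
    return asyncs

asyncs = simulate_phase_correction()
# Asynchrony is 0 until the perturbation, jumps to -0.1 s at the next tap,
# then decays geometrically by a factor (1 - alpha) per tap.
```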
Affiliation(s)
- Yi Wei
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
- Roeland Hancock
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA
- Jennifer Mozeiko
- Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs, USA
- Edward W Large
- Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Department of Physics, University of Connecticut, Storrs, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, USA
- The Connecticut Institute for the Brain and Cognitive Sciences of University of Connecticut, Storrs, USA

24
Flaten E, Marshall SA, Dittrich A, Trainor L. Evidence for Top-down Meter Perception in Infancy as Shown by Primed Neural Responses to an Ambiguous Rhythm. Eur J Neurosci 2022; 55:2003-2023. [PMID: 35445451 DOI: 10.1111/ejn.15671] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/23/2022] [Accepted: 03/24/2022] [Indexed: 11/30/2022]
Abstract
From auditory rhythm patterns, listeners extract the underlying steady beat, and perceptually group beats to form meters. While previous studies show infants discriminate different auditory meters, it remains unknown whether they can maintain (imagine) a metrical interpretation of an ambiguous rhythm through top-down processes. We investigated this via electroencephalographic mismatch responses. We primed 6-month-old infants (N = 24) to hear a 6-beat ambiguous rhythm either in duple meter (n = 13), or in triple meter (n = 11) through loudness accents either on every second or every third beat. Periods of priming were inserted before sequences of the ambiguous unaccented rhythm. To elicit mismatch responses, occasional pitch deviants occurred on either beat 4 (strong beat in triple meter; weak in duple) or beat 5 (strong in duple; weak in triple) of the unaccented trials. At frontal left sites, we found a significant interaction between beat and priming group in the predicted direction. Post-hoc analyses showed mismatch response amplitudes were significantly larger for beat 5 in the duple- than triple-primed group (p = .047) and were non-significantly larger for beat 4 in the triple- than duple-primed group. Further, amplitudes were generally larger in infants with musically experienced parents. At frontal right sites, mismatch responses were generally larger for those in the duple compared to triple group, which may reflect a processing advantage for duple meter. These results indicate infants can impose a top-down, internally generated meter on ambiguous auditory rhythms, an ability that would aid early language and music learning.
Affiliation(s)
- Erica Flaten
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- Sara A Marshall
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- Angela Dittrich
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- Laurel Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University; McMaster Institute for Music and the Mind, McMaster University; Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
25
Grossberg S. Toward Understanding the Brain Dynamics of Music: Learning and Conscious Performance of Lyrics and Melodies With Variable Rhythms and Beats. Front Syst Neurosci 2022; 16:766239. [PMID: 35465193 PMCID: PMC9028030 DOI: 10.3389/fnsys.2022.766239]
Abstract
A neural network architecture models how humans learn and consciously perform musical lyrics and melodies with variable rhythms and beats, using brain design principles and mechanisms that evolved earlier than human musical capabilities, and that have explained and predicted many kinds of psychological and neurobiological data. One principle is called factorization of order and rhythm: Working memories store sequential information in a rate-invariant and speaker-invariant way to avoid using excessive memory and to support learning of language, spatial, and motor skills. Stored invariant representations can be flexibly performed in a rate-dependent and speaker-dependent way under volitional control. A canonical working memory design stores linguistic, spatial, motoric, and musical sequences, including sequences with repeated words in lyrics, or repeated pitches in songs. Stored sequences of individual word chunks and pitch chunks are categorized through learning into lyrics chunks and pitches chunks. Pitches chunks respond selectively to stored sequences of individual pitch chunks that categorize harmonics of each pitch, thereby supporting tonal music. Bottom-up and top-down learning between working memory and chunking networks dynamically stabilizes the memory of learned music. Songs are learned by associatively linking sequences of lyrics and pitches chunks. Performance begins when list chunks read word chunk and pitch chunk sequences into working memory. Learning and performance of regular rhythms exploits cortical modulation of beats that are generated in the basal ganglia. Arbitrary performance rhythms are learned by adaptive timing circuits in the cerebellum interacting with prefrontal cortex and basal ganglia. The same network design that controls walking, running, and finger tapping also generates beats and the urge to move with a beat.
Affiliation(s)
- Stephen Grossberg
- Center for Adaptive Systems, Graduate Program in Cognitive and Neural Systems, Department of Mathematics & Statistics, Psychological & Brain Sciences, and Biomedical Engineering, Boston University, Boston, MA, United States
26
Palmer C, Demos AP. Are We in Time? How Predictive Coding and Dynamical Systems Explain Musical Synchrony. Curr Dir Psychol Sci 2022; 31:147-153. [PMID: 35400858 PMCID: PMC8988459 DOI: 10.1177/09637214211053635]
Abstract
Humans tend to anticipate events when they synchronize their actions with sound (such as when they clap to music), which has puzzled scientists for decades. What accounts for this anticipation? We review two theoretical mechanisms for synchrony: predictive coding and dynamical systems. Both theories are grounded in neural activation patterns, but there are important distinctions. We contrast their assumptions, their computations, and their musical applications to anticipatory synchronization.
27
Di Liberto GM, Hjortkjær J, Mesgarani N. Editorial: Neural Tracking: Closing the Gap Between Neurophysiology and Translational Medicine. Front Neurosci 2022; 16:872600. [PMID: 35368278 PMCID: PMC8966872 DOI: 10.3389/fnins.2022.872600]
Affiliation(s)
- Giovanni M. Di Liberto
- School of Computer Science and Statistics, Trinity College Dublin, Dublin, Ireland
- ADAPT Centre, d-real, Trinity College Institute for Neuroscience, Dublin, Ireland
- Jens Hjortkjær
- Hearing Systems Group, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Nima Mesgarani
- Electrical Engineering Department, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States
28
Tichko P, Kim JC, Large EW. A Dynamical, Radically Embodied, and Ecological Theory of Rhythm Development. Front Psychol 2022; 13:653696. [PMID: 35282203 PMCID: PMC8907845 DOI: 10.3389/fpsyg.2022.653696]
Abstract
Musical rhythm abilities (the perception of and coordinated action to the rhythmic structure of music) undergo remarkable change over human development. In the current paper, we introduce a theoretical framework for modeling the development of musical rhythm. The framework, based on Neural Resonance Theory (NRT), explains rhythm development in terms of resonance and attunement, which are formalized using a general theory that includes non-linear resonance and Hebbian plasticity. First, we review the developmental literature on musical rhythm, highlighting several developmental processes related to rhythm perception and action. Next, we offer an exposition of Neural Resonance Theory and argue that elements of the theory are consistent with dynamical, radically embodied (i.e., non-representational) and ecological approaches to cognition and development. We then discuss how dynamical models, implemented as self-organizing networks of neural oscillations with Hebbian plasticity, predict key features of music development. We conclude by illustrating how the notions of dynamical embodiment, resonance, and attunement provide a conceptual language for characterizing musical rhythm development, and, when formalized in physiologically informed dynamical models, provide a theoretical framework for generating testable empirical predictions about musical rhythm development, such as the kinds of native and non-native rhythmic structures infants and children can learn, steady-state evoked potentials to native and non-native musical rhythms, and the effects of short-term (e.g., infant bouncing, infant music classes), long-term (e.g., perceptual narrowing to musical rhythm), and very-long-term (e.g., music enculturation, musical training) learning on music perception-action.
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA, United States
- Ji Chul Kim
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Edward W. Large
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Center for the Ecological Study of Perception and Action (CESPA), Department of Psychological Sciences, University of Connecticut, Mansfield, CT, United States
- Department of Physics, University of Connecticut, Mansfield, CT, United States
29
Mosabbir AA, Braun Janzen T, Al Shirawi M, Rotzinger S, Kennedy SH, Farzan F, Meltzer J, Bartel L. Investigating the Effects of Auditory and Vibrotactile Rhythmic Sensory Stimulation on Depression: An EEG Pilot Study. Cureus 2022; 14:e22557. [PMID: 35371676 PMCID: PMC8958118 DOI: 10.7759/cureus.22557]
Abstract
Background Major depressive disorder (MDD) is a persistent psychiatric condition and one of the leading causes of global disease burden. In a previous study, we investigated the effects of a five-week intervention consisting of rhythmic gamma frequency (30-70 Hz) vibroacoustic stimulation in 20 patients formally diagnosed with MDD. In that study, the findings suggested a significant clinical improvement in depression symptoms as measured using the Montgomery-Asberg Depression Rating Scale (MADRS), with 37% of participants meeting the criteria for clinical response. The goal of the present research was to examine possible changes from baseline to posttreatment in resting-state electroencephalography (EEG) recordings using the same treatment protocol and to characterize basic changes in EEG related to treatment response. Materials and methods The study sample consisted of 19 individuals aged 18-70 years with a clinical diagnosis of MDD. The participants were assessed before and after a five-week treatment period, which consisted of listening to an instrumental musical track on a vibroacoustic device, delivering auditory and vibrotactile stimulation in the gamma-band range (30-70 Hz, with particular emphasis on 40 Hz). The primary outcome measures were the change in MADRS score from baseline to posttreatment and resting-state EEG. Results Analysis comparing MADRS scores at baseline and post-intervention indicated a significant change in the severity of depression symptoms after five weeks (t = 3.9923, df = 18, p = 0.0009). The clinical response rate was 36.85%. Resting-state EEG power analysis revealed a significant increase in occipital alpha power (t = -2.149, df = 18, p = 0.04548), as well as an increase in the prefrontal gamma power of the responders (t = 2.8079, df = 13.431, p = 0.01442).
Conclusions The results indicate that improvements in MADRS scores after rhythmic sensory stimulation (RSS) were accompanied by an increase in alpha power in the occipital region and an increase in gamma in the prefrontal region, thus suggesting treatment effects on cortical activity in depression. The results of this pilot study will help inform subsequent controlled studies evaluating whether treatment response to vibroacoustic stimulation constitutes a real and replicable reduction of depressive symptoms and to characterize the underlying mechanisms.
Affiliation(s)
- Susan Rotzinger
- Department of Psychiatry, University Health Network, Toronto, CAN
- Sidney H Kennedy
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, CAN
- Faranak Farzan
- School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, CAN
- Jed Meltzer
- Rotman Research Institute, Baycrest Health Sciences, Toronto, CAN
- Lee Bartel
- Faculty of Music, University of Toronto, Toronto, CAN
30
Braun Janzen T, Koshimori Y, Richard NM, Thaut MH. Rhythm and Music-Based Interventions in Motor Rehabilitation: Current Evidence and Future Perspectives. Front Hum Neurosci 2022; 15:789467. [PMID: 35111007 PMCID: PMC8801707 DOI: 10.3389/fnhum.2021.789467]
Abstract
Research in basic and clinical neuroscience of music conducted over the past decades has begun to uncover music’s high potential as a tool for rehabilitation. Advances in our understanding of how music engages parallel brain networks underpinning sensory and motor processes, arousal, reward, and affective regulation, have laid a sound neuroscientific foundation for the development of theory-driven music interventions that have been systematically tested in clinical settings. Of particular significance in the context of motor rehabilitation is the notion that musical rhythms can entrain movement patterns in patients with movement-related disorders, serving as a continuous time reference that can help regulate movement timing and pace. To date, a significant number of clinical and experimental studies have tested the application of rhythm- and music-based interventions to improve motor functions following central nervous injury and/or degeneration. The goal of this review is to appraise the current state of knowledge on the effectiveness of music and rhythm to modulate movement spatiotemporal patterns and restore motor function. By organizing and providing a critical appraisal of a large body of research, we hope to provide a revised framework for future research on the effectiveness of rhythm- and music-based interventions to restore and (re)train motor function.
Affiliation(s)
- Thenille Braun Janzen
- Center of Mathematics, Computing and Cognition, Universidade Federal do ABC, São Bernardo do Campo, Brazil
- Yuko Koshimori
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
- Brain Health Imaging Centre, CAMH, Toronto, ON, Canada
- Nicole M. Richard
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
- Faculty of Music, Belmont University, Nashville, TN, United States
- Michael H. Thaut
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, ON, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada
31
Rimmele JM, Kern P, Lubinus C, Frieler K, Poeppel D, Assaneo MF. Musical Sophistication and Speech Auditory-Motor Coupling: Easy Tests for Quick Answers. Front Neurosci 2022; 15:764342. [PMID: 35058741 PMCID: PMC8763673 DOI: 10.3389/fnins.2021.764342]
Abstract
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs overall scored higher and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample overall scored lower, with the scores of LOWs differing significantly from the normal distribution, falling around the 30th percentile. While HIGHs more often reported musical training than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated subscores of the Gold-MSI were decorrelated, the subscales Musical Perception and Musical Training in particular allowed the speech-to-speech synchronization behavior to be inferred. Differential effects of musical perception and training were observed, with training predicting audio-motor synchronization in both groups, but perception only in the HIGHs.
Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, suggesting shared mechanisms involved in speech and music perception.
Affiliation(s)
- Johanna M. Rimmele
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Pius Kern
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Christina Lubinus
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Klaus Frieler
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- David Poeppel
- Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Frankfurt, Germany
- Max Planck NYU Center for Language, Music and Emotion, New York, NY, United States
- Department of Psychology, New York University, New York, NY, United States
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Querétaro, México
32
Nave KM, Hannon EE, Snyder JS. Steady state-evoked potentials of subjective beat perception in musical rhythms. Psychophysiology 2021; 59:e13963. [PMID: 34743347 DOI: 10.1111/psyp.13963]
Abstract
Synchronization of movement to music is a seemingly universal human capacity that depends on sustained beat perception. Previous research has suggested that listeners' conscious perception of the musical structure (e.g., beat and meter) might be reflected in neural responses that follow the frequency of the beat. However, the extent to which these neural responses directly reflect concurrent, listener-reported perception of musical beat versus stimulus-driven activity is understudied. We investigated whether steady state-evoked potentials (SSEPs), measured using electroencephalography (EEG), reflect conscious perception of beat by holding the stimulus constant while contextually manipulating listeners' perception and measuring perceptual responses on every trial. Listeners with minimal music training heard a musical excerpt that strongly supported one of two beat patterns (context phase), followed by a rhythm consistent with either beat pattern (ambiguous phase). During the final phase, listeners indicated whether or not a superimposed drum matched the perceived beat (probe phase). Participants were more likely to indicate that the probe matched the music when that probe matched the original context, suggesting an ability to maintain the beat percept through the ambiguous phase. Likewise, we observed that the spectral amplitude during the ambiguous phase was higher at frequencies that matched the beat of the preceding context. Exploratory analyses of whether EEG amplitude at the beat-related SSEPs predicted performance on the beat-induction task on a single-trial basis were inconclusive. Our findings substantiate the claim that auditory SSEPs reflect conscious perception of musical beat and not just stimulus features.
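The core of the frequency-tagging logic used in studies like this one, comparing EEG spectral amplitude at a beat-related frequency against a beat-unrelated control frequency, can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline; the sampling rate, the 1.2 Hz tagged frequency, and the 1.7 Hz control frequency are assumptions chosen for the example.

```python
import numpy as np

def amplitude_at(freqs_hz, signal, fs):
    """Return FFT amplitude of `signal` at each requested frequency (Hz)."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    resolution = fs / len(signal)  # Hz per FFT bin
    return [spectrum[int(round(f / resolution))] for f in freqs_hz]

fs = 256                          # sampling rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)      # 60 s of synthetic "EEG"
rng = np.random.default_rng(0)
# A small oscillatory response at the 1.2 Hz "beat" frequency buried in noise
eeg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 1, t.size)

beat_amp, control_amp = amplitude_at([1.2, 1.7], eeg, fs)
print(beat_amp > control_amp)     # the tagged frequency stands out
```

With a 60 s window the FFT resolution is 1/60 Hz, so both test frequencies fall exactly on bins; in real analyses, epoch length is chosen so that the frequencies of interest are bin-centered in the same way.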
Affiliation(s)
- Karli M Nave
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, Nevada, USA
- Erin E Hannon
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, Nevada, USA
- Joel S Snyder
- Department of Psychology, University of Nevada Las Vegas, Las Vegas, Nevada, USA
33
Fiveash A, Bedoin N, Gordon RL, Tillmann B. Processing rhythm in speech and music: Shared mechanisms and implications for developmental speech and language disorders. Neuropsychology 2021; 35:771-791. [PMID: 34435803 PMCID: PMC8595576 DOI: 10.1037/neu0000766]
Abstract
OBJECTIVE Music and speech are complex signals containing regularities in how they unfold in time. Similarities between music and speech/language in terms of their auditory features, rhythmic structure, and hierarchical structure have led to a large body of literature suggesting connections between the two domains. However, the precise underlying mechanisms behind this connection remain to be elucidated. METHOD In this theoretical review article, we synthesize previous research and present a framework of potentially shared neural mechanisms for music and speech rhythm processing. We outline structural similarities of rhythmic signals in music and speech, synthesize prominent music and speech rhythm theories, discuss impaired timing in developmental speech and language disorders, and discuss music rhythm training as an additional, potentially effective therapeutic tool to enhance speech/language processing in these disorders. RESULTS We propose the processing rhythm in speech and music (PRISM) framework, which outlines three underlying mechanisms that appear to be shared across music and speech/language processing: Precise auditory processing, synchronization/entrainment of neural oscillations to external stimuli, and sensorimotor coupling. The goal of this framework is to inform directions for future research that integrate cognitive and biological evidence for relationships between rhythm processing in music and speech. CONCLUSION The current framework can be used as a basis to investigate potential links between observed timing deficits in developmental disorders, impairments in the proposed mechanisms, and pathology-specific deficits which can be targeted in treatment and training supporting speech therapy outcomes. On these grounds, we propose future research directions and discuss implications of our framework. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Affiliation(s)
- Anna Fiveash
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
- Nathalie Bedoin
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
- University of Lyon 2, CNRS, UMR5596, Lyon, F-69000, France
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Barbara Tillmann
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, Lyon, France
34
Lenc T, Merchant H, Keller PE, Honing H, Varlet M, Nozaradan S. Mapping between sound, brain and behaviour: four-level framework for understanding rhythm processing in humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200325. [PMID: 34420381 PMCID: PMC8380981 DOI: 10.1098/rstb.2020.0325]
Abstract
Humans perceive and spontaneously move to one or several levels of periodic pulses (a meter, for short) when listening to musical rhythm, even when the sensory input does not provide prominent periodic cues to their temporal location. Here, we review a multi-levelled framework for understanding how external rhythmic inputs are mapped onto internally represented metric pulses. This mapping is studied using an approach to quantify and directly compare representations of metric pulses in signals corresponding to sensory inputs, neural activity and behaviour (typically body movement). Based on this approach, recent empirical evidence can be drawn together into a conceptual framework that unpacks the phenomenon of meter into four levels. Each level highlights specific functional processes that critically enable and shape the mapping from sensory input to internal meter. We discuss the nature, constraints and neural substrates of these processes, starting with fundamental mechanisms investigated in macaque monkeys that enable basic forms of mapping between simple rhythmic stimuli and internally represented metric pulse. We propose that human evolution has gradually built a robust and flexible system upon these fundamental processes, allowing more complex levels of mapping to emerge in musical behaviours. This approach opens promising avenues to understand the many facets of rhythmic behaviours across individuals and species. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Tomas Lenc
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
- Hugo Merchant
- Instituto de Neurobiologia, UNAM, Campus Juriquilla, Querétaro 76230, Mexico
- Peter E. Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Henkjan Honing
- Amsterdam Brain and Cognition (ABC), Institute for Logic, Language and Computation (ILLC), University of Amsterdam, Amsterdam 1090 GE, The Netherlands
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- School of Psychology, Western Sydney University, Penrith, New South Wales 2751, Australia
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
35
Sifuentes-Ortega R, Lenc T, Nozaradan S, Peigneux P. Partially Preserved Processing of Musical Rhythms in REM but Not in NREM Sleep. Cereb Cortex 2021; 32:1508-1519. [PMID: 34491309 DOI: 10.1093/cercor/bhab303]
Abstract
The extent of high-level perceptual processing during sleep remains controversial. In wakefulness, perception of periodicities supports the emergence of high-order representations such as the pulse-like meter perceived while listening to music. Electroencephalography (EEG) frequency-tagged responses elicited at envelope frequencies of musical rhythms have been shown to provide a neural representation of rhythm processing. Specifically, responses at frequencies corresponding to the perceived meter are enhanced over responses at meter-unrelated frequencies. This selective enhancement must rely on higher-level perceptual processes, as it occurs even in irregular (i.e., syncopated) rhythms where meter frequencies are not prominent input features, thus ruling out acoustic confounds. We recorded EEG while presenting a regular (unsyncopated) and an irregular (syncopated) rhythm across sleep stages and wakefulness. Our results show that frequency-tagged responses at meter-related frequencies of the rhythms were selectively enhanced during wakefulness but attenuated across sleep states. Most importantly, this selective attenuation occurred even in response to the irregular rhythm, where meter-related frequencies were not prominent in the stimulus, thus suggesting that neural processes selectively enhancing meter-related frequencies during wakefulness are weakened during rapid eye movement (REM) and further suppressed in non-rapid eye movement (NREM) sleep. These results indicate preserved processing of low-level acoustic properties but limited higher-order processing of auditory rhythms during sleep.
Affiliation(s)
- Rebeca Sifuentes-Ortega
- UR2NF - Neuropsychology and Functional Neuroimaging Research Unit at CRCN - Center for Research in Cognition & Neurosciences, and UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), 1050 Brussels, Belgium
- Tomas Lenc
- Institute of Neuroscience (IONS), Université Catholique de Louvain, 1200 Brussels, Belgium
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain, 1200 Brussels, Belgium
- Philippe Peigneux
- UR2NF - Neuropsychology and Functional Neuroimaging Research Unit at CRCN - Center for Research in Cognition & Neurosciences, and UNI - ULB Neuroscience Institute, Université Libre de Bruxelles (ULB), 1050 Brussels, Belgium
36
Zuk NJ, Murphy JW, Reilly RB, Lalor EC. Envelope reconstruction of speech and music highlights stronger tracking of speech at low frequencies. PLoS Comput Biol 2021; 17:e1009358. [PMID: 34534211 PMCID: PMC8480853 DOI: 10.1371/journal.pcbi.1009358]
Abstract
The human brain tracks amplitude fluctuations of both speech and music, which reflects acoustic processing in addition to the encoding of higher-order features and one's cognitive state. Comparing neural tracking of speech and music envelopes can elucidate stimulus-general mechanisms, but direct comparisons are confounded by differences in their envelope spectra. Here, we use a novel method of frequency-constrained reconstruction of stimulus envelopes using EEG recorded during passive listening. We expected to see music reconstruction match speech in a narrow range of frequencies, but instead we found that speech was reconstructed better than music for all frequencies we examined. Additionally, models trained on all stimulus types performed as well or better than the stimulus-specific models at higher modulation frequencies, suggesting a common neural mechanism for tracking speech and music. However, speech envelope tracking at low frequencies, below 1 Hz, was associated with increased weighting over parietal channels, which was not present for the other stimuli. Our results highlight the importance of low-frequency speech tracking and suggest an origin from speech-specific processing in the brain.
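Envelope reconstruction of this kind is typically done with a backward model: a regularized linear mapping from time-lagged multichannel EEG back to the stimulus envelope, evaluated by the correlation between reconstructed and actual envelopes on held-out data. The sketch below illustrates the general technique on synthetic data; the lag range, ridge parameter, and all variable names are assumptions for the example, not the paper's settings.

```python
import numpy as np

def lagged(eeg, max_lag):
    """Stack time-lagged copies of each EEG channel as regression features."""
    n, ch = eeg.shape
    X = np.zeros((n, ch * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * ch:(lag + 1) * ch] = eeg[:n - lag]
    return X

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
n, ch = 2000, 8
envelope = rng.normal(size=n)                 # synthetic "stimulus envelope"
mixing = rng.normal(size=(1, ch))             # how the envelope appears per channel
eeg = envelope[:, None] @ mixing + rng.normal(size=(n, ch))  # noisy "EEG"

X = lagged(eeg, max_lag=5)
w = ridge_fit(X[:1500], envelope[:1500], lam=1.0)   # train on first 75%
recon = X[1500:] @ w                                # reconstruct held-out segment
r = np.corrcoef(recon, envelope[1500:])[0, 1]       # reconstruction accuracy
```

Frequency-constrained variants, as described in the abstract, would band-pass filter both the envelope and the EEG before fitting, so that `r` quantifies tracking within a specific modulation band.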
Affiliation(s)
- Nathaniel J. Zuk
- Department of Electronic & Electrical Engineering, Trinity College, The University of Dublin, Dublin, Ireland
- Department of Mechanical, Manufacturing & Biomedical Engineering, Trinity College, The University of Dublin, Dublin, Ireland
- Trinity College Institute of Neuroscience, Trinity College, The University of Dublin, Dublin, Ireland
- Department of Biomedical Engineering, University of Rochester, Rochester, New York, United States of America
- Del Monte Institute of Neuroscience, University of Rochester Medical Center, Rochester, New York, United States of America
- Jeremy W. Murphy
- Department of Electronic & Electrical Engineering, Trinity College, The University of Dublin, Dublin, Ireland
- Richard B. Reilly
- Department of Mechanical, Manufacturing & Biomedical Engineering, Trinity College, The University of Dublin, Dublin, Ireland
- Trinity College Institute of Neuroscience, Trinity College, The University of Dublin, Dublin, Ireland
- Trinity Centre for Biomedical Engineering, Trinity College, The University of Dublin, Dublin, Ireland
- Edmund C. Lalor
- Department of Electronic & Electrical Engineering, Trinity College, The University of Dublin, Dublin, Ireland
- Department of Biomedical Engineering, University of Rochester, Rochester, New York, United States of America
- Del Monte Institute of Neuroscience, University of Rochester Medical Center, Rochester, New York, United States of America
| |
Collapse
37. Di Liberto GM, Marion G, Shamma SA. Accurate Decoding of Imagined and Heard Melodies. Front Neurosci 2021; 15:673401. PMID: 34421512; PMCID: PMC8375770; DOI: 10.3389/fnins.2021.673401.
Abstract
Music perception requires the human brain to process a variety of acoustic and music-related properties. Recent research used encoding models to tease apart and study the various cortical contributors to music perception. To do so, such approaches study temporal response functions that summarise the neural activity over several minutes of data. Here we tested the possibility of assessing the neural processing of individual musical units (bars) with electroencephalography (EEG). We devised a decoding methodology based on a maximum correlation metric across EEG segments (maxCorr) and used it to decode melodies from EEG based on an experiment where professional musicians listened to and imagined four Bach melodies multiple times. We demonstrate here that accurate decoding of melodies in single subjects and at the level of individual musical units is possible, both from EEG signals recorded during listening and imagination. Furthermore, we find that greater decoding accuracies are measured for the maxCorr method than for an envelope reconstruction approach based on backward temporal response functions (bTRFenv). These results indicate that low-frequency neural signals encode information beyond note timing, especially with respect to low-frequency cortical signals below 1 Hz, which are shown to encode pitch-related information. Along with the theoretical implications of these results, we discuss the potential applications of this decoding methodology in the context of novel brain-computer interface solutions.
Affiliation(s)
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France; Ecole Normale Supérieure, PSL University, Paris, France; Department of Mechanical, Manufacturing and Biomedical Engineering, Trinity Centre for Biomedical Engineering, Trinity College, Trinity Institute of Neuroscience, The University of Dublin, Dublin, Ireland; Centre for Biomedical Engineering, School of Electrical and Electronic Engineering, University College Dublin (UCD), Dublin, Ireland
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France
- Shihab A Shamma
- Laboratoire des Systèmes Perceptifs, CNRS, Paris, France; Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
38. Expertise Modulates Neural Stimulus-Tracking. eNeuro 2021; 8:ENEURO.0065-21.2021. PMID: 34341067; PMCID: PMC8371925; DOI: 10.1523/eneuro.0065-21.2021.
Abstract
How does the brain anticipate information in language? When people perceive speech, low-frequency (<10 Hz) activity in the brain synchronizes with bursts of sound and visual motion. This phenomenon, called cortical stimulus-tracking, is thought to be one way that the brain predicts the timing of upcoming words, phrases, and syllables. In this study, we test whether stimulus-tracking depends on domain-general expertise or on language-specific prediction mechanisms. We go on to examine how the effects of expertise differ between frontal and sensory cortex. We recorded electroencephalography (EEG) from human participants who were experts in either sign language or ballet, and we compared stimulus-tracking between groups while participants watched videos of sign language or ballet. We measured stimulus-tracking by computing coherence between EEG recordings and visual motion in the videos. Results showed that stimulus-tracking depends on domain-general expertise, and not on language-specific prediction mechanisms. At frontal channels, fluent signers showed stronger coherence to sign language than to dance, whereas expert dancers showed stronger coherence to dance than to sign language. At occipital channels, however, the two groups of participants did not show different patterns of coherence. These results are difficult to explain by entrainment of endogenous oscillations, because neither sign language nor dance show any periodicity at the frequencies of significant expertise-dependent stimulus-tracking. These results suggest that the brain may rely on domain-general predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, sign language, and dance.
39. The Music of Silence: Part II: Music Listening Induces Imagery Responses. J Neurosci 2021; 41:7449-7460. PMID: 34341154; DOI: 10.1523/jneurosci.0184-21.2021.
Abstract
During music listening, humans routinely acquire the regularities of the acoustic sequences and use them to anticipate and interpret the ongoing melody. Specifically, in line with this predictive framework, it is thought that brain responses during such listening reflect a comparison between the bottom-up sensory responses and top-down prediction signals generated by an internal model that embodies the music exposure and expectations of the listener. To attain a clear view of these predictive responses, previous work has eliminated the sensory inputs by inserting artificial silences (or sound omissions) that leave behind only the corresponding predictions of the thwarted expectations. Here, we demonstrate a new alternate approach in which we decode the predictive electroencephalography (EEG) responses to the silent intervals that are naturally interspersed within the music. We did this as participants (experiment 1, 20 participants, 10 female; experiment 2, 21 participants, 6 female) listened to or imagined Bach piano melodies. Prediction signals were quantified and assessed via a computational model of the melodic structure of the music and were shown to exhibit the same response characteristics when measured during listening or imagining. These include an inverted polarity for both silence and imagined responses relative to listening, as well as response magnitude modulations that precisely reflect the expectations of notes and silences in both listening and imagery conditions. These findings therefore provide a unifying view that links results from many previous paradigms, including omission reactions and the expectation modulation of sensory responses, all in the context of naturalistic music listening.
SIGNIFICANCE STATEMENT: Music perception depends on our ability to learn and detect melodic structures. It has been suggested that our brain does so by actively predicting upcoming music notes, a process inducing instantaneous neural responses as the music confronts these expectations. Here, we studied this prediction process using EEGs recorded while participants listen to and imagine Bach melodies. Specifically, we examined neural signals during the ubiquitous musical pauses (or silent intervals) in a music stream and analyzed them in contrast to the imagery responses. We find that imagined predictive responses are routinely co-opted during ongoing music listening. These conclusions are revealed by a new paradigm using listening and imagery of naturalistic melodies.
40. Schirmer A, Wijaya M, Chiu MH, Maess B, Gunter TC. Musical rhythm effects on visual attention are non-rhythmical: evidence against metrical entrainment. Soc Cogn Affect Neurosci 2021; 16:58-71. PMID: 32507877; PMCID: PMC7812633; DOI: 10.1093/scan/nsaa077.
Abstract
The idea that external rhythms synchronize attention cross-modally has attracted much interest and scientific inquiry. Yet, whether associated attentional modulations are indeed rhythmical in that they spring from and map onto an underlying meter has not been clearly established. Here we tested this idea while addressing the shortcomings of previous work associated with confounding (i) metricality and regularity, (ii) rhythmic and temporal expectations or (iii) global and local temporal effects. We designed sound sequences that varied orthogonally (high/low) in metricality and regularity and presented them as task-irrelevant auditory background in four separate blocks. The participants' task was to detect rare visual targets occurring at a silent metrically aligned or misaligned temporal position. We found that target timing was irrelevant for reaction times and visual event-related potentials. High background regularity and to a lesser extent metricality facilitated target processing across metrically aligned and misaligned positions. Additionally, high regularity modulated auditory background frequencies in the EEG recorded over occipital cortex. We conclude that external rhythms, rather than synchronizing attention cross-modally, confer general, nontemporal benefits. Their predictability conserves processing resources that then benefit stimulus representations in other modalities.
Affiliation(s)
- Annett Schirmer
- Correspondence should be addressed to Annett Schirmer, Department of Psychology, The Chinese University of Hong Kong, 3rd Floor, Sino Building, Shatin, N.T., Hong Kong
- Maria Wijaya
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Man Hey Chiu
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR
- Burkhard Maess
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Thomas C Gunter
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
41. Alemi R, Nozaradan S, Lehmann A. Free-Field Cortical Steady-State Evoked Potentials in Cochlear Implant Users. Brain Topogr 2021; 34:664-680. PMID: 34185222; DOI: 10.1007/s10548-021-00860-2.
Abstract
Auditory steady-state evoked potentials (SS-EPs) are phase-locked neural responses to periodic stimuli, believed to reflect specific neural generators. As an objective measure, steady-state responses have been used in different clinical settings, including measuring hearing thresholds of normal and hearing-impaired subjects. Recent studies are in favor of recording these responses as a part of the cochlear implant (CI) device-fitting procedure. Considering these potential benefits, the goals of the present study were to assess the feasibility of recording free-field SS-EPs in CI users and to compare their characteristics between CI users and controls. By taking advantage of a recently developed dual-frequency tagging method, we attempted to record subcortical and cortical SS-EPs from adult CI users and controls and measured reliable subcortical and cortical SS-EPs in the control group. Independent component analysis (ICA) was used to remove CI stimulation artifacts, yet subcortical responses of several CIs were heavily contaminated by these artifacts. Consequently, only cortical SS-EPs were compared between groups, which were found to be larger in the controls. The lower cortical SS-EPs' amplitude in CI users might indicate a reduction in neural synchrony evoked by the modulation rate of the auditory input across different neural assemblies in the auditory pathway. The brain topographies of cortical auditory SS-EPs, the time course of cortical responses, and the reconstructed cortical maps were highly similar between groups, confirming their neural origin and possibility to obtain such responses also in CI recipients. As for subcortical SS-EPs, our results highlight a need for sophisticated denoising algorithms to pinpoint and remove artifactual components from the biological response.
Affiliation(s)
- Razieh Alemi
- Faculty of Medicine, Department of Otolaryngology, McGill University, Montreal, QC, Canada
- Centre for Research On Brain, Language & Music (CRBLM), Montreal, Canada
- International Laboratory for Brain, Music & Sound Research (BRAMS), Montreal, QC, Canada
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Ottignies-Louvain-la-Neuve, Belgium
- Alexandre Lehmann
- Faculty of Medicine, Department of Otolaryngology, McGill University, Montreal, QC, Canada
- Centre for Research On Brain, Language & Music (CRBLM), Montreal, Canada
- International Laboratory for Brain, Music & Sound Research (BRAMS), Montreal, QC, Canada
|
42
|
What you hear first, is what you get: Initial metrical cue presentation modulates syllable detection in sentence processing. Atten Percept Psychophys 2021; 83:1861-1877. [PMID: 33709327 DOI: 10.3758/s13414-021-02251-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/14/2021] [Indexed: 11/08/2022]
Abstract
Auditory rhythms create powerful expectations for the listener. Rhythmic cues with the same temporal structure as subsequent sentences enhance processing compared with irregular or mismatched cues. In the present study, we focus on syllable detection following matched rhythmic cues. Cues were aligned with subsequent sentences at the syllable (low-level cue) or the accented syllable (high-level cue) level. A different group of participants performed the task without cues to provide a baseline. We hypothesized that unaccented syllable detection would be faster after low-level cues, and accented syllable detection would be faster after high-level cues. There was no difference in syllable detection depending on whether the sentence was preceded by a high-level or low-level cue. However, the results revealed a priming effect of the cue that participants heard first. Participants who heard a high-level cue first were faster to detect accented than unaccented syllables, and faster to detect accented syllables than participants who heard a low-level cue first. The low-level-first participants showed no difference between detection of accented and unaccented syllables. The baseline experiment confirmed that hearing a low-level cue first removed the benefit of the high-level grouping structure for accented syllables. These results suggest that the initially perceived rhythmic structure influenced subsequent cue perception and its influence on syllable detection. Results are discussed in terms of dynamic attending, temporal context effects, and implications for context effects in neural entrainment.
43. Proksch S, Comstock DC, Médé B, Pabst A, Balasubramaniam R. Motor and Predictive Processes in Auditory Beat and Rhythm Perception. Front Hum Neurosci 2020; 14:578546. PMID: 33061902; PMCID: PMC7518112; DOI: 10.3389/fnhum.2020.578546.
Abstract
In this article, we review recent advances in research on rhythm and musical beat perception, focusing on the role of predictive processes in auditory motor interactions. We suggest that experimental evidence of the motor system's role in beat perception, including in passive listening, may be explained by the generation and maintenance of internal predictive models, concordant with the Active Inference framework of sensory processing. We highlight two complementary hypotheses for the neural underpinnings of rhythm perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis (Patel and Iversen, 2014) and the Gradual Audiomotor Evolution (GAE) hypothesis (Merchant and Honing, 2014), and review recent experimental progress supporting each of these hypotheses. While initial formulations of ASAP and GAE explain different aspects of beat-based timing (the involvement of motor structures in the absence of movement, and physical entrainment to an auditory beat, respectively), we suggest that work under both hypotheses provides converging evidence toward understanding the predictive role of the motor system in the perception of rhythm, and the specific neural mechanisms involved. We discuss future experimental work necessary to further evaluate the causal neural mechanisms underlying beat and rhythm perception.
Affiliation(s)
- Shannon Proksch
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
- Daniel C Comstock
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
- Butovens Médé
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
- Alexandria Pabst
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
- Ramesh Balasubramaniam
- Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
44. Rhythmic priming of grammaticality judgments in children: Duration matters. J Exp Child Psychol 2020; 197:104885. DOI: 10.1016/j.jecp.2020.104885.
45. Lenc T, Keller PE, Varlet M, Nozaradan S. Neural and Behavioral Evidence for Frequency-Selective Context Effects in Rhythm Processing in Humans. Cereb Cortex Commun 2020; 1:tgaa037. PMID: 34296106; PMCID: PMC8152888; DOI: 10.1093/texcom/tgaa037.
Abstract
When listening to music, people often perceive and move along with a periodic meter. However, the dynamics of mapping between meter perception and the acoustic cues to meter periodicities in the sensory input remain largely unknown. To capture these dynamics, we recorded the electroencephalography while nonmusician and musician participants listened to nonrepeating rhythmic sequences, where acoustic cues to meter frequencies either gradually decreased (from regular to degraded) or increased (from degraded to regular). The results revealed greater neural activity selectively elicited at meter frequencies when the sequence gradually changed from regular to degraded compared with the opposite. Importantly, this effect was unlikely to arise from overall gain, or low-level auditory processing, as revealed by physiological modeling. Moreover, the context effect was more pronounced in nonmusicians, who also demonstrated facilitated sensory-motor synchronization with the meter for sequences that started as regular. In contrast, musicians showed weaker effects of recent context in their neural responses and robust ability to move along with the meter irrespective of stimulus degradation. Together, our results demonstrate that brain activity elicited by rhythm does not only reflect passive tracking of stimulus features, but represents continuous integration of sensory input with recent context.
Affiliation(s)
- Tomas Lenc
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Peter E Keller
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Manuel Varlet
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- School of Psychology, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Sylvie Nozaradan
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal QC H3C 3J7, Canada
|
46
|
Kliger Amrani A, Zion Golumbic E. Spontaneous and stimulus-driven rhythmic behaviors in ADHD adults and controls. Neuropsychologia 2020; 146:107544. [PMID: 32598965 DOI: 10.1016/j.neuropsychologia.2020.107544] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2019] [Revised: 05/27/2020] [Accepted: 06/21/2020] [Indexed: 10/24/2022]
Abstract
Many aspects of human behavior are inherently rhythmic, requiring production of rhythmic motor actions as well as synchronizing to rhythms in the environment. It is well-established that individuals with ADHD exhibit deficits in temporal estimation and timing functions, which may impact their ability to accurately produce and interact with rhythmic stimuli. In the current study we seek to understand the specific aspects of rhythmic behavior that are implicated in ADHD. We specifically ask whether they are attributed to imprecision in the internal generation of rhythms or to reduced acuity in rhythm perception. We also test key predictions of the Preferred Period Hypothesis, which suggests that both perceptual and motor rhythmic behaviors are biased towards a specific personal 'default' tempo. To this end, we tested several aspects of rhythmic behavior and the correspondence between them, including spontaneous motor tempo (SMT), preferred auditory perceptual tempo (PPT) and synchronization-continuations tapping in a broad range of rhythms, from sub-second to supra-second intervals. Moreover, we evaluate the intra-subject consistency of rhythmic preferences, as a means for testing the reality and reliability of personal 'default-rhythms'. We used a modified operational definition for assessing SMT and PPT, instructing participants to tap or calibrate the rhythms most comfortable for them to count along with, to avoid subjective interpretations of the task. Our results shed new light on the specific aspect of rhythmic deficits implicated in ADHD adults. We find that individuals with ADHD are primarily challenged in producing and maintaining isochronous self-generated motor rhythms, during both spontaneous and memory-paced tapping. 
However, they nonetheless exhibit good flexibility for synchronizing to a broad range of external rhythms, suggesting that auditory-motor entrainment for simple rhythms is preserved in ADHD, and that the presence of an external pacer allows overcoming their inherent difficulty in self-generating isochronous motor rhythms. In addition, both groups showed optimal memory-paced tapping for rhythms near their 'counting-based' SMT and PPT, which were slightly faster in the ADHD group. This is in line with the predictions of the Preferred Period Hypothesis, indicating that at least for this well-defined rhythmic behavior (i.e., counting), individuals tend to prefer similar time-scales in both motor production and perceptual evaluation.
47. Makov S, Zion Golumbic E. Irrelevant Predictions: Distractor Rhythmicity Modulates Neural Encoding in Auditory Cortex. Cereb Cortex 2020; 30:5792-5805. DOI: 10.1093/cercor/bhaa153.
Abstract
Dynamic attending theory suggests that predicting the timing of upcoming sounds can assist in focusing attention toward them. However, whether similar predictive processes are also applied to background noises and assist in guiding attention “away” from potential distractors, remains an open question. Here we address this question by manipulating the temporal predictability of distractor sounds in a dichotic listening selective attention task. We tested the influence of distractors’ temporal predictability on performance and on the neural encoding of sounds, by comparing the effects of Rhythmic versus Nonrhythmic distractors. Using magnetoencephalography we found that, indeed, the neural responses to both attended and distractor sounds were affected by distractors’ rhythmicity. Baseline activity preceding the onset of Rhythmic distractor sounds was enhanced relative to nonrhythmic distractor sounds, and sensory response to them was suppressed. Moreover, detection of nonmasked targets improved when distractors were Rhythmic, an effect accompanied by stronger lateralization of the neural responses to attended sounds to contralateral auditory cortex. These combined behavioral and neural results suggest that not only are temporal predictions formed for task-irrelevant sounds, but that these predictions bear functional significance for promoting selective attention and reducing distractibility.
Affiliation(s)
- Shiri Makov
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Elana Zion Golumbic
- Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
48. Why do we move to the beat? A multi-scale approach, from physical principles to brain dynamics. Neurosci Biobehav Rev 2020; 112:553-584. DOI: 10.1016/j.neubiorev.2019.12.024.
49. Ladányi E, Persici V, Fiveash A, Tillmann B, Gordon RL. Is atypical rhythm a risk factor for developmental speech and language disorders? Wiley Interdiscip Rev Cogn Sci 2020; 11:e1528. PMID: 32244259; PMCID: PMC7415602; DOI: 10.1002/wcs.1528.
Abstract
Although a growing literature points to substantial variation in speech/language abilities related to individual differences in musical abilities, mainstream models of communication sciences and disorders have not yet incorporated these individual differences into childhood speech/language development. This article reviews three sources of evidence in a comprehensive body of research aligning with three main themes: (a) associations between musical rhythm and speech/language processing, (b) musical rhythm in children with developmental speech/language disorders and common comorbid attentional and motor disorders, and (c) individual differences in mechanisms underlying rhythm processing in infants and their relationship with later speech/language development. In light of converging evidence on associations between musical rhythm and speech/language processing, we propose the Atypical Rhythm Risk Hypothesis, which posits that individuals with atypical rhythm are at higher risk for developmental speech/language disorders. The hypothesis is framed within the larger epidemiological literature in which recent methodological advances allow for large-scale testing of shared underlying biology across clinically distinct disorders. A series of predictions for future work testing the Atypical Rhythm Risk Hypothesis are outlined. We suggest that if a significant body of evidence is found to support this hypothesis, we can envision new risk factor models that incorporate atypical rhythm to predict the risk of developing speech/language disorders. Given the high prevalence of speech/language disorders in the population and the negative long-term social and economic consequences of gaps in identifying children at-risk, these new lines of research could potentially positively impact access to early identification and treatment. This article is categorized under: Linguistics > Language in Mind and Brain; Neuroscience > Development; Linguistics > Language Acquisition.
Collapse
Affiliation(s)
- Enikő Ladányi: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Valentina Persici: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Department of Psychology, Università degli Studi di Milano - Bicocca, Milan, Italy; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, USA
- Anna Fiveash: Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CRNL, INSERM, University of Lyon 1, U1028, CNRS, UMR5292, Lyon, France
- Barbara Tillmann: Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics Team, CRNL, INSERM, University of Lyon 1, U1028, CNRS, UMR5292, Lyon, France
- Reyna L Gordon: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee, USA; Vanderbilt Genetics Institute, Vanderbilt University, Nashville, Tennessee, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, Tennessee, USA
50
Rajendran VG, Harper NS, Schnupp JWH. Auditory cortical representation of music favours the perceived beat. ROYAL SOCIETY OPEN SCIENCE 2020; 7:191194. [PMID: 32269783 PMCID: PMC7137933 DOI: 10.1098/rsos.191194] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Accepted: 02/03/2020] [Indexed: 06/02/2023]
Abstract
Previous research has shown that musical beat perception is a surprisingly complex phenomenon involving widespread neural coordination across higher-order sensory, motor and cognitive areas. However, the question of how low-level auditory processing must necessarily shape these dynamics, and therefore perception, is not well understood. Here, we present evidence that the auditory cortical representation of music, even in the absence of motor or top-down activations, already favours the beat that will be perceived. Extracellular firing rates in the rat auditory cortex were recorded in response to 20 musical excerpts diverse in tempo and genre, for which musical beat perception had been characterized by the tapping behaviour of 40 human listeners. We found that firing rates in the rat auditory cortex were on average higher on the beat than off the beat. This 'neural emphasis' distinguished the beat that was perceived from other possible interpretations of the beat, was predictive of the degree of tapping consensus across human listeners, and was accounted for by a spectrotemporal receptive field model. These findings strongly suggest that the 'bottom-up' processing of music performed by the auditory system predisposes the timing and clarity of the perceived musical beat.
Affiliation(s)
- Vani G. Rajendran: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK; Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong
- Nicol S. Harper: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Jan W. H. Schnupp: Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK; Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong