1
Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024; 34:444-450.e5. [PMID: 38176416] [DOI: 10.1016/j.cub.2023.12.019] [Received: 07/28/2023] [Revised: 10/26/2023] [Accepted: 12/07/2023]
Abstract
The appreciation of music is a universal trait of humankind.1,2,3 Evidence supporting this notion includes the ubiquity of music across cultures4,5,6,7 and the natural predisposition toward music that humans display early in development.8,9,10 Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation.11 Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features, pitch and timing,12 in generating expectations: while timing- and pitch-based expectations13 are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
Affiliation(s)
- Roberta Bianco
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy.
- Nathaniel J Zuk
- Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer
- Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre
- Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
2
Bouwer FL, Háden GP, Honing H. Probing Beat Perception with Event-Related Potentials (ERPs) in Human Adults, Newborns, and Nonhuman Primates. Adv Exp Med Biol 2024; 1455:227-256. [PMID: 38918355] [DOI: 10.1007/978-3-031-60183-5_13]
Abstract
The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity such as a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and musical rhythm in particular, and we discuss the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.
Affiliation(s)
- Fleur L Bouwer
- Cognitive Psychology Unit, Institute of Psychology, Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands.
- Department of Psychology, Brain & Cognition, University of Amsterdam, Amsterdam, The Netherlands.
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, Budapest, Hungary
- Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Budapest, Hungary
- Henkjan Honing
- Music Cognition group (MCG), Institute for Logic, Language and Computation (ILLC), Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, The Netherlands
3
Nguyen T, Reisner S, Lueger A, Wass SV, Hoehl S, Markova G. Sing to me, baby: Infants show neural tracking and rhythmic movements to live and dynamic maternal singing. Dev Cogn Neurosci 2023; 64:101313. [PMID: 37879243] [PMCID: PMC10618693] [DOI: 10.1016/j.dcn.2023.101313] [Received: 02/27/2023] [Revised: 09/29/2023] [Accepted: 10/12/2023]
Abstract
Infant-directed singing has unique acoustic characteristics that may allow even very young infants to respond to the rhythms carried through the caregiver's voice. The goal of this study was to examine neural and movement responses to live and dynamic maternal singing in 7-month-old infants and their relation to linguistic development. In total, 60 mother-infant dyads were observed during two singing conditions (playsong and lullaby). In Study 1 (n = 30), we measured infant EEG and used an encoding approach utilizing ridge regressions to measure neural tracking. In Study 2 (n = 40), we coded infant rhythmic movements. In both studies, we assessed children's vocabulary when they were 20 months old. In Study 1, we found above-threshold neural tracking of maternal singing, with stronger tracking of lullabies than of playsongs. We also found that the acoustic features of infant-directed singing modulated tracking. In Study 2, infants showed more rhythmic movement to playsongs than lullabies. Importantly, neural coordination (Study 1) and rhythmic movement (Study 2) to playsongs were positively related to infants' expressive vocabulary at 20 months. These results highlight the importance of infants' brain and movement coordination to their caregiver's musical presentations, potentially as a function of musical variability.
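The ridge-regression encoding approach this abstract mentions can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' pipeline: real analyses use multichannel EEG, band-limited acoustic features, and cross-validated regularization, and every variable name and parameter value below is an assumption.

```python
import numpy as np

def lagged_matrix(stim, n_lags):
    """Design matrix whose columns are time-lagged copies of a 1-D stimulus feature."""
    X = np.zeros((len(stim), n_lags))
    for k in range(n_lags):
        X[k:, k] = stim[:len(stim) - k]
    return X

def fit_trf(stim, eeg, n_lags, lam):
    """Ridge-regularized forward (encoding) model: eeg ~= lagged(stim) @ w."""
    X = lagged_matrix(stim, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

# Synthetic check: generate "EEG" from a known kernel, then recover it.
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)                     # stand-in for a song's envelope
true_w = np.exp(-np.arange(32) / 8.0) * np.sin(np.arange(32) / 3.0)
eeg = lagged_matrix(stim, 32) @ true_w + 0.1 * rng.standard_normal(2000)

w = fit_trf(stim, eeg, n_lags=32, lam=1.0)
pred = lagged_matrix(stim, 32) @ w
r = np.corrcoef(pred, eeg)[0, 1]                     # "neural tracking" score
```

Here `r` plays the role of the neural-tracking measure: how well the lagged stimulus feature, weighted by the fitted temporal response function `w`, predicts the recorded signal.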
Affiliation(s)
- Trinh Nguyen
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria; Neuroscience of Perception and Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy.
- Susanne Reisner
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria
- Anja Lueger
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria
- Samuel V Wass
- Department of Psychology, University of East London, University Way, London E16 2RD, United Kingdom
- Stefanie Hoehl
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria
- Gabriela Markova
- Faculty of Psychology, University of Vienna, Liebiggasse 5, 1010 Vienna, Austria; Institute for Early Life Care, Paracelsus Medical University, Strubergasse 13, 5020 Salzburg, Austria
4
Belo J, Clerc M, Schön D. The effect of familiarity on neural tracking of music stimuli is modulated by mind wandering. AIMS Neurosci 2023; 10:319-331. [PMID: 38188009] [PMCID: PMC10767062] [DOI: 10.3934/neuroscience.2023025] [Received: 07/14/2023] [Revised: 10/29/2023] [Accepted: 11/06/2023]
Abstract
One way to investigate the cortical tracking of continuous auditory stimuli is to use the stimulus reconstruction approach. However, the cognitive and behavioral factors impacting this cortical representation remain largely overlooked. Two possible candidates are familiarity with the stimulus and the ability to resist internal distractions. To explore the possible impacts of these two factors on the cortical representation of natural music stimuli, forty-one participants listened to monodic natural music stimuli while we recorded their neural activity. Using the stimulus reconstruction approach and linear mixed models, we found that familiarity positively impacted the reconstruction accuracy of music stimuli and that this effect of familiarity was modulated by mind wandering.
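The stimulus reconstruction approach described above can likewise be sketched with NumPy: a ridge-regularized backward (decoding) model maps time-lagged EEG channels onto the stimulus, and reconstruction accuracy is the correlation between the reconstructed and actual feature. Everything below (channel count, lag window, noise level) is invented for illustration, not taken from the study.

```python
import numpy as np

def lagged_features(eeg, n_lags):
    """Stack time-lagged copies of every EEG channel: (n_samples, n_channels * n_lags)."""
    n, n_ch = eeg.shape
    X = np.zeros((n, n_ch * n_lags))
    for k in range(n_lags):
        X[k:, k * n_ch:(k + 1) * n_ch] = eeg[:n - k]
    return X

def reconstruct(eeg, envelope, n_lags=16, lam=10.0):
    """Backward model: envelope ~= lagged(EEG) @ g; returns decoder and accuracy."""
    X = lagged_features(eeg, n_lags)
    g = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
    est = X @ g
    return g, np.corrcoef(est, envelope)[0, 1]

# Synthetic data: an "envelope" mixed into 8 noisy channels.
rng = np.random.default_rng(1)
envelope = rng.standard_normal(1500)
mixing = rng.standard_normal((1, 8))                  # hypothetical forward mixing
eeg = envelope[:, None] @ mixing + 0.5 * rng.standard_normal((1500, 8))

g, accuracy = reconstruct(eeg, envelope)
```

In the study's terms, `accuracy` is the reconstruction accuracy whose variation with familiarity and mind wandering is then modeled (there, with linear mixed models).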
Affiliation(s)
- Joan Belo
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Maureen Clerc
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Daniele Schön
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Institute for Language, Communication, and the Brain, Aix-en-Provence, France
5
Seki Y. Examining the capability for rhythmic synchronization and music production in vocal learning parrot species. Front Psychol 2023; 14:1271552. [PMID: 38023035] [PMCID: PMC10646413] [DOI: 10.3389/fpsyg.2023.1271552] [Received: 08/02/2023] [Accepted: 10/18/2023]
Abstract
Vocal production learning and beat perception and synchronization (BPS) share some common characteristics, which makes the vocal learning and rhythmic synchronization hypothesis (VLH) a reasonable explanation for the evolution of the capability for rhythmic synchronization. However, even in vocal learners, it is rare to see non-human animals demonstrate BPS to human music. Therefore, the first objective of this article is to propose some possible reasons why we do not see BPS in budgerigars, an excellent vocal learning species, while presenting some of my own findings. The second objective of this article is to propose a seamless bridge to connect the capability for vocal learning and BPS in locomotion. For this purpose, I present my own findings, wherein cockatiels spontaneously sang in synchrony with a melody of human music. This behavior can be considered a vocal version of BPS. Therefore, it can establish a connection between these two capabilities. This article agrees with the possibility that some mechanisms other than the vocal learning system may enable BPS, contrary to the original idea of VLH. Nevertheless, it is still reasonable to connect the capability for vocal learning and that for BPS. At the very least, the capability for vocal learning may contribute to the evolution of BPS. From these arguments, this article also proposes a scenario which includes vocalizing in synchrony as a driving force for the evolution of BPS and the capability for music production.
Affiliation(s)
- Yoshimasa Seki
- Department of Psychology, Aichi University, Toyohashi, Japan
6
Brown JA, Bidelman GM. Attention, Musicality, and Familiarity Shape Cortical Speech Tracking at the Musical Cocktail Party. bioRxiv 2023:2023.10.28.562773. [PMID: 37961204] [PMCID: PMC10634879] [DOI: 10.1101/2023.10.28.562773]
Abstract
The "cocktail party problem" challenges our ability to understand speech in noisy environments, which often include background music. Here, we explored the role of background music in speech-in-noise listening. Participants listened to an audiobook presented over familiar or unfamiliar background music while tracking keywords in either speech or song lyrics. We used EEG to measure neural tracking of the audiobook. When speech was masked by music, the modeled peak latency at 50 ms (P1TRF) was prolonged compared to unmasked speech. Additionally, P1TRF amplitude was larger with unfamiliar background music, suggesting improved speech tracking. We observed prolonged latencies at 100 ms (N1TRF) when speech was not the attended stimulus, though only in less musical listeners. Our results suggest that early neural representations of speech are enhanced by both attention and concurrent unfamiliar music, indicating that familiar music is more distracting. One's ability to perceptually filter "musical noise" at the cocktail party depends on objective musical abilities.
Affiliation(s)
- Jane A. Brown
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
7
L'Hermite S, Zoefel B. Rhythmic Entrainment Echoes in Auditory Perception. J Neurosci 2023; 43:6667-6678. [PMID: 37604689] [PMCID: PMC10538584] [DOI: 10.1523/jneurosci.0051-23.2023] [Received: 01/10/2023] [Revised: 03/10/2023] [Accepted: 03/20/2023]
Abstract
Rhythmic entrainment echoes, rhythmic brain responses that outlast rhythmic stimulation, can demonstrate endogenous neural oscillations entrained by the stimulus rhythm. Here, we tested for such echoes in auditory perception. Participants detected a pure tone target, presented at a variable delay after another pure tone that was rhythmically modulated in amplitude. In four experiments involving 154 human (female and male) participants, we tested (1) which stimulus rate produces the strongest entrainment echo and, inspired by the tonotopical organization of the auditory system and findings in nonhuman primates, (2) whether these echoes are organized according to sound frequency. We found the strongest entrainment echoes after 6 and 8 Hz stimulation, respectively. The best moments for target detection (in phase or antiphase with the preceding rhythm) depended on whether the sound frequencies of the entraining and target stimuli matched, which is in line with a tonotopical organization. However, for the same experimental condition, best moments were not always consistent across experiments. We provide a speculative explanation for these differences that relies on the notion that neural entrainment and repetition-related adaptation might exercise competing opposite influences on perception. Together, we find rhythmic echoes in auditory perception that seem more complex than those predicted from initial theories of neural entrainment.

SIGNIFICANCE STATEMENT: Rhythmic entrainment echoes are rhythmic brain responses that are produced by a rhythmic stimulus and persist after its offset. These echoes play an important role in the identification of endogenous brain oscillations, entrained by rhythmic stimulation, and give us insights into whether and how participants predict the timing of events. In four independent experiments involving >150 participants, we examined entrainment echoes in auditory perception. We found that entrainment echoes have a preferred rate (between 6 and 8 Hz) and seem to follow the tonotopic organization of the auditory system. Although speculative, we also found evidence that several, potentially competing processes might interact to produce such echoes, a notion that might need to be considered for future experimental design.
Affiliation(s)
- Benedikt Zoefel
- Université de Toulouse III-Paul Sabatier, 31062 Toulouse, France
- Centre National de la Recherche Scientifique, Centre de Recherche Cerveau et Cognition, Centre Hospitalier Universitaire Purpan, 31052 Toulouse, France
8
Lindboom E, Nidiffer A, Carney LH, Lalor EC. Incorporating models of subcortical processing improves the ability to predict EEG responses to natural speech. Hear Res 2023; 433:108767. [PMID: 37060895] [PMCID: PMC10559335] [DOI: 10.1016/j.heares.2023.108767] [Received: 12/31/2022] [Revised: 03/29/2023] [Accepted: 04/09/2023]
Abstract
The goal of describing how the human brain responds to complex acoustic stimuli has driven auditory neuroscience research for decades. Often, a systems-based approach has been taken, in which neurophysiological responses are modeled based on features of the presented stimulus. This includes a wealth of work modeling electroencephalogram (EEG) responses to complex acoustic stimuli such as speech. Examples of the acoustic features used in such modeling include the amplitude envelope and spectrogram of speech. These models implicitly assume a direct mapping from stimulus representation to cortical activity. However, in reality, the representation of sound is transformed as it passes through early stages of the auditory pathway, such that inputs to the cortex are fundamentally different from the raw audio signal that was presented. Thus, it could be valuable to account for the transformations taking place in lower-order auditory areas, such as the auditory nerve, cochlear nucleus, and inferior colliculus (IC) when predicting cortical responses to complex sounds. Specifically, because IC responses are more similar to cortical inputs than acoustic features derived directly from the audio signal, we hypothesized that linear mappings (temporal response functions; TRFs) fit to the outputs of an IC model would better predict EEG responses to speech stimuli. To this end, we modeled responses to the acoustic stimuli as they passed through the auditory nerve, cochlear nucleus, and inferior colliculus before fitting a TRF to the output of the modeled IC responses. Results showed that using model-IC responses in traditional systems analyses resulted in better predictions of EEG activity than using the envelope or spectrogram of a speech stimulus. Further, it was revealed that model-IC derived TRFs predict different aspects of the EEG than acoustic-feature TRFs, and combining both types of TRF models provides a more accurate prediction of the EEG response.
Affiliation(s)
- Elsa Lindboom
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Aaron Nidiffer
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Laurel H Carney
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA; Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Edmund C Lalor
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA; Department of Neuroscience and Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
9
Yu CY, Cabildo A, Grahn JA, Vanden Bosch der Nederlanden CM. Perceived rhythmic regularity is greater for song than speech: examining acoustic correlates of rhythmic regularity in speech and song. Front Psychol 2023; 14:1167003. [PMID: 37303916] [PMCID: PMC10250601] [DOI: 10.3389/fpsyg.2023.1167003] [Received: 02/15/2023] [Accepted: 05/09/2023]
Abstract
Rhythm is a key feature of music and language, but the way rhythm unfolds within each domain differs. Music induces perception of a beat, a regular repeating pulse spaced by roughly equal durations, whereas speech does not have the same isochronous framework. Although rhythmic regularity is a defining feature of music and language, it is difficult to derive acoustic indices of the differences in rhythmic regularity between domains. The current study examined whether participants could provide subjective ratings of rhythmic regularity for acoustically matched (syllable-, tempo-, and contour-matched) and acoustically unmatched (varying in tempo, syllable number, semantics, and contour) exemplars of speech and song. We used subjective ratings to index the presence or absence of an underlying beat and correlated ratings with stimulus features to identify acoustic metrics of regularity. Experiment 1 highlighted that ratings based on the term "rhythmic regularity" did not result in consistent definitions of regularity across participants, with opposite ratings for participants who adopted a beat-based definition (song greater than speech), a normal-prosody definition (speech greater than song), or an unclear definition (no difference). Experiment 2 defined rhythmic regularity as how easy it would be to tap or clap to the utterances. Participants rated song as easier to clap or tap to than speech for both acoustically matched and unmatched datasets. Subjective regularity ratings from Experiment 2 illustrated that stimuli with longer syllable durations and with less spectral flux were rated as more rhythmically regular across domains. Our findings demonstrate that rhythmic regularity distinguishes speech from song and several key acoustic features can be used to predict listeners' perception of rhythmic regularity within and across domains as well.
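Of the acoustic predictors named here, spectral flux has a compact definition: the frame-to-frame change in the short-time magnitude spectrum. A NumPy-only sketch follows, with window and hop sizes chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

def spectral_flux(signal, frame_len=512, hop=256):
    """Mean positive change in magnitude spectrum between adjacent frames."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    mags = np.array([
        np.abs(np.fft.rfft(window * signal[i * hop:i * hop + frame_len]))
        for i in range(n_frames)
    ])
    diff = np.diff(mags, axis=0)           # change per bin between adjacent frames
    return np.mean(np.maximum(diff, 0.0))  # keep only spectral increases

# A steady tone has an almost constant spectrum; white noise fluctuates.
rng = np.random.default_rng(4)
fs = 16000
t = np.arange(fs) / fs                     # 1 s of audio
steady = np.sin(2 * np.pi * 220 * t)
noisy = rng.standard_normal(fs)

flux_tone = spectral_flux(steady)
flux_noise = spectral_flux(noisy)
```

A steady tone changes little from frame to frame, so its flux stays near zero, while a fluctuating signal yields a much larger value; in the study, lower spectral flux went with higher perceived rhythmic regularity.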
Affiliation(s)
- Chu Yi Yu
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Anne Cabildo
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
- Jessica A. Grahn
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Christina M. Vanden Bosch der Nederlanden
- The Brain and Mind Institute, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Department of Psychology, University of Toronto, Mississauga, ON, Canada
10
Incorporating models of subcortical processing improves the ability to predict EEG responses to natural speech. bioRxiv 2023:2023.01.02.522438. [PMID: 36711934] [PMCID: PMC9881851] [DOI: 10.1101/2023.01.02.522438]
Abstract
The goal of describing how the human brain responds to complex acoustic stimuli has driven auditory neuroscience research for decades. Often, a systems-based approach has been taken, in which neurophysiological responses are modeled based on features of the presented stimulus. This includes a wealth of work modeling electroencephalogram (EEG) responses to complex acoustic stimuli such as speech. Examples of the acoustic features used in such modeling include the amplitude envelope and spectrogram of speech. These models implicitly assume a direct mapping from stimulus representation to cortical activity. However, in reality, the representation of sound is transformed as it passes through early stages of the auditory pathway, such that inputs to the cortex are fundamentally different from the raw audio signal that was presented. Thus, it could be valuable to account for the transformations taking place in lower-order auditory areas, such as the auditory nerve, cochlear nucleus, and inferior colliculus (IC) when predicting cortical responses to complex sounds. Specifically, because IC responses are more similar to cortical inputs than acoustic features derived directly from the audio signal, we hypothesized that linear mappings (temporal response functions; TRFs) fit to the outputs of an IC model would better predict EEG responses to speech stimuli. To this end, we modeled responses to the acoustic stimuli as they passed through the auditory nerve, cochlear nucleus, and inferior colliculus before fitting a TRF to the output of the modeled IC responses. Results showed that using model-IC responses in traditional systems analyses resulted in better predictions of EEG activity than using the envelope or spectrogram of a speech stimulus. Further, it was revealed that model-IC derived TRFs predict different aspects of the EEG than acoustic-feature TRFs, and combining both types of TRF models provides a more accurate prediction of the EEG response.
11
Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:1676. [PMID: 36552136] [PMCID: PMC9775503] [DOI: 10.3390/brainsci12121676] [Received: 11/02/2022] [Revised: 11/21/2022] [Accepted: 12/01/2022]
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
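The pulse-level phase-locking value can be illustrated as phase consistency across listening epochs at the pulse frequency. This is a minimal sketch: epoch length, pulse rate, epoch count, and noise level are invented, and the study's computation over naturalistic recordings is more involved.

```python
import numpy as np

def plv_at_freq(epochs, freq, fs):
    """Phase-locking value at `freq`: length of the mean unit phasor across epochs.
    `epochs` is an (n_epochs, n_samples) array."""
    t = np.arange(epochs.shape[1]) / fs
    ref = np.exp(-2j * np.pi * freq * t)
    coeffs = epochs @ ref                  # one complex Fourier coefficient per epoch
    phasors = coeffs / np.abs(coeffs)      # keep phase, discard amplitude
    return np.abs(phasors.mean())

# Synthetic EEG epochs phase-locked to a 2 Hz pulse (120 BPM), plus noise.
rng = np.random.default_rng(3)
fs, pulse = 250.0, 2.0
t = np.arange(int(2 * fs)) / fs            # 2-second epochs
epochs = np.sin(2 * np.pi * pulse * t) + 0.8 * rng.standard_normal((40, len(t)))

plv_pulse = plv_at_freq(epochs, pulse, fs) # phase consistent at the pulse frequency
plv_off = plv_at_freq(epochs, 3.5, fs)     # an off-pulse frequency for comparison
```

Normalizing to each recording's detected pulse frequency, as the study does, amounts to evaluating this measure at a per-recording `pulse` value (and at its harmonics and sub-harmonics for the other metrical levels).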
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA 02115, USA