1
Celma-Miralles A, Seeberg AB, Haumann NT, Vuust P, Petersen B. Experience with the cochlear implant enhances the neural tracking of spectrotemporal patterns in the Alberti bass. Hear Res 2024; 452:109105. PMID: 39216335. DOI: 10.1016/j.heares.2024.109105.
Abstract
Cochlear implant (CI) users experience diminished music enjoyment due to the technical limitations of the CI. Nonetheless, behavioral studies have reported that rhythmic features are well-transmitted through the CI. Still, the gradual improvement of rhythm perception after the CI switch-on has not yet been determined using neurophysiological measures. To fill this gap, we here reanalyzed the electroencephalographic responses of participants from two previous mismatch negativity studies. These studies included eight recently implanted CI users measured twice, within the first six weeks after CI switch-on and approximately three months later; thirteen experienced CI users with a median experience of 7 years; and fourteen normally hearing (NH) controls. All participants listened to a repetitive four-tone pattern (known in music as Alberti bass) for 35 min. Applying frequency tagging, we aimed to estimate the neural activity synchronized to the periodicities of the Alberti bass. We hypothesized that longer experience with the CI would be reflected in stronger frequency-tagged neural responses approaching the responses of NH controls. We found an increase in the frequency-tagged amplitudes after only 3 months of CI use. This increase in neural synchronization may reflect an early adaptation to the CI stimulation. Moreover, the frequency-tagged amplitudes of experienced CI users were significantly greater than those of recently implanted CI users, but still smaller than those of NH controls. The frequency-tagged neural responses did not just reflect spectrotemporal changes in the stimuli (i.e., intensity or spectral content fluctuating over time), but also showed non-linear transformations that seemed to enhance relevant periodicities of the Alberti bass. Our findings provide neurophysiological evidence indicating a gradual adaptation to the CI, which is noticeable already after three months, resulting in close to NH brain processing of spectrotemporal features of musical rhythms after extended CI use.
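The frequency-tagging logic of this study can be illustrated with a minimal sketch (not the authors' pipeline; the sampling rate, durations, and helper name are assumptions): take the FFT of a long recording and read off the single-sided amplitude at the stimulus periodicities.

```python
import numpy as np

def frequency_tagged_amplitudes(signal, fs, freqs):
    """Return the single-sided spectral amplitude at each tagged frequency.

    Hypothetical minimal version of frequency tagging: FFT the signal and
    pick the amplitude at the bin closest to each frequency of interest.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2  # single-sided amplitude
    bin_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: spectrum[np.argmin(np.abs(bin_freqs - f))] for f in freqs}

# Toy stimulus: a 4 Hz "tone rate" plus a 1 Hz "pattern rate", loosely
# mimicking the periodicities of a repeating four-tone Alberti bass.
fs = 250.0                     # sampling rate in Hz (illustrative)
t = np.arange(0, 40, 1 / fs)   # 40 s of signal
signal = 1.0 * np.sin(2 * np.pi * 4 * t) + 0.5 * np.sin(2 * np.pi * 1 * t)

amps = frequency_tagged_amplitudes(signal, fs, freqs=[1.0, 4.0])
```

With 40 s of data the frequency resolution is 0.025 Hz, so both tagged frequencies fall on exact bins and the recovered amplitudes match the generating sinusoids.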
Affiliation(s)
- Alexandre Celma-Miralles
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Alberte B Seeberg
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Niels T Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- Bjørn Petersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
2
Kaplan T, Jamone L, Pearce M. Probabilistic modelling of microtiming perception. Cognition 2023; 239:105532. PMID: 37442021. DOI: 10.1016/j.cognition.2023.105532.
Abstract
Music performances are rich in systematic temporal irregularities called "microtiming", too fine-grained to be notated in a musical score but important for musical expression and communication. Several studies have examined listeners' preference for rhythms varying in microtiming, but few have addressed precisely how microtiming is perceived, especially in terms of cognitive mechanisms, making the empirical evidence difficult to interpret. Here we provide evidence that microtiming perception can be simulated as a process of probabilistic prediction. Participants performed an XAB discrimination test, in which an archetypal popular drum rhythm was presented with different microtiming. The results indicate that listeners could implicitly discriminate the mean and variance of stimulus microtiming. Furthermore, their responses were effectively simulated by a Bayesian model of entrainment, using a distance function derived from its dynamic posterior estimate over phase. Wide individual differences in participant sensitivity to microtiming were predicted by a model parameter likened to noisy timekeeping processes in the brain. Overall, this suggests that the cognitive mechanisms underlying perception of microtiming reflect a continuous inferential process, potentially driving qualitative judgements of rhythmic feel.
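The quantities the listeners implicitly discriminated (the mean and variance of stimulus microtiming) can be made concrete with a small sketch, assuming performed onsets and a metrical grid in milliseconds (the helper name and toy values are illustrative, not taken from the paper's stimuli):

```python
import statistics

def microtiming_profile(onsets, grid):
    """Mean and (population) standard deviation of onset deviations from
    a quantized metrical grid, both in milliseconds.

    Hypothetical helper, not the paper's Bayesian model: it only measures
    the microtiming statistics that the model is asked to discriminate.
    """
    deviations = [o - g for o, g in zip(onsets, grid)]
    return statistics.mean(deviations), statistics.pstdev(deviations)

# A drum pattern pushed ~10 ms ahead of the grid with mild jitter.
grid = [0, 250, 500, 750, 1000, 1250, 1500, 1750]
onsets = [g + d for g, d in zip(grid, [12, 8, 14, 6, 10, 13, 7, 10])]
mean_dev, sd_dev = microtiming_profile(onsets, grid)
```

An XAB trial then amounts to asking whether these two statistics differ detectably between the reference rhythm and the comparison rhythms.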
Affiliation(s)
- Thomas Kaplan
- School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Lorenzo Jamone
- School of Engineering & Materials Science, Queen Mary University of London, London, United Kingdom
- Marcus Pearce
- School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom; Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
3
Lumaca M, Bonetti L, Brattico E, Baggio G, Ravignani A, Vuust P. High-fidelity transmission of auditory symbolic material is associated with reduced right-left neuroanatomical asymmetry between primary auditory regions. Cereb Cortex 2023:7005170. PMID: 36702496. DOI: 10.1093/cercor/bhad009.
Abstract
The intergenerational stability of auditory symbolic systems, such as music, is thought to rely on brain processes that allow the faithful transmission of complex sounds. Little is known about the functional and structural aspects of the human brain which support this ability, with a few studies pointing to the bilateral organization of auditory networks as a putative neural substrate. Here, we further tested this hypothesis by examining the role of left-right neuroanatomical asymmetries between auditory cortices. We collected neuroanatomical images from a large sample of participants (nonmusicians) and analyzed them with Freesurfer's surface-based morphometry method. Weeks after scanning, the same individuals participated in a laboratory experiment that simulated music transmission: the signaling games. We found that high accuracy in the intergenerational transmission of an artificial tone system was associated with reduced rightward asymmetry of cortical thickness in Heschl's sulcus. Our study suggests that the high-fidelity copying of melodic material may rely on the extent to which computational neuronal resources are distributed across hemispheres. Our data further support the role of interhemispheric brain organization in the cultural transmission and evolution of auditory symbolic systems.
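The kind of right-left asymmetry measure at stake can be illustrated with a common lateralization index, normalizing the hemispheric difference by the bilateral mean (an assumed convention for illustration; the paper's exact formula and thickness values may differ):

```python
def asymmetry_index(left, right):
    """Right-left asymmetry normalized by the bilateral mean.

    Common surface-based morphometry convention (an assumption here):
    positive values indicate rightward asymmetry, negative leftward,
    and 0 indicates symmetry.
    """
    return (right - left) / ((right + left) / 2)

# Toy cortical-thickness values (mm) for a region such as Heschl's sulcus.
ai_rightward = asymmetry_index(left=2.4, right=2.6)  # rightward-asymmetric
ai_symmetric = asymmetry_index(left=2.5, right=2.5)  # symmetric
```

On this convention, the finding reads as: participants with values closer to 0 (or less positive) for Heschl's sulcus transmitted the tone system more accurately.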
Affiliation(s)
- Massimo Lumaca
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus C 8000, Denmark
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus C 8000, Denmark; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford OX3 9BX, United Kingdom; Department of Psychiatry, University of Oxford, Oxford OX3 7JX, United Kingdom; Department of Psychology, University of Bologna, Bologna 40127, Italy
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus C 8000, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Bari 70122, Italy
- Giosuè Baggio
- Language Acquisition and Language Processing Lab, Department of Language and Literature, Norwegian University of Science and Technology, Trondheim 7941, Norway
- Andrea Ravignani
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus C 8000, Denmark; Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, Netherlands
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Aarhus/Aalborg, Aarhus C 8000, Denmark
4
Schiavio A, Maes PJ, van der Schyff D. The dynamics of musical participation. Musicae Scientiae 2022; 26:604-626. PMID: 36090466. PMCID: PMC9449429. DOI: 10.1177/1029864920988319.
Abstract
In this paper we argue that our comprehension of musical participation (the complex network of interactive dynamics involved in collaborative musical experience) can benefit from an analysis inspired by the existing frameworks of dynamical systems theory and coordination dynamics. These approaches can offer novel theoretical tools to help music researchers describe a number of central aspects of joint musical experience in greater detail, such as prediction, adaptivity, social cohesion, reciprocity, and reward. While most musicians involved in collective forms of musicking already have some familiarity with these terms and their associated experiences, we currently lack an analytical vocabulary to approach them in a more targeted way. To fill this gap, we adopt insights from these frameworks to suggest that musical participation may be advantageously characterized as an open, non-equilibrium, dynamical system. In particular, we suggest that research informed by dynamical systems theory might stimulate new interdisciplinary scholarship at the crossroads of musicology, psychology, philosophy, and cognitive (neuro)science, pointing toward new understandings of the core features of musical participation.
Affiliation(s)
- Andrea Schiavio
- Centre for Systematic Musicology, University of Graz, Glacisstraße 27a, 8010 Graz, Austria
- Pieter-Jan Maes
- IPEM, Department of Art, Music, and Theatre Sciences, Ghent University, Belgium
5
Kaplan T, Cannon J, Jamone L, Pearce M. Modeling enculturated bias in entrainment to rhythmic patterns. PLoS Comput Biol 2022; 18:e1010579. PMID: 36174063. PMCID: PMC9553061. DOI: 10.1371/journal.pcbi.1010579.
Abstract
Long-term and culture-specific experience of music shapes rhythm perception, leading to enculturated expectations that make certain rhythms easier to track and more conducive to synchronized movement. However, the influence of enculturated bias on the moment-to-moment dynamics of rhythm tracking is not well understood. Recent modeling work has formulated entrainment to rhythms as a formal inference problem, where phase is continuously estimated based on precise event times and their correspondence to timing expectations: PIPPET (Phase Inference from Point Process Event Timing). Here we propose that the problem of optimally tracking a rhythm also requires an ongoing process of inferring which pattern of event timing expectations is most suitable to predict a stimulus rhythm. We formalize this insight as an extension of PIPPET called pPIPPET (PIPPET with pattern inference). The variational solution to this problem introduces terms representing the likelihood that a stimulus is based on a particular member of a set of event timing patterns, which we initialize according to culturally-learned prior expectations of a listener. We evaluate pPIPPET in three experiments. First, we demonstrate that pPIPPET can qualitatively reproduce enculturated bias observed in human tapping data for simple two-interval rhythms. Second, we simulate categorization of a continuous three-interval rhythm space by Western-trained musicians through derivation of a comprehensive set of priors for pPIPPET from metrical patterns in a sample of Western rhythms. Third, we simulate iterated reproduction of three-interval rhythms, and show that models configured with notated rhythms from different cultures exhibit both universal and enculturated biases as observed experimentally in listeners from those cultures. These results suggest the influence of enculturated timing expectations on human perceptual and motor entrainment can be understood as approximating optimal inference about the rhythmic stimulus, with respect to prototypical patterns in an empirical sample of rhythms that represent the music-cultural environment of the listener.
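The core idea of inferring which timing pattern best explains a stimulus can be sketched in a drastically simplified, discrete-time form (the Gaussian noise model, the pattern set, and all names are assumptions made here for illustration; the actual pPIPPET model is a continuous-time variational filter over phase and pattern jointly):

```python
import math

def pattern_posterior(onsets, patterns, priors, sigma=0.02):
    """Posterior over candidate event-timing patterns given observed onsets.

    Each pattern is a list of expected onset times (s); observations are
    scored under i.i.d. Gaussian timing noise with std 'sigma' (s), and
    combined with culturally-learned prior weights via Bayes' rule.
    """
    logpost = []
    for pattern, prior in zip(patterns, priors):
        ll = sum(-0.5 * ((o - e) / sigma) ** 2 for o, e in zip(onsets, pattern))
        logpost.append(math.log(prior) + ll)
    m = max(logpost)                              # log-sum-exp normalization
    unnorm = [math.exp(lp - m) for lp in logpost]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two candidate three-interval timing patterns (s) and a stimulus rhythm
# whose onsets lie closer to the first pattern.
patterns = [[0.0, 0.5, 1.0, 1.5], [0.0, 0.33, 1.0, 1.5]]
onsets = [0.0, 0.49, 1.01, 1.50]
posterior = pattern_posterior(onsets, patterns, priors=[0.5, 0.5])
```

Biasing `priors` toward patterns common in a listener's musical culture is the discrete analogue of initializing pPIPPET with enculturated expectations.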
Affiliation(s)
- Thomas Kaplan
- Cognitive Science Research Group, School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Jonathan Cannon
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Lorenzo Jamone
- Cognitive Science Research Group, School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Advanced Robotics at Queen Mary (ARQ), School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Marcus Pearce
- Cognitive Science Research Group, School of Electronic Engineering & Computer Science, Queen Mary University of London, London, United Kingdom
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
6
Dotov D, Trainor LJ. Cross-frequency coupling explains the preference for simple ratios in rhythmic behaviour and the relative stability across non-synchronous patterns. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200333. PMID: 34420377. DOI: 10.1098/rstb.2020.0333.
Abstract
Rhythms are important for understanding coordinated behaviours in ecological systems. The repetitive nature of rhythms affords prediction, planning of movements and coordination of processes within and between individuals. A major challenge is to understand complex forms of coordination when they differ from complete synchronization. By expressing phase as a ratio of a cycle, we adapted levels of the Farey tree as a metric of complexity mapped to the range between in-phase and anti-phase synchronization. In a bimanual tapping task, this revealed an increase of variability with ratio complexity, a range of hidden and unstable yet measurable modes, and a rank-frequency scaling law across these modes. We use the phase-attractive circle map to propose an interpretation of these findings in terms of hierarchical cross-frequency coupling (CFC). We also consider the tendency for small-integer attractors in the single-hand repeated tapping of three-interval rhythms reported in the literature. The phase-attractive circle map has wider basins of attraction for such ratios. This work motivates the question of whether CFC intrinsic to neural dynamics implements low-level priors for timing and coordination and thus becomes involved in phenomena as diverse as attractor states in bimanual coordination and the cross-cultural tendency for musical rhythms to have simple interval ratios. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
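The mode-locking behaviour that underlies the preference for simple ratios can be demonstrated with the standard sine circle map, a simpler relative of the phase-attractive circle map used in the paper (the parameters below are chosen purely for illustration):

```python
import math

def rotation_number(omega, k, n=4000):
    """Estimate the rotation number of the standard sine circle map,

        theta <- theta + omega - (k / (2*pi)) * sin(2*pi * theta),

    as the average phase advance per iteration on the unwrapped orbit.
    Inside an Arnold tongue, the estimate locks to a simple ratio.
    """
    theta = 0.0
    for _ in range(n):
        theta = theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return theta / n

# With coupling, a slightly detuned drive is pulled onto the simple 1:1
# ratio (mode locking); without coupling it keeps its bare frequency.
locked = rotation_number(omega=0.97, k=1.0)  # locks onto 1.0
bare = rotation_number(omega=0.97, k=0.0)    # stays at 0.97
```

Wider tongues around simpler ratios in such maps are one way to read the "low-level priors" interpretation: simple-integer coordination modes attract nearby behaviour.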
Affiliation(s)
- Dobromir Dotov
- LIVELab, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1; Psychology, Neuroscience and Behaviour, McMaster University, Ontario, Canada
- Laurel J Trainor
- LIVELab, McMaster University, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4K1; Psychology, Neuroscience and Behaviour, McMaster University, Ontario, Canada; Rotman Research Institute, Toronto, Canada
7
Ravignani A, Dalla Bella S, Falk S, Kello CT, Noriega F, Kotz SA. Rhythm in speech and animal vocalizations: a cross-species perspective. Ann N Y Acad Sci 2019; 1453:79-98. PMID: 31237365. PMCID: PMC6851814. DOI: 10.1111/nyas.14166.
Abstract
Why does human speech have rhythm? As we cannot travel back in time to witness how speech developed its rhythmic properties and why humans have the cognitive skills to process them, we rely on alternative methods to find out. One powerful tool is the comparative approach: studying the presence or absence of cognitive/behavioral traits in other species to determine which traits are shared between species and which are recent human inventions. Vocalizations of many species exhibit temporal structure, but little is known about how these rhythmic structures evolved, are perceived and produced, their biological and developmental bases, and communicative functions. We review the literature on rhythm in speech and animal vocalizations as a first step toward understanding similarities and differences across species. We extend this review to quantitative techniques that are useful for computing rhythmic structure in acoustic sequences and hence facilitate cross-species research. We report links between vocal perception and motor coordination and the differentiation of rhythm based on hierarchical temporal structure. While still far from a complete cross-species perspective of speech rhythm, our review puts some pieces of the puzzle together.
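One of the quantitative techniques in this family is the normalized pairwise variability index (nPVI), a standard metric of durational contrast from speech-rhythm research that also transfers to animal vocalization sequences; a minimal implementation (chosen here as one illustration, the review covers several other methods):

```python
def npvi(intervals):
    """Normalized pairwise variability index of a sequence of durations.

    0 for a perfectly isochronous sequence; larger values indicate more
    contrast between successive intervals (maximum contrast approaches 200).
    """
    if len(intervals) < 2:
        raise ValueError("need at least two intervals")
    pairs = zip(intervals[:-1], intervals[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

# Inter-onset intervals in seconds for two toy vocalization sequences.
isochronous = npvi([0.2, 0.2, 0.2, 0.2])  # regular pulse-like sequence
alternating = npvi([0.1, 0.3, 0.1, 0.3])  # strongly contrastive sequence
```

Because nPVI normalizes each pairwise difference by its local mean, it is tempo-invariant, which is what makes it usable for comparing sequences across species and recording conditions.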
Affiliation(s)
- Andrea Ravignani
- Artificial Intelligence Laboratory, Vrije Universiteit Brussel, Brussels, Belgium
- Institute for Advanced Study, University of Amsterdam, Amsterdam, the Netherlands
- Simone Dalla Bella
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Department of Psychology, University of Montreal, Montréal, Quebec, Canada
- Department of Cognitive Psychology, Warsaw, Poland
- Simone Falk
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS/Université Sorbonne Nouvelle Paris-3, Institut de Linguistique et Phonétique générales et appliquées, Paris, France
- Florencia Noriega
- Chair for Network Dynamics, Center for Advancing Electronics Dresden (CFAED), TU Dresden, Dresden, Germany
- CODE University of Applied Sciences, Berlin, Germany
- Sonja A. Kotz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Quebec, Canada
- Basic and Applied NeuroDynamics Laboratory, Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany