1. Cui AX, Kraeutner SN, Kepinska O, Motamed Yeganeh N, Hermiston N, Werker JF, Boyd LA. Musical Sophistication and Multilingualism: Effects on Arcuate Fasciculus Characteristics. Hum Brain Mapp 2024; 45:e70035. PMID: 39360580; PMCID: PMC11447524; DOI: 10.1002/hbm.70035.
Abstract
The processing of auditory stimuli which are structured in time is thought to involve the arcuate fasciculus, the white matter tract which connects the temporal cortex and the inferior frontal gyrus. Research has indicated effects of both musical and language experience on the structural characteristics of the arcuate fasciculus. Here, we investigated in a sample of n = 84 young adults whether continuous conceptualizations of musical and multilingual experience related to structural characteristics of the arcuate fasciculus, measured using diffusion tensor imaging. Probabilistic tractography was used to identify the dorsal and ventral parts of the white matter tract. Linear regressions indicated that different aspects of musical sophistication related to the arcuate fasciculus' volume (emotional engagement with music), volumetric asymmetry (musical training and music perceptual abilities), and fractional anisotropy (music perceptual abilities). Our conceptualization of multilingual experience, accounting for participants' proficiency in reading, writing, understanding, and speaking different languages, was not related to the structural characteristics of the arcuate fasciculus. We discuss our results in the context of other research on hemispheric specializations and a dual-stream model of auditory processing.
Affiliation(s)
- Anja-Xiaoxing Cui: Department of Musicology, University of Vienna, Vienna, Austria; Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Sarah N Kraeutner: Department of Psychology, University of British Columbia Okanagan, Kelowna, British Columbia, Canada
- Olga Kepinska: Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria; Department of Behavioral and Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
- Negin Motamed Yeganeh: Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
- Nancy Hermiston: School of Music, University of British Columbia, Vancouver, British Columbia, Canada
- Janet F Werker: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Lara A Boyd: Brain Behaviour Lab, Department of Physical Therapy, University of British Columbia, Vancouver, British Columbia, Canada
2. Kathios N, Lopez KL, Gabard-Durnam LJ, Loui P. Music@Home-Retrospective: A new measure to retrospectively assess childhood home musical environments. Behav Res Methods 2024; 56:8038-8056. PMID: 39103597; PMCID: PMC11362467; DOI: 10.3758/s13428-024-02469-2.
Abstract
Early home musical environments can significantly impact sensory, cognitive, and socioemotional development. While longitudinal studies may be resource-intensive, retrospective reports are a relatively quick and inexpensive way to examine associations between early home musical environments and adult outcomes. We present the Music@Home-Retrospective scale, derived partly from the Music@Home-Preschool scale (Politimou et al., 2018), to retrospectively assess the childhood home musical environment. In two studies (total n = 578), we conducted an exploratory factor analysis (Study 1) and confirmatory factor analysis (Study 2) on items, including many adapted from the Music@Home-Preschool scale. This revealed a 20-item solution with five subscales. Items retained for three subscales (Caregiver Beliefs, Caregiver Initiation of Singing, Child Engagement with Music) load identically to three subscales in the Music@Home-Preschool scale. We also identified two additional dimensions of the childhood home musical environment. The Attitude Toward Childhood Home Musical Environment subscale captures participants' current adult attitudes toward their childhood home musical environment, and the Social Listening Contexts subscale indexes the degree to which participants listened to music at home with others (i.e., friends, siblings, and caregivers). Music@Home-Retrospective scores were related to adult self-reports of musicality, performance on a melodic perception task, and self-reports of well-being, demonstrating the scale's utility in measuring the early home musical environment. The Music@Home-Retrospective scale is freely available to enable future investigations of how the early home musical environment relates to adult cognition, affect, and behavior.
Affiliation(s)
- Nicholas Kathios: Department of Psychology, College of Science, Northeastern University, Boston, MA, USA
- Kelsie L Lopez: Department of Psychology, College of Science, Northeastern University, Boston, MA, USA
- Psyche Loui: Department of Music, College of Arts, Media, and Design, Northeastern University, Boston, MA, USA
3. Lumaca M, Keller PE, Baggio G, Pando-Naude V, Bajada CJ, Martinez MA, Hansen JH, Ravignani A, Joe N, Vuust P, Vulić K, Sandberg K. Frontoparietal network topology as a neural marker of musical perceptual abilities. Nat Commun 2024; 15:8160. PMID: 39289390; PMCID: PMC11408523; DOI: 10.1038/s41467-024-52479-z.
Abstract
Why are some individuals more musical than others? Neither cognitive testing nor classical localizationist neuroscience alone can provide a complete answer. Here, we test how the interplay of brain network organization and cognitive function delivers graded perceptual abilities in a distinctively human capacity. We analyze multimodal magnetic resonance imaging, cognitive, and behavioral data from 200+ participants, focusing on a canonical working memory network encompassing prefrontal and posterior parietal regions. Using graph theory, we examine structural and functional frontoparietal network organization in relation to assessments of musical aptitude and experience. Results reveal a positive correlation between perceptual abilities and the integration efficiency of key frontoparietal regions. The linkage between functional networks and musical abilities is mediated by working memory processes, whereas structural networks influence these abilities through sensory integration. Our work lays the foundation for future investigations into the neurobiological roots of individual differences in musicality.
Affiliation(s)
- M Lumaca: Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- P E Keller: Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- G Baggio: Language Acquisition and Language Processing Lab, Norwegian University of Science and Technology, Trondheim, Norway
- V Pando-Naude: Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- C J Bajada: Department of Physiology and Biochemistry, Faculty of Medicine and Surgery, University of Malta / University of Malta Magnetic Resonance Imaging Research Platform, Msida, Malta
- M A Martinez: Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
- J H Hansen: Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
- A Ravignani: Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Rome, Italy
- N Joe: Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- P Vuust: Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- K Vulić: Department for Human Neuroscience, Institute for Medical Research, University of Belgrade, Belgrade, Serbia
- K Sandberg: Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Health, Aarhus University, Aarhus, Denmark
4. Lad M, Taylor JP, Griffiths TD. The contribution of short-term memory for sound features to speech-in-noise perception and cognition. Hear Res 2024; 451:109081. PMID: 39004015; DOI: 10.1016/j.heares.2024.109081.
Abstract
Speech-in-noise (SIN) perception is a fundamental ability that declines with aging, as does general cognition. We assess whether auditory cognitive ability, in particular short-term memory for sound features, contributes to both. We examined how auditory memory for fundamental sound features, the carrier frequency and amplitude modulation rate of modulated white noise, contributes to SIN perception. We assessed SIN in 153 healthy participants with varying degrees of hearing loss using measures that require single-digit perception (the Digits-in-Noise, DIN) and sentence perception (Speech-in-Babble, SIB). Independent variables were auditory memory and a range of other factors including the Pure Tone Audiogram (PTA), a measure of dichotic pitch-in-noise perception (Huggins pitch), and demographic variables including age and sex. Multiple linear regression models were compared using Bayesian model comparison. The best predictor model for DIN included PTA and Huggins pitch (r² = 0.32, p < 0.001), whereas the model for SIB additionally included auditory memory for sound features (r² = 0.24, p < 0.001). Further analysis demonstrated that auditory memory also explained a significant portion of the variance (28%) in scores on a cognitive screening test for dementia. Auditory memory for non-speech sounds may therefore provide an important predictor of both SIN perception and cognitive ability.
Affiliation(s)
- Meher Lad: Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
- John-Paul Taylor: Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom
- Timothy D Griffiths: Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, United Kingdom; Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
5. de Hoyos L, Verhoef E, Okbay A, Vermeulen JR, Figaroa C, Lense M, Fisher SE, Gordon RL, St Pourcain B. Preschool musicality is associated with school-age communication abilities through genes related to rhythmicity. bioRxiv [Preprint] 2024:2024.09.09.611603. PMID: 39314312; PMCID: PMC11419103; DOI: 10.1101/2024.09.09.611603.
Abstract
Early-life musical engagement is an understudied but developmentally important and heritable precursor of later (social) communication and language abilities. This study aims to uncover the aetiological mechanisms linking musical to communication abilities. We derived polygenic scores (PGS) for self-reported beat synchronisation abilities (PGS-rhythmicity) in children (N ≤ 6,737) from the Avon Longitudinal Study of Parents and Children and tested their association with preschool musical (0.5-5 years) and school-age (social) communication and cognition-related abilities (9-12 years). We further assessed whether relationships between preschool musicality and school-age communication are shared through PGS-rhythmicity, using structural equation modelling techniques. PGS-rhythmicity was associated with preschool musicality (Nagelkerke R² = 0.70-0.79%) and with school-age communication and cognition-related abilities (R² = 0.08-0.41%), but not social communication. We identified links between preschool musicality and school-age speech- and syntax-related communication abilities as captured by known genetic influences underlying rhythmicity (shared effect β = 0.0065, SE = 0.0021, p = 0.0016), above and beyond general cognition, strengthening support for early music intervention programmes.
Affiliation(s)
- Lucía de Hoyos: Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Ellen Verhoef: Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Aysu Okbay: Department of Economics, School of Business and Economics, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Janne R Vermeulen: Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Celeste Figaroa: Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Miriam Lense: Blair School of Music, Vanderbilt University, Nashville, TN, USA; Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Simon E Fisher: Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Reyna L Gordon: Blair School of Music, Vanderbilt University, Nashville, TN, USA; Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Beate St Pourcain: Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; MRC Integrative Epidemiology Unit, University of Bristol, Bristol, United Kingdom
6. Whitton SA, Sreenan B, Luo C, Jiang F. Sensorimotor Synchronization and Neural Entrainment to Imagined Rhythms in Individuals With Proficient Imagery Ability. J Neurosci Res 2024; 102:e25383. PMID: 39286933; PMCID: PMC11410344; DOI: 10.1002/jnr.25383.
Abstract
Sensorimotor synchronization (SMS) is the temporal coordination of motor movements with external or imagined stimuli. Finger-tapping studies indicate better SMS performance with auditory or tactile stimuli compared to visual. However, SMS with a visual rhythm can be improved by enriching stimulus properties (e.g., spatiotemporal content) or individual differences (e.g., one's vividness of auditory imagery). We previously showed that higher self-reported vividness of auditory imagery led to more consistent synchronization-continuation performance when participants continued without a guiding visual rhythm. Here, we examined the contribution of imagery to the SMS performance of proficient imagers, including an auditory or visual distractor task during the continuation phase. While the visual distractor task had minimal effect, SMS consistency was significantly worse when the auditory distractor task was present. Our electroencephalography analysis revealed beat-related neural entrainment, only when the visual or auditory distractor tasks were present. During continuation with the auditory distractor task, the neural entrainment showed an occipital electrode distribution, suggesting the involvement of visual imagery. Unique to SMS continuation with the auditory distractor task, we found neural and sub-vocal (measured with electromyography) entrainment at the three-beat pattern frequency. In this most difficult condition, proficient imagers employed both beat- and pattern-related imagery strategies. However, this combination was insufficient to restore SMS consistency to that observed with visual or no distractor task. Our results suggest that proficient imagers effectively utilized beat-related imagery in one modality when imagery in another modality was limited.
Affiliation(s)
- Benjamin Sreenan: Department of Psychology, University of Nevada, Reno, Nevada, USA
- Canhuang Luo: Department of Psychology, University of Nevada, Reno, Nevada, USA; School of Psychology, Shenzhen University, Shenzhen, China
- Fang Jiang: Department of Psychology, University of Nevada, Reno, Nevada, USA
7. Silva LB, Phillips M, Martins JO. The influence of tonality, tempo, and musical sophistication on the listener's time-duration estimates. Q J Exp Psychol (Hove) 2024; 77:1846-1864. PMID: 37706292; PMCID: PMC11373168; DOI: 10.1177/17470218231203459.
Abstract
Music listening affects time perception, with previous studies suggesting that a variety of factors may influence this: musical, individual, and environmental. Two experiments investigated the effect of musical factors (tonality and musical tempo) and individual factors (a listener's level of musical sophistication) on subjective estimates of duration. Participants estimated the duration of different versions of newly composed instrumental music stimuli under retrospective and prospective conditions. Stimuli varied in tempo (90-120 bpm) and tonality (tonal-atonal), in a 2 × 2 factorial design, while other musical parameters remained constant. Estimates were made using written estimates of minutes and seconds in Experiment 1, and the reproduction method in Experiment 2. Two-way analyses of variance (ANOVAs) showed no main effect of tonality on estimates and no significant interactions between tempo and tonality, under any condition. Musical tempo significantly affected estimates, with the faster tempo leading to longer estimates, but only in the prospective condition, and with the use of the reproduction method. Pearson correlation analyses found no association between musical sophistication scores (measured using the Goldsmiths Musical Sophistication Index [Gold-MSI]) and verbal or reproduction estimates. In conclusion, together with the existing literature, findings suggest that (1) changes in tonality, without further changes in rhythm, metre, or melodic contour, do not significantly affect estimates; (2) small changes in musical tempo influence only prospective reproduction estimates, with larger tempo differences or longer stimuli being needed to cause changes in retrospective estimates; (3) participants' level of musical sophistication does not impact estimates of musical duration; and (4) empirical research on music listening and subjective time must consider potential method-dependent results.
Affiliation(s)
- Ligia Borges Silva: Centre for Interdisciplinary Studies (CEIS20), Institute of Interdisciplinary Research, Faculty of Arts and Humanities, University of Coimbra, Coimbra, Portugal
- José Oliveira Martins: Centre for Interdisciplinary Studies (CEIS20), Institute of Interdisciplinary Research, Faculty of Arts and Humanities, University of Coimbra, Coimbra, Portugal
8. Reed CN, Pearce M, McPherson A. Auditory imagery ability influences accuracy when singing with altered auditory feedback. Musicae Scientiae 2024; 28:478-501. PMID: 39219861; PMCID: PMC11357896; DOI: 10.1177/10298649231223077.
Abstract
In this preliminary study, we explored the relationship between auditory imagery ability and the maintenance of tonal and temporal accuracy when singing and audiating with altered auditory feedback (AAF). Actively performing participants sang and audiated (sang mentally but not aloud) a self-selected piece in AAF conditions, including upward pitch-shifts and delayed auditory feedback (DAF), and with speech distraction. Participants with higher self-reported scores on the Bucknell Auditory Imagery Scale (BAIS) produced a tonal reference that was less disrupted by pitch shifts and speech distraction than musicians with lower scores. However, there was no observed effect of BAIS score on temporal deviation when singing with DAF. Auditory imagery ability was not related to the experience of having studied music theory formally, but was significantly related to the experience of performing. The significant effect of auditory imagery ability on tonal reference deviation remained even after partialling out the effect of experience of performing. The results indicate that auditory imagery ability plays a key role in maintaining an internal tonal center during singing but has at most a weak effect on temporal consistency. In this article, we outline future directions in understanding the multifaceted role of auditory imagery ability in singers' accuracy and expression.
Affiliation(s)
- Courtney N. Reed: Loughborough University London, UK; Queen Mary University of London, UK
- Andrew McPherson: Imperial College London, UK; Queen Mary University of London, UK
9. Hake R, Bürgel M, Nguyen NK, Greasley A, Müllensiefen D, Siedenburg K. Development of an adaptive test of musical scene analysis abilities for normal-hearing and hearing-impaired listeners. Behav Res Methods 2024; 56:5456-5481. PMID: 37957432; PMCID: PMC11335785; DOI: 10.3758/s13428-023-02279-y.
Abstract
Auditory scene analysis (ASA) is the process through which the auditory system makes sense of complex acoustic environments by organising sound mixtures into meaningful events and streams. Although music psychology has acknowledged the fundamental role of ASA in shaping music perception, no efficient test to quantify listeners' ASA abilities in realistic musical scenarios has yet been published. This study presents a new tool for testing ASA abilities in the context of music, suitable for both normal-hearing (NH) and hearing-impaired (HI) individuals: the adaptive Musical Scene Analysis (MSA) test. The test uses a simple 'yes-no' task paradigm to determine whether the sound from a single target instrument is heard in a mixture of popular music. During the online calibration phase, 525 NH and 131 HI listeners were recruited. The level ratio between the target instrument and the mixture, choice of target instrument, and number of instruments in the mixture were found to be important factors affecting item difficulty, whereas the influence of the stereo width (induced by inter-aural level differences) only had a minor effect. Based on a Bayesian logistic mixed-effects model, an adaptive version of the MSA test was developed. In a subsequent validation experiment with 74 listeners (20 HI), MSA scores showed acceptable test-retest reliability and moderate correlations with other music-related tests, pure-tone-average audiograms, age, musical sophistication, and working memory capacities. The MSA test is a user-friendly and efficient open-source tool for evaluating musical ASA abilities and is suitable for profiling the effects of hearing impairment on music perception.
Affiliation(s)
- Robin Hake: Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Michel Bürgel: Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Ninh K Nguyen: Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Daniel Müllensiefen: Department of Psychology, Goldsmiths, University of London, London, UK; Hanover Music Lab, Hochschule für Musik, Theater und Medien, Hannover, Germany
- Kai Siedenburg: Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
10. Liu M, Teng X, Jiang J. Instrumental music training relates to intensity assessment but not emotional prosody recognition in Mandarin. PLoS One 2024; 19:e0309432. PMID: 39213300; PMCID: PMC11364251; DOI: 10.1371/journal.pone.0309432.
Abstract
Building on research demonstrating the benefits of music training for emotional prosody recognition in nontonal languages, this study delves into its unexplored influence on tonal languages. In tonal languages, the acoustic similarity between lexical tones and music, along with the dual role of pitch in conveying lexical and affective meanings, creates a unique interplay. We evaluated 72 participants, half of whom had extensive instrumental music training, with the other half serving as demographically matched controls. All participants completed an online test consisting of 210 Chinese pseudosentences, each designed to express one of five emotions: happiness, sadness, fear, anger, or neutrality. Our statistical analyses, which included effect size estimates and Bayes factors, revealed that the music and nonmusic groups exhibited similar abilities in identifying the emotional prosody of the various emotions. However, the music group gave higher intensity ratings to emotional prosodies of happiness, fear, and anger than the nonmusic group. These findings suggest that while instrumental music training is not related to emotional prosody recognition, it does appear to be related to perceived emotional intensity. This dissociation between emotion recognition and intensity evaluation adds a new piece to the puzzle of the complex relationship between music training and emotion perception in tonal languages.
Affiliation(s)
- Mengting Liu: Department of Art, Harbin Conservatory of Music, Harbin, China
- Xiangbin Teng: Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Jun Jiang: Music College, Shanghai Normal University, Shanghai, China
11. Popescu A, Holman AC. Loop and Enjoy: A Scoping Review of the Research on the Effects of Processing Fluency on Aesthetic Reactions to Auditory Stimuli. Psychol Rep 2024:332941241277474. PMID: 39206490; DOI: 10.1177/00332941241277474.
Abstract
Processing fluency has been shown to affect how people aesthetically evaluate stimuli. While this effect is well documented for visual stimuli, the evidence accumulated for auditory stimuli has not yet been integrated. Our aim was to examine the relevant research on how processing fluency affects the aesthetic appreciation of auditory stimuli and to identify the extant knowledge gaps in this body of evidence. This scoping review of 19 studies reported across 13 articles found that, similarly to visual stimuli, fluency has a positive effect on liking of auditory stimuli. Additionally, we identified certain elements that impede the generalizability of the current research on the relationship between fluency and aesthetic reactions to auditory stimuli, such as a lack of consistency in the number of repeated exposures, the tendency to omit the affective component and the failure to account for personal variables such as musical abilities developed through musical training or the participants' personality or preferences. These results offer a starting point in developing novel and proper processing fluency manipulations of auditory stimuli and suggest several avenues for future research aiming to clarify the impact and importance of processing fluency and disfluency in this domain.
Affiliation(s)
- Alexandru Popescu: Department of Psychology, Alexandru Ioan Cuza University of Iasi, Romania
12. Mednicoff SD, Barashy S, Vollweiler DJ, Benning SD, Snyder JS, Hannon EE. Misophonia reactions in the general population are correlated with strong emotional reactions to other everyday sensory-emotional experiences. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230253. PMID: 39005036; PMCID: PMC11444238; DOI: 10.1098/rstb.2023.0253.
Abstract
Misophonic experiences are common in the general population, and they may shed light on everyday emotional reactions to multi-modal stimuli. We performed an online study of a non-clinical sample to understand the extent to which adults who have misophonic reactions are generally reactive to a range of audio-visual emotion-inducing stimuli. We also hypothesized that musicality might be predictive of one's emotional reactions to these stimuli because music is an activity that involves strong connections between sensory processing and meaningful emotional experiences. Participants completed self-report scales of misophonia and musicality. They also watched videos meant to induce misophonia, autonomous sensory meridian response (ASMR) and musical chills, and were asked to click a button whenever they had any emotional reaction to the video. They also rated the emotional valence and arousal of each video. Reactions to misophonia videos were predicted by reactions to ASMR and chills videos, which could indicate that the frequency with which individuals experience emotional responses varies similarly across both negative and positive emotional contexts. Musicality scores were not correlated with measures of misophonia. These findings could reflect a general phenotype of stronger emotional reactivity to meaningful sensory inputs. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
Affiliation(s)
- Solena D Mednicoff
- Department of Psychology, University of Nevada, Las Vegas, NV 89154-9900, USA
- Sivan Barashy
- Department of Psychology, University of Nevada, Las Vegas, NV 89154-9900, USA
- David J Vollweiler
- Department of Psychology, University of Nevada, Las Vegas, NV 89154-9900, USA
- Stephen D Benning
- Department of Psychology, University of Nevada, Las Vegas, NV 89154-9900, USA
- Joel S Snyder
- Department of Psychology, University of Nevada, Las Vegas, NV 89154-9900, USA
- Erin E Hannon
- Department of Psychology, University of Nevada, Las Vegas, NV 89154-9900, USA

13
Martinez DRQ, Rubio GF, Bonetti L, Achyutuni KG, Tzovara A, Knight RT, Vuust P. Decoding reveals the neural representation of perceived and imagined musical sounds. bioRxiv 2024:2023.08.15.553456. [PMID: 37645733 PMCID: PMC10462096 DOI: 10.1101/2023.08.15.553456]
Abstract
Vividly imagining a song or a melody is a skill that many people accomplish with relatively little effort. However, we are only beginning to understand how the brain represents, holds, and manipulates these musical "thoughts". Here, we decoded perceived and imagined melodies from magnetoencephalography (MEG) brain data (N = 71) to characterize their neural representation. We found that, during perception, auditory regions represent the sensory properties of individual sounds. In contrast, a widespread network including fronto-parietal cortex, hippocampus, basal nuclei, and sensorimotor regions holds the melody as an abstract unit during both perception and imagination. Furthermore, the mental manipulation of a melody systematically changes its neural representation, reflecting volitional control of auditory images. Our work sheds light on the nature and dynamics of auditory representations, informing future research on neural decoding of auditory imagination.
Affiliation(s)
- David R. Quiroga Martinez
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, Berkeley, CA
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Gemma Fernandez Rubio
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Center for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
- Department of Psychiatry, University of Oxford, Oxford, UK
- Kriti G. Achyutuni
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, Berkeley, CA
- Athina Tzovara
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, Berkeley, CA
- Institute of Computer Science, University of Bern, Bern, Switzerland
- Center for Experimental Neurology, Sleep Wake Epilepsy Center, NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Robert T. Knight
- Helen Wills Neuroscience Institute & Department of Psychology, University of California Berkeley, Berkeley, CA
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark

14
Zioga I, Harrison PMC, Pearce M, Bhattacharya J, Di Bernardi Luft C. The association between liking, learning and creativity in music. Sci Rep 2024; 14:19048. [PMID: 39152203 PMCID: PMC11329743 DOI: 10.1038/s41598-024-70027-z]
Abstract
Aesthetic preference is intricately linked to learning and creativity. Previous studies have largely examined the perception of novelty in terms of pleasantness and the generation of novelty via creativity separately. The current study examines the connection between the perception and generation of novelty in music; specifically, we investigated how pleasantness judgements and brain responses to musical notes of varying probability (estimated by a computational model of auditory expectation) are linked to learning and creativity. To facilitate learning de novo, 40 non-musicians were trained on an unfamiliar artificial music grammar. After learning, participants evaluated the pleasantness of the final notes of melodies, which varied in probability, while their EEG was recorded. They also composed their own musical pieces using the learned grammar, which were subsequently assessed by experts. As expected, there was an inverted U-shaped relationship between liking and probability: participants were more likely to rate notes with intermediate probabilities as pleasant. Further, intermediate-probability notes elicited larger N100 and P200 at posterior and frontal sites, respectively, components associated with prediction error processing. Crucially, individuals who produced less creative compositions preferred higher-probability notes, whereas individuals who composed more creative pieces preferred notes with intermediate probability. Finally, evoked brain responses to note probability were relatively independent of learning and creativity, suggesting that these higher-level processes are not mediated by brain responses related to performance monitoring. Overall, our findings shed light on the relationship between the perception and generation of novelty, offering new insights into aesthetic preference and its neural correlates.
Affiliation(s)
- Ioanna Zioga
- Donders Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 EN, Nijmegen, The Netherlands
- Peter M C Harrison
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, UK
- Faculty of Music, University of Cambridge, Cambridge, UK
- Marcus Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, UK
- Joydeep Bhattacharya
- Department of Psychology, Goldsmiths University of London, New Cross, London, SE14 6NW, UK

15
Armitage J, Eerola T, Halpern AR. Play it again, but more sadly: Influence of timbre, mode, and musical experience in melody processing. Mem Cognit 2024:10.3758/s13421-024-01614-8. [PMID: 39095618 DOI: 10.3758/s13421-024-01614-8]
Abstract
The emotional properties of music are influenced by a host of factors, such as timbre, mode, harmony, and tempo. In this paper, we consider how two of these factors, mode (major vs. minor) and timbre, interact to influence ratings of perceived valence, reaction time, and recognition memory. More specifically, we considered the notion of congruence: we used a set of melodies that crossed modes typically perceived as happy and sad (i.e., major and minor) in Western cultures with instruments typically perceived as happy and sad (i.e., marimba and viola). In a reaction-time experiment, participants were asked to classify melodies as happy or sad as quickly as possible. There was a clear congruency effect: when the mode and timbre were congruent (major/marimba or minor/viola), reaction times were shorter than when the mode and timbre were incongruent (major/viola or minor/marimba). In Experiment 2, participants first rated the melodies for valence, before completing a recognition task. Melodies that were initially presented in incongruent conditions in the rating task were subsequently recognized better in the recognition task. The recognition advantage for melodies presented in incongruent conditions is discussed in the context of desirable difficulty.
Affiliation(s)
- James Armitage
- Music Department, Durham University, Durham, DH1 3RL, UK
- Tuomas Eerola
- Music Department, Durham University, Durham, DH1 3RL, UK
- Andrea R Halpern
- Psychology Department, Bucknell University, Lewisburg, PA, 17837, USA

16
Evans MG, Gaeta P, Davidenko N. Absolute pitch in involuntary musical imagery. Atten Percept Psychophys 2024; 86:2124-2135. [PMID: 39134919 PMCID: PMC11411011 DOI: 10.3758/s13414-024-02936-0]
Abstract
Memory for isolated absolute pitches is extremely rare in Western, English-speaking populations. However, past research has found that people can voluntarily reproduce well-known songs in the original key much more often than chance. It is unknown whether this requires deliberate effort or if it manifests in involuntary musical imagery (INMI, or earworms). Participants (N = 30, convenience sample) were surveyed at random times over a week and asked to produce a sung recording of any music they were experiencing in their heads. We measured the "pitch error" of each recording to the nearest semitone by comparing participants' recordings to the original song. We found that 44.7% of recordings had a pitch error of 0 semitones, and 68.9% of recordings were within ± 1 semitone of the original song. Our results provide novel evidence that a large proportion of the population has access to absolute pitch, as revealed in their INMI.
Affiliation(s)
- Matthew G Evans
- Department of Psychology, University of California Santa Cruz, Santa Cruz, CA, USA
- Pablo Gaeta
- Department of Computer Science and Engineering, UC Santa Cruz, Santa Cruz, CA, USA
- Nicolas Davidenko
- Department of Psychology, University of California Santa Cruz, Santa Cruz, CA, USA

17
Harrison PMC, MacConnachie JMC. Consonance in the carillon. J Acoust Soc Am 2024; 156:1111-1122. [PMID: 39145812 DOI: 10.1121/10.0028167]
Abstract
Previous psychological studies have shown that musical consonance is determined not only by the frequency ratios between tones, but also by the frequency spectra of those tones. However, these prior studies used artificial tones, specifically tones built from a small number of pure tones, which do not match the acoustic complexity of real musical instruments. The present experiment therefore investigates tones recorded from a real musical instrument, the Westerkerk Carillon, in a "dense rating" experiment where participants (N = 113) rated musical intervals drawn from the continuous range of 0-15 semitones. Results show that the traditional consonances of the major third and the minor sixth become dissonances in the carillon, and that small intervals (in particular 0.5-2.5 semitones) also become particularly dissonant. Computational modelling shows that these effects are primarily caused by interference between partials (e.g., beating), but that a preference for harmonicity is also necessary to produce an accurate overall account of participants' preferences. The results support musicians' writings about the carillon and contribute to ongoing debates about the psychological mechanisms underpinning consonance perception, in particular disputing the recent claim that interference is largely irrelevant to consonance perception.
Affiliation(s)
- Peter M C Harrison
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- James M C MacConnachie
- Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom

18
Bürgel M, Siedenburg K. Impact of interference on vocal and instrument recognition. J Acoust Soc Am 2024; 156:922-938. [PMID: 39133041 DOI: 10.1121/10.0028152]
Abstract
Voices arguably occupy a superior role in auditory processing. Specifically, studies have reported that singing voices are processed faster and more accurately, and possess greater salience in musical scenes, compared to instrumental sounds. However, the underlying acoustic features of this superiority and the generality of these effects remain unclear. This study investigates the impact of frequency micro-modulations (FMM) and the influence of interfering sounds on sound recognition. Thirty young participants, half with musical training, engage in three sound recognition experiments featuring short vocal and instrumental sounds in a go/no-go task. Accuracy and reaction times are measured for sounds from recorded samples and excerpts of popular music. Each sound is presented in separate versions with and without FMM, in isolation or accompanied by a piano. Recognition varies across sound categories, but no general vocal superiority and no effects of FMM emerge. When presented together with interfering sounds, all sounds exhibit degraded recognition. However, whereas /a/ sounds stand out by showing a distinct robustness to interference (i.e., less degradation of recognition), /u/ sounds lack this robustness. Acoustical analysis implies that these recognition differences can be explained by spectral similarities. Together, these results challenge the notion of a general vocal superiority in auditory perception.
Affiliation(s)
- Michel Bürgel
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg 26129, Germany
- Kai Siedenburg
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg 26129, Germany
- Signal Processing and Speech Communication Laboratory, Graz University of Technology, Graz 8010, Austria

19
Silas S, Müllensiefen D, Kopiez R. Singing Ability Assessment: Development and validation of a singing test based on item response theory and a general open-source software environment for singing data. Behav Res Methods 2024; 56:4358-4384. [PMID: 37672190 PMCID: PMC11289018 DOI: 10.3758/s13428-023-02188-0]
Abstract
We describe the development of the Singing Ability Assessment (SAA) open-source test environment. The SAA captures and scores different aspects of human singing ability and melodic memory in the context of item response theory. Taking perspectives from both the melodic recall and singing accuracy literature, we present results from two online experiments (N = 247; N = 910). On-the-fly audio transcription is produced via a probabilistic algorithm and scored via latent variable approaches. Measures of the ability to sing long notes indicate a three-dimensional principal components analysis solution representing pitch accuracy, pitch volatility and changes in pitch stability (proportion variance explained: 35%; 33%; 32%). For melody singing, a mixed-effects model uses features of melodic structure (e.g., tonality, melody length) to predict overall sung melodic recall performance via a composite score [R2c = .42; R2m = .16]. Additionally, two separate mixed-effects models were constructed to explain performance in singing back melodies in a rhythmic [R2c = .42; R2m = .13] and an arhythmic [R2c = .38; R2m = .11] condition. Results showed that the resulting SAA melodic scores are significantly associated with previously described measures of singing accuracy, the long note singing accuracy measures, demographic variables, and features of participants' hardware setup. Consequently, we release five R packages which facilitate deploying melodic stimuli online and in laboratory contexts, constructing audio production tests, transcribing audio in the R environment, and deploying the test elements and their supporting models. These are published as open-source, easy to access, and flexible to adapt.
Affiliation(s)
- Sebastian Silas
- Goldsmiths University of London, London, UK
- Hanover Music Lab, Hanover University of Music, Drama and Media, Neues Haus 1, 30175, Hannover, Germany
- Daniel Müllensiefen
- Goldsmiths University of London, London, UK
- Hanover Music Lab, Hanover University of Music, Drama and Media, Neues Haus 1, 30175, Hannover, Germany
- Reinhard Kopiez
- Hanover Music Lab, Hanover University of Music, Drama and Media, Neues Haus 1, 30175, Hannover, Germany

20
Cannon J, Cardinaux A, Bungert L, Li C, Sinha P. Reduced precision of motor and perceptual rhythmic timing in autistic adults. Heliyon 2024; 10:e34261. [PMID: 39082034 PMCID: PMC11284439 DOI: 10.1016/j.heliyon.2024.e34261]
Abstract
Recent results suggest that autistic individuals exhibit reduced accuracy compared to non-autistic peers in temporally coordinating their actions with predictable external cues, e.g., synchronizing finger taps to an auditory metronome. However, it is not yet clear whether these difficulties are driven primarily by motor differences or extend into perceptual rhythmic timing tasks. We recruited autistic and non-autistic participants for an online study testing both finger tapping synchronization and continuation as well as rhythmic time perception (anisochrony detection). We fractionated each participant's synchronization results into parameters representing error correction, motor noise, and internal time-keeper noise, and also investigated error-correcting responses to small metronome timing perturbations. Contrary to previous work, we did not find strong evidence for reduced synchronization error correction. However, we found compelling evidence for noisier internal rhythmic timekeeping in the synchronization, continuation, and perceptual components of the experiment. These results suggest that noisier internal rhythmic timing processes underlie some sensorimotor coordination challenges in autism.
Affiliation(s)
- Jonathan Cannon
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
- Annie Cardinaux
- Department of Brain & Cognitive Science, MIT, Cambridge, MA, USA
- Lindsay Bungert
- Department of Brain & Cognitive Science, MIT, Cambridge, MA, USA
- Cindy Li
- Department of Brain & Cognitive Science, MIT, Cambridge, MA, USA
- McGovern Institute, MIT, Cambridge, MA, USA
- Pawan Sinha
- Department of Brain & Cognitive Science, MIT, Cambridge, MA, USA

21
Teng X, Larrouy-Maestri P, Poeppel D. Segmenting and Predicting Musical Phrase Structure Exploits Neural Gain Modulation and Phase Precession. J Neurosci 2024; 44:e1331232024. [PMID: 38926087 PMCID: PMC11270514 DOI: 10.1523/jneurosci.1331-23.2024]
Abstract
Music, like spoken language, is often characterized by hierarchically organized structure. Previous experiments have shown neural tracking of notes and beats, but little work touches on the more abstract question: how does the brain establish high-level musical structures in real time? We presented Bach chorales to participants (20 females and 9 males) undergoing electroencephalogram (EEG) recording to investigate how the brain tracks musical phrases. We removed the main temporal cues to phrasal structures, so that listeners could only rely on harmonic information to parse a continuous musical stream. Phrasal structures were disrupted by locally or globally reversing the harmonic progression, so that our observations on the original music could be controlled and compared. We first replicated the findings on neural tracking of musical notes and beats, substantiating the positive correlation between musical training and neural tracking. Critically, we discovered a neural signature in the frequency range ∼0.1 Hz (modulations of EEG power) that reliably tracks musical phrasal structure. Next, we developed an approach to quantify the phrasal phase precession of the EEG power, revealing that phrase tracking is indeed an operation of active segmentation involving predictive processes. We demonstrate that the brain establishes complex musical structures online over long timescales (>5 s) and actively segments continuous music streams in a manner comparable to language processing. These two neural signatures, phrase tracking and phrasal phase precession, provide new conceptual and technical tools to study the processes underpinning high-level structure building using noninvasive recording techniques.
Affiliation(s)
- Xiangbin Teng
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
- Pauline Larrouy-Maestri
- Music Department, Max-Planck-Institute for Empirical Aesthetics, Frankfurt 60322, Germany
- Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- David Poeppel
- Center for Language, Music, and Emotion (CLaME), New York, New York 10003
- Department of Psychology, New York University, New York, New York 10003
- Ernst Struengmann Institute for Neuroscience, Frankfurt 60528, Germany
- Music and Audio Research Laboratory (MARL), New York, New York 11201

22
Durcan O, Holland P, Bhattacharya J. A framework for neurophysiological experiments on flow states. Commun Psychol 2024; 2:66. [PMID: 39242976 PMCID: PMC11332228 DOI: 10.1038/s44271-024-00115-3]
Abstract
Csikszentmihalyi initially described the "flow state" in experts deeply engaged in self-rewarding activities. However, recent neurophysiology research often measures flow in constrained and unfamiliar activities. In this perspective article, we address the challenging yet necessary considerations for studying the neurophysiology of the flow state. We aggregate an activity-autonomy framework with several testable hypotheses to induce flow, expanding the traditional "challenge-skill balance" paradigm. Further, we review and synthesise the best methodological practices from neurophysiological flow studies into a practical 24-item checklist. This checklist offers detailed guidelines for ensuring consistent reporting, personalising and testing isolated challenge types, factoring in participant skills, motivation, and individual differences, and processing self-report data. We argue for a cohesive approach in neurophysiological studies to capture a consistent representation of flow states.
Affiliation(s)
- Oliver Durcan
- Department of Psychology, Goldsmiths University of London, London, UK
- Peter Holland
- Department of Psychology, Goldsmiths University of London, London, UK

23
Slusarenko A, Rosenberg MC, Kazanski ME, McKay JL, Emmery L, Kesar TM, Hackney ME. Associations Between Music and Dance Relationships, Rhythmic Proficiency, and Spatiotemporal Movement Modulation Ability in Adults with and without Mild Cognitive Impairment. J Alzheimers Dis 2024:JAD231453. [PMID: 38995778 DOI: 10.3233/jad-231453]
Abstract
Background: Personalized dance-based movement therapies may improve cognitive and motor function in individuals with mild cognitive impairment (MCI), a precursor to Alzheimer's disease. While age- and MCI-related deficits reduce individuals' abilities to perform dance-like rhythmic movement sequences (RMS), i.e., spatial and temporal modifications to movement, it remains unclear how individuals' relationships to dance and music affect their ability to perform RMS. Objective: To characterize associations between RMS performance and music or dance relationships, as well as the ability to perceive rhythm and meter (rhythmic proficiency), in adults with and without MCI. Methods: We used wearable inertial sensors to evaluate the ability of 12 young adults (YA; age = 23.9±4.2 years; 9F), 26 older adults without MCI (OA; age = 68.1±8.5 years; 16F), and 18 adults with MCI (MCI; age = 70.8±6.2 years; 10F) to accurately perform spatial, temporal, and spatiotemporal RMS. To quantify self-reported music and dance relationships and rhythmic proficiency, we developed Music (MRQ) and Dance Relationship Questionnaires (DRQ), and a rhythm assessment (RA), respectively. We correlated MRQ, DRQ, and RA scores against RMS performance for each group separately. Results: The OA and YA groups exhibited better MRQ and RA scores than the MCI group (p < 0.006). Better MRQ and RA scores were associated with better temporal RMS performance only in the YA and OA groups (r2 = 0.18-0.41; p < 0.045). DRQ scores were not associated with RMS performance in any group. Conclusions: Cognitive deficits in adults with MCI likely limit the extent to which music relationships or rhythmic proficiency improve the ability to perform the temporal aspects of movements performed during dance-based therapies.
Affiliation(s)
- Michael C Rosenberg
- Department of Biomedical Engineering, Neuromechanics Laboratory, Emory University & Georgia Institute of Technology, Atlanta, GA, USA
- Meghan E Kazanski
- Department of Medicine, Division of Geriatrics and Gerontology, Emory University School of Medicine, Atlanta, GA, USA
- J Lucas McKay
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
- Laura Emmery
- Department of Music, Emory University College of Arts and Sciences, Atlanta, GA, USA
- Trisha M Kesar
- Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Madeleine E Hackney
- Department of Medicine, Division of Geriatrics and Gerontology, Emory University School of Medicine, Atlanta, GA, USA
- Emory University School of Nursing, Atlanta, GA, USA
- Atlanta VA Center for Visual & Neurocognitive Rehabilitation, Atlanta, GA, USA
- Birmingham/Atlanta VA Geriatric Research Education and Clinical Center, Atlanta, GA, USA

24
Koch S, Schubert T, Blankenberger S. Simultaneous but independent spatial associations for pitch and loudness. Psychol Res 2024; 88:1602-1615. [PMID: 38720089 PMCID: PMC11282129 DOI: 10.1007/s00426-024-01970-9]
Abstract
For the auditory dimensions loudness and pitch, a vertical SARC effect (Spatial Association of Response Codes) exists: when responding to loud (high) tones, participants are faster with top-sided responses compared to bottom-sided responses, and vice versa for soft (low) tones. These effects are typically explained by two different spatial representations for the two dimensions, with pitch being represented on a helix structure and loudness being represented as a spatially associated magnitude. Prior studies show inconsistent results with regard to the questions of whether two SARC effects can occur at the same time and whether SARC effects interact with each other. Therefore, this study aimed to investigate the interrelation between the SARC effect for pitch and the SARC effect for loudness in a timbre discrimination task. Participants (N = 36) heard one tone per trial and had to decide whether the presented tone was a violin tone or an organ tone by pressing a top-sided or bottom-sided response key. Loudness and pitch were varied orthogonally. We tested for the occurrence of SARC effects for pitch and loudness, as well as their potential interaction, by conducting a multiple linear regression with the difference in reaction time (dRT) as dependent variable and loudness and pitch as predictors. Frequentist and Bayesian analyses revealed that the regression coefficients of pitch and loudness were smaller than zero, indicating the simultaneous occurrence of SARC effects for both dimensions. In contrast, the interaction coefficient was not different from zero, indicating an additive effect of both predictors.
Affiliation(s)
- Sarah Koch
- Department of Psychology, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Torsten Schubert
- Department of Psychology, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany
- Sven Blankenberger
- Department of Psychology, Martin Luther University Halle-Wittenberg, Halle (Saale), Germany

25
Whitton SA, Sreenan B, Jiang F. The contribution of auditory imagery and visual rhythm perception to sensorimotor synchronization with external and imagined rhythm. J Exp Psychol Gen 2024; 153:1861-1872. [PMID: 38695803 PMCID: PMC11250674 DOI: 10.1037/xge0001601]
Abstract
Sensorimotor synchronization (SMS) refers to the temporal coordination of an external stimulus with movement. Our previous work revealed that while SMS with visual flashing patterns was less consistent than with auditory or tactile patterns, it was still evident in a sample of nonmusicians. Although previous studies have speculated the potential role of auditory imagery, its contribution to visual SMS performance is not well quantified. Utilizing a synchronization-continuation finger-tapping task with a visual stimulus that included implied motion, we aimed to examine how participants' imagery ability, musicality, and rhythm perception affected SMS performance. We quantified participants' SMS consistency in synchronization (with visual cues) and continuation (without visual cues) phases. Participants also performed a perception task assessing their ability to detect temporal perturbations in the visual rhythm and completed musical ability and imagery questionnaires. Our linear regression model for SMS consistency included the trial phase, self-reported auditory imagery control and musicality, and visual rhythm perception as predictors. Significant effects of trial phase and auditory imagery scores on SMS consistency suggested that participants performed SMS more consistently while the guiding visual stimulus was present and that the higher one's self-reported auditory imagery ability, the better their SMS when continuing with unguided rhythm. One's visual rhythm perception accuracy significantly correlated with SMS consistency during the synchronization phase, and there was no correlation between rhythm perception and auditory imagery control. Overall, our results suggested relatively independent contributions of auditory imagery and visual rhythm perception to SMS with visual rhythm.
Affiliation(s)
- Fang Jiang
- Department of Psychology, University of Nevada
26
Engler BH, Zamm A, Møller C. Spontaneous rates exhibit high intra-individual stability across movements involving different biomechanical systems and cognitive demands. Sci Rep 2024; 14:14876. [PMID: 38937553 PMCID: PMC11211469 DOI: 10.1038/s41598-024-65788-6]
Abstract
Spontaneous rhythmic movements are part of everyday life, e.g., in walking, clapping or music making. Humans perform such spontaneous motor actions at different rates that reflect specific biomechanical constraints of the effector system in use. However, there is some evidence for intra-individual consistency of specific spontaneous rates arguably resulting from common underlying processes. Additionally, individual and contextual factors such as musicianship and circadian rhythms have been suggested to influence spontaneous rates. This study investigated the relative contributions of these factors and provides a comprehensive picture of rates among different spontaneous motor behaviors, i.e., melody production, walking, clapping, tapping with and without sound production, the latter measured online before and in the lab. Participants (n = 60) exhibited high intra-individual stability across tasks. Task-related influences included faster tempi for spontaneous production rates of music and wider ranges of spontaneous motor tempi (SMT) and clapping rates compared to walking and music making rates. Moreover, musicians exhibited slower spontaneous rates across tasks, yet we found no influence of time of day on SMT as measured online in pre-lab sessions. Tapping behavior was similar in pre-lab and in-lab sessions, validating the use of online SMT assessments. Together, the prominent role of individual factors and high stability across domains support the idea that different spontaneous motor behaviors are influenced by common underlying processes.
Affiliation(s)
- Ben H Engler
- Department of Psychology, Centre for Cognitive Neuroscience, Paris-Lodron-University of Salzburg, Salzburg, Austria
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Anna Zamm
- Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
27
Derks-Dijkman MW, Schaefer RS, Baan-Wessels L, van Tilborg IADA, Kessels RPC. Effects of musical mnemonics on working memory performance in cognitively unimpaired older adults and persons with amnestic mild cognitive impairment. J Neuropsychol 2024; 18:286-299. [PMID: 37583255 DOI: 10.1111/jnp.12342]
Abstract
Episodic memory (EM) and working memory (WM) are negatively affected by healthy ageing, and additional memory impairment typically occurs in clinical ageing-related conditions such as amnestic mild cognitive impairment (aMCI). Recent studies on musical mnemonics in Alzheimer's dementia (AD) showed promising results on EM performance. However, the effects of musical mnemonics on WM performance have not yet been studied in (a)MCI or AD. Particularly in (a)MCI, the use of musical mnemonics may benefit the optimisation of (working) memory performance. Therefore, in the present study, we examined the effects of musical presentation of digits consisting of pre-recorded rhythms, sung unfamiliar pitch sequences, and their combinations, as compared to spoken presentation. Furthermore, musical expertise was assessed with two perceptual tests and the Self-Report Inventory of the Goldsmiths Musical Sophistication Index. Thirty-two persons with aMCI and 32 cognitively unimpaired older adults (OA) participated in this study. Confirming and extending previous findings in research on ageing, our results show a facilitating effect of rhythm in both cognitively unimpaired OA and persons with aMCI (p = .001, ηp² = .158). Furthermore, pitch (p = .048, ηp² = .062) and melody (p = .012, ηp² = .098) negatively affected performance in both groups. Musical expertise increased the beneficial effect of musical mnemonics (p = .021, ηp² = .090). Implications for the future design of music-based memorisation strategies in (a)MCI are discussed.
Affiliation(s)
- Marije W Derks-Dijkman
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Klimmendaal Rehabilitation Center, Arnhem/Zutphen, The Netherlands
- Health, Medical & Neuropsychology Unit, Institute for Psychology, Leiden University, Leiden, The Netherlands
- Rebecca S Schaefer
- Health, Medical & Neuropsychology Unit, Institute for Psychology, Leiden University, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Academy of Creative and Performing Arts, Leiden University, Leiden, The Netherlands
- Lisa Baan-Wessels
- de Boerhaven Expertisecentrum voor persoonlijkheidsstoornissen, Mediant Geestelijke Gezondheidszorg, Hengelo, The Netherlands
- Roy P C Kessels
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Klimmendaal Rehabilitation Center, Arnhem/Zutphen, The Netherlands
- Centre of Excellence for Korsakoff and Alcohol-Related Cognitive Disorders, Vincent van Gogh Institute for Psychiatry, Venray, The Netherlands
- Department of Medical Psychology & Radboudumc Alzheimer Center, Radboud University Medical Center, Nijmegen, The Netherlands
28
Derks-Dijkman MW, Schaefer RS, Kessels RPC. Musical Mnemonics in Cognitively Unimpaired Individuals and Individuals with Alzheimer's Dementia: A Systematic Review. Neuropsychol Rev 2024; 34:455-477. [PMID: 37058191 PMCID: PMC11166747 DOI: 10.1007/s11065-023-09585-4]
Abstract
Based on the idea that music acts as a mnemonic aid, musical mnemonics (i.e., sung presentation of information, also referred to as 'music as a structural prompt') are being used in educational and therapeutic settings. However, evidence in general and patient populations is still scarce. We investigated whether musical mnemonics affect working and episodic memory performance in cognitively unimpaired individuals and persons with Alzheimer's dementia (AD). Furthermore, we examined the possible contribution of musical expertise. We comprehensively searched the PubMed and PsycINFO databases for studies published between 1970 and 2022. Also, the reference lists of all identified papers were searched manually to identify additional articles. Of 1,126 records identified, 37 were eligible and included. Beneficial effects of musical mnemonics on some aspect of memory performance were reported in 28 of 37 studies, including nine on AD. Nine studies found no beneficial effect. Familiarity contributed positively to this beneficial effect in cognitively unimpaired adults, but this requires more extensive investigation in AD. Musical expertise generally did not lead to additional benefits for cognitively unimpaired participants, but may benefit people with AD. Musical mnemonics may help to learn and remember verbal information in cognitively unimpaired individuals and individuals with memory impairment. Here, we provide a theoretical model of the possible underlying mechanisms of musical mnemonics, building on previous frameworks. We also discuss the implications for designing music-based mnemonics.
Affiliation(s)
- Marije W Derks-Dijkman
- Donders Institute for Brain, Cognition and Behaviour, Neuropsychology & Rehabilitation Psychology, Radboud University, PO Box 9104, 6500 HE, Nijmegen, The Netherlands
- Health, Medical & Neuropsychology Unit, Institute for Psychology, Leiden University, Leiden, The Netherlands
- Rebecca S Schaefer
- Health, Medical & Neuropsychology Unit, Institute for Psychology, Leiden University, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Academy of Creative and Performing Arts, Leiden University, Leiden, The Netherlands
- Roy P C Kessels
- Donders Institute for Brain, Cognition and Behaviour, Neuropsychology & Rehabilitation Psychology, Radboud University, PO Box 9104, 6500 HE, Nijmegen, The Netherlands
- Centre of Excellence for Korsakoff and Alcohol-Related Cognitive Disorders, Vincent van Gogh Institute for Psychiatry, Venray, The Netherlands
- Department of Medical Psychology & Radboud Alzheimer Center, Radboud University Medical Center, Nijmegen, The Netherlands
29
Motamed Yeganeh N, McKee T, Werker JF, Hermiston N, Boyd LA, Cui AX. Opera trainees' cognitive functioning is associated with physiological stress during performance. Musicae Scientiae 2024; 28:365-374. [PMID: 38784046 PMCID: PMC11108751 DOI: 10.1177/10298649231184817]
Abstract
In an opera performance, singers must perform difficult musical repertoire at a high level while dealing with the stress of standing before a large audience. Previous literature suggests that individuals with better cognitive functions experience less stress. During a music performance, such functions, especially attention, memory, and executive function, are in high demand, suggesting that cognitive functions may play a role in music performance. This study used physiological and cognitive measures to examine this phenomenon in opera performance. Cardiac activity data were collected from 24 opera trainees during a resting-state period before and during a real-life performance. Heart-rate variability (HRV) was used as an indicator of physiological stress, such that higher HRV indicates lower stress. Standardized neuropsychological tests were used to measure attention (IVA-2), memory (CVLT-3, WMS-IV), and executive function (Trail Making Test). Results showed function- and state-specific relationships between HRV and cognitive function: HRV during the resting state had a positive correlation with attention, while HRV during a performance had a positive correlation with executive function. These results suggest that greater cognitive function is related to lower stress during opera performance. The findings of this study provide initial evidence for a relationship between cognitive functions and music performance stress in opera trainees.
30
Ishida K, Ishida T, Nittono H. Decoding predicted musical notes from omitted stimulus potentials. Sci Rep 2024; 14:11164. [PMID: 38750185 PMCID: PMC11096333 DOI: 10.1038/s41598-024-61989-1]
Abstract
Electrophysiological studies have investigated predictive processing in music by examining event-related potentials (ERPs) elicited by the violation of musical expectations. While several studies have reported that the predictability of stimuli can modulate the amplitude of ERPs, it is unclear how specific the representation of the expected note is. The present study addressed this issue by recording omitted stimulus potentials (OSPs), thereby avoiding contamination of top-down predictive processing by bottom-up sensory processing. Decoding of the omitted content was attempted using a support vector machine, a type of machine learning classifier. ERP responses to the omission of four target notes (E, F, A, and C) at the same position in familiar and unfamiliar melodies were recorded from 25 participants. The results showed that the omission N1 was larger in the familiar melody condition than in the unfamiliar melody condition. The decoding accuracy of the four omitted notes was significantly higher in the familiar melody condition than in the unfamiliar melody condition. These results suggest that OSPs contain discriminable predictive information and that the higher the predictability, the more specific the generated representation of the expected note.
Affiliation(s)
- Kai Ishida
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Tomomi Ishida
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
- Hiroshi Nittono
- Graduate School of Human Sciences, Osaka University, 1-2 Yamadaoka, Suita, Osaka, 565-0871, Japan
31
Bechtold TA, Curry B, Witek M. The perceived catchiness of music affects the experience of groove. PLoS One 2024; 19:e0303309. [PMID: 38748741 PMCID: PMC11095763 DOI: 10.1371/journal.pone.0303309]
Abstract
Catchiness and groove are common phenomena when listening to popular music. Catchiness may be a potential factor in experiencing groove, but quantitative evidence for such a relationship is missing. To examine whether and how catchiness influences a key component of groove, the pleasurable urge to move to music (PLUMM), we conducted a listening experiment with 450 participants and 240 short popular music clips of drum patterns, bass lines, or keys/guitar parts. We found four main results: (1) Catchiness as measured in a recognition task was only weakly associated with participants' perceived catchiness of music; we showed that perceived catchiness is multi-dimensional, subjective, and strongly associated with pleasure. (2) We found a sizeable positive relationship between PLUMM and perceived catchiness. (3) However, the relationship is complex, as further analysis showed that pleasure suppresses the effect of perceived catchiness on the urge to move. (4) We compared common factors that promote perceived catchiness and PLUMM and found that listener-related variables contributed similarly, while the effects of musical content diverged. Overall, our data suggest that music perceived as catchy is likely to foster groove experiences.
Affiliation(s)
- Toni Amadeus Bechtold
- Department of Music, University of Birmingham, Birmingham, United Kingdom
- Lucerne School of Music, Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Ben Curry
- Department of Music, University of Birmingham, Birmingham, United Kingdom
- Maria Witek
- Department of Music, University of Birmingham, Birmingham, United Kingdom
32
Kazanski ME, Dharanendra S, Rosenberg MC, Chen D, Brown ER, Emmery L, McKay JL, Kesar TM, Hackney ME. Life-long music and dance relationships inform impressions of music- and dance-based movement therapies in individuals with and without mild cognitive impairment. medRxiv [Preprint] 2024:2024.05.09.24307114. [PMID: 38798436 PMCID: PMC11118554 DOI: 10.1101/2024.05.09.24307114]
Abstract
Background: No effective therapies exist to prevent degeneration from Mild Cognitive Impairment (MCI) to Alzheimer's disease. Therapies integrating music and/or dance are promising as effective, non-pharmacological options to mitigate cognitive decline. Objective: To deepen our understanding of individuals' relationships (i.e., histories, experiences, and attitudes) with music and dance that are not often incorporated into music- and dance-based therapeutic design, yet may affect therapeutic outcomes. Methods: Eleven older adults with MCI and five of their care partners/spouses participated (4M/12F; Black: n=4, White: n=10, Hispanic/Latino: n=2; Age: 71.4±9.6). We conducted focus groups and administered questionnaires that captured aspects of participants' music and dance relationships. We extracted emergent themes from four major topics: (1) experience and history, (2) enjoyment and preferences, (3) confidence and barriers, and (4) impressions of music and dance as therapeutic tools. Results: Thematic analysis revealed participants' positive impressions of music and dance as potential therapeutic tools, citing perceived neuropsychological, emotional, and physical benefits. Participants viewed music and dance as integral to their lives, histories, and identities within a culture, family, and/or community. Participants also identified lifelong engagement barriers that, in conjunction with negative feedback, instilled persistent low self-efficacy regarding dancing and active music engagement. Questionnaires verified individuals' moderately strong music and dance relationships, strongest in passive forms of music engagement (e.g., listening). Conclusions: Our findings support that individuals' music and dance relationships, and the associated perceptions toward music and dance therapy, may be valuable considerations in enhancing therapy efficacy, participant engagement, and satisfaction for individuals with MCI.
Affiliation(s)
- Meghan E. Kazanski
- Department of Medicine, Division of Geriatrics & Gerontology, Emory University School of Medicine, Atlanta, GA, USA
- Sahrudh Dharanendra
- Department of Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Michael C. Rosenberg
- Department of Biomedical Engineering, Emory University & Georgia Institute of Technology, Atlanta, GA, USA
- Danyang Chen
- Rollins School of Public Health, Emory University, Atlanta, GA, USA
- Emma Rose Brown
- College of Arts and Sciences, Emory University, Atlanta, GA, USA
- Laura Emmery
- Department of Music, Emory University College of Arts and Sciences, Atlanta, GA, USA
- J. Lucas McKay
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- Trisha M. Kesar
- Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Madeleine E. Hackney
- Department of Medicine, Division of Geriatrics & Gerontology, Emory University School of Medicine, Atlanta, GA, USA
- Atlanta VA Center for Visual & Neurocognitive Rehabilitation, Atlanta, GA, USA
- Birmingham/Atlanta VA Geriatric Research Education and Clinical Center, Atlanta, GA, USA
33
Moisseinen N, Ahveninen L, Martínez-Molina N, Sairanen V, Melkas S, Kleber B, Sihvonen AJ, Särkämö T. Choir singing is associated with enhanced structural connectivity across the adult lifespan. Hum Brain Mapp 2024; 45:e26705. [PMID: 38716698 PMCID: PMC11077432 DOI: 10.1002/hbm.26705]
Abstract
The global ageing of populations calls for effective, ecologically valid methods to support brain health across adult life. Previous evidence suggests that music can promote white matter (WM) microstructure and grey matter (GM) volume while supporting auditory and cognitive functioning and emotional well-being, as well as counteracting age-related cognitive decline. Adding a social component to music training, choir singing is a popular leisure activity among older adults, but a systematic account of its potential to support healthy brain structure, especially with regard to ageing, is currently missing. The present study used quantitative anisotropy (QA)-based diffusion MRI connectometry and voxel-based morphometry to explore the relationship between lifetime choir singing experience and brain structure at the whole-brain level. Cross-sectional multiple regression analyses were carried out in a large, balanced sample (N = 95; age range 21-88) of healthy adults with varying levels of choir singing experience across the whole age range and within subgroups defined by age (young, middle-aged, and older adults). Independent of age, choir singing experience was associated with extensive increases in WM QA in commissural, association, and projection tracts across the brain. Corroborating previous work, these overlapped with language and limbic networks. Enhanced corpus callosum microstructure was associated with choir singing experience across all subgroups. In addition, choir singing experience was selectively associated with enhanced QA in the fornix in older participants. No associations between GM volume and choir singing were found. The present study offers the first systematic account of the relationship between amateur-level choir singing and brain structure. While no evidence for counteracting GM atrophy was found, the present evidence of enhanced structural connectivity coheres well with age-typical structural changes. Corroborating previous behavioural studies, the present results suggest that regular choir singing holds great promise for supporting brain health across the adult life span.
Affiliation(s)
- Nella Moisseinen
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and the Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Lotta Ahveninen
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and the Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Noelia Martínez-Molina
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and the Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Center for Brain and Cognition, Department of Information and Communication Technologies, University Pompeu Fabra, Barcelona, Spain
- Viljami Sairanen
- Department of Radiology, Kanta-Häme Central Hospital, Hämeenlinna, Finland
- Baby Brain Activity Center, Children's Hospital, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Susanna Melkas
- Clinical Neurosciences, Neurology, University of Helsinki, Helsinki, Finland
- Boris Kleber
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Aleksi J. Sihvonen
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and the Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Centre for Clinical Research, School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
- Department of Neurology, Helsinki University Hospital, Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and the Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
34
Hashim S, Küssner MB, Weinreich A, Omigie D. The neuro-oscillatory profiles of static and dynamic music-induced visual imagery. Int J Psychophysiol 2024; 199:112309. [PMID: 38242363 DOI: 10.1016/j.ijpsycho.2024.112309]
Abstract
Visual imagery, i.e., seeing in the absence of the corresponding retinal input, has been linked to visual and motor processing areas of the brain. Music listening provides an ideal vehicle for exploring the neural correlates of visual imagery because it has been shown to reliably induce a broad variety of content, ranging from abstract shapes to dynamic scenes. Forty-two participants listened with closed eyes to twenty-four excerpts of music while a 15-channel EEG was recorded and, after each excerpt, rated the extent to which they experienced static and dynamic visual imagery. Our results show that both static and dynamic imagery were associated with posterior alpha suppression (especially in lower alpha) early in the onset of music listening, while static imagery was associated with an additional alpha enhancement later in the listening experience. With regard to the beta band, our results demonstrate beta enhancement in response to static imagery, but beta suppression followed by enhancement in response to dynamic imagery. We also observed a positive association, early in the listening experience, between gamma power and dynamic imagery ratings that was not present for static imagery ratings. Finally, we offer evidence that musical training may selectively drive the effects found with respect to static and dynamic imagery and alpha, beta, and gamma band oscillations. Taken together, our results show the promise of using music listening as an effective stimulus for examining the neural correlates of visual imagery and its contents. Our study also highlights the relevance of future work seeking to study the temporal dynamics of music-induced visual imagery.
Affiliation(s)
- Sarah Hashim
- Department of Psychology, Goldsmiths, University of London, United Kingdom
- Mats B Küssner
- Department of Psychology, Goldsmiths, University of London, United Kingdom; Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Germany
- André Weinreich
- Department of Psychology, BSP Business & Law School Berlin, Germany
- Diana Omigie
- Department of Psychology, Goldsmiths, University of London, United Kingdom
35
Chang A, Teng X, Assaneo MF, Poeppel D. The human auditory system uses amplitude modulation to distinguish music from speech. PLoS Biol 2024; 22:e3002631. [PMID: 38805517 PMCID: PMC11132470 DOI: 10.1371/journal.pbio.3002631]
Abstract
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on a low-level acoustic property as basic as AM to distinguish music from speech, a simple principle that provokes both neurophysiological and evolutionary experiments and speculations.
Affiliation(s)
- Andrew Chang
- Department of Psychology, New York University, New York, New York, United States of America
- Xiangbin Teng
- Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Ernst Struengmann Institute for Neuroscience, Frankfurt am Main, Germany
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
36
Aker SC, Faulkner KF, Innes-Brown H, Vatti M, Marozeau J. Some, but not all, cochlear implant users prefer music stimuli with congruent haptic stimulation. J Acoust Soc Am 2024; 155:3101-3117. [PMID: 38722101 DOI: 10.1121/10.0025854]
Abstract
Cochlear implant (CI) users often report being unsatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that musical stimuli were given higher preference ratings by normal-hearing listeners when concurrent vibrotactile stimulation was congruent with the corresponding auditory signal in intensity and timing than when it was incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals with respect to intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100, based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no difference in preference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.
Collapse
Affiliation(s)
- Scott C Aker
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
- Oticon A/S, Smørum, 2765, Denmark
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, 3070, Denmark
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
- Jeremy Marozeau
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
Collapse
37
Velasquez MA, Winston JL, Sur S, Yurgil K, Upman AE, Wroblewski SR, Huddle A, Colombo PJ. Music training is related to late ERP modulation and enhanced performance during Simon task but not Stroop task. Front Hum Neurosci 2024; 18:1384179. [PMID: 38711801 PMCID: PMC11070544 DOI: 10.3389/fnhum.2024.1384179] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2024] [Accepted: 04/08/2024] [Indexed: 05/08/2024] Open
Abstract
Increasing evidence suggests that music training correlates with better performance in tasks measuring executive function components, including inhibitory control, working memory, and selective attention. The Stroop and Simon tasks measure responses to congruent and incongruent information, reflecting cognitive conflict resolution. However, there are more reports of a music-training advantage in the Simon task than in the Stroop task. Reports indicate that these tasks may differ in the timing of conflict resolution: the Stroop task might involve conflict resolution at an early sensory stage, while the Simon task may do so at a later motor output planning stage. We therefore hypothesized that musical experience relates to conflict resolution at the late motor output stage rather than the early sensory stage, such that musical experience would correlate with better performance in the Simon but not the Stroop task, reflected in ERP components at the later stage of motor output processing in the Simon task. Behavioral responses and event-related potentials (ERPs) were measured in participants with varying musical experience during these tasks. Participants were classified into high- and low-music training groups based on the Goldsmiths Musical Sophistication Index. Electrical brain activity was recorded while they completed visual Stroop and Simon tasks. The high-music training group outperformed the low-music training group on the Simon, but not the Stroop, task. The mean amplitude difference (incongruent minus congruent trials) was greater for the high-music training group at N100 for midline central (Cz) and posterior (Pz) sites in the Simon task and midline central (Cz) and frontal (Fz) sites in the Stroop task, and at N450 at Cz and Pz in the Simon task. N450 difference peaks occurred earlier in the high-music training group at Pz. The group differences at N100 indicate that music training may be related to better sensory discrimination, although these differences were not accompanied by better behavioral performance. Group differences in N450 responses, particularly over regions encompassing the motor and parietal cortices, suggest a role of music training in action selection during response conflict. Overall, these results support the hypothesis that music training selectively enhances cognitive conflict resolution during late motor output planning stages.
Collapse
Affiliation(s)
- Jenna L. Winston
- Department of Psychological Sciences, Loyola University New Orleans, New Orleans, LA, United States
- Sandeepa Sur
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, United States
- Kate Yurgil
- Department of Psychological Sciences, Loyola University New Orleans, New Orleans, LA, United States
- Anna E. Upman
- Department of Psychological Sciences, Loyola University New Orleans, New Orleans, LA, United States
- Annabelle Huddle
- Department of Psychology, Tulane University, New Orleans, LA, United States
- Paul J. Colombo
- Department of Psychology, Tulane University, New Orleans, LA, United States
- Brain Institute, Tulane University, New Orleans, LA, United States
Collapse
38
Worschech F, Passarotto E, Losch H, Oku T, Lee A, Altenmüller E. What Does It Take to Play the Piano? Cognito-Motor Functions Underlying Motor Learning in Older Adults. Brain Sci 2024; 14:405. [PMID: 38672054 PMCID: PMC11048694 DOI: 10.3390/brainsci14040405] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2024] [Revised: 04/14/2024] [Accepted: 04/18/2024] [Indexed: 04/28/2024] Open
Abstract
The acquisition of skills such as learning to play a musical instrument involves various phases that make specific demands on the learner. Knowledge of the cognitive and motor contributions during these learning phases can help in developing effective and targeted interventions for healthy aging. Eighty-six healthy older participants underwent an extensive cognitive, motor, and musical test battery. Within a single session, participants learned one piano-related and one music-independent movement sequence. We tested the associations between skill performance and cognito-motor abilities with Bayesian mixed models accounting for individual learning rates. Results showed that performance was positively associated with all cognito-motor abilities. Learning the piano-related task was characterized by relatively strong initial associations between performance and abilities. These associations then weakened considerably before increasing exponentially from the second trial onwards, approaching a plateau. Similar performance-ability relationships were detected in the course of learning the music-unrelated motor task. These positive performance-ability associations emphasize the potential of learning new skills to produce positive cognitive and motor transfer effects. Tasks that consistently demand maximum effort from participants could be particularly effective; however, interventions should be sufficiently long so that this transfer potential can be fully exploited.
Collapse
Affiliation(s)
- Florian Worschech
- Institute of Music Physiology and Musician’s Medicine, Hanover University of Music, Drama and Media, 30175 Hanover, Germany
- Center for Systems Neuroscience, 30559 Hanover, Germany
- Edoardo Passarotto
- Institute of Music Physiology and Musician’s Medicine, Hanover University of Music, Drama and Media, 30175 Hanover, Germany
- Department of Neuroscience, University of Padova, 35121 Padova, Italy
- Hannah Losch
- Institute of Music Physiology and Musician’s Medicine, Hanover University of Music, Drama and Media, 30175 Hanover, Germany
- Institute for Music Education Research, Hanover University of Music, Drama and Media, 30175 Hanover, Germany
- Takanori Oku
- NeuroPiano Institute, Kyoto 600-8086, Japan
- College of Engineering and Design, Shibaura Institute of Technology, Tokyo 135-8548, Japan
- André Lee
- Institute of Music Physiology and Musician’s Medicine, Hanover University of Music, Drama and Media, 30175 Hanover, Germany
- Center for Systems Neuroscience, 30559 Hanover, Germany
- Department of Neurology, Klinikum Rechts der Isar Technische Universität München, 80333 Munich, Germany
- Eckart Altenmüller
- Institute of Music Physiology and Musician’s Medicine, Hanover University of Music, Drama and Media, 30175 Hanover, Germany
- Center for Systems Neuroscience, 30559 Hanover, Germany
Collapse
39
Bruder C, Poeppel D, Larrouy-Maestri P. Perceptual (but not acoustic) features predict singing voice preferences. Sci Rep 2024; 14:8977. [PMID: 38637516 PMCID: PMC11026466 DOI: 10.1038/s41598-024-58924-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 04/03/2024] [Indexed: 04/20/2024] Open
Abstract
Why do we prefer some singers to others? We investigated how far singing voice preferences can be traced back to objective features of the stimuli. To do so, we asked participants to rate short excerpts of singing performances in terms of how much they liked them, as well as in terms of 10 perceptual attributes (e.g., pitch accuracy, tempo, breathiness). We modeled liking ratings based on these perceptual ratings, as well as on acoustic features and low-level features derived from Music Information Retrieval (MIR). Mean liking ratings for each stimulus were highly correlated between Experiment 1 (online, US-based participants) and Experiment 2 (in the lab, German participants), suggesting a role for attributes of the stimuli in grounding average preferences. We show that acoustic and MIR features explain barely any variance in liking ratings; in contrast, perceptual features of the voices predicted around 43% of the variance. Inter-rater agreement in liking and perceptual ratings was low, indicating substantial (and unsurprising) individual differences in participants' preferences and perception of the stimuli. Our results indicate that singing voice preferences are grounded not in acoustic attributes of the voices per se, but in how these features are perceptually interpreted by listeners.
Collapse
Affiliation(s)
- Camila Bruder
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- David Poeppel
- New York University, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Max Planck-NYU Center for Language, Music, and Emotion (CLaME), New York, USA
- Pauline Larrouy-Maestri
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck-NYU Center for Language, Music, and Emotion (CLaME), New York, USA
Collapse
40
Marin MM, Gingras B. How music-induced emotions affect sexual attraction: evolutionary implications. Front Psychol 2024; 15:1269820. [PMID: 38659690 PMCID: PMC11039867 DOI: 10.3389/fpsyg.2024.1269820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Accepted: 03/29/2024] [Indexed: 04/26/2024] Open
Abstract
More than a century ago, Darwin proposed a putative role for music in sexual attraction (i.e., sex appeal), a hypothesis that has recently gained traction in the field of music psychology. In his writings, Darwin particularly emphasized the charming aspects of music. Across a broad range of cultures, music has a profound impact on humans' feelings, thoughts and behavior. Human mate choice is determined by the interplay of several factors. A number of studies have shown that music and musicality (i.e., the ability to produce and enjoy music) exert a positive influence on the evaluation of potential sexual partners. Here, we critically review the latest empirical literature on how and why music and musicality affect sexual attraction by considering the role of music-induced emotion and arousal in listeners as well as other socio-biological mechanisms. Following a short overview of current theories about the origins of musicality, we present studies that examine the impact of music and musicality on sexual attraction in different social settings. We differentiate between emotion-based influences related to the subjective experience of music as sound and effects associated with perceived musical ability or creativity in a potential partner. By integrating studies using various behavioral methods, we link current research strands that investigate how music influences sexual attraction and suggest promising avenues for future research.
Collapse
Affiliation(s)
- Manuela M. Marin
- Department of Cognition, Emotion and Methods in Psychology, University of Vienna, Vienna, Austria
- Austrian Research Institute of Empirical Aesthetics, Innsbruck, Austria
- Bruno Gingras
- Austrian Research Institute of Empirical Aesthetics, Innsbruck, Austria
- Department of Cognitive Biology, Faculty of Life Sciences, University of Vienna, Vienna, Austria
Collapse
41
Pounder Z, Eardley AF, Loveday C, Evans S. No clear evidence of a difference between individuals who self-report an absence of auditory imagery and typical imagers on auditory imagery tasks. PLoS One 2024; 19:e0300219. [PMID: 38568916 PMCID: PMC10990234 DOI: 10.1371/journal.pone.0300219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Accepted: 02/25/2024] [Indexed: 04/05/2024] Open
Abstract
Aphantasia is characterised by the inability to create mental images in one's mind. Studies investigating imagery impairments typically focus on the visual domain. However, imagery can take many forms, including imagined auditory, kinesthetic, tactile, motor, taste, and other experiences. Recent studies show that individuals with aphantasia report a lack of imagery in modalities other than vision, including audition. However, to date, no research has examined whether these reductions in self-reported auditory imagery are associated with decrements on tasks that require auditory imagery. Understanding the extent to which visual and auditory imagery deficits co-occur can help to better characterise the core deficits of aphantasia and provide an alternative perspective on theoretical debates about the extent to which imagery draws on modality-specific or modality-general processes. In the current study, individuals who self-identified as aphantasic and matched control participants with typical imagery performed two tasks: a musical pitch-based imagery task and a voice-based categorisation task. The majority of participants with aphantasia self-reported significant deficits in both auditory and visual imagery. However, we did not find a concomitant decrease in performance on tasks requiring auditory imagery, either in the full sample or when considering only those participants who reported significant deficits in both domains. These findings are discussed in relation to the mechanisms that might obscure the observation of imagery deficits in auditory imagery tasks among people who report reduced auditory imagery.
Collapse
Affiliation(s)
- Zoë Pounder
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Alison F. Eardley
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Catherine Loveday
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Samuel Evans
- Department of Psychology, School of Social Sciences, University of Westminster, London, United Kingdom
- Neuroimaging, King’s College London, London, United Kingdom
Collapse
42
Siedenburg K, Bürgel M, Özgür E, Scheicht C, Töpken S. Vibrotactile enhancement of musical engagement. Sci Rep 2024; 14:7764. [PMID: 38565622 PMCID: PMC10987628 DOI: 10.1038/s41598-024-57961-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 03/23/2024] [Indexed: 04/04/2024] Open
Abstract
Sound is sensed by the ear but can also be felt on the skin by means of vibrotactile stimulation. Little research has addressed the perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether spatially segregating the vibrations improves vibrotactile stimulation. Forty-one listeners were presented with vibrotactile stimuli via a chair's surfaces (left and right armrests, backrest, seat) in addition to music presented over headphones. Vibrations for each surface were derived from individual tracks of the music (multi condition) or conjointly from a mono rendering, in addition to incongruent and headphones-only conditions. Listeners evaluated unknown music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi and mono vibration conditions robustly enhanced the musical experience compared to listening via headphones alone. Vibrotactile enhancement was strongest in the latent dimension of 'musical engagement', encompassing the sense of being part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intense musical experiences.
Collapse
Affiliation(s)
- Kai Siedenburg
- Graz University of Technology, Signal Processing and Speech Communication Laboratory, 8010, Graz, Austria.
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany.
- Michel Bürgel
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Elif Özgür
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Christoph Scheicht
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Stephan Töpken
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
Collapse
43
Shorey AE, King CJ, Whiteford KL, Stilp CE. Musical training is not associated with spectral context effects in instrument sound categorization. Atten Percept Psychophys 2024; 86:991-1007. [PMID: 38216848 DOI: 10.3758/s13414-023-02839-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/21/2023] [Indexed: 01/14/2024]
Abstract
Musicians display a variety of auditory perceptual benefits relative to people with little or no musical training; these benefits are collectively referred to as the "musician advantage." Importantly, musicians consistently outperform nonmusicians for tasks relating to pitch, but there are mixed reports as to musicians outperforming nonmusicians for timbre-related tasks. Due to their experience manipulating the timbre of their instrument or voice in performance, we hypothesized that musicians would be more sensitive to acoustic context effects stemming from the spectral changes in timbre across a musical context passage (played by a string quintet then filtered) and a target instrument sound (French horn or tenor saxophone; Experiment 1). Additionally, we investigated the role of a musician's primary instrument of instruction by recruiting French horn and tenor saxophone players to also complete this task (Experiment 2). Consistent with the musician advantage literature, musicians exhibited superior pitch discrimination to nonmusicians. Contrary to our main hypothesis, there was no difference between musicians and nonmusicians in how spectral context effects shaped instrument sound categorization. Thus, musicians may only outperform nonmusicians for some auditory skills relevant to music (e.g., pitch perception) but not others (e.g., timbre perception via spectral differences).
Collapse
Affiliation(s)
- Anya E Shorey
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA.
- Caleb J King
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA.
- Kelly L Whiteford
- Department of Psychology, University of Minnesota, Minneapolis, MN, 55455, USA
- Christian E Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, KY, 40292, USA
Collapse
44
Etani T, Miura A, Kawase S, Fujii S, Keller PE, Vuust P, Kudo K. A review of psychological and neuroscientific research on musical groove. Neurosci Biobehav Rev 2024; 158:105522. [PMID: 38141692 DOI: 10.1016/j.neubiorev.2023.105522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 12/18/2023] [Accepted: 12/19/2023] [Indexed: 12/25/2023]
Abstract
When listening to music, we naturally move our bodies rhythmically to the beat, which can be pleasurable and difficult to resist. This pleasurable sensation of wanting to move the body to music has been called "groove." Following pioneering humanities research, psychological and neuroscientific studies have provided insights on associated musical features, behavioral responses, phenomenological aspects, and brain structural and functional correlates of the groove experience. Groove research has advanced the field of music science and more generally informed our understanding of bidirectional links between perception and action, and the role of the motor system in prediction. Activity in motor and reward-related brain networks during music listening is associated with the groove experience, and this neural activity is linked to temporal prediction and learning. This article reviews research on groove as a psychological phenomenon with neurophysiological correlates that link musical rhythm perception, sensorimotor prediction, and reward processing. Promising future research directions range from elucidating specific neural mechanisms to exploring clinical applications and socio-cultural implications of groove.
Collapse
Affiliation(s)
- Takahide Etani
- School of Medicine, College of Medical, Pharmaceutical, and Health, Kanazawa University, Kanazawa, Japan; Graduate School of Media and Governance, Keio University, Fujisawa, Japan; Advanced Research Center for Human Sciences, Waseda University, Tokorozawa, Japan.
- Akito Miura
- Faculty of Human Sciences, Waseda University, Tokorozawa, Japan
- Satoshi Kawase
- The Faculty of Psychology, Kobe Gakuin University, Kobe, Japan
- Shinya Fujii
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Peter E Keller
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark/The Royal Academy of Music Aarhus/Aalborg, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- Peter Vuust
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark/The Royal Academy of Music Aarhus/Aalborg, Denmark
- Kazutoshi Kudo
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
Collapse
45
Strauss H, Reiche S, Dick M, Zentner M. Online assessment of musical ability in 10 minutes: Development and validation of the Micro-PROMS. Behav Res Methods 2024; 56:1968-1983. [PMID: 37221344 PMCID: PMC10991059 DOI: 10.3758/s13428-023-02130-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/14/2023] [Indexed: 05/25/2023]
Abstract
We describe the development and validation of a test battery that assesses musical ability, taps into a broad range of music perception skills, and can be administered in 10 minutes or less. In Study 1, we derived four very brief versions from the Profile of Music Perception Skills (PROMS) and examined their properties in a sample of 280 participants. In Study 2 (N = 109), we administered the version retained from Study 1, termed the Micro-PROMS, alongside the full-length PROMS, finding a short-to-long-form correlation of r = .72. In Study 3 (N = 198), we removed redundant trials and examined test-retest reliability as well as convergent, discriminant, and criterion validity. Results showed adequate internal consistency (mean ω = .73) and test-retest reliability (ICC = .83). Findings supported the convergent validity of the Micro-PROMS (r = .59 with the MET, p < .01) as well as its discriminant validity with respect to short-term and working memory (r ≲ .20). Criterion-related validity was evidenced by significant correlations of the Micro-PROMS with external indicators of musical proficiency (mean r = .37, ps < .01) and with Gold-MSI General Musical Sophistication (r = .51, p < .01). By virtue of its brevity, psychometric qualities, and suitability for online administration, the battery fills a gap in the tools available for objectively assessing musical ability.
Collapse
Affiliation(s)
- Hannah Strauss
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Stephan Reiche
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Maximilian Dick
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Innsbruck, Austria.
Collapse
46
Caprini F, Zhao S, Chait M, Agus T, Pomper U, Tierney A, Dick F. Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition 2024; 244:105696. [PMID: 38160651 DOI: 10.1016/j.cognition.2023.105696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Revised: 12/04/2023] [Accepted: 12/13/2023] [Indexed: 01/03/2024]
Abstract
From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear whether these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations, such as audio engineers and designers, whose auditory expertise may match or surpass that of musicians in specific auditory tasks or in more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise-ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we show that investigating a wider range of forms of auditory expertise can help corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
Collapse
Affiliation(s)
- Francesco Caprini
- Department of Psychological Sciences, Birkbeck, University of London, UK.
- Sijia Zhao
- Department of Experimental Psychology, University of Oxford, UK
- Maria Chait
- University College London (UCL) Ear Institute, UK
- Trevor Agus
- School of Arts, English and Languages, Queen's University Belfast, UK
- Ulrich Pomper
- Department of Cognition, Emotion, and Methods in Psychology, Universität Wien, Austria
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, UK
- Fred Dick
- Department of Experimental Psychology, University College London (UCL), UK
Collapse
47
Clemente A, Kaplan TM, Pearce MT. Perceptual representations mediate effects of stimulus properties on liking for music. Ann N Y Acad Sci 2024; 1533:169-180. [PMID: 38319962 DOI: 10.1111/nyas.15106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024]
Abstract
Perceptual pleasure and its concomitant hedonic value play an essential role in everyday life, motivating behavior and thus influencing how individuals choose to spend their time and resources. However, how pleasure arises from perception of sensory information remains relatively poorly understood. In particular, research has neglected the question of how perceptual representations mediate the relationships between stimulus properties and liking (e.g., stimulus symmetry can only affect liking if it is perceived). The present research addresses this gap for the first time, analyzing perceptual and liking ratings of 96 nonmusicians (power of 0.99) and finding that perceptual representations mediate effects of feature-based and information-based stimulus properties on liking for a novel set of melodies varying in balance, contour, symmetry, or complexity. Moreover, variability due to individual differences and stimuli accounts for most of the variance in liking. These results have broad implications for psychological research on sensory valuation, advocating a more explicit account of random variability and the mediating role of perceptual representations of stimulus properties.
Collapse
Affiliation(s)
- Ana Clemente
- Human Evolution and Cognition Research Group, University of the Balearic Islands, Palma de Mallorca, Spain
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Unit, Bellvitge Institute for Biomedical Research, L'Hospitalet De Llobregat, Spain
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Thomas M Kaplan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Marcus T Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
Collapse
48
Bannister S, Greasley AE, Cox TJ, Akeroyd MA, Barker J, Fazenda B, Firth J, Graetzer SN, Roa Dabike G, Vos RR, Whitmer WM. Muddy, muddled, or muffled? Understanding the perception of audio quality in music by hearing aid users. Front Psychol 2024; 15:1310176. [PMID: 38449751 PMCID: PMC10916511 DOI: 10.3389/fpsyg.2024.1310176] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2023] [Accepted: 02/09/2024] [Indexed: 03/08/2024] Open
Abstract
Introduction: Previous work on audio quality evaluation has demonstrated a developing convergence on the key perceptual attributes underlying judgments of quality, such as timbral, spatial, and technical attributes. However, across existing research there remains a limited understanding of the crucial perceptual attributes that inform audio quality evaluation for people with hearing loss and those who use hearing aids. This is especially the case with music, given the unique problems it presents in contrast to human speech.
Method: This paper presents a sensory evaluation study utilising descriptive analysis methods, in which a panel of hearing aid users collaborated, through consensus, to identify the most important perceptual attributes of music audio quality and developed a series of rating scales for future listening tests. Participants (N = 12), with hearing loss ranging from mild to severe, first completed an online elicitation task, providing single-word terms to describe the audio quality of original and processed music samples; each participant completed this twice, once with hearing aids and once without. Participants were then guided in discussing these raw terms across three focus groups, in which they reduced the term space, identified important perceptual groupings of terms, and developed perceptual attributes from these groups (including rating scales and definitions for each).
Results: Findings show that seven key perceptual dimensions underlie music audio quality (clarity, harshness, distortion, spaciousness, treble strength, middle strength, and bass strength), alongside an overall music audio quality attribute and possible alternative frequency balance attributes.
Discussion: We outline how these perceptual attributes align with the extant literature, how attribute rating instruments might be used in future work, and the importance of better understanding the music listening difficulties of people with varied profiles of hearing loss.
Affiliation(s)
- Trevor J. Cox
- Acoustics Research Centre, University of Salford, Salford, United Kingdom
- Michael A. Akeroyd
- School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Jon Barker
- Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Bruno Fazenda
- Acoustics Research Centre, University of Salford, Salford, United Kingdom
- Jennifer Firth
- School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Simone N. Graetzer
- Acoustics Research Centre, University of Salford, Salford, United Kingdom
- Gerardo Roa Dabike
- Acoustics Research Centre, University of Salford, Salford, United Kingdom
- Rebecca R. Vos
- Acoustics Research Centre, University of Salford, Salford, United Kingdom
- William M. Whitmer
- School of Medicine, University of Nottingham, Nottingham, United Kingdom
49
Marjieh R, Harrison PMC, Lee H, Deligiannaki F, Jacoby N. Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales. Nat Commun 2024; 15:1482. [PMID: 38369535 PMCID: PMC11258268 DOI: 10.1038/s41467-024-45812-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 12/11/2023] [Indexed: 02/20/2024] Open
Abstract
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
Collapse
Affiliation(s)
- Raja Marjieh
- Department of Psychology, Princeton University, Princeton, NJ, USA.
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Peter M C Harrison
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Centre for Music and Science, University of Cambridge, Cambridge, UK.
- Harin Lee
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Fotini Deligiannaki
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- German Aerospace Center (DLR), Institute for AI Safety and Security, Bonn, Germany.
- Nori Jacoby
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
50
Kasdan AV, Butera IM, DeFreese AJ, Rowland J, Hilbun AL, Gordon RL, Wallace MT, Gifford RH. Cochlear implant users experience the sound-to-music effect. AUDITORY PERCEPTION & COGNITION 2024; 7:179-202. [PMID: 39391629 PMCID: PMC11463729 DOI: 10.1080/25742442.2024.2313430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 01/23/2024] [Indexed: 10/12/2024]
Abstract
Introduction: The speech-to-song illusion is a robust effect in which repeated speech induces the perception of singing; this effect has been extended to repeated excerpts of environmental sounds (the sound-to-music effect). Here we asked whether repetition could elicit musical percepts in cochlear implant (CI) users, who experience challenges with perceiving music due to both physiological and device limitations.
Methods: Thirty adult CI users and thirty age-matched controls with normal hearing (NH) completed two repetition experiments for speech and nonspeech sounds (water droplets). We hypothesized that CI users would experience the sound-to-music effect from temporal/rhythmic cues alone, but to a lesser magnitude compared to NH controls, given the limited access to spectral information CI users receive from their implants.
Results: We found that CI users did experience the sound-to-music effect, but to a lesser degree than NH participants. Musicality ratings were not associated with musical training or frequency resolution, and among CI users, clinical variables such as duration of hearing loss also did not influence ratings.
Discussion: Cochlear implants provide a strong clinical model for disentangling the effects of spectral and temporal information in an acoustic signal; our results suggest that temporal cues are sufficient to perceive the sound-to-music effect when spectral resolution is limited. Additionally, incorporating short repetitions into music specially designed for CI users may provide a promising way for them to experience music.
Affiliation(s)
- Anna V. Kasdan
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Nashville, TN, USA
- Iliza M. Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Andrea J. DeFreese
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Jess Rowland
- Lewis Center for the Arts, Princeton University, Princeton, NJ, USA
- Reyna L. Gordon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Nashville, TN, USA
- Department of Otolaryngology – Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H. Gifford
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA