1. Morita M, Nishikawa Y, Tokumasu Y. Human musical capacity and products should have been induced by the hominin-specific combination of several biosocial features: A three-phase scheme on socio-ecological, cognitive, and cultural evolution. Evol Anthropol 2024;33:e22031. PMID: 38757853. DOI: 10.1002/evan.22031.
Abstract
Various selection pressures have shaped uniquely human traits, music among them. When and why did musical universality and diversity emerge? Our hypothesis is that "music" initially originated from manipulative calls with limited musical elements. Vocalizations thereafter became more complex and flexible, alongside a greater degree of social learning. Finally, constructed musical instruments and the language faculty resulted in diverse, context-specific music. These music precursors correspond to vocal communication among nonhuman primates, songbirds, and cetaceans. To place this scenario in hominin history, a three-phase scheme for music evolution is presented herein, emphasizing (1) the evolution of sociality and life history in australopithecines, (2) the evolution of cognitive and learning abilities in early/middle Homo, and (3) cultural evolution, primarily in Homo sapiens. Human musical capacity and its products likely arose from the hominin-specific combination of several biosocial features, including bipedalism, stable pair bonding, alloparenting, expanded brain size, and sexual selection.
Affiliation(s)
- Masahito Morita
  - Evolutionary Anthropology Lab, Department of Biological Sciences, The University of Tokyo, Tokyo, Japan
  - Department of Health Sciences of Mind and Body, University of Human Arts and Sciences, Saitama, Japan
- Yuri Nishikawa
  - Evolutionary Anthropology Lab, Department of Biological Sciences, The University of Tokyo, Tokyo, Japan
  - Department of Molecular Life Science, Tokai University School of Medicine, Kanagawa, Japan
- Yudai Tokumasu
  - Evolutionary Anthropology Lab, Department of Biological Sciences, The University of Tokyo, Tokyo, Japan
2. Abalde SF, Rigby A, Keller PE, Novembre G. A Framework for Joint Music Making: Behavioral Findings, Neural Processes, and Computational Models. Neurosci Biobehav Rev 2024:105816. PMID: 39032841. DOI: 10.1016/j.neubiorev.2024.105816.
Abstract
Across different epochs and societies, humans occasionally gather to make music together. This universal form of collective behavior is as fascinating as it is fragmentarily understood. As interest in joint music making (JMM) rapidly grows, we review the state of the art of this emerging science, blending behavioral, neural, and computational contributions. We present a conceptual framework that organizes JMM research into four components. The framework is centered on interpersonal coordination, a crucial requirement for JMM; the remaining components capture how individuals' (past) experience, (current) social factors, and (future) goals shape real-time coordination. Our aim is to promote the development of JMM research by organizing existing work, inspiring new questions, and fostering accessibility for researchers belonging to other research communities.
Affiliation(s)
- Sara F Abalde
  - Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
  - The Open University Affiliated Research Centre at the Istituto Italiano di Tecnologia, Italy
- Alison Rigby
  - Neurosciences Graduate Program, University of California, San Diego, USA
- Peter E Keller
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Denmark
  - The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Australia
- Giacomo Novembre
  - Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
3. Trevor C, Frühholz S. Music as an Evolved Tool for Socio-Affective Fiction. Emotion Review 2024;16:180-194. PMID: 39101012; PMCID: PMC11294008. DOI: 10.1177/17540739241259562.
Abstract
The question of why music evolved has been contemplated and debated for centuries across multiple disciplines. While many theories have been posited, they still do not fully answer the question of why humans began making music. Adding to the effort to solve this mystery, we propose the socio-affective fiction (SAF) hypothesis. Humans have a unique biological need for emotion regulation strengthening. Simulated emotional situations, like dreams, can help address that need. Immersion is key for such simulations to successfully exercise people's emotions. Therefore, we propose that music evolved as a signal for SAF to increase the immersive potential of storytelling and thereby better exercise people's emotions. In this review, we outline the SAF hypothesis and present cross-disciplinary evidence.
Affiliation(s)
- Caitlyn Trevor
  - Cognitive and Affective Neuroscience Unit, University of Zurich, Zürich, Switzerland
  - Music Department, University of Birmingham, Birmingham, UK
- Sascha Frühholz
  - Cognitive and Affective Neuroscience Unit, University of Zurich, Zürich, Switzerland
  - Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zürich, Switzerland
  - Department of Psychology, University of Oslo, Oslo, Norway
4. Shilton D, Savage PE. Conflicting predictions in the cross-cultural study of music and sociality: Comment on "Musical engagement as a duet of tight synchrony and loose interpretability" by Tal-Chen Rabinowitch. Phys Life Rev 2024;49:7-9. PMID: 38442459. DOI: 10.1016/j.plrev.2024.02.004.
Affiliation(s)
- Dor Shilton
  - Cohn Institute for the History and Philosophy of Science and Ideas, Tel Aviv University, Israel
  - Edelstein Center for the History and Philosophy of Science, Technology, and Medicine, Hebrew University of Jerusalem, Israel
- Patrick E Savage
  - School of Psychology, University of Auckland, Auckland, New Zealand
  - Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
5. Xu L, Xu B, Sun Z, Li H. Associations between lyric and musical depth in Chinese songs: Evidence from computational modeling. Psych J 2024. PMID: 38898366. DOI: 10.1002/pchj.785.
Abstract
Musical depth, which encompasses the intellectual and emotional complexity of music, is a robust dimension that influences music preference. However, there remains a dearth of research exploring the relationship between lyrics and musical depth. This study addressed this gap by analyzing Linguistic Inquiry and Word Count (LIWC)-based lyric features extracted from a comprehensive dataset of 2372 Chinese songs. Correlation analysis and machine learning techniques revealed compelling connections between musical depth and various lyric features, such as the usage frequency of emotion words, time words, and insight words. To further investigate these relationships, prediction models for musical depth were constructed using a combination of audio and lyric features as inputs. The results demonstrated that random forest regressions (RFR) integrating both audio and lyric features yielded superior prediction performance compared to those relying solely on lyric inputs. Notably, when assessing feature importance to interpret the RFR models, it became evident that audio features played a decisive role in predicting musical depth. This finding highlights the paramount significance of melody over lyrics in effectively conveying the intricacies of musical depth.
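The modeling pipeline described here (random forest regression on combined audio and lyric inputs, followed by feature-importance inspection) can be sketched with scikit-learn. This is a minimal illustration on simulated data, not the paper's code: the feature groupings and the simulated dominance of audio features are assumptions chosen to mirror the reported finding.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Illustrative stand-ins: three audio features (e.g., tempo-like measures)
# and three lyric features (e.g., emotion-, time-, insight-word frequencies).
audio = rng.normal(size=(n, 3))
lyrics = rng.normal(size=(n, 3))
# Simulate a "depth" rating dominated by audio, as the paper reports.
depth = (1.5 * audio[:, 0] + 0.8 * audio[:, 1]
         + 0.3 * lyrics[:, 0] + rng.normal(scale=0.5, size=n))

X_lyrics = lyrics
X_both = np.hstack([audio, lyrics])

rfr = RandomForestRegressor(n_estimators=100, random_state=0)
r2_lyrics = cross_val_score(rfr, X_lyrics, depth, cv=5, scoring="r2").mean()
r2_both = cross_val_score(rfr, X_both, depth, cv=5, scoring="r2").mean()

# Fit on all features and inspect importances: audio columns should dominate.
rfr.fit(X_both, depth)
importances = rfr.feature_importances_
```

Under this simulation, the combined model outperforms the lyrics-only model, and the impurity-based importances concentrate on the audio columns, echoing the paper's interpretation step.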
Affiliation(s)
- Liang Xu
  - Department of Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
- Bingfei Xu
  - Department of Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
- Zaoyi Sun
  - Department of Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
- Hongting Li
  - Department of Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
6. Albouy P, Mehr SA, Hoyer RS, Ginzburg J, Du Y, Zatorre RJ. Spectro-temporal acoustical markers differentiate speech from song across cultures. Nat Commun 2024;15:4835. PMID: 38844457; PMCID: PMC11156671. DOI: 10.1038/s41467-024-49040-3.
Abstract
Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, vocalizations being reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using similar spectro-temporal cues as the machine learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation-a key feature of auditory neuronal tuning-accounts for a fundamental difference between these categories.
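The core feature here, spectro-temporal modulation, can be illustrated with a toy computation: take a spectrogram, then a second Fourier transform along time to obtain the temporal modulation spectrum. The synthetic signals and the 2 Hz vs. 8 Hz modulation rates below are invented stand-ins for demonstration, not the study's recordings or exact feature set.

```python
import numpy as np

sr = 8000  # sample rate (Hz)

def modulation_rate(signal, frame=256, hop=128):
    """Spectrogram, then FFT along time within each frequency band: the
    temporal modulation spectrum. Returns its energy-weighted mean rate (Hz)."""
    frames = np.array([signal[i:i + frame] * np.hanning(frame)
                       for i in range(0, len(signal) - frame, hop)])
    spec = np.abs(np.fft.rfft(frames, axis=1)).T        # (freq, time)
    spec -= spec.mean(axis=1, keepdims=True)            # drop the DC component
    mod = np.abs(np.fft.rfft(spec, axis=1))             # modulation spectrum
    rates = np.fft.rfftfreq(spec.shape[1], d=hop / sr)  # modulation rates (Hz)
    energy = mod.sum(axis=0)
    return float((rates * energy).sum() / energy.sum())

t = np.arange(0, 2.0, 1 / sr)
# "Song": a tone with slow (2 Hz) amplitude modulation; "speech": fast (8 Hz).
song = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 2 * t))
speech = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 8 * t))

r_song, r_speech = modulation_rate(song), modulation_rate(speech)
```

In the paper's framing, song concentrates energy at slow temporal modulations (sustained, stable pitches), whereas speech shows faster modulations tied to syllable rate, which is what the toy features recover.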
Affiliation(s)
- Philippe Albouy
  - CERVO Brain Research Centre, School of Psychology, Laval University, Québec City, QC, Canada
  - International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
  - Centre for Research in Brain, Language and Music and Centre for Interdisciplinary Research in Music, Media, and Technology, Montréal, QC, Canada
- Samuel A Mehr
  - International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
  - School of Psychology, University of Auckland, Auckland, New Zealand
  - Child Study Center, Yale University, New Haven, CT, USA
- Roxane S Hoyer
  - CERVO Brain Research Centre, School of Psychology, Laval University, Québec City, QC, Canada
- Jérémie Ginzburg
  - CERVO Brain Research Centre, School of Psychology, Laval University, Québec City, QC, Canada
  - Lyon Neuroscience Research Center, CNRS UMR5292, INSERM U1028, Université Claude Bernard Lyon 1, Lyon, France
  - Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Yi Du
  - Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Robert J Zatorre
  - International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
  - Centre for Research in Brain, Language and Music and Centre for Interdisciplinary Research in Music, Media, and Technology, Montréal, QC, Canada
  - Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
7. Maymon CN, Crawford MT, Blackburne K, Botes A, Carnegie K, Mehr SA, Meier J, Murphy J, Miles NL, Robinson K, Tooley M, Grimshaw GM. The presence of fear: How subjective fear, not physiological changes, shapes the experience of presence. J Exp Psychol Gen 2024;153:1500-1516. PMID: 38635168; PMCID: PMC11182719. DOI: 10.1037/xge0001576.
Abstract
When we become engrossed in novels, films, games, or even our own wandering thoughts, we can feel present in a reality distinct from the real world. Although this subjective sense of presence is, presumably, a ubiquitous aspect of conscious experience, the mechanisms that produce it are unknown. Correlational studies conducted in virtual reality have shown that we feel more present when we are afraid, motivating claims that physiological changes contribute to presence; however, such causal claims remain to be evaluated. Here, we report two experiments that test the causal role of subjective and physiological components of fear (i.e., activation of the sympathetic nervous system) in generating presence. In Study 1, we validated a virtual reality simulation capable of inducing fear. Participants rated their emotions while they crossed a wooden plank that appeared to be suspended above a city street; at the same time, we recorded heart rate and skin conductance levels. Height exposure increased ratings of fear, presence, and both measures of sympathetic activation. Although presence and fear ratings were correlated during height exposure, presence and sympathetic activation were unrelated. In Study 2, we manipulated whether the plank appeared at height or at ground level. We also captured participants' movements, which revealed that alongside increases in subjective fear, presence, and sympathetic activation, participants also moved more slowly at height relative to controls. Using a mediational approach, we found that the relationship between height exposure and presence on the plank was fully mediated by self-reported fear, and not by sympathetic activation.
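The mediational logic of Study 2 (height exposure raises subjective fear, which in turn raises presence, with no direct path) can be sketched with a product-of-coefficients mediation on simulated data. The effect sizes and sample size below are invented; this illustrates the analysis style, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
height = rng.integers(0, 2, n).astype(float)  # 0 = ground, 1 = height condition
# Simulate full mediation: height raises fear, and fear raises presence.
fear = 2.0 * height + rng.normal(size=n)
presence = 1.5 * fear + rng.normal(size=n)    # no direct height -> presence path

def ols(y, *cols):
    """Least-squares coefficients for y ~ 1 + cols."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(fear, height)[1]                        # path a: exposure -> mediator
b, c_prime = ols(presence, fear, height)[1:3]   # path b and direct effect c'
indirect = a * b                                # product-of-coefficients estimate
```

With full mediation built into the simulation, the indirect effect `a * b` is large while the direct effect `c_prime` hovers near zero, the same pattern the paper reports for self-reported fear versus sympathetic activation.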
Affiliation(s)
- André Botes
  - University of Auckland, Auckland, New Zealand
- Samuel A Mehr
  - University of Auckland, Auckland, New Zealand
  - Yale Child Study Center, New Haven, USA
- Justin Murphy
  - Victoria University of Wellington, Wellington, New Zealand
- Kealagh Robinson
  - Victoria University of Wellington, Wellington, New Zealand
  - Massey University, Palmerston North, New Zealand
- Michael Tooley
  - Victoria University of Wellington, Wellington, New Zealand
8. Baliga RR. Sing for a long and healthy life? Eur Heart J 2024;45:1774-1775. PMID: 38607286. DOI: 10.1093/eurheartj/ehad819.
Affiliation(s)
- Ragavendra R Baliga
  - Cardiology/Internal Medicine, The Ohio State University Hospital, 473 W 12th Avenue, Columbus, OH 43210, USA
9. Hippe L, Hennessy V, Ramirez NF, Zhao TC. Comparison of speech and music input in North American infants' home environment over the first 2 years of life. Dev Sci 2024:e13528. PMID: 38770599. DOI: 10.1111/desc.13528.
Abstract
Infants are immersed in a world of sounds from the moment their auditory system becomes functional, and experience with the auditory world shapes how their brain processes sounds in their environment. Across cultures, speech and music are two dominant auditory signals in infants' daily lives. Decades of research have repeatedly shown that both quantity and quality of speech input play critical roles in infant language development. Less is known about the music input infants receive in their environment. This study is the first to compare music input to speech input across infancy by analyzing a longitudinal dataset of daylong audio recordings collected in English-learning infants' home environments, at 6, 10, 14, 18, and 24 months of age. Using a crowdsourcing approach, 643 naïve listeners annotated 12,000 short snippets (10 s) randomly sampled from the recordings using Zooniverse, an online citizen-science platform. Results show that infants overall receive significantly more speech input than music input and the gap widens as the infants get older. At every age point, infants were exposed to more music from an electronic device than an in-person source; this pattern was reversed for speech. The percentage of input intended for infants remained the same over time for music while that percentage significantly increased for speech. We propose possible explanations for the limited music input compared to speech input observed in the present (North American) dataset and discuss future directions. We also discuss the opportunities and caveats in using a crowdsourcing approach to analyze large audio datasets. A video abstract of this article can be viewed at https://youtu.be/lFj_sEaBMN4.
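The crowdsourcing step, many naïve listeners labeling each 10-second snippet, implies an aggregation stage such as majority voting before speech and music proportions can be computed. A minimal sketch, with invented snippet IDs and labels (the study's actual Zooniverse workflow and label taxonomy may differ):

```python
from collections import Counter

# Hypothetical annotations: several naive listeners label each 10 s snippet.
annotations = {
    "snippet_001": ["speech", "speech", "speech"],
    "snippet_002": ["music", "speech", "music"],
    "snippet_003": ["speech", "speech", "music"],
    "snippet_004": ["neither", "neither", "neither"],
}

def majority_label(labels):
    """Most common label wins; Counter breaks ties by first occurrence."""
    return Counter(labels).most_common(1)[0][0]

labels = {snip: majority_label(votes) for snip, votes in annotations.items()}
counts = Counter(labels.values())
total = sum(counts.values())
speech_share = counts["speech"] / total
music_share = counts["music"] / total
```

Repeating this aggregation within each age point (6, 10, 14, 18, 24 months) would yield the speech-versus-music input trajectories the abstract describes.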
Affiliation(s)
- Lindsay Hippe
  - Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA
  - Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
- Victoria Hennessy
  - Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA
- Naja Ferjan Ramirez
  - Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA
  - Department of Linguistics, University of Washington, Seattle, Washington, USA
- T Christina Zhao
  - Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, USA
  - Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
10. Ozaki Y, Tierney A, Pfordresher PQ, McBride JM, Benetos E, Proutskova P, Chiba G, Liu F, Jacoby N, Purdy SC, Opondo P, Fitch WT, Hegde S, Rocamora M, Thorne R, Nweke F, Sadaphal DP, Sadaphal PM, Hadavi S, Fujii S, Choo S, Naruse M, Ehara U, Sy L, Parselelo ML, Anglada-Tort M, Hansen NC, Haiduk F, Færøvik U, Magalhães V, Krzyżanowski W, Shcherbakova O, Hereld D, Barbosa BS, Varella MAC, van Tongeren M, Dessiatnitchenko P, Zar SZ, El Kahla I, Muslu O, Troy J, Lomsadze T, Kurdova D, Tsope C, Fredriksson D, Arabadjiev A, Sarbah JP, Arhine A, Meachair TÓ, Silva-Zurita J, Soto-Silva I, Millalonco NEM, Ambrazevičius R, Loui P, Ravignani A, Jadoul Y, Larrouy-Maestri P, Bruder C, Teyxokawa TP, Kuikuro U, Natsitsabui R, Sagarzazu NB, Raviv L, Zeng M, Varnosfaderani SD, Gómez-Cañón JS, Kolff K, der Nederlanden CVB, Chhatwal M, David RM, Setiawan IPG, Lekakul G, Borsan VN, Nguqu N, Savage PE. Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report. Sci Adv 2024;10:eadm9797. PMID: 38748798; PMCID: PMC11095461. DOI: 10.1126/sciadv.adm9797.
Abstract
Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a "musi-linguistic" continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.
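Two of the supported predictions, higher pitch and more stable pitches in song relative to speech, can be illustrated on synthetic f0 contours. The contours, frame rate, and 10-cent stability threshold below are assumptions for demonstration only; the Registered Report's actual feature definitions differ in detail.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical f0 contours (Hz), one value per 10 ms frame.
# "Song": higher, sustained stable pitches; "speech": lower, continuous gliding.
song_f0 = np.repeat(rng.choice([220.0, 247.0, 262.0, 294.0], size=20), 25)
speech_f0 = 140.0 * np.exp(np.cumsum(rng.normal(scale=0.01, size=500)))

def median_pitch(f0):
    """Median f0 in Hz, a simple proxy for pitch height."""
    return float(np.median(f0))

def pitch_stability(f0):
    """Fraction of frame-to-frame steps moving less than 10 cents."""
    cents = 1200 * np.abs(np.diff(np.log2(f0)))
    return float(np.mean(cents < 10))
```

On these toy contours the song is both higher (median pitch) and more stable (most frame-to-frame steps stay within 10 cents, apart from discrete note changes), matching the direction of the cross-cultural result.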
Affiliation(s)
- Yuto Ozaki
  - Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa, Japan
- Adam Tierney
  - Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Peter Q. Pfordresher
  - Department of Psychology, University at Buffalo, State University of New York, Buffalo, NY, USA
- John M. McBride
  - Center for Algorithmic and Robotized Synthesis, Institute for Basic Science, Ulsan, South Korea
- Emmanouil Benetos
  - School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Polina Proutskova
  - School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Gakuto Chiba
  - Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa, Japan
- Fang Liu
  - School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Nori Jacoby
  - Computational Auditory Perception Group, Max-Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Suzanne C. Purdy
  - School of Psychology, University of Auckland, Auckland, New Zealand
  - Centre for Brain Research and Eisdell Moore Centre for Hearing and Balance Research, University of Auckland, Auckland, New Zealand
- Patricia Opondo
  - School of Arts, Music Discipline, University of KwaZulu Natal, Durban, South Africa
- W. Tecumseh Fitch
  - Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Shantala Hegde
  - Music Cognition Lab, Department of Clinical Psychology, National Institute of Mental Health and Neuro Sciences, Bangalore, Karnataka, India
- Martín Rocamora
  - Universidad de la República, Montevideo, Uruguay
  - Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain
- Rob Thorne
  - School of Music, Victoria University of Wellington, Wellington, New Zealand
- Florence Nweke
  - Department of Creative Arts, University of Lagos, Lagos, Nigeria
  - Department of Music, Mountain Top University, Ogun, Nigeria
- Dhwani P. Sadaphal
  - Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
- Shafagh Hadavi
  - Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa, Japan
- Shinya Fujii
  - Faculty of Environment and Information Studies, Keio University, Fujisawa, Kanagawa, Japan
- Sangbuem Choo
  - Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa, Japan
- Marin Naruse
  - Faculty of Policy Management, Keio University, Fujisawa, Kanagawa, Japan
- Latyr Sy
  - Independent researcher, Tokyo, Japan
  - Independent researcher, Dakar, Sénégal
- Mark Lenini Parselelo
  - Memorial University of Newfoundland, St. John's, NL, Canada
  - Department of Music and Dance, Kenyatta University, Nairobi, Kenya
- Niels Chr. Hansen
  - Aarhus Institute of Advanced Studies, Aarhus University, Aarhus, Denmark
  - Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä, Jyväskylä, Finland
  - Interacting Minds Centre, School of Culture and Society, Aarhus University, Aarhus, Denmark
  - Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Felix Haiduk
  - Department of Behavioral and Cognitive Biology, University of Vienna, Vienna, Austria
  - Department of General Psychology, University of Padua, Padua, Italy
- Ulvhild Færøvik
  - Institute of Biological and Medical Psychology, Department of Psychology, University of Bergen, Bergen, Norway
- Violeta Magalhães
  - Centre of Linguistics of the University of Porto (CLUP), Porto, Portugal
  - Faculty of Arts and Humanities of the University of Porto (FLUP), Porto, Portugal
  - School of Education of the Polytechnic of Porto (ESE IPP), Porto, Portugal
- Wojciech Krzyżanowski
  - Adam Mickiewicz University, Faculty of Art Studies, Musicology Institute, Poznań, Poland
- Diana Hereld
  - Department of Psychiatry, UCLA Semel Institute for Neuroscience and Human Behavior, Los Angeles, CA, USA
- Su Zar Zar
  - The Royal Music Academy, Yangon, Myanmar
- Iyadh El Kahla
  - Department of Cultural Policy, University of Hildesheim, Hildesheim, Germany
- Olcay Muslu
  - Centre for the Study of Higher Education, University of Kent, Canterbury, UK
  - MIRAS, Centre for Cultural Sustainability, Istanbul, Turkey
- Jakelin Troy
  - Indigenous Research, Office of the Deputy Vice-Chancellor (Research), and Department of Linguistics, Faculty of Arts and Social Sciences, The University of Sydney, Camperdown, NSW, Australia
- Teona Lomsadze
  - International Research Center for Traditional Polyphony of the Tbilisi State Conservatoire, Tbilisi, Georgia
  - Georgian Studies Fellow, University of Oxford, Oxford, UK
- Dilyana Kurdova
  - South-West University Neofit Rilski, Blagoevgrad, Bulgaria
  - Phoenix Perpeticum Foundation, Sofia, Bulgaria
- Aleksandar Arabadjiev
  - Department of Folk Music Research and Ethnomusicology, University of Music and Performing Arts (MDW), Wien, Austria
- Adwoa Arhine
  - Department of Music, University of Ghana, Accra, Ghana
- Tadhg Ó Meachair
  - Department of Ethnomusicology and Folklore, Indiana University, Bloomington, IN, USA
- Javier Silva-Zurita
  - Department of Humanities and Arts, University of Los Lagos, Osorno, Chile
  - Millennium Nucleus on Musical and Sound Cultures (CMUS NCS 2022-16), Santiago, Chile
- Ignacio Soto-Silva
  - Department of Humanities and Arts, University of Los Lagos, Osorno, Chile
  - Millennium Nucleus on Musical and Sound Cultures (CMUS NCS 2022-16), Santiago, Chile
- Psyche Loui
  - Music, Imaging and Neural Dynamics Lab, Northeastern University, Boston, MA, USA
- Andrea Ravignani
  - Department of Human Neurosciences, Sapienza University of Rome, Rome, Italy
  - Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
  - Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Yannick Jadoul
  - Department of Human Neurosciences, Sapienza University of Rome, Rome, Italy
  - Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Pauline Larrouy-Maestri
  - Music Department, Max-Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
  - Max Planck - NYU Center for Language, Music, and Emotion (CLaME), New York, NY, USA
- Camila Bruder
  - Music Department, Max-Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Tutushamum Puri Teyxokawa
  - Txemim Puri Project (Puri Language Research, Vitalization and Teaching; Recording and Preservation of Puri History and Culture), Rio de Janeiro, Brasil
- Limor Raviv
  - Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
  - cSCAN, University of Glasgow, Glasgow, UK
- Minyu Zeng
  - Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa, Japan
  - Rhode Island School of Design, Providence, RI, USA
- Shahaboddin Dabaghi Varnosfaderani
  - Institute for English and American Studies (IEAS), Goethe University of Frankfurt am Main, Frankfurt am Main, Germany
  - Cognitive and Developmental Psychology Unit, Centre for Cognitive Science, University of Kaiserslautern-Landau (RPTU), Kaiserslautern, Germany
- Kayla Kolff
  - Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Meyha Chhatwal
  - Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- Ryan Mark David
  - Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- Great Lekakul
  - Faculty of Fine Arts, Chiang Mai University, Chiang Mai, Thailand
- Vanessa Nina Borsan
  - Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa, Japan
  - Université de Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, Lille, France
- Nozuko Nguqu
  - School of Arts, Music Discipline, University of KwaZulu Natal, Durban, South Africa
- Patrick E. Savage
  - School of Psychology, University of Auckland, Auckland, New Zealand
  - Faculty of Environment and Information Studies, Keio University, Fujisawa, Kanagawa, Japan
11. Passmore S, Wood ALC, Barbieri C, Shilton D, Daikoku H, Atkinson QD, Savage PE. Global musical diversity is largely independent of linguistic and genetic histories. Nat Commun 2024;15:3964. PMID: 38729968; PMCID: PMC11087526. DOI: 10.1038/s41467-024-48113-7.
Abstract
Music is a universal yet diverse cultural trait transmitted between generations. The extent to which global musical diversity traces cultural and demographic history, however, is unresolved. Using a global musical dataset of 5242 songs from 719 societies, we identify five axes of musical diversity and show that music contains geographical and historical structures analogous to linguistic and genetic diversity. After creating a matched dataset of musical, genetic, and linguistic data spanning 121 societies containing 981 songs, 1296 individual genetic profiles, and 121 languages, we show that global musical similarities are only weakly and inconsistently related to linguistic or genetic histories, with some regional exceptions such as within Southeast Asia and sub-Saharan Africa. Our results suggest that global musical traditions are largely distinct from some non-musical aspects of human history.
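Comparing a musical distance matrix against a genetic or linguistic one is commonly done with a Mantel-style permutation test; a self-contained sketch on invented toy coordinates is below. The paper's actual analysis is more elaborate, so treat this as the general technique rather than its method.

```python
import numpy as np

rng = np.random.default_rng(3)

def mantel(d1, d2, n_perm=999):
    """Pearson r between two distance matrices' upper triangles,
    with a two-sided permutation p-value (rows/cols of d2 shuffled jointly)."""
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(len(d2))
        if abs(np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1]) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

def pairwise_dist(pts):
    return np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# Toy "societies": musical and genetic positions drawn independently.
pts_music = rng.normal(size=(15, 2))
pts_genes = rng.normal(size=(15, 2))
r_indep, p_indep = mantel(pairwise_dist(pts_music), pairwise_dist(pts_genes))

# Control: a lightly jittered copy of the musical matrix correlates strongly.
jittered = pts_music + rng.normal(scale=0.05, size=(15, 2))
r_corr, p_corr = mantel(pairwise_dist(pts_music), pairwise_dist(jittered))
```

The independent case yields a weak matrix correlation, mirroring the paper's "largely independent" conclusion, while the jittered control shows what a genuine shared history would look like.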
Affiliation(s)
- Sam Passmore: Graduate School of Media and Governance, Keio University, Fujisawa, Japan; Evolution of Cultural Diversity Initiative (ECDI), Australian National University, Canberra, Australia
- Chiara Barbieri: Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich, 8057, Switzerland; Centre for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, 8050, Switzerland; Department of Life and Environmental Sciences, University of Cagliari, 09126, Cagliari, Italy
- Dor Shilton: Cohn Institute for the History and Philosophy of Science and Ideas, Tel Aviv University, Tel Aviv, Israel; Edelstein Centre for the History and Philosophy of Science, Technology, and Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Hideo Daikoku: Graduate School of Media and Governance, Keio University, Fujisawa, Japan
- Patrick E Savage: School of Psychology, University of Auckland, Auckland, New Zealand; Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
12
Kachlicka M, Patel AD, Liu F, Tierney A. Weighting of cues to categorization of song versus speech in tone-language and non-tone-language speakers. Cognition 2024; 246:105757. [PMID: 38442588] [DOI: 10.1016/j.cognition.2024.105757] [Received: 09/21/2023] [Revised: 02/09/2024] [Accepted: 02/20/2024] [Indexed: 03/07/2024]
Abstract
One of the most important auditory categorization tasks a listener faces is determining a sound's domain, a process which is a prerequisite for successful within-domain categorization tasks such as recognizing different speech sounds or musical tones. Speech and song are universal in human cultures: how do listeners categorize a sequence of words as belonging to one or the other of these domains? There is growing interest in the acoustic cues that distinguish speech and song, but it remains unclear whether there are cross-cultural differences in the evidence upon which listeners rely when making this fundamental perceptual categorization. Here we use the speech-to-song illusion, in which some spoken phrases perceptually transform into song when repeated, to investigate cues to this domain-level categorization in native speakers of tone languages (Mandarin and Cantonese speakers residing in the United Kingdom and China) and in native speakers of a non-tone language (English). We find that native tone-language and non-tone-language listeners largely agree on which spoken phrases sound like song after repetition, and we also find that the strength of this transformation is not significantly different across language backgrounds or countries of residence. Furthermore, we find a striking similarity in the cues upon which listeners rely when perceiving word sequences as singing versus speech, including small pitch intervals, flat within-syllable pitch contours, and steady beats. These findings support the view that there are certain widespread cross-cultural similarities in the mechanisms by which listeners judge if a word sequence is spoken or sung.
Affiliation(s)
- Magdalena Kachlicka: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, United Kingdom
- Aniruddh D Patel: Department of Psychology, Tufts University, 419 Boston Ave, Medford, USA; Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research, 661 University Avenue, Toronto, Canada
- Fang Liu: School of Psychology and Clinical Language Sciences, University of Reading, Whiteknights, Reading, United Kingdom
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, United Kingdom
13
Jacoby N, Polak R, Grahn JA, Cameron DJ, Lee KM, Godoy R, Undurraga EA, Huanca T, Thalwitzer T, Doumbia N, Goldberg D, Margulis EH, Wong PCM, Jure L, Rocamora M, Fujii S, Savage PE, Ajimi J, Konno R, Oishi S, Jakubowski K, Holzapfel A, Mungan E, Kaya E, Rao P, Rohit MA, Alladi S, Tarr B, Anglada-Tort M, Harrison PMC, McPherson MJ, Dolan S, Durango A, McDermott JH. Commonality and variation in mental representations of music revealed by a cross-cultural comparison of rhythm priors in 15 countries. Nat Hum Behav 2024; 8:846-877. [PMID: 38438653] [PMCID: PMC11132990] [DOI: 10.1038/s41562-023-01800-9] [Received: 08/22/2021] [Accepted: 12/07/2023] [Indexed: 03/06/2024]
Abstract
Music is present in every known society but varies from place to place. What, if anything, is universal to music cognition? We measured a signature of mental representations of rhythm in 39 participant groups in 15 countries, spanning urban societies and Indigenous populations. Listeners reproduced random 'seed' rhythms; their reproductions were fed back as the stimulus (as in the game of 'telephone'), such that their biases (the prior) could be estimated from the distribution of reproductions. Every tested group showed a sparse prior with peaks at integer-ratio rhythms. However, the importance of different integer ratios varied across groups, often reflecting local musical practices. Our results suggest a common feature of music cognition: discrete rhythm 'categories' at small-integer ratios. These discrete representations plausibly stabilize musical systems in the face of cultural transmission but interact with culture-specific traditions to yield the diversity that is evident when mental representations are probed across many cultures.
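The iterated-reproduction ("telephone") procedure described above can be caricatured in a few lines: a listener reproduces a rhythm with noise plus a pull toward the nearest small-integer ratio, and feeding each reproduction back as the next stimulus drifts the chain toward the peaks of that prior. Everything here (the set of prior peaks, the pull and noise rates) is a toy assumption for illustration, not the study's actual estimator:

```python
import random

# Assumed peaks of a small-integer-ratio prior (toy choice, not the paper's).
SIMPLE_RATIOS = [1 / 2, 1 / 3, 2 / 3, 1 / 4, 3 / 4]

def reproduce(ratio, pull=0.3, noise=0.02, rng=random):
    """One noisy reproduction, biased toward the nearest simple ratio."""
    target = min(SIMPLE_RATIOS, key=lambda r: abs(r - ratio))
    biased = ratio + pull * (target - ratio)  # categorical pull toward the prior peak
    return min(max(biased + rng.gauss(0, noise), 0.05), 0.95)

def telephone(seed_ratio, generations=20, rng=random):
    """Feed each reproduction back as the next stimulus, as in the 'telephone' game."""
    r = seed_ratio
    for _ in range(generations):
        r = reproduce(r, rng=rng)
    return r

random.seed(0)
finals = [telephone(random.uniform(0.1, 0.9)) for _ in range(200)]

# Final reproductions cluster near the assumed integer-ratio peaks,
# so the distribution of chain endpoints reveals the built-in prior.
near_peak = sum(any(abs(f - p) < 0.05 for p in SIMPLE_RATIOS) for f in finals)
print(near_peak / len(finals))  # most chains end near a peak
```

In the real study the prior is estimated from the distribution of human reproductions rather than built in; the sketch only shows why iterating reproductions makes listeners' biases visible.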
Affiliation(s)
- Nori Jacoby: Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Presidential Scholars in Society and Neuroscience, Columbia University, New York, NY, USA
- Rainer Polak: RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Blindern, Oslo, Norway
- Jessica A Grahn: Brain and Mind Institute and Department of Psychology, University of Western Ontario, London, Ontario, Canada
- Daniel J Cameron: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Kyung Myun Lee: School of Digital Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Ricardo Godoy: Heller School for Social Policy and Management, Brandeis University, Waltham, MA, USA
- Eduardo A Undurraga: Escuela de Gobierno, Pontificia Universidad Católica de Chile, Santiago, Chile; CIFAR Azrieli Global Scholars programme, CIFAR, Toronto, Ontario, Canada
- Tomás Huanca: Centro Boliviano de Investigación y Desarrollo Socio Integral, San Borja, Bolivia
- Noumouké Doumbia: Sciences de l'Education, Université Catholique d'Afrique de l'Ouest, Bamako, Mali
- Daniel Goldberg: Department of Music, University of Connecticut, Storrs, CT, USA
- Patrick C M Wong: Department of Linguistics & Modern Languages and Brain and Mind Institute, Chinese University of Hong Kong, Hong Kong SAR, China
- Luis Jure: School of Music, Universidad de la República, Montevideo, Uruguay
- Martín Rocamora: Signal Processing Department, School of Engineering, Universidad de la República, Montevideo, Uruguay; Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain
- Shinya Fujii: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Patrick E Savage: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan; School of Psychology, University of Auckland, Auckland, New Zealand
- Jun Ajimi: Department of Traditional Japanese Music, Tokyo University of the Arts, Tokyo, Japan
- Rei Konno: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Sho Oishi: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Andre Holzapfel: Division of Media Technology and Interaction Design, KTH Royal Institute of Technology, Stockholm, Sweden
- Esra Mungan: Department of Psychology, Bogazici University, Istanbul, Turkey
- Ece Kaya: Max Planck Research Group 'Neural and Environmental Rhythms', Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Cognitive Science Master Program, Bogazici University, Istanbul, Turkey
- Preeti Rao: Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Mattur A Rohit: Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Bronwyn Tarr: Department of Cognitive and Evolutionary Anthropology, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Manuel Anglada-Tort: Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology, Goldsmiths, University of London, London, UK
- Peter M C Harrison: Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Faculty of Music, University of Cambridge, Cambridge, UK
- Malinda J McPherson: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sophie Dolan: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Brain and Cognitive Sciences, Wellesley College, Wellesley, MA, USA
- Alex Durango: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Neurosciences Graduate Program, Stanford University, Stanford, CA, USA
- Josh H McDermott: Faculty of Music, University of Cambridge, Cambridge, UK; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Center for Brains, Minds & Machines, Massachusetts Institute of Technology, Cambridge, MA, USA
14
Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci 2024; 1535:121-136. [PMID: 38566486] [DOI: 10.1111/nyas.15131] [Indexed: 04/04/2024]
Abstract
While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
Affiliation(s)
- Ellie Bean Abrams: Department of Psychology, New York University, New York, New York, USA; Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA; Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richa Namballa: Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richard He: Department of Psychology, New York University, New York, New York, USA; Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA; Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- David Poeppel: Department of Psychology, New York University, New York, New York, USA; Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Pablo Ripollés: Department of Psychology, New York University, New York, New York, USA; Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA; Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
15
Lu L, Tao M, Gao J, Gao M, Zhu H, He X. The difference of affect improvement effect of music intervention in aerobic exercise at different time periods. Front Physiol 2024; 15:1341351. [PMID: 38742155] [PMCID: PMC11090102] [DOI: 10.3389/fphys.2024.1341351] [Received: 11/20/2023] [Accepted: 04/04/2024] [Indexed: 05/16/2024]
Abstract
Objectives: A randomized controlled experimental design combining exercise and music intervention was adopted to test whether this approach improves human affect. The effect of music listening on affective improvement was compared across four periods: before, during, and after aerobic power cycling exercise, and throughout the whole exercise course. Method: A total of 140 subjects aged 19-30 years (mean age: 23.6 years) were recruited and randomly divided into four music intervention groups: pre-exercise, during-exercise, post-exercise, and whole-course. The subjects' demographic and sociological variables and daily physical activities were collected using questionnaires, and individual factors such as noise sensitivity, personality traits, and degree of learning burnout were collected via scale scoring. The experiment took place in a laboratory at Zhejiang Normal University. In a quiet environment, the subjects sat quietly for 5 min after completing the preparation work and then took a pre-test. The four groups wore headphones and completed 20 min of aerobic cycling (7 min of moderate-intensity cycling [50% × HRR + RHR] + 6 min of low-intensity interval cycling [30% × HRR + RHR] + 7 min of moderate-intensity cycling [50% × HRR + RHR]), then completed a post-test after returning to a calm state (no less than 20 min). The affect improvement indicators (dependent variables) collected in the field included blood pressure (BP), positive/negative affect, and heart rate variability indicators (RMSSD, SDNN, and LF/HF).
Results: (1) At the same exercise intensity, significant differences were found among the four groups in systolic BP (SBP) and in the improvement of positive affect during the exercise-music intervention (F = 2.379, p = 0.030, ηp² = 0.058; F = 2.451, p = 0.043, ηp² = 0.091). (2) Music intervention during exercise contributed more to the reduction of SBP than intervention in the other three periods (F = 3.170, p = 0.047, ηp² = 0.068). Improvement in negative affect scores was also greater during exercise, differing significantly from the other three periods (F = 5.516, p = 0.006, ηp² = 0.113). No significant differences among the four periods were found for the other affective indicators. Conclusion: Exercise combined with music intervention facilitates affect improvement, and listening to music during exercise improves affect more than music intervention in any of the other periods.
Affiliation(s)
- Li Lu: Department of Physical Education and Health Science, Zhejiang Normal University, Jinhua, China
- Meng Tao: School of Exercise and Health, Shanghai University of Sport, Shanghai, China
- Jingchuan Gao: Department of Physical Education and Health Science, Zhejiang Normal University, Jinhua, China
- Mengru Gao: Department of Physical Education and Health Science, Zhejiang Normal University, Jinhua, China
- Houwei Zhu: Department of Physical Education and Health Science, Zhejiang Normal University, Jinhua, China
- Xiaolong He: Department of Physical Education and Health Science, Zhejiang Normal University, Jinhua, China
16
Bruder C, Poeppel D, Larrouy-Maestri P. Perceptual (but not acoustic) features predict singing voice preferences. Sci Rep 2024; 14:8977. [PMID: 38637516] [PMCID: PMC11026466] [DOI: 10.1038/s41598-024-58924-9] [Received: 11/07/2023] [Accepted: 04/03/2024] [Indexed: 04/20/2024]
Abstract
Why do we prefer some singers to others? We investigated how much singing voice preferences can be traced back to objective features of the stimuli. To do so, we asked participants to rate short excerpts of singing performances in terms of how much they liked them, as well as in terms of 10 perceptual attributes (e.g., pitch accuracy, tempo, breathiness). We modeled liking ratings based on these perceptual ratings, as well as on acoustic features and low-level features derived from Music Information Retrieval (MIR). Mean liking ratings for each stimulus were highly correlated between Experiment 1 (online, US-based participants) and Experiment 2 (in the lab, German participants), suggesting a role for attributes of the stimuli in grounding average preferences. We show that acoustic and MIR features barely explain any variance in liking ratings; in contrast, perceptual features of the voices achieved around 43% prediction accuracy. Inter-rater agreement in liking and perceptual ratings was low, indicating substantial (and unsurprising) individual differences in participants' preferences and perception of the stimuli. Our results indicate that singing voice preferences are not grounded in acoustic attributes of the voices per se, but in how these features are perceptually interpreted by listeners.
Affiliation(s)
- Camila Bruder: Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- David Poeppel: New York University, New York, NY, USA; Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany; Max Planck-NYU Center for Language, Music, and Emotion (CLaME), New York, USA
- Pauline Larrouy-Maestri: Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, and Emotion (CLaME), New York, USA
17
Kopal J, Kumar K, Shafighi K, Saltoun K, Modenato C, Moreau CA, Huguet G, Jean-Louis M, Martin CO, Saci Z, Younis N, Douard E, Jizi K, Beauchamp-Chatel A, Kushan L, Silva AI, van den Bree MBM, Linden DEJ, Owen MJ, Hall J, Lippé S, Draganski B, Sønderby IE, Andreassen OA, Glahn DC, Thompson PM, Bearden CE, Zatorre R, Jacquemont S, Bzdok D. Using rare genetic mutations to revisit structural brain asymmetry. Nat Commun 2024; 15:2639. [PMID: 38531844] [PMCID: PMC10966068] [DOI: 10.1038/s41467-024-46784-w] [Received: 05/22/2023] [Accepted: 03/11/2024] [Indexed: 03/28/2024]
Abstract
Asymmetry between the left and right hemisphere is a key feature of brain organization. Hemispheric functional specialization underlies some of the most advanced human-defining cognitive operations, such as articulated language, perspective taking, or rapid detection of facial cues. Yet, genetic investigations into brain asymmetry have mostly relied on common variants, which typically exert small effects on brain-related phenotypes. Here, we leverage rare genomic deletions and duplications to study how genetic alterations reverberate in human brain and behavior. We designed a pattern-learning approach to dissect the impact of eight high-effect-size copy number variations (CNVs) on brain asymmetry in a multi-site cohort of 552 CNV carriers and 290 non-carriers. Isolated multivariate brain asymmetry patterns spotlighted regions typically thought to subserve lateralized functions, including language, hearing, as well as visual, face and word recognition. Planum temporale asymmetry emerged as especially susceptible to deletions and duplications of specific gene sets. Targeted analysis of common variants through genome-wide association study (GWAS) consolidated partly diverging genetic influences on the right versus left planum temporale structure. In conclusion, our gene-brain-behavior data fusion highlights the consequences of genetically controlled brain lateralization on uniquely human cognitive capacities.
Affiliation(s)
- Jakub Kopal: Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Kuldeep Kumar: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Kimia Shafighi: Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Karin Saltoun: Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Claudia Modenato: LREN - Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Clara A Moreau: Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA, USA
- Guillaume Huguet: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Zohra Saci: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Nadine Younis: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Elise Douard: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Khadije Jizi: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Alexis Beauchamp-Chatel: Institut universitaire en santé mentale de Montréal, University of Montréal, Montréal, Canada; Department of Psychiatry, University of Montreal, Montréal, Canada
- Leila Kushan: Semel Institute for Neuroscience and Human Behavior, Departments of Psychiatry and Biobehavioral Sciences and Psychology, UCLA, Los Angeles, USA
- Ana I Silva: School for Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands; Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Marianne B M van den Bree: Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK; Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK; Neuroscience and Mental Health Innovation Institute, Cardiff University, Cardiff, UK
- David E J Linden: School for Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands; Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK; Neuroscience and Mental Health Innovation Institute, Cardiff University, Cardiff, UK
- Michael J Owen: Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK; Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Jeremy Hall: Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK; Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Sarah Lippé: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Bogdan Draganski: LREN - Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland; Neurology Department, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ida E Sønderby: NORMENT, Division of Mental Health and Addiction, Oslo University Hospital and University of Oslo, Oslo, Norway; Department of Medical Genetics, Oslo University Hospital, Oslo, Norway; KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- Ole A Andreassen: NORMENT, Division of Mental Health and Addiction, Oslo University Hospital and University of Oslo, Oslo, Norway; KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- David C Glahn: Department of Psychiatry, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Paul M Thompson: Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA, USA
- Carrie E Bearden: Semel Institute for Neuroscience and Human Behavior, Departments of Psychiatry and Biobehavioral Sciences and Psychology, UCLA, Los Angeles, USA
- Robert Zatorre: International Laboratory for Brain, Music and Sound Research, Montreal, QC, Canada; The Neuro - Montreal Neurological Institute (MNI), McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada
- Sébastien Jacquemont: Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada; Department of Pediatrics, University of Montréal, Montréal, Quebec, Canada
- Danilo Bzdok: Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada; The Neuro - Montreal Neurological Institute (MNI), McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada
18
Clink DJ. Isochronous rhythms: Facilitating song coordination across taxa? Curr Biol 2024; 34:R201-R203. [PMID: 38471449] [DOI: 10.1016/j.cub.2024.01.020] [Indexed: 03/14/2024]
Abstract
The biological expression of isochronous rhythms - rhythms like those produced by a metronome - was once thought to be unique to humans. A new study reports that faster and more isochronous rhythms lead to more successful duets in singing gibbons: isochronous rhythms might be an important component of song coordination across taxa.
Affiliation(s)
- Dena Jane Clink: K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology, Cornell University, Ithaca, NY 14850, USA
19
Ma H, Wang Z, Han P, Fan P, Chapman CA, Garber PA, Fan P. Small apes adjust rhythms to facilitate song coordination. Curr Biol 2024; 34:935-945.e3. [PMID: 38266649] [DOI: 10.1016/j.cub.2023.12.071] [Received: 06/07/2023] [Revised: 11/03/2023] [Accepted: 12/22/2023] [Indexed: 01/26/2024]
Abstract
Song coordination is a universal characteristic of human music. Many animals also produce well-coordinated duets or choruses that resemble human music. However, the mechanism and evolution of song coordination have only recently been studied in animals. Here, we studied the mechanism of song coordination in three closely related species of wild Nomascus gibbons that live in polygynous groups. In each species, song bouts were dominated by male solo sequences (referred to hereafter as male sequences), and females contributed stereotyped great calls to coordinate with males. Considering the function of rhythm in facilitating song coordination in human music and animal vocalizations, we predicted that adult males adjust their song rhythm to facilitate song coordination with females. In support of this prediction, we found that adult males produced significantly more isochronous rhythms with a faster tempo in male sequences that were followed by successful female great calls (a complete sequence with "introductory" and "wa" notes). The difference in isochrony and tempo between successful great call sequences and male sequences was smaller in N. concolor than in the other two species, which may make it difficult for females to predict a male's precise temporal pattern. Consequently, adult females of N. concolor produced more failed great call sequences (incomplete sequences with only introductory notes). We propose that a high degree of rhythm change functions as an unambiguous signal that can be easily perceived by receivers. In this regard, gibbon vocalizations offer an instructive model to understand the origins and evolution of human music.
Affiliation(s)
- Haigang Ma: School of Life Sciences, Sun Yat-Sen University, Guangzhou 510275, Guangdong, China
- Zidi Wang: School of Life Sciences, Sun Yat-Sen University, Guangzhou 510275, Guangdong, China
- Pu Han: School of Life Sciences, Sun Yat-Sen University, Guangzhou 510275, Guangdong, China
- Penglai Fan: Key Laboratory of Ecology of Rare and Endangered Species and Environmental Protection (Guangxi Normal University), Ministry of Education, Guilin 541006, Guangxi, China; Endangered Animal Ecology, College of Life Sciences, Guangxi Normal University, Guilin 541006, Guangxi, China
- Colin A Chapman: Biology Department, Vancouver Island University, Nanaimo, BC V9R 5S5, Canada; Wilson Center, 1300 Pennsylvania Avenue NW, Washington, DC 20004, USA; School of Life Sciences, University of KwaZulu-Natal, Scottsville, Pietermaritzburg 3209, South Africa; Shanxi Key Laboratory for Animal Conservation, Northwest University, Xi'an 710127, China
- Paul A Garber: Department of Anthropology, Program in Ecology and Evolutionary Biology, University of Illinois, Urbana, IL 61801, USA; International Centre of Biodiversity and Primate Conservation, Dali University, Dali 671003, Yunnan, China
- Pengfei Fan: School of Life Sciences, Sun Yat-Sen University, Guangzhou 510275, Guangdong, China
20
Zalta A, Large EW, Schön D, Morillon B. Neural dynamics of predictive timing and motor engagement in music listening. Sci Adv 2024; 10:eadi2525. [PMID: 38446888] [PMCID: PMC10917349] [DOI: 10.1126/sciadv.adi2525] [Received: 04/13/2023] [Accepted: 01/30/2024] [Indexed: 03/08/2024]
Abstract
Why do humans spontaneously dance to music? To test the hypothesis that motor dynamics reflect predictive timing during music listening, we created melodies with varying degrees of rhythmic predictability (syncopation) and asked participants to rate their wanting-to-move (groove) experience. Degree of syncopation and groove ratings are quadratically correlated. Magnetoencephalography data showed that, while auditory regions track the rhythm of melodies, beat-related 2-hertz activity and neural dynamics at delta (1.4 hertz) and beta (20 to 30 hertz) rates in the dorsal auditory pathway code for the experience of groove. Critically, the left sensorimotor cortex coordinates these groove-related delta and beta activities. These findings align with the predictions of a neurodynamic model, suggesting that oscillatory motor engagement during music listening reflects predictive timing and is effected by interaction of neural dynamics along the dorsal auditory pathway.
Collapse
Affiliation(s)
- Arnaud Zalta
- Aix Marseille Université, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- APHM, INSERM, Inst Neurosci Syst, Service de Pharmacologie Clinique et Pharmacovigilance, Aix Marseille Université, Marseille, France
| | - Edward W. Large
- Department of Psychological Sciences, Ecological Psychology Division, University of Connecticut, Storrs, CT, USA
- Department of Physics, University of Connecticut, Storrs, CT, USA
| | - Daniele Schön
- Aix Marseille Université, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
| | - Benjamin Morillon
- Aix Marseille Université, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
| |
Collapse
|
21
|
Arnon I, Kirby S. Cultural evolution creates the statistical structure of language. Sci Rep 2024; 14:5255. [PMID: 38438558 PMCID: PMC10912608 DOI: 10.1038/s41598-024-56152-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Accepted: 03/01/2024] [Indexed: 03/06/2024] Open
Abstract
Human language is unique in its structure: language is made up of parts that can be recombined in a productive way. The parts are not given but have to be discovered by learners exposed to unsegmented wholes. Across languages, the frequency distribution of those parts follows a power law. Both statistical properties (having parts and having them follow a particular distribution) facilitate learning, yet their origin is still poorly understood. Where do the parts come from and why do they follow a particular frequency distribution? Here, we show how these two core properties emerge from the process of cultural evolution with whole-to-part learning. We use an experimental analog of cultural transmission in which participants copy sets of non-linguistic sequences produced by a previous participant: This design allows us to ask if parts will emerge purely under pressure for the system to be learnable, even without meanings to convey. We show that parts emerge from initially unsegmented sequences, that their distribution becomes closer to a power law over generations, and, importantly, that these properties make the sets of sequences more learnable. We argue that these two core statistical properties of language emerge culturally both as a cause and effect of greater learnability.
Collapse
Affiliation(s)
- Inbal Arnon
- Psychology Department, Hebrew University of Jerusalem, Jerusalem, Israel.
| | - Simon Kirby
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
| |
Collapse
|
22
|
Crespo-Bojorque P, Cauvet E, Pallier C, Toro JM. Recognizing structure in novel tunes: differences between human and rats. Anim Cogn 2024; 27:17. [PMID: 38429431 PMCID: PMC10907461 DOI: 10.1007/s10071-024-01848-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 10/27/2023] [Accepted: 11/08/2023] [Indexed: 03/03/2024]
Abstract
A central feature in music is the hierarchical organization of its components. Musical pieces are not a simple concatenation of chords, but are characterized by rhythmic and harmonic structures. Here, we explore if sensitivity to music structure might emerge in the absence of any experience with musical stimuli. For this, we tested if rats detect the difference between structured and unstructured musical excerpts and compared their performance with that of humans. Structured melodies were excerpts of Mozart's sonatas. Unstructured melodies were created by the recombination of fragments of different sonatas. We trained listeners (both human participants and Long-Evans rats) with a set of structured and unstructured excerpts, and tested them with completely novel excerpts they had not heard before. After hundreds of training trials, rats were able to tell apart novel structured from unstructured melodies. Human listeners required only a few trials to reach better performance than rats. Interestingly, such performance was increased in humans when tonality changes were included, while it decreased to chance in rats. Our results suggest that, with enough training, rats might learn to discriminate acoustic differences differentiating hierarchical music structures from unstructured excerpts. More importantly, the results point toward species-specific adaptations in how tonality is processed.
Collapse
Affiliation(s)
| | - Elodie Cauvet
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif-Sur-Yvette, France
- DIS Study Abroad in Scandinavia, Stockholm, Sweden
| | - Christophe Pallier
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif-Sur-Yvette, France
| | - Juan M Toro
- Universitat Pompeu Fabra, C. Ramon Trias Fargas, 25-27, CP. 08005, Barcelona, Spain.
- Institució Catalana de Recerca I Estudis Avançats (ICREA), Barcelona, Spain.
| |
Collapse
|
23
|
Etani T, Miura A, Kawase S, Fujii S, Keller PE, Vuust P, Kudo K. A review of psychological and neuroscientific research on musical groove. Neurosci Biobehav Rev 2024; 158:105522. [PMID: 38141692 DOI: 10.1016/j.neubiorev.2023.105522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Revised: 12/18/2023] [Accepted: 12/19/2023] [Indexed: 12/25/2023]
Abstract
When listening to music, we naturally move our bodies rhythmically to the beat, which can be pleasurable and difficult to resist. This pleasurable sensation of wanting to move the body to music has been called "groove." Following pioneering humanities research, psychological and neuroscientific studies have provided insights on associated musical features, behavioral responses, phenomenological aspects, and brain structural and functional correlates of the groove experience. Groove research has advanced the field of music science and more generally informed our understanding of bidirectional links between perception and action, and the role of the motor system in prediction. Activity in motor and reward-related brain networks during music listening is associated with the groove experience, and this neural activity is linked to temporal prediction and learning. This article reviews research on groove as a psychological phenomenon with neurophysiological correlates that link musical rhythm perception, sensorimotor prediction, and reward processing. Promising future research directions range from elucidating specific neural mechanisms to exploring clinical applications and socio-cultural implications of groove.
Collapse
Affiliation(s)
- Takahide Etani
- School of Medicine, College of Medical, Pharmaceutical, and Health, Kanazawa University, Kanazawa, Japan; Graduate School of Media and Governance, Keio University, Fujisawa, Japan; Advanced Research Center for Human Sciences, Waseda University, Tokorozawa, Japan.
| | - Akito Miura
- Faculty of Human Sciences, Waseda University, Tokorozawa, Japan
| | - Satoshi Kawase
- The Faculty of Psychology, Kobe Gakuin University, Kobe, Japan
| | - Shinya Fujii
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
| | - Peter E Keller
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark/The Royal Academy of Music Aarhus/Aalborg, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
| | - Peter Vuust
- Center for Music in the Brain, Aarhus University, Aarhus, Denmark/The Royal Academy of Music Aarhus/Aalborg, Denmark
| | - Kazutoshi Kudo
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
| |
Collapse
|
24
|
Qirko H. Pace setting as an adaptive precursor of rhythmic musicality. Ann N Y Acad Sci 2024; 1533:5-15. [PMID: 38412090 DOI: 10.1111/nyas.15120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/29/2024]
Abstract
Human musicality (the capacity to make and appreciate music) is difficult to explain in evolutionary terms, though many theories attempt to do so. This paper focuses on musicality's potential adaptive precursors, particularly as related to rhythm. It suggests that pace setting for walking and running long distances over extended time periods (endurance locomotion, EL) is a good candidate for an adaptive building block of rhythmic musicality. The argument is as follows: (1) over time, our hominin lineage developed a host of adaptations for efficient EL; (2) the ability to set and maintain a regular pace was a crucial adaptation in the service of EL, providing proximate rewards for successful execution; (3) maintaining a pace in EL occasioned hearing, feeling, and attending to regular rhythmic patterns; (4) these rhythmic patterns, as well as proximate rewards for maintaining them, became disassociated from locomotion and entrained in new proto-musical contexts. Support for the model and possibilities for generating predictions to test it are discussed.
Collapse
Affiliation(s)
- Hector Qirko
- Department of Sociology and Anthropology, College of Charleston, Charleston, South Carolina, USA
| |
Collapse
|
25
|
Clemente A, Kaplan TM, Pearce MT. Perceptual representations mediate effects of stimulus properties on liking for music. Ann N Y Acad Sci 2024; 1533:169-180. [PMID: 38319962 DOI: 10.1111/nyas.15106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024]
Abstract
Perceptual pleasure and its concomitant hedonic value play an essential role in everyday life, motivating behavior and thus influencing how individuals choose to spend their time and resources. However, how pleasure arises from perception of sensory information remains relatively poorly understood. In particular, research has neglected the question of how perceptual representations mediate the relationships between stimulus properties and liking (e.g., stimulus symmetry can only affect liking if it is perceived). The present research addresses this gap for the first time, analyzing perceptual and liking ratings of 96 nonmusicians (power of 0.99) and finding that perceptual representations mediate effects of feature-based and information-based stimulus properties on liking for a novel set of melodies varying in balance, contour, symmetry, or complexity. Moreover, variability due to individual differences and stimuli accounts for most of the variance in liking. These results have broad implications for psychological research on sensory valuation, advocating a more explicit account of random variability and the mediating role of perceptual representations of stimulus properties.
Collapse
Affiliation(s)
- Ana Clemente
- Human Evolution and Cognition Research Group, University of the Balearic Islands, Palma de Mallorca, Spain
- Department of Cognition, Development and Educational Psychology, Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Unit, Bellvitge Institute for Biomedical Research, L'Hospitalet De Llobregat, Spain
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Thomas M Kaplan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
| | - Marcus T Pearce
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
| |
Collapse
|
26
|
Marjieh R, Harrison PMC, Lee H, Deligiannaki F, Jacoby N. Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales. Nat Commun 2024; 15:1482. [PMID: 38369535 PMCID: PMC11258268 DOI: 10.1038/s41467-024-45812-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 12/11/2023] [Indexed: 02/20/2024] Open
Abstract
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even as far as to induce preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
Collapse
Affiliation(s)
- Raja Marjieh
- Department of Psychology, Princeton University, Princeton, NJ, USA.
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
| | - Peter M C Harrison
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Centre for Music and Science, University of Cambridge, Cambridge, UK.
| | - Harin Lee
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Fotini Deligiannaki
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- German Aerospace Center (DLR), Institute for AI Safety and Security, Bonn, Germany
| | - Nori Jacoby
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
| |
Collapse
|
27
|
Bamford JS, Vigl J, Hämäläinen M, Saarikallio SH. Love songs and serenades: a theoretical review of music and romantic relationships. Front Psychol 2024; 15:1302548. [PMID: 38420176 PMCID: PMC10899422 DOI: 10.3389/fpsyg.2024.1302548] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 01/23/2024] [Indexed: 03/02/2024] Open
Abstract
In this theoretical review, we examine how the roles of music in mate choice and social bonding are expressed in romantic relationships. Darwin's Descent of Man originally proposed the idea that musicality might have evolved as a sexually selected trait. This proposition, coupled with the portrayal of popular musicians as sex symbols and the prevalence of love-themed lyrics in music, suggests a possible link between music and attraction. However, recent scientific exploration of the evolutionary functions of music has predominantly focused on theories of social bonding and group signaling, with limited research addressing the sexual selection hypothesis. We identify two distinct types of music-making for these different functions: music for attraction, which would be virtuosic in nature to display physical and cognitive fitness to potential mates; and music for connection, which would facilitate synchrony between partners and likely engage the same reward mechanisms seen in the general synchrony-bonding effect, enhancing perceived interpersonal intimacy as a facet of love. Linking these two musical functions to social psychological theories of relationship development and the components of love, we present a model that outlines the potential roles of music in romantic relationships, from initial attraction to ongoing relationship maintenance. In addition to synthesizing the existing literature, our model serves as a roadmap for empirical research aimed at rigorously investigating the possible functions of music for romantic relationships.
Collapse
Affiliation(s)
- Joshua S Bamford
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä, Jyväskylä, Finland
- Institute of Human Sciences, University of Oxford, Oxford, United Kingdom
| | - Julia Vigl
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä, Jyväskylä, Finland
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
| | - Matias Hämäläinen
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä, Jyväskylä, Finland
| | - Suvi Helinä Saarikallio
- Centre of Excellence in Music, Mind, Body and Brain, University of Jyväskylä, Jyväskylä, Finland
| |
Collapse
|
28
|
Kim K, Askin N, Evans JA. Disrupted routines anticipate musical exploration. Proc Natl Acad Sci U S A 2024; 121:e2306549121. [PMID: 38300861 PMCID: PMC10861857 DOI: 10.1073/pnas.2306549121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Accepted: 11/13/2023] [Indexed: 02/03/2024] Open
Abstract
Understanding and predicting the emergence and evolution of cultural tastes manifested in consumption patterns is of central interest to social scientists, analysts of culture, and purveyors of content. Prior research suggests that taste preferences relate to personality traits, values, shifts in mood, and immigration destination. Understanding everyday patterns of listening and the function music plays in life has remained elusive, however, despite speculation that musical nostalgia may compensate for local disruption. Using more than one hundred million streams of four million songs by tens of thousands of international listeners from a global music service, we show that breaches in personal routine are systematically associated with personal musical exploration. As people visited new cities and countries, their preferences diversified, converging toward their travel destinations. As people experienced the very different disruptions associated with COVID-19 lockdowns, their preferences diversified further. Personal explorations did not tend to veer toward the global listening average, but away from it, toward distinctive regional musical content. Exposure to novel music explored during periods of routine disruption showed a persistent influence on listeners' future consumption patterns. Across all of these settings, musical preference reflected rather than compensated for life's surprises, leaving a lasting legacy on tastes. We explore the relationship between these findings and global patterns of behavior and cultural consumption.
Collapse
Affiliation(s)
- Khwan Kim
- Area of Organizational Behaviour, INSEAD, Fontainebleau 77300, France
| | - Noah Askin
- Department of Organization and Management, The Paul Merage School of Business, University of California–Irvine, Irvine, CA 92697
| | - James A. Evans
- Department of Sociology, University of Chicago, Chicago, IL 60637
- Knowledge Lab, University of Chicago, Chicago, IL 60637
| |
Collapse
|
29
|
Kathios N, Patel AD, Loui P. Musical anhedonia, timbre, and the rewards of music listening. Cognition 2024; 243:105672. [PMID: 38086279 DOI: 10.1016/j.cognition.2023.105672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 10/18/2023] [Accepted: 11/21/2023] [Indexed: 12/22/2023]
Abstract
Pleasure in music has been linked to predictive coding of melodic and rhythmic patterns, subserved by connectivity between regions in the brain's auditory and reward networks. Specific musical anhedonics derive little pleasure from music and have altered auditory-reward connectivity, but no difficulties with music perception abilities and no generalized physical anhedonia. Recent research suggests that specific musical anhedonics experience pleasure in nonmusical sounds, suggesting that the implicated brain pathways may be specific to music reward. However, this work used sounds with clear real-world sources (e.g., babies laughing, crowds cheering), so positive hedonic responses could be based on the referents of these sounds rather than the sounds themselves. We presented specific musical anhedonics and matched controls with isolated short pleasing and displeasing synthesized sounds of varying timbres with no clear real-world referents. While the two groups found displeasing sounds equally displeasing, the musical anhedonics gave substantially lower pleasure ratings to the pleasing sounds, indicating that their sonic anhedonia is not limited to musical rhythms and melodies. Furthermore, across a large sample of participants, mean pleasure ratings for pleasing synthesized sounds predicted significant and similar variance in six dimensions of musical reward considered to be relatively independent, suggesting that pleasure in sonic timbres plays a role in eliciting reward-related responses to music. We replicate the earlier findings of preserved pleasure ratings for semantically referential sounds in musical anhedonics and find that pleasure ratings of semantic referents, when presented without sounds, correlated with ratings for the sounds themselves. This association was stronger in musical anhedonics than in controls, suggesting the use of semantic knowledge as a compensatory mechanism for affective sound processing. Our results indicate that specific musical anhedonia is not entirely specific to melodic and rhythmic processing, and suggest that timbre merits further research as a source of pleasure in music.
Collapse
Affiliation(s)
- Nicholas Kathios
- Dept. of Psychology, Northeastern University, United States of America
| | - Aniruddh D Patel
- Dept. of Psychology, Tufts University, United States of America; Program in Brain Mind and Consciousness, Canadian Institute for Advanced Research, Canada
| | - Psyche Loui
- Dept. of Psychology, Northeastern University, United States of America; Dept. of Music, Northeastern University, United States of America.
| |
Collapse
|
30
|
Putkinen V, Zhou X, Gan X, Yang L, Becker B, Sams M, Nummenmaa L. Bodily maps of musical sensations across cultures. Proc Natl Acad Sci U S A 2024; 121:e2308859121. [PMID: 38271338 PMCID: PMC10835118 DOI: 10.1073/pnas.2308859121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 12/01/2023] [Indexed: 01/27/2024] Open
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.
Collapse
Affiliation(s)
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku 20520, Finland
- Turku Institute for Advanced Studies, Department of Psychology, University of Turku, Turku 20014, Finland
| | - Xinqi Zhou
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
| | - Xianyang Gan
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
- MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
| | - Linyu Yang
- College of Mathematics, Sichuan University, Chengdu 610064, China
| | - Benjamin Becker
- State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong, China
- Department of Psychology, The University of Hong Kong, Hong Kong, China
| | - Mikko Sams
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo 00076, Finland
| | - Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku 20520, Finland
- Department of Psychology, University of Turku, Turku 20520, Finland
| |
Collapse
|
31
|
Cheung VKM, Harrison PMC, Koelsch S, Pearce MT, Friederici AD, Meyer L. Cognitive and sensory expectations independently shape musical expectancy and pleasure. Philos Trans R Soc Lond B Biol Sci 2024; 379:20220420. [PMID: 38104601 PMCID: PMC10725761 DOI: 10.1098/rstb.2022.0420] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Accepted: 10/20/2023] [Indexed: 12/19/2023] Open
Abstract
Expectation is crucial for our enjoyment of music, yet the underlying generative mechanisms remain unclear. While sensory models derive predictions based on local acoustic information in the auditory signal, cognitive models assume abstract knowledge of music structure acquired over the long term. To evaluate these two contrasting mechanisms, we compared simulations from four computational models of musical expectancy against subjective expectancy and pleasantness ratings of over 1000 chords sampled from 739 US Billboard pop songs. Bayesian model comparison revealed that listeners' expectancy and pleasantness ratings were predicted by the independent, non-overlapping, contributions of cognitive and sensory expectations. Furthermore, cognitive expectations explained over twice the variance in listeners' perceived surprise compared to sensory expectations, suggesting a larger relative importance of long-term representations of music structure over short-term sensory-acoustic information in musical expectancy. Our results thus emphasize the distinct, albeit complementary, roles of cognitive and sensory expectations in shaping musical pleasure, and suggest that this expectancy-driven mechanism depends on musical information represented at different levels of abstraction along the neural hierarchy. This article is part of the theme issue 'Art, aesthetics and predictive processing: theoretical and empirical perspectives'.
Collapse
Affiliation(s)
- Vincent K. M. Cheung
- Sony Computer Science Laboratories, Inc., Shinagawa-ku, Tokyo 141-0022, Japan
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Institute of Information Science, Academia Sinica, Taipei 115, Taiwan
| | - Peter M. C. Harrison
- Centre for Music and Science, University of Cambridge, Faculty of Music, 11 West Road, Cambridge, CB3 9DP, UK
- Centre for Digital Music, Queen Mary University of London, E1 4NS, UK
| | - Stefan Koelsch
- Department of Biological and Medical Psychology, University of Bergen, Bergen, 5009, Norway
| | - Marcus T. Pearce
- Centre for Digital Music, Queen Mary University of London, E1 4NS, UK
- Department of Clinical Medicine, Aarhus University, Aarhus N, 8200, Denmark
| | - Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Clinic for Phoniatrics and Pedaudiology, University Hospital Münster, Münster, 48149, Germany
| |
Collapse
|
32
|
Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024; 34:444-450.e5. [PMID: 38176416 DOI: 10.1016/j.cub.2023.12.019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Revised: 10/26/2023] [Accepted: 12/07/2023] [Indexed: 01/06/2024]
Abstract
The appreciation of music is a universal trait of humankind.1,2,3 Evidence supporting this notion includes the ubiquity of music across cultures4,5,6,7 and the natural predisposition toward music that humans display early in development.8,9,10 Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation.11 Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features (pitch and timing12) in generating expectations: while timing- and pitch-based expectations13 are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
Collapse
Affiliation(s)
- Roberta Bianco: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Nathaniel J Zuk: Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani: Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
33. Kim G, Kim DK, Jeong H. Spontaneous emergence of rudimentary music detectors in deep neural networks. Nat Commun 2024; 15:148. PMID: 38168097; PMCID: PMC10761941; DOI: 10.1038/s41467-023-44516-0.
Abstract
Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training. However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music. The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain. We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin. These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.
Affiliation(s)
- Gwangsu Kim: Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Dong-Kyum Kim: Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hawoong Jeong: Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea; Center for Complex Systems, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
34. Coull JT, Korolczuk I, Morillon B. The Motor of Time: Coupling Action to Temporally Predictable Events Heightens Perception. Adv Exp Med Biol 2024; 1455:199-213. PMID: 38918353; DOI: 10.1007/978-3-031-60183-5_11.
Abstract
Timing and motor function share neural circuits and dynamics, which underpin their close and synergistic relationship. For instance, the temporal predictability of a sensory event optimizes motor responses to that event. Knowing when an event is likely to occur lowers response thresholds, leading to faster and more efficient motor behavior, though in situations of response conflict it can also induce impulsive and inappropriate responding. In turn, through a process of active sensing, coupling action to temporally predictable sensory input enhances perceptual processing. Action not only hones perception of the event's onset or duration, but also boosts sensory processing of its non-temporal features, such as pitch or shape. The effects of temporal predictability on motor behavior and sensory processing involve motor and left parietal cortices and are mediated by changes in delta and beta oscillations in motor areas of the brain.
Affiliation(s)
- Jennifer T Coull: Centre for Research in Psychology and Neuroscience (UMR 7077), Aix-Marseille Université & CNRS, Marseille, France
- Inga Korolczuk: Department of Pathophysiology, Medical University of Lublin, Lublin, Poland
- Benjamin Morillon: Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
35. Li W, Germine LT, Mehr SA, Srinivasan M, Hartshorne J. Developmental psychologists should adopt citizen science to improve generalization and reproducibility. Infant Child Dev 2024; 33:e2348. PMID: 38515737; PMCID: PMC10957098; DOI: 10.1002/icd.2348.
Abstract
Widespread failures of replication and generalization are, ironically, a scientific triumph, in that they confirm the fundamental metascientific theory that underlies our field. Generalizable and replicable findings require testing large numbers of subjects from a wide range of demographics with a large, randomly sampled stimulus set, and using a variety of experimental parameters. Because few studies accomplish any of this, meta-scientists predict that findings will frequently fail to replicate or generalize. We argue that to be more robust and replicable, developmental psychology needs a mechanism for collecting data at greater scale and from more diverse populations. Luckily, this mechanism already exists: citizen science, in which large numbers of uncompensated volunteers provide data. While best known for its contributions to astronomy and ecology, citizen science has also produced major findings in neuroscience and psychology, and increasingly in developmental psychology. We provide examples, address practical challenges, discuss limitations, and compare citizen science to other methods of obtaining large datasets. Ultimately, we argue that the range of studies where it makes sense *not* to use citizen science is steadily dwindling.
Affiliation(s)
- Wei Li: Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, USA
- Laura Thi Germine: McLean Hospital, Belmont, MA, USA; Department of Psychiatry, Harvard Medical School, Cambridge, MA
- Samuel A. Mehr: Data Science Initiative, Harvard University, Cambridge, MA; School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Joshua Hartshorne: Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, USA
36. Agrawal T, Schachner A. Aesthetic Motivation Impacts Judgments of Others' Prosociality and Mental Life. Open Mind (Camb) 2023; 7:947-980. PMID: 38111474; PMCID: PMC10727777; DOI: 10.1162/opmi_a_00113.
Abstract
The ability to infer others' prosocial vs. antisocial behavioral tendencies from minimal information is core to social reasoning. Aesthetic motivation (the value or appreciation of aesthetic beauty) is linked with prosocial tendencies, raising the question of whether this factor is used in interpersonal reasoning and in the attribution of mental capacities. We propose and test a model of this reasoning, predicting that evidence of others' aesthetic motivations should impact judgments of others' prosocial (and antisocial) tendencies by signaling a heightened capacity for emotional experience. In a series of four pre-registered experiments (total N = 1440), participants saw pairs of characters (as photos/vignettes), and judged which in each pair showed more of a mental capacity of interest. Distractor items prevented participants from guessing the hypothesis. For one critical pair of characters, both characters performed the same activity (music listening, painting, cooking, exercising, being in nature, doing math), but one was motivated by the activity's aesthetic value, and the other by its functional value. Across all activities, participants robustly chose aesthetically motivated characters as more likely to behave compassionately (Exp. 1; 3), less likely to behave selfishly/manipulatively (Exp. 1; 3), and as more emotionally sensitive, but not more intelligent (Exp. 2; 3; 4). Emotional sensitivity best predicted judgments of compassionate behavior (Exp. 3). Aesthetically motivated characters were not reliably chosen as more helpful; intelligence best predicted helpfulness judgments (Exp. 4). Evidence of aesthetic motivation conveys important social information about others, impacting fundamental interpersonal judgments about others' mental life and social behavior.
Affiliation(s)
- Adena Schachner: Department of Psychology, University of California San Diego
37. Childress A, Lou M. Illness Narratives in Popular Music: An Untapped Resource for Medical Education. J Med Humanit 2023; 44:533-552. PMID: 37566168; DOI: 10.1007/s10912-023-09813-1.
Abstract
Illness narratives convey a person's feelings, thoughts, beliefs, and descriptions of suffering and healing as a result of physical or mental breakdown. Recognized genres include fiction, nonfiction, poetry, plays, and films. Like poets and playwrights, musicians also use their life experiences as fodder for their art. However, illness narratives as expressed through popular music are an understudied and underutilized source of insights into the experience of suffering, healing, and coping with illness, disease, and death. Greater attention to the value of music within medical education is needed to improve students' perspective-taking and communication. Like reading a good book, songs that resonate with listeners speak to shared experiences or invite them into a universe of possibilities that they had not yet imagined. In this article, we show how uncovering these themes in popular music might be integrated into medical education, thus creating a space for reflection on the nature and meaning of illness and the fragility of the human condition. We describe three kinds of illness narratives that may be found in popular music (autobiographical, biographical, and metaphorical) and show how developing skills of close listening through exposure to these narrative forms can improve patient-physician communication and expand students' moral imaginations.
Affiliation(s)
- Andrew Childress: Humanities Expression and Arts Lab, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Monica Lou: Department of Medicine, Baylor College of Medicine, Houston, TX, USA
38. Alagöz G, Eising E, Mekki Y, Bignardi G, Fontanillas P, Nivard MG, Luciano M, Cox NJ, Fisher SE, Gordon RL. The shared genetic architecture and evolution of human language and musical rhythm. bioRxiv [Preprint] 2023:2023.11.01.564908. PMID: 37961248; PMCID: PMC10634981; DOI: 10.1101/2023.11.01.564908.
Abstract
Rhythm and language-related traits are phenotypically correlated, but their genetic overlap is largely unknown. Here, we leveraged two large-scale genome-wide association studies, of rhythm (N=606,825) and of dyslexia (N=1,138,870), to shed light on their shared genetics. Our results reveal an intricate shared genetic and neurobiological architecture and lay groundwork for resolving longstanding debates about the potential co-evolution of human language and musical traits.
Affiliation(s)
- Gökberk Alagöz: Language and Genetics Department, Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands
- Else Eising: Language and Genetics Department, Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands
- Yasmina Mekki: Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Giacomo Bignardi: Language and Genetics Department, Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands; Max Planck School of Cognition, Leipzig, Germany
- Michel G Nivard: Department of Biological Psychology, Vrije Universiteit, Amsterdam, the Netherlands
- Michelle Luciano: Department of Psychology, University of Edinburgh, Edinburgh, UK
- Nancy J Cox: Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Simon E Fisher: Language and Genetics Department, Max Planck Institute for Psycholinguistics, 6500 AH Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Reyna L Gordon: Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; The Curb Center, Vanderbilt University, Nashville, TN, USA
39. Persici V, Blain SD, Iversen JR, Key AP, Kotz SA, Devin McAuley J, Gordon RL. Individual differences in neural markers of beat processing relate to spoken grammar skills in six-year-old children. Brain Lang 2023; 246:105345. PMID: 37994830; DOI: 10.1016/j.bandl.2023.105345.
Abstract
Based on the idea that neural entrainment establishes regular attentional fluctuations that facilitate hierarchical processing in both music and language, we hypothesized that individual differences in syntactic (grammatical) skills will be partly explained by patterns of neural responses to musical rhythm. To test this hypothesis, we recorded neural activity using electroencephalography (EEG) while children (N = 25) listened passively to rhythmic patterns that induced different beat percepts. Analysis of evoked beta and gamma activity revealed that individual differences in the magnitude of neural responses to rhythm explained variance in six-year-olds' expressive grammar abilities, beyond, and complementary to, their performance in a behavioral rhythm perception task. These results reinforce the idea that mechanisms of neural beat entrainment may be a shared neural resource supporting hierarchical processing across music and language, and they suggest a relevant marker of the relationship between rhythm processing and grammar abilities in elementary-school-age children, previously observed only behaviorally.
Affiliation(s)
- Valentina Persici: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Psychology, University of Milano - Bicocca, Milan, Italy; Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Human Sciences, University of Verona, Verona, Italy
- Scott D Blain: Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
- John R Iversen: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada; Institute for Neural Computation, University of California San Diego, La Jolla, CA, USA
- Alexandra P Key: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Sonja A Kotz: Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- J Devin McAuley: Department of Psychology, Michigan State University, East Lansing, MI, USA
- Reyna L Gordon: Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA
40. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. PMID: 37783820; DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris: Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven: Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin: Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, London, UK
41. Chittar CR, Jang H, Samuni L, Lewis J, Honing H, van Loon EE, Janmaat KRL. Music production and its role in coalition signaling during foraging contexts in a hunter-gatherer society. Front Psychol 2023; 14:1218394. PMID: 38022909; PMCID: PMC10646562; DOI: 10.3389/fpsyg.2023.1218394.
Abstract
Music is a cultural activity universally present in all human societies. Several hypotheses have been formulated to understand the possible origins of music and the reasons for its emergence. Here, we test two hypotheses: (1) the coalition signaling hypothesis, which posits that music could have emerged as a tool to signal cooperative intent and the strength of alliances, and (2) the predation deterrence hypothesis, which posits that music emerged as a strategy to deter potential predators. In addition, we further explore the link between tactile cues and the propensity of mothers to sing toward infants. For this, we investigated the singing behaviors of hunter-gatherer mothers during daily foraging trips among the Mbendjele BaYaka in the Republic of the Congo. Although singing is a significant component of their daily activities, such as when walking in the forest or collecting food sources, studies on human music production in hunter-gatherer societies are mostly conducted during ritual ceremonies. In this study, we collected foraging and singing behavioral data through focal follows of five BaYaka women during their foraging trips in the forest. In accordance with our predictions for the coalition signaling hypothesis, women were more likely to sing when present in large groups, especially when group members were less familiar. However, predictions of the predation deterrence hypothesis were not supported, as the interaction between group size and distance from the village did not have a significant effect on the likelihood of singing. The latter may be due to limited variation in predation risk in the foraging areas, a consequence of the intense bushmeat trade; hence, future studies should include foraging areas with higher densities of wild animals. Lastly, we found that mothers were more likely to sing when they were carrying infants than when infants were close by but carried by others, supporting the prediction that touch plays an important prerequisite role in musical interaction between mother and child. Our study provides important insight into the role of music as a tool for displaying the intent, between or within groups, to strengthen potentially conflict-free alliances during joint foraging activities.
Affiliation(s)
- Chirag Rajendra Chittar: Institute for Biodiversity and Ecosystem Dynamics (IBED), University of Amsterdam, Amsterdam, Netherlands; Department of Evolutionary Anthropology, University of Zurich, Zurich, Switzerland
- Haneul Jang: Department of Human Behavior, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Liran Samuni: Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, United States; Cooperative Evolution Lab, German Primate Center, Göttingen, Germany
- Jerome Lewis: Department of Anthropology, University College London, London, United Kingdom
- Henkjan Honing: Music Cognition Group, Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- E. Emiel van Loon: Institute for Biodiversity and Ecosystem Dynamics (IBED), University of Amsterdam, Amsterdam, Netherlands
- Karline R. L. Janmaat: Institute for Biodiversity and Ecosystem Dynamics (IBED), University of Amsterdam, Amsterdam, Netherlands; Department of Cognitive Psychology, Leiden University, Leiden, Netherlands; ARTIS Amsterdam Royal Zoo, Amsterdam, Netherlands
42. Daikoku H, Shimozono T, Fujii S, Hegde S, Savage PE. Cross-cultural Perception of Musical Similarity Within and Between India and Japan. Music & Science 2023; 2023:6. PMID: 38798704; PMCID: PMC7615992; DOI: 10.1177/20592043231207998.
Abstract
Cross-cultural perception of musical similarity is important for understanding musical diversity and universality. In this study we analyzed cross-cultural music similarity ratings on a global song sample from 110 participants (62 previously published from Japan, 48 newly collected from musicians and non-musicians from north and south India). Our pre-registered hypothesis that average Indian and Japanese ratings would be correlated was strongly supported (r = .80, p < .001). Exploratory analyses showed that ratings from experts in Hindustani music from the north and Carnatic music from the south showed the lowest correlations (r = .25). These analyses suggest that the correlations we found are likely due more to shared musical exposure than to innate universals of music perception.
Affiliation(s)
- Hideo Daikoku: Graduate School of Media and Governance, Keio University, Fujisawa, Japan
- Shinya Fujii: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Shantala Hegde: Music Cognition Lab & Clinical Neuropsychology and Cognitive Neuroscience Center, Department of Clinical Psychology, National Institute of Mental Health and Neurosciences, Bengaluru, India
- Patrick E. Savage: Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan; School of Psychology, University of Auckland, Auckland, New Zealand
43. Bruder C, Larrouy-Maestri P. Classical singers are also proficient in non-classical singing. Front Psychol 2023; 14:1215370. PMID: 38023013; PMCID: PMC10630913; DOI: 10.3389/fpsyg.2023.1215370.
Abstract
Classical singers train intensively for many years to achieve a high level of vocal control and specific sound characteristics. However, the actual span of singers' activities often includes venues other than opera halls and requires performing in styles outside their strict training (e.g., singing pop songs at weddings). We examine classical singers' ability to adjust their vocal productions to other styles, in relation to their formal training. Twenty-two highly trained female classical singers (aged 22 to 45 years; vocal training ranging from 4.5 to 27 years) performed six different melody excerpts a cappella in contrasting ways: as an opera aria, as a pop song, and as a lullaby. All melodies were sung both with lyrics and with a /lu/ sound. All productions were acoustically analyzed in terms of seven common acoustic descriptors of voice/singing performances and perceptually evaluated by a total of 50 lay listeners (aged 21 to 73 years) who were asked to identify the intended singing style in a forced-choice lab experiment. Acoustic analyses of the 792 performances suggest distinct acoustic profiles, implying that singers were able to produce contrasting-sounding performances. Furthermore, the high overall style recognition rate (78.5% correct responses; CR) confirmed singers' proficiency in performing in operatic style (86% CR) and their versatility when it comes to lullaby (80% CR) and pop performances (69% CR), albeit with occasional confusion between the latter two. Interestingly, different levels of competence among singers appeared, with versatility (estimated from correct recognition of the pop and lullaby styles) ranging from 62 to 83% depending on the singer. Importantly, this variability was not linked to formal training per se. Our results indicate that classical singers are versatile, and they prompt the need for further investigation to clarify the role of singers' broader professional and personal experiences in the development of this valuable ability.
Affiliation(s)
- Camila Bruder: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Pauline Larrouy-Maestri: Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, and Emotion (CLaME), New York, NY, United States
44. Reschke-Hernández AE, Gfeller K, Oleson J, Tranel D. Music Therapy Increases Social and Emotional Well-Being in Persons With Dementia: A Randomized Clinical Crossover Trial Comparing Singing to Verbal Discussion. J Music Ther 2023; 60:314-342. PMID: 37220880; PMCID: PMC10560009; DOI: 10.1093/jmt/thad015.
Abstract
The number of people living with Alzheimer's disease and related dementias (ADRD) is growing proportional to our aging population. Although music-based interventions may offer meaningful support to these individuals, most music therapy research lacks well-matched comparison conditions and specific intervention focus, which limits evaluation of intervention effectiveness and possible mechanisms. Here, we report a randomized clinical crossover trial in which we examined the impact of a singing-based music therapy intervention on feelings, emotions, and social engagement in 32 care facility residents with ADRD (aged 65-97 years), relative to an analogous nonmusic condition (verbal discussion). Both conditions were informed by the Clinical Practice Model for Persons with Dementia and occurred in a small group format, three times per week for two weeks (six 25-minute sessions), with a two-week washout at crossover. We followed National Institutes of Health Behavior Change Consortium strategies to enhance methodological rigor. We predicted that music therapy would improve feelings, positive emotions, and social engagement, significantly more so than the comparison condition. We used a linear mixed model approach to analysis. In support of our hypotheses, the music therapy intervention yielded significant positive effects on feelings, emotions, and social engagement, particularly for those with moderate dementia. Our study contributes empirical support for the use of music therapy to improve psychosocial well-being in this population. Results also highlight the importance of considering patient characteristics in intervention design and offer practical implications for music selection and implementation within interventions for persons with ADRD.
45. Brown S, Phillips E. The vocal origin of musical scales: the Interval Spacing model. Front Psychol 2023; 14:1261218. PMID: 37868594; PMCID: PMC10587400; DOI: 10.3389/fpsyg.2023.1261218.
Affiliation(s)
- Steven Brown: Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
46. Varella MAC. Nocturnal selective pressures on the evolution of human musicality as a missing piece of the adaptationist puzzle. Front Psychol 2023; 14:1215481. PMID: 37860295; PMCID: PMC10582961; DOI: 10.3389/fpsyg.2023.1215481.
Abstract
Human musicality exhibits the necessary hallmarks for biological adaptations. Evolutionary explanations focus on recurrent adaptive problems that human musicality possibly solved in ancestral environments, such as mate selection and competition, social bonding/cohesion and social grooming, perceptual and motor skill development, conflict reduction, safe time-passing, transgenerational communication, mood regulation and synchronization, and credible signaling of coalition and territorial/predator defense. Although not mutually exclusive, these different hypotheses are still not conceptually integrated nor clearly derived from independent principles. I propose The Nocturnal Evolution of Human Musicality and Performativity Theory in which the night-time is the missing piece of the adaptationist puzzle of human musicality and performing arts. The expansion of nocturnal activities throughout human evolution, which is tied to tree-to-ground sleep transition and habitual use of fire, might help (i) explain the evolution of musicality from independent principles, (ii) explain various seemingly unrelated music features and functions, and (iii) integrate many ancestral adaptive values proposed. The expansion into the nocturnal niche posed recurrent ancestral adaptive challenges/opportunities: lack of luminosity, regrouping to cook before sleep, imminent dangerousness, low temperatures, peak tiredness, and concealment of identity. These crucial night-time features might have selected evening-oriented individuals who were prone to acoustic communication, more alert and imaginative, gregarious, risk-taking and novelty-seeking, prone to anxiety modulation, hedonistic, promiscuous, and disinhibited. 
Those night-time-selected dispositions may have converged and enhanced protomusicality into human musicality by enabling it to assume many survival- and reproduction-enhancing roles (social cohesion and coordination, signaling of coalitions, territorial defense, antipredator defense, knowledge transfer, safe passage of time, children's lullabies, and sexual selection) that correspond to the co-occurring night-time adaptive challenges/opportunities. The nocturnal dynamic may help explain musical features (sound, loudness, repetitiveness, call and response, song, elaboration/virtuosity, and duetting/chorusing). Across vertebrates, acoustic communication mostly occurs in nocturnal species. The eveningness chronotype is common among musicians and composers. Adolescents, who are the most evening-oriented humans, enjoy music the most. Contemporary tribal nocturnal activities around the campfire involve eating, singing/dancing, storytelling, and rituals. I discuss the nocturnal integration of musicality's many roles and conclude that musicality is probably a multifunctional mental adaptation that evolved along with the night-time adaptive landscape.
Collapse
|
47
|
Nguyen T, Flaten E, Trainor LJ, Novembre G. Early social communication through music: State of the art and future perspectives. Dev Cogn Neurosci 2023; 63:101279. [PMID: 37515832 PMCID: PMC10407289 DOI: 10.1016/j.dcn.2023.101279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 07/03/2023] [Accepted: 07/14/2023] [Indexed: 07/31/2023] Open
Abstract
A growing body of research shows that the universal capacity for music perception and production emerges early in development. Possibly building on this predisposition, caregivers around the world often communicate with infants using songs or speech entailing song-like characteristics. This suggests that music might be one of the earliest developing and most accessible forms of interpersonal communication, providing a platform for studying early communicative behavior. However, little research has examined music in truly communicative contexts. The current work aims to facilitate the development of experimental approaches that rely on dynamic and naturalistic social interactions. We first review two longstanding lines of research that examine musical interactions by focusing either on the caregiver or the infant. These include defining the acoustic and non-acoustic features that characterize infant-directed (ID) music, as well as behavioral and neurophysiological research examining infants' processing of musical timing and pitch. Next, we review recent studies looking at early musical interactions holistically. This research focuses on how caregivers and infants interact using music to achieve co-regulation, mutual engagement, and increase affiliation and prosocial behavior. We conclude by discussing methodological, technological, and analytical advances that might empower a comprehensive study of musical communication in early childhood.
Collapse
Affiliation(s)
- Trinh Nguyen
- Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy.
| | - Erica Flaten
- Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada
| | - Laurel J Trainor
- Department of Psychology, Neuroscience and Behavior, McMaster University, Hamilton, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, Canada
| | - Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology, Rome, Italy
| |
Collapse
|
48
|
Greenfield MD, Merker B. Coordinated rhythms in animal species, including humans: Entrainment from bushcricket chorusing to the philharmonic orchestra. Neurosci Biobehav Rev 2023; 153:105382. [PMID: 37673282 DOI: 10.1016/j.neubiorev.2023.105382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 08/28/2023] [Accepted: 09/01/2023] [Indexed: 09/08/2023]
Abstract
Coordinated group displays featuring precise entrainment of rhythmic behavior between neighbors occur not only in human music, dance and drill, but in the acoustic or optical signaling of a number of species of arthropods and anurans. In this review we describe the mechanisms of phase resetting and phase and tempo adjustments that allow the periodic output of signaling individuals to be aligned in synchronized rhythmic group displays. These mechanisms are well described in some of the synchronizing arthropod species, in which conspecific signals reset an individual's endogenous output oscillators in such a way that the joint rhythmic signals are locked in phase. Some of these species are capable of mutually adjusting both the phase and tempo of their rhythmic signaling, thereby achieving what is called perfect synchrony, a capacity which otherwise is found only in humans. We discuss this disjoint phylogenetic distribution of inter-individual rhythmic entrainment in the context of the functions such entrainment might perform in the various species concerned, and the adaptive circumstances in which it might evolve.
Collapse
Affiliation(s)
- Michael D Greenfield
- ENES Bioacoustics Research Lab, CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France; Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, KS 66045, USA.
| | - Björn Merker
- Independent Scholar, SE-29194 Kristianstad, Sweden
| |
Collapse
|
49
|
Dolscheid S, Çelik S, Erkan H, Küntay A, Majid A. Children's associations between space and pitch are differentially shaped by language. Dev Sci 2023; 26:e13341. [PMID: 36315982 DOI: 10.1111/desc.13341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Revised: 09/08/2022] [Accepted: 10/18/2022] [Indexed: 11/30/2022]
Abstract
Musical properties, such as auditory pitch, are not expressed in the same way across cultures. In some languages, pitch is expressed in terms of spatial height (high vs. low), whereas others rely on thickness vocabulary (thick = low frequency vs. thin = high frequency). We investigated how children represent pitch in the face of this variable linguistic input by examining the developmental trajectory of linguistic and non-linguistic space-pitch associations in children acquiring Dutch (a height-pitch language) or Turkish (a thickness-pitch language). Five-year-olds, 7-year-olds, 9-year-olds, and 11-year-olds were tested for their understanding of pitch terminology and for their associations of spatial dimensions with auditory pitch when no language was used. Across tasks, thickness-pitch associations were more robust than height-pitch associations. This was true for Turkish children, and also for Dutch children not exposed to thickness-pitch vocabulary. Height-pitch associations, on the other hand, were not reliable, even in Dutch-speaking children, until age 11, the age at which they demonstrated full comprehension of height-pitch terminology. Moreover, Turkish-speaking children reversed height-pitch associations. Taken together, these findings suggest thickness-pitch associations are acquired in similar ways by children from different cultures, but the acquisition of height-pitch associations is more susceptible to linguistic input. Overall, then, despite cross-cultural stability in some components, there is variation in how children come to represent musical pitch, one of the building blocks of music. RESEARCH HIGHLIGHTS:
- Children from diverse cultures differ in their understanding of music vocabulary and in their nonlinguistic associations between spatial dimensions and auditory pitch.
- Height-pitch mappings are acquired late and require additional scaffolding from language, whereas thickness-pitch mappings are acquired early and are less susceptible to language input.
- Space-pitch mappings are not static from birth to adulthood, but change over development, suggesting music cognition is shaped by cross-cultural experience.
Collapse
Affiliation(s)
- Sarah Dolscheid
- University of Cologne, Department of Rehabilitation and Special Education, Cologne, Germany
| | | | - Hasan Erkan
- Radboud University, Nijmegen, The Netherlands
| | | | | |
Collapse
|
50
|
James LS, Wang AS, Bertolo M, Sakata JT. Learning to pause: Fidelity of and biases in the developmental acquisition of gaps in the communicative signals of a songbird. Dev Sci 2023; 26:e13382. [PMID: 36861437 DOI: 10.1111/desc.13382] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Revised: 01/21/2023] [Accepted: 02/10/2023] [Indexed: 03/03/2023]
Abstract
The temporal organization of sounds used in social contexts can provide information about signal function and evoke varying responses in listeners (receivers). For example, music is a universal and learned human behavior that is characterized by different rhythms and tempos that can evoke disparate responses in listeners. Similarly, birdsong is a social behavior in songbirds that is learned during critical periods in development and used to evoke physiological and behavioral responses in receivers. Recent investigations have begun to reveal the breadth of universal patterns in birdsong and their similarities to common patterns in speech and music, but relatively little is known about the degree to which biological predispositions and developmental experiences interact to shape the temporal patterning of birdsong. Here, we investigated how biological predispositions modulate the acquisition and production of an important temporal feature of birdsong, namely the duration of silent pauses ("gaps") between vocal elements ("syllables"). Through analyses of semi-naturally raised and experimentally tutored zebra finches, we observed that juvenile zebra finches imitate the durations of the silent gaps in their tutor's song. Further, when juveniles were experimentally tutored with stimuli containing a wide range of gap durations, we observed biases in the prevalence and stereotypy of gap durations. Together, these studies demonstrate how biological predispositions and developmental experiences differently affect distinct temporal features of birdsong and highlight similarities in developmental plasticity across birdsong, speech, and music. RESEARCH HIGHLIGHTS:
- The temporal organization of learned acoustic patterns can be similar across human cultures and across species, suggesting biological predispositions in acquisition.
- We studied how biological predispositions and developmental experiences affect an important temporal feature of birdsong, namely the duration of silent intervals between vocal elements ("gaps").
- Semi-naturally and experimentally tutored zebra finches imitated the durations of gaps in their tutor's song and displayed some biases in the learning and production of gap durations and in gap variability.
- These findings in the zebra finch provide parallels with the acquisition of temporal features of speech and music in humans.
Collapse
Affiliation(s)
- Logan S James
- Department of Biology, McGill University, Montréal, Quebec, Canada
- Department of Integrative Biology, University of Texas at Austin, Austin, TX, USA
| | - Angela S Wang
- Department of Biology, McGill University, Montréal, Quebec, Canada
| | - Mila Bertolo
- Centre for Research in Brain, Language and Music, McGill University, Montréal, Quebec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Quebec, Canada
| | - Jon T Sakata
- Department of Biology, McGill University, Montréal, Quebec, Canada
- Centre for Research in Brain, Language and Music, McGill University, Montréal, Quebec, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, Quebec, Canada
| |
Collapse
|