1. Marjieh R, Harrison PMC, Lee H, Deligiannaki F, Jacoby N. Timbral effects on consonance disentangle psychoacoustic mechanisms and suggest perceptual origins for musical scales. Nat Commun 2024; 15:1482. [PMID: 38369535] [DOI: 10.1038/s41467-024-45812-z]
Abstract
The phenomenon of musical consonance is an essential feature in diverse musical styles. The traditional belief, supported by centuries of Western music theory and psychological studies, is that consonance derives from simple (harmonic) frequency ratios between tones and is insensitive to timbre. Here we show through five large-scale behavioral studies, comprising 235,440 human judgments from US and South Korean populations, that harmonic consonance preferences can be reshaped by timbral manipulations, even to the point of inducing preferences for inharmonic intervals. We show how such effects may suggest perceptual origins for diverse scale systems ranging from the gamelan's slendro scale to the tuning of Western mean-tone and equal-tempered scales. Through computational modeling we show that these timbral manipulations dissociate competing psychoacoustic mechanisms underlying consonance, and we derive an updated computational model combining liking of harmonicity, disliking of fast beats (roughness), and liking of slow beats. Altogether, this work showcases how large-scale behavioral experiments can inform classical questions in auditory perception.
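The beat-based cues the abstract distinguishes (fast beats as roughness, slow beats as a separate liked cue) can be illustrated with a minimal sketch. This is not the authors' published model: the partial count, the 3 Hz and 30 Hz band edges, and the synthetic harmonic tones below are assumptions chosen purely for illustration.

```python
# Hypothetical sketch (not the published model): enumerate beat rates
# between the partials of two harmonic complex tones and split them into
# the "slow" and "fast" beat bands the abstract treats as distinct cues.
def beat_rates(f0_a, f0_b, n_partials=10):
    """Beat rate (Hz) for every pair of partials from two harmonic tones."""
    partials_a = [f0_a * k for k in range(1, n_partials + 1)]
    partials_b = [f0_b * k for k in range(1, n_partials + 1)]
    return sorted(abs(fa - fb) for fa in partials_a for fb in partials_b)

def split_beats(rates, slow_max=3.0, fast_max=30.0):
    """Partition nonzero beat rates into slow (liked) and fast (rough) bands.

    The 3 Hz / 30 Hz cutoffs are illustrative assumptions, not fitted values.
    """
    slow = [r for r in rates if 0 < r <= slow_max]
    fast = [r for r in rates if slow_max < r <= fast_max]
    return slow, fast

# A just-intoned fifth (3:2) aligns several partial pairs exactly
# (beat rate 0), while an equal-tempered fifth leaves slow residual beats.
rates_just = beat_rates(220.0, 330.0)
rates_et = beat_rates(220.0, 220.0 * 2 ** (7 / 12))
```

Changing the partials' frequencies (the timbral manipulation at the heart of the paper) shifts which intervals minimize fast beating, which is why such a decomposition can dissociate the competing mechanisms.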
Affiliation(s)
- Raja Marjieh
- Department of Psychology, Princeton University, Princeton, NJ, USA.
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Peter M C Harrison
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Centre for Music and Science, University of Cambridge, Cambridge, UK.
- Harin Lee
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Fotini Deligiannaki
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- German Aerospace Center (DLR), Institute for AI Safety and Security, Bonn, Germany.
- Nori Jacoby
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
2. Kathios N, Sachs ME, Zhang E, Ou Y, Loui P. Generating New Musical Preferences From Multilevel Mapping of Predictions to Reward. Psychol Sci 2024; 35:34-54. [PMID: 38019607] [DOI: 10.1177/09567976231214185]
Abstract
Much of what we know and love about music hinges on our ability to make successful predictions, which appears to be an intrinsically rewarding process. Yet the exact process by which learned predictions become pleasurable is unclear. Here we created novel melodies in an alternative scale different from any established musical culture to show how musical preference is generated de novo. Across nine studies (n = 1,185), adult participants learned to like more frequently presented items that adhered to this rapidly learned structure, suggesting that exposure and prediction errors both affected self-report liking ratings. Learning trajectories varied by music-reward sensitivity but were similar for U.S. and Chinese participants. Furthermore, functional MRI activity in auditory areas reflected prediction errors, whereas functional connectivity between auditory and medial prefrontal regions reflected both exposure and prediction errors. Collectively, results support predictive coding as a cognitive mechanism by which new musical sounds become rewarding.
Affiliation(s)
- Nicholas Kathios
- Department of Psychology, College of Science, Northeastern University
- Euan Zhang
- Department of Music, College of Arts, Media and Design, Northeastern University
- Yongtian Ou
- Faculty of Psychology, Beijing Normal University
- Psyche Loui
- Department of Psychology, College of Science, Northeastern University
- Department of Music, College of Arts, Media and Design, Northeastern University
3. Lukics KS, Lukács Á. Modality, presentation, domain and training effects in statistical learning. Sci Rep 2022; 12:20878. [PMID: 36463280] [PMCID: PMC9719496] [DOI: 10.1038/s41598-022-24951-7]
Abstract
While several studies suggest that the nature and properties of the input have significant effects on statistical learning, these effects have rarely been investigated systematically. In order to understand how input characteristics and their interactions impact statistical learning, we explored the effects of modality (auditory vs. visual), presentation type (serial vs. simultaneous), domain (linguistic vs. non-linguistic), and training type (random, starting small, starting big) on artificial grammar learning in young adults (N = 360). With serial presentation of stimuli, learning was more effective in the auditory than in the visual modality. However, with simultaneous presentation of visual and serial presentation of auditory stimuli, the modality effect was not present. We found a significant domain effect as well: a linguistic advantage over non-linguistic material, which was driven by the domain effect in the auditory modality. Overall, the auditory linguistic condition had an advantage over other modality-domain types. Training types did not have any overall effect on learning; starting big enhanced performance only in the case of serial visual presentation. These results show that input characteristics such as modality, presentation type, domain and training type influence statistical learning, and suggest that their effects are also dependent on the specific stimuli and structure to be learned.
Affiliation(s)
- Krisztina Sára Lukics
- Department of Cognitive Science, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- MTA-BME Momentum Language Acquisition Research Group, Eötvös Loránd Research Network (ELKH), Budapest, Hungary
- Ágnes Lukács
- Department of Cognitive Science, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- MTA-BME Momentum Language Acquisition Research Group, Eötvös Loránd Research Network (ELKH), Budapest, Hungary
4. Chen WG, Iversen JR, Kao MH, Loui P, Patel AD, Zatorre RJ, Edwards E. Music and Brain Circuitry: Strategies for Strengthening Evidence-Based Research for Music-Based Interventions. J Neurosci 2022; 42:8498-8507. [PMID: 36351825] [PMCID: PMC9665917] [DOI: 10.1523/jneurosci.1135-22.2022]
Abstract
The neuroscience of music and music-based interventions (MBIs) is a fascinating but challenging research field. While music is a ubiquitous component of every human society, MBIs may encompass listening to music, performing music, music-based movement, undergoing music education and training, or receiving treatment from music therapists. Unraveling the brain circuits activated and influenced by MBIs may help us gain a better understanding of the therapeutic and educational value of MBIs by gathering strong research evidence. However, the complexity and variety of MBIs impose unique research challenges. This article reviews the recent endeavor led by the National Institutes of Health to support evidence-based research of MBIs and their impact on health and disease. It also highlights fundamental challenges and strategies of MBI research, with emphases on the utilization of animal models, human brain imaging and stimulation technologies, behavior and motion capture tools, and computational approaches. It concludes with suggestions of basic requirements for studying MBIs and promising future directions to further strengthen evidence-based research on MBIs in connection with brain circuitry.
Significance Statement: Music and music-based interventions (MBIs) engage a wide range of brain circuits and hold promising therapeutic potential for a variety of health conditions. Comparative studies using animal models have helped in uncovering brain circuit activities involved in rhythm perception, while human imaging, brain stimulation, and motion capture technologies have enabled neural circuit analysis underlying the effects of MBIs on motor, affective/reward, and cognitive function. Combining computational analysis, such as prediction methods, with mechanistic studies in animal models and humans may unravel the complexity of MBIs and their effects on health and disease.
Affiliation(s)
- Wen Grace Chen
- Division of Extramural Research, National Center for Complementary and Integrative Health, National Institutes of Health, Bethesda, Maryland 20892
- Mimi H Kao
- Tufts University, Medford, Massachusetts 02155
- Psyche Loui
- Northeastern University, Boston, Massachusetts 02115
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Emmeline Edwards
- Division of Extramural Research, National Center for Complementary and Integrative Health, National Institutes of Health, Bethesda, Maryland 20892
5. Daikoku T, Goswami U. Hierarchical amplitude modulation structures and rhythm patterns: Comparing Western musical genres, song, and nature sounds to Babytalk. PLoS One 2022; 17:e0275631. [PMID: 36240225] [PMCID: PMC9565671] [DOI: 10.1371/journal.pone.0275631]
Abstract
Statistical learning of physical stimulus characteristics is important for the development of cognitive systems like language and music. Rhythm patterns are a core component of both systems, and rhythm is key to language acquisition by infants. Accordingly, the physical stimulus characteristics that yield speech rhythm in "Babytalk" may also describe the hierarchical rhythmic relationships that characterize human music and song. Computational modelling of the amplitude envelope of "Babytalk" (infant-directed speech, IDS) using a demodulation approach (Spectral-Amplitude Modulation Phase Hierarchy model, S-AMPH) can describe these characteristics. S-AMPH modelling of Babytalk has shown previously that bands of amplitude modulations (AMs) at different temporal rates and their phase relations help to create its structured inherent rhythms. Additionally, S-AMPH modelling of children's nursery rhymes shows that different rhythm patterns (trochaic, iambic, dactylic) depend on the phase relations between AM bands centred on ~2 Hz and ~5 Hz. The importance of these AM phase relations was confirmed via a second demodulation approach (PAD, Probabilistic Amplitude Demodulation). Here we apply both S-AMPH and PAD to demodulate the amplitude envelopes of Western musical genres and songs. Quasi-rhythmic and non-human sounds found in nature (birdsong, rain, wind) were utilized for control analyses. We expected that the physical stimulus characteristics in human music and song from an AM perspective would match those of IDS. Given prior speech-based analyses, we also expected that AM cycles derived from the modelling may identify musical units like crotchets, quavers and demi-quavers. Both models revealed a hierarchically nested AM structure for music and song, but not for nature sounds. This AM structure for music and song matched IDS. Both models also generated systematic AM cycles yielding musical units like crotchets and quavers. Both music and language are created by humans and shaped by culture. Acoustic rhythm in IDS and music appears to depend on many of the same physical characteristics, facilitating learning.
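The core operation in the analyses described above, demodulating a sound into its amplitude envelope and isolating one AM rate band, can be sketched at its simplest with a Hilbert envelope and a band-pass filter. This is not the S-AMPH or PAD implementation; the filter design, the 1-3 Hz band, and the synthetic test signal are assumptions for illustration only.

```python
# Illustrative sketch of amplitude-envelope demodulation (not S-AMPH/PAD):
# take the Hilbert envelope, then band-pass it around one AM rate.
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def amplitude_envelope(signal):
    """Amplitude envelope as the magnitude of the analytic signal."""
    return np.abs(hilbert(signal))

def am_band(envelope, fs, lo, hi, order=4):
    """Band-pass the envelope to isolate one AM rate band (lo-hi Hz)."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, envelope)

fs = 1000
t = np.arange(0, 5, 1 / fs)
# Synthetic test signal: a 100 Hz carrier amplitude-modulated at 2 Hz,
# i.e. an AM rate near the ~2 Hz band highlighted in the abstract.
x = (1 + 0.5 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 100 * t)
env = amplitude_envelope(x)
band_2hz = am_band(env, fs, 1.0, 3.0)
```

In the full models this step is repeated for several AM bands, and it is the phase relations between the resulting band signals (e.g. ~2 Hz vs. ~5 Hz) that distinguish rhythm patterns.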
Affiliation(s)
- Tatsuya Daikoku
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
- International Research Center for Neurointelligence, The University of Tokyo, Bunkyo City, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
6. Savage PE, Fujii S. Towards a cross-cultural framework for predictive coding of music. Nat Rev Neurosci 2022; 23:641. [PMID: 35995944] [DOI: 10.1038/s41583-022-00622-4]
Affiliation(s)
- Patrick E Savage
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan.
- Shinya Fujii
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan.