1. Wu D, Jia X, Rao W, Dou W, Li Y, Li B. Construction of a Chinese traditional instrumental music dataset: A validated set of naturalistic affective music excerpts. Behav Res Methods 2024;56:3757-3778. PMID: 38702502; PMCID: PMC11133124; DOI: 10.3758/s13428-024-02411-6.
Abstract
Music is omnipresent among human cultures and moves us both physically and emotionally. The perception of emotions in music is influenced by both psychophysical and cultural factors. Chinese traditional instrumental music differs significantly from Western music in cultural origin and musical elements, yet previous studies on music emotion perception have been based almost exclusively on Western music. A dataset of Chinese traditional instrumental music is therefore important for exploring the perception of musical emotions in the context of Chinese culture. The present dataset includes 273 10-second naturalistic music excerpts, each rated by a total of 168 participants on a seven-point Likert scale for ten variables: familiarity, the dimensional emotions valence and arousal, and the discrete emotions anger, gentleness, happiness, peacefulness, sadness, solemnness, and transcendence. Three labels were obtained for each excerpt: familiarity, discrete emotion, and cluster. The dataset demonstrates good reliability, and we believe it can contribute to cross-cultural studies of emotional responses to music.
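
For a rating set of this kind, the reported reliability can be checked with an intraclass correlation. The following is a minimal sketch of that computation, assuming a hypothetical long-format file `ctim_ratings.csv` with `participant`, `excerpt`, and `valence` columns; the file layout and names are illustrative, not the authors' published release format.

```python
# Sketch: estimating inter-rater reliability for excerpt ratings (hypothetical
# data layout; the published dataset may be organised differently).
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("ctim_ratings.csv")  # columns: participant, excerpt, valence, ...

# Two-way random-effects ICC: excerpts are the targets, participants the raters.
icc = pg.intraclass_corr(
    data=ratings, targets="excerpt", raters="participant", ratings="valence"
)
print(icc[icc["Type"] == "ICC2k"])  # average-rater consistency across participants
```

The same call, repeated for each rating column, would yield a reliability profile across all ten scales.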
Affiliations
- Di Wu
  - Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
  - Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Xi Jia
  - Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
  - Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Wenxin Rao
  - Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Wenjie Dou
  - Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
  - Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Yangping Li
  - Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
  - School of Foreign Studies, Xi'an Jiaotong University, Xi'an, 710049, China
- Baoming Li
  - Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
  - Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China

2. Parada-Cabaleiro E, Batliner A, Zentner M, Schedl M. Exploring emotions in Bach chorales: a multi-modal perceptual and data-driven study. R Soc Open Sci 2023;10:230574. PMID: 38126059; PMCID: PMC10731325; DOI: 10.1098/rsos.230574.
Abstract
The relationship between music and emotion has been addressed within several disciplines, from more historico-philosophical and anthropological ones, such as musicology and ethnomusicology, to others that are traditionally more empirical and technological, such as psychology and computer science. Yet understanding of the link between music and emotion is limited by the scarce interconnections between these disciplines. To help narrow this gap, this data-driven exploratory study assesses the relationship between linguistic, symbolic, and acoustic features, extracted from lyrics, music notation, and audio recordings, and the perception of emotion. Employing a listening experiment, statistical analysis, and unsupervised machine learning, we investigate how a data-driven multi-modal approach can be used to explore the emotions conveyed by eight Bach chorales. Through a feature-selection strategy based on a set of more than 300 Bach chorales and a transdisciplinary methodology integrating approaches from psychology, musicology, and computer science, we aim to initiate an efficient dialogue between disciplines, one able to promote a more integrative and holistic understanding of emotions in music.
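
The authors' exact pipeline is not reproduced here, but the general shape of such a data-driven analysis (standardize multi-modal features, then cluster) can be sketched as follows; the feature file, feature names, and cluster count are assumptions for illustration, not the paper's actual choices.

```python
# Sketch: unsupervised grouping of chorales by multi-modal features.
# Feature set and k are illustrative, not the paper's actual choices.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

features = pd.read_csv("chorale_features.csv", index_col="chorale_id")  # hypothetical
X = StandardScaler().fit_transform(features)  # put features on a common scale

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
features["cluster"] = kmeans.labels_
print(features.groupby("cluster").mean())  # inspect each cluster's feature profile
```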
Affiliations
- Emilia Parada-Cabaleiro
  - Institute of Computational Perception, Johannes Kepler University Linz, Linz, Austria
  - Human-Centered AI Group, AI Laboratory, Linz Institute of Technology (LIT), Linz, Austria
  - Department of Music Pedagogy, Nuremberg University of Music, Nuremberg, Germany
- Anton Batliner
  - Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany
- Marcel Zentner
  - Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Markus Schedl
  - Institute of Computational Perception, Johannes Kepler University Linz, Linz, Austria
  - Human-Centered AI Group, AI Laboratory, Linz Institute of Technology (LIT), Linz, Austria

3. Blasco-Magraner JS, Bernabé-Valero G, Marín-Liébana P, Botella-Nicolás AM. Changing positive and negative affects through music experiences: a study with university students. BMC Psychol 2023;11:76. PMID: 36944996; PMCID: PMC10031901; DOI: 10.1186/s40359-023-01110-9.
Abstract
BACKGROUND Few empirical studies have demonstrated the effects of music on specific emotions, especially in the educational context. This study was therefore carried out to examine affective changes after exposure to three musical stimuli. METHODS The participants were 71 university students enrolled in a music education course, none of whom were musicians. Changes in the affective state of these non-musician student teachers were studied after listening to three pieces of music. A repeated-measures ANOVA was carried out on scores from the Positive and Negative Affect Schedule (PANAS). RESULTS The results revealed that: (i) all three musical experiences were beneficial, increasing positive affect and reducing negative affect, with a significant Music Experience × Moment (pre-post) interaction; (ii) listening to Mahler's sad Fifth Symphony reduced negative affect more than the other experimental conditions; (iii) performing the blues produced the largest increase in positive affect. CONCLUSIONS These findings provide key applied insights for music education and research, offering empirical evidence of how music can modify specific affects of personal experience.
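
The reported interaction test corresponds to a two-way repeated-measures ANOVA. Below is a minimal sketch with pingouin, assuming a hypothetical long-format table with `participant`, `experience`, `moment`, and a PANAS score column; this is not the authors' actual analysis script.

```python
# Sketch: Music Experience x Moment (pre/post) repeated-measures ANOVA
# on PANAS scores. File and column names are hypothetical.
import pandas as pd
import pingouin as pg

df = pd.read_csv("panas_scores.csv")

aov = pg.rm_anova(
    data=df, dv="positive_affect",
    within=["experience", "moment"], subject="participant",
)
print(aov.round(3))  # the interaction row tests Experience x Moment
```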
Affiliations
- Gloria Bernabé-Valero
  - Department of Occupational Sciences, Speech Therapy, Evolutionary and Educational Psychology, Catholic University of Valencia San Vicente Mártir, Av. De La Ilustración, 2, 46100, Burjassot, Valencia, Spain
- Pablo Marín-Liébana
  - Department of Music Education, University of Valencia, Av. Dels Tarongers, 4, 46022, Valencia, Spain
- Ana María Botella-Nicolás
  - Department of Music Education, University of Valencia, Av. Dels Tarongers, 4, 46022, Valencia, Spain

4. MacGregor C, Ruth N, Müllensiefen D. Development and validation of the first adaptive test of emotion perception in music. Cogn Emot 2023;37:284-302. PMID: 36592153; DOI: 10.1080/02699931.2022.2162003.
Abstract
The Musical Emotion Discrimination Task (MEDT) is a short, non-adaptive test of the ability to discriminate emotions in music. Test-takers hear two performances of the same melody, both played by the same performer but each trying to communicate a different basic emotion, and are asked to determine which one is "happier", for example. The goal of the current study was to construct a new version of the MEDT using a larger set of shorter, more diverse music clips and an adaptive framework to expand the ability range for which the test can deliver measurements. The first study analysed responses from a large sample of participants (N = 624) to determine how musical features contributed to item difficulty, which resulted in a quantitative model of musical emotion discrimination ability rooted in Item Response Theory (IRT). This model informed the construction of the adaptive MEDT. A second study contributed preliminary evidence for the validity and reliability of the adaptive MEDT, and demonstrated that the new version of the test is suitable for a wider range of abilities. This paper therefore presents the first adaptive musical emotion discrimination test, a new resource for investigating emotion processing which is freely available for research use.
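
The adaptive mechanism the paper builds on can be illustrated compactly: under a two-parameter logistic (2PL) IRT model, the next item administered is the one with maximum Fisher information at the current ability estimate. The item parameters below are invented for illustration; the MEDT's calibrated values are not reproduced here.

```python
# Sketch: 2PL item selection for an adaptive test.
import numpy as np

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a ** 2 * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b) per item.
bank = np.array([(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)])

theta_hat = 0.0  # current ability estimate
info = item_information(theta_hat, bank[:, 0], bank[:, 1])
next_item = int(np.argmax(info))
print(f"administer item {next_item} (information {info[next_item]:.3f})")
```

After each response, the ability estimate is updated (e.g., by maximum likelihood) and the selection step repeats until a stopping rule is met.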
Affiliations
- Chloe MacGregor
  - Department of Psychology, Goldsmiths, University of London, London, England
- Nicolas Ruth
  - Institute for Cultural Management and Media, University of Music and Performing Arts Munich, Munich, Germany

5. Micallef Grimaud A, Eerola T. Emotional expression through musical cues: A comparison of production and perception approaches. PLoS One 2022;17:e0279605. PMID: 36584186; PMCID: PMC9803112; DOI: 10.1371/journal.pone.0279605.
Abstract
Multiple approaches have been used to investigate how musical cues shape different emotions in music. The most prominent is the perception approach, in which participants assess musical stimuli that vary in cue levels in terms of their conveyed emotion. This approach limits the number of cues and combinations that can be investigated simultaneously, since each variation produces another musical piece to be evaluated. A less used alternative is the production approach, in which participants manipulate the cues themselves to change the emotion conveyed by the music, allowing them to explore many more cue combinations. These approaches provide different levels of accuracy and economy for identifying how cues convey different emotions in music, but do they provide converging results? This paper's aims are twofold. First, it investigates the role of seven musical cues (tempo, pitch, dynamics, brightness, articulation, mode, and instrumentation) in communicating seven emotions (sadness, joy, calmness, anger, fear, power, and surprise) in music. Second, it explores whether the two approaches yield similar findings on how the cues are used to shape different emotions. Experiment 1 used a production approach in which participants adjusted the cues in real time to convey target emotions; Experiment 2 used a perception approach in which participants rated pre-rendered systematic variations of the stimuli for all emotions. Overall, the cues operated similarly in the majority (32/49) of cue-emotion combinations across both experiments, with the most variance produced by the dynamics and instrumentation cues. Comparing how accurately the resulting cue combinations predicted the intended emotions showed higher prediction rates in Experiment 1 than in Experiment 2, suggesting that the production approach may be a more efficient method for exploring how cues shape different emotions in music.
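
The accuracy comparison at the end can be framed as a classification problem: given the seven cue settings for a trial, predict the intended emotion. A minimal illustration with scikit-learn follows, using hypothetical file and column names rather than the authors' data or analysis code.

```python
# Sketch: how well do cue combinations predict the intended emotion?
# (Hypothetical data export; not the paper's analysis code.)
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("cue_settings.csv")
cues = ["tempo", "pitch", "dynamics", "brightness",
        "articulation", "mode", "instrumentation"]

clf = LogisticRegression(max_iter=1000)  # multinomial logistic over 7 emotions
scores = cross_val_score(clf, df[cues], df["emotion"], cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

Fitting the same model to data from each experiment would give comparable prediction rates, mirroring the paper's Experiment 1 versus Experiment 2 contrast.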
Affiliations
- Tuomas Eerola
  - Department of Music, Music and Science Lab, Durham University, Durham, United Kingdom

6. Mednicoff SD, Barashy S, Gonzales D, Benning SD, Snyder JS, Hannon EE. Auditory affective processing, musicality, and the development of misophonic reactions. Front Neurosci 2022;16:924806. PMID: 36213735; PMCID: PMC9537735; DOI: 10.3389/fnins.2022.924806.
Abstract
Misophonia can be characterized both as a condition and as a negative affective experience. It is described as feeling irritation or disgust in response to hearing certain sounds, such as those of eating, drinking, gulping, and breathing. Although the earliest misophonic experiences are often described as occurring during childhood, relatively little is known about the developmental pathways that lead to individual variation in these experiences. This literature review discusses evidence of misophonic reactions during childhood and explores the possibility that early heightened sensitivities to both positive and negative sounds, such as music, might indicate a vulnerability to misophonia and misophonic reactions. We review when misophonia may develop, how it is distinguished from other auditory conditions (e.g., hyperacusis, phonophobia, or tinnitus), and how it relates to developmental disorders (e.g., autism spectrum disorder or Williams syndrome). Finally, we explore the possibility that children with heightened musicality could be more likely to experience misophonic reactions and develop misophonia.
Affiliations
- Erin E. Hannon
  - Department of Psychology, University of Nevada Las Vegas, Las Vegas, NV, United States

7. Zhang D, Akhter S, Kumar T, Nguyen NT. Lack of Emotional Experience, Resistance to Innovation, and Dissatisfied Musicians Influence on Music Unattractive Education. Front Psychol 2022;13:922400. PMID: 35756285; PMCID: PMC9226546; DOI: 10.3389/fpsyg.2022.922400.
Abstract
Music education is growing rapidly around the globe, and making it attractive requires emotional engagement and the adoption of innovation, topics that deserve researchers' attention. This article therefore investigates the impact of lack of emotional experience and resistance to innovation on unattractive music education in China, as well as the mediating role of dissatisfied musicians in these relationships. The study used primary data collected with questionnaires. Validity and reliability were examined with a measurement model, and the hypotheses were tested with a structural model in SmartPLS. The results indicated that lack of emotional experience and resistance to innovation have a positive and significant impact on unattractive music education in China, and that dissatisfied musicians significantly mediate the associations between lack of emotional experience, resistance to innovation, and unattractive music education. The article can help policymakers establish policies for making music education attractive to musicians by adopting innovation.
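
The mediation structure described here can be sketched as a structural equation model. The sketch below uses semopy's covariance-based estimator rather than the PLS estimator the authors ran in SmartPLS, and the latent variables' indicator names are hypothetical.

```python
# Sketch: lack of emotional experience (LEE) and resistance to innovation (RTI)
# affecting unattractive music education (UME) via dissatisfied musicians (DM).
# Covariance-based SEM approximation; indicator names are made up.
import pandas as pd
import semopy

desc = """
LEE =~ lee1 + lee2 + lee3
RTI =~ rti1 + rti2 + rti3
DM  =~ dm1 + dm2 + dm3
UME =~ ume1 + ume2 + ume3
DM  ~ LEE + RTI
UME ~ LEE + RTI + DM
"""

data = pd.read_csv("survey.csv")  # hypothetical questionnaire responses
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())  # path estimates; the DM paths carry the mediation
```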
Affiliations
- Dongjun Zhang
  - School of Humanities and Law, China University of Petroleum, Shandong, China
- Shamim Akhter
  - School of Languages, Civilisation and Philosophy, Universiti Utara Malaysia, Sintok, Malaysia
- Tribhuwan Kumar
  - College of Science and Humanities at Sulail, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Nhat Tan Nguyen
  - Faculty of Business Administration, Ho Chi Minh City University of Foreign Languages and Information Technology, Ho Chi Minh City, Vietnam

8. Lange EB, Fünderich J, Grimm H. Multisensory integration of musical emotion perception in singing. Psychol Res 2022;86:2099-2114. PMID: 35001181; PMCID: PMC9470688; DOI: 10.1007/s00426-021-01637-9.
Abstract
We investigated how visual and auditory information contribute to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented in three different modes: auditory, visual, and audio-visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio-visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged, and experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness served as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate the emotions composed into the music, and observers do not simply fall back on the audio information. Studies such as ours are important for understanding multisensory integration in applied settings.
Affiliations
- Elke B Lange
  - Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany
- Jens Fünderich
  - Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany
  - University of Erfurt, Erfurt, Germany
- Hartmut Grimm
  - Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany

9. Design of the Piano Score Recommendation Image Analysis System Based on the Big Data and Convolutional Neural Network. Comput Intell Neurosci 2021;2021:4953288. PMID: 34868290; PMCID: PMC8642031; DOI: 10.1155/2021/4953288.
Abstract
In the era of big data, information overload is an increasingly obvious problem. A piano music image analysis and recommendation system based on a convolutional neural network (CNN) classifier and user preferences is designed, enabling accurate piano music recommendation in a big-data environment. The system consists mainly of user modeling, music feature extraction, and a recommendation algorithm. In the recommendation module, the latent characteristics of music are predicted by a regression model, and the degree of match between users and music is calculated from user preferences; music that users may find interesting is then generated and ranked in order to recommend new piano music to relevant users. The image analysis model contains four "convolution + pooling" layers. The classification accuracy and gradient behavior of the CNN are compared under the RMSProp and Adam optimizers. The image analysis results show that the Adam optimizer finds the descent direction quickly, with a steep decrease in the gradient, and the accuracy of the recommendation system is 55.84%. The CNN-based piano music recommendation system thus shows strong feature-learning ability and good prediction and recommendation performance.
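
The described architecture is easy to reconstruct in outline. Below is a minimal Keras sketch of a four-block "convolution + pooling" classifier, built once per optimizer so RMSProp and Adam can be compared; the input size and class count are placeholders, not values from the paper.

```python
# Sketch: four "convolution + pooling" blocks, compared under two optimizers.
# Input shape and number of classes are placeholders.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(optimizer):
    model = tf.keras.Sequential([
        layers.Input(shape=(128, 128, 1)),        # placeholder score-image size
        layers.Conv2D(32, 3, activation="relu"),  layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),  layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),   # placeholder class count
    ])
    model.compile(optimizer=optimizer,
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

for opt in ("rmsprop", "adam"):
    model = build_model(opt)
    # model.fit(train_images, train_labels, epochs=10)  # training data not shown
    print(opt, model.count_params())
```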

10. Harris I, Küssner MB. Come on Baby, Light My Fire: Sparking Further Research in Socio-Affective Mechanisms of Music Using Computational Advancements. Front Psychol 2020;11:557162. PMID: 33363492; PMCID: PMC7753094; DOI: 10.3389/fpsyg.2020.557162.
Affiliations
- Ilana Harris
  - Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- Mats B Küssner
  - Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Berlin, Germany

11. Russo M, Kraljević L, Stella M, Sikora M. Cochleogram-based approach for detecting perceived emotions in music. Inf Process Manag 2020. DOI: 10.1016/j.ipm.2020.102270.

12. MacGregor C, Müllensiefen D. The Musical Emotion Discrimination Task: A New Measure for Assessing the Ability to Discriminate Emotions in Music. Front Psychol 2019;10:1955. PMID: 31551857; PMCID: PMC6736617; DOI: 10.3389/fpsyg.2019.01955.
Abstract
Previous research has shown that levels of musical training and emotional engagement with music are associated with an individual's ability to decode the intended emotional expression of a music performance. The present study aimed to assess traits and abilities that might influence emotion recognition and to create a new test of emotion discrimination ability. The first experiment investigated musical features that influenced the difficulty of the stimulus items (length, type of melody, instrument, target/comparison emotion) to inform the creation of a short test of emotion discrimination. The second experiment assessed the contribution of individual-differences measures of emotional and musical abilities, as well as psychoacoustic abilities. Finally, the third experiment established the validity of the new test against other measures currently used to assess similar abilities. Performance on the Musical Emotion Discrimination Task (MEDT) was significantly associated with high levels of self-reported emotional engagement with music, as well as with performance on a facial emotion recognition task. Results are discussed in the context of a process model for emotion discrimination in music, and psychometric properties of the MEDT are provided. The MEDT is freely available for research use.
Affiliations
- Chloe MacGregor
  - Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- Daniel Müllensiefen
  - Department of Psychology, Goldsmiths, University of London, London, United Kingdom