1
Stamkou E, Keltner D, Corona R, Aksoy E, Cowen AS. Emotional palette: a computational mapping of aesthetic experiences evoked by visual art. Sci Rep 2024;14:19932. [PMID: 39198545; PMCID: PMC11358466; DOI: 10.1038/s41598-024-69686-9]
Abstract
Despite the evolutionary history and cultural significance of visual art, the structure of aesthetic experiences it evokes has only recently attracted scientific attention. What kinds of experience does visual art evoke? Guided by Semantic Space Theory, we identify the concepts that most precisely describe people's aesthetic experiences using new computational techniques. Participants viewed 1457 artworks sampled from diverse cultural and historical traditions and reported on the emotions they felt and their perceived artwork qualities. Results show that aesthetic experiences are high-dimensional, comprising 25 categories of feeling states. Extending well beyond hedonism and broad evaluative judgments (e.g., pleasant/unpleasant), aesthetic experiences involve emotions of daily social living (e.g., "sad", "joy"), the imagination (e.g., "psychedelic", "mysterious"), profundity (e.g., "disgust", "awe"), and perceptual qualities attributed to the artwork (e.g., "whimsical", "disorienting"). Aesthetic emotions and perceptual qualities jointly predict viewers' liking of the artworks, indicating that we conceptualize aesthetic experiences in terms of both the emotions we feel and the qualities we perceive in the artwork. Aesthetic experiences are often mixed and lie along continuous gradients between categories rather than within discrete clusters. Our collection of artworks is visualized within an interactive map ( https://barradeau.com/2021/emotions-map/ ), revealing the high-dimensional space of aesthetic experiences associated with visual art.
Affiliation(s)
- Eftychia Stamkou
- Department of Psychology, University of Amsterdam, 1001 NK, Amsterdam, The Netherlands.
- Dacher Keltner
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Rebecca Corona
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Eda Aksoy
- Google Arts and Culture, 75009, Paris, France
- Alan S Cowen
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Hume AI, New York, NY, 10010, USA
2
Qiu L, Wan X. Nature's beauty versus urban bustle: Chinese folk music influences food choices by inducing mental imagery of different scenes. Appetite 2024;199:107507. [PMID: 38768925; DOI: 10.1016/j.appet.2024.107507]
Abstract
Previous research has demonstrated that music can impact people's food choices by triggering emotional states. We reported two virtual reality (VR) experiments designed to examine how Chinese folk music influences people's food choices by inducing mental imagery of different scenes. In both experiments, young healthy Chinese participants were asked to select three dishes from an assortment of two meat and two vegetable dishes while listening to Chinese folk music that could elicit mental imagery of nature or urban scenes. The results of Experiment 1 revealed that they chose vegetable-forward meals more frequently while listening to Chinese folk music eliciting mental imagery of nature versus urban scenes. In Experiment 2, the participants were randomly divided into three groups, in which the prevalence of their mental imagery was enhanced, moderately suppressed, or strongly suppressed by performing different tasks while listening to the music pieces. We replicated the results of Experiment 1 when the participants' mental imagery was enhanced, whereas no such effect was observed when the participants' mental imagery was moderately or strongly suppressed. Collectively, these findings suggest that music may influence the food choices people make in virtual food choice tasks by inducing mental imagery, which provides insights into utilizing environmental cues to promote healthier food choices.
Affiliation(s)
- Linbo Qiu
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
- Xiaoang Wan
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China.
3
Cowen AS, Brooks JA, Prasad G, Tanaka M, Kamitani Y, Kirilyuk V, Somandepalli K, Jou B, Schroff F, Adam H, Sauter D, Fang X, Manokara K, Tzirakis P, Oh M, Keltner D. How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan. Front Psychol 2024;15:1350631. [PMID: 38966733; PMCID: PMC11223574; DOI: 10.3389/fpsyg.2024.1350631]
Abstract
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, along with culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.
Affiliation(s)
- Alan S. Cowen
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Jeffrey A. Brooks
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Misato Tanaka
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yukiyasu Kamitani
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Krishna Somandepalli
- Google Research, Mountain View, CA, United States
- Department of Electrical Engineering, University of Southern California, Los Angeles, CA, United States
- Brendan Jou
- Google Research, Mountain View, CA, United States
- Hartwig Adam
- Google Research, Mountain View, CA, United States
- Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xia Fang
- Zhejiang University, Zhejiang, China
- Kunalan Manokara
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Moses Oh
- Hume AI, New York, NY, United States
- Dacher Keltner
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
4
Hannon E, Snyder J. What rhythm production can tell us about culture. Trends Cogn Sci 2024;28:487-488. [PMID: 38664158; DOI: 10.1016/j.tics.2024.04.004]
Abstract
Jacoby and colleagues used an iterative rhythm reproduction paradigm with listeners from around the world to provide evidence both for rhythm universals (simple-integer ratios such as 1:1 and 2:1) and for cross-cultural variation in specific rhythmic categories that can be linked to local music traditions in different regions of the world.
5
Alberhasky M, Durkee PK. Songs tell a story: The Arc of narrative for music. PLoS One 2024;19:e0303188. [PMID: 38753825; PMCID: PMC11098490; DOI: 10.1371/journal.pone.0303188]
Abstract
Research suggests that a core lexical structure characterized by words that define plot staging, plot progression, and cognitive tension underlies written narratives. Here, we investigate the extent to which song lyrics follow this underlying narrative structure. Using a text analytic approach and two publicly available datasets of song lyrics including a larger dataset (N = 12,280) and a smaller dataset of greatest hits (N = 2,823), we find that music lyrics tend to exhibit a core Arc of Narrative structure: setting the stage at the beginning, progressing the plot steadily until the end of the song, and peaking in cognitive tension in the middle. We also observe differences in narrative structure based on musical genre, suggesting different genres set the scene in greater detail (Country, Rap) or progress the plot faster and have a higher rate of internal conflict (Pop). These findings add to the evidence that storytelling exhibits predictable language patterns and that storytelling is evident in music lyrics.
Affiliation(s)
- Max Alberhasky
- Department of Marketing, California State University Long Beach, Long Beach, CA, United States of America
- Patrick K. Durkee
- Department of Psychology, California State University Fresno, Fresno, CA, United States of America
6
Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci 2024;1535:121-136. [PMID: 38566486; DOI: 10.1111/nyas.15131]
Abstract
While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
Affiliation(s)
- Ellie Bean Abrams
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richa Namballa
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richard He
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- David Poeppel
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
7
Wu D, Jia X, Rao W, Dou W, Li Y, Li B. Construction of a Chinese traditional instrumental music dataset: A validated set of naturalistic affective music excerpts. Behav Res Methods 2024;56:3757-3778. [PMID: 38702502; PMCID: PMC11133124; DOI: 10.3758/s13428-024-02411-6]
Abstract
Music is omnipresent among human cultures and moves us both physically and emotionally. The perception of emotions in music is influenced by both psychophysical and cultural factors. Chinese traditional instrumental music differs significantly from Western music in cultural origin and music elements. However, previous studies on music emotion perception are based almost exclusively on Western music. Therefore, the construction of a dataset of Chinese traditional instrumental music is important for exploring the perception of music emotions in the context of Chinese culture. The present dataset included 273 10-second naturalistic music excerpts. We provided rating data for each excerpt on ten variables: familiarity, dimensional emotions (valence and arousal), and discrete emotions (anger, gentleness, happiness, peacefulness, sadness, solemnness, and transcendence). The excerpts were rated by a total of 168 participants on a seven-point Likert scale for the ten variables. Three labels for the excerpts were obtained: familiarity, discrete emotion, and cluster. Our dataset demonstrates good reliability, and we believe it could contribute to cross-cultural studies on emotional responses to music.
Affiliation(s)
- Di Wu
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Xi Jia
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Wenxin Rao
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Wenjie Dou
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Yangping Li
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- School of Foreign Studies, Xi'an Jiaotong University, Xi'an, 710049, China
- Baoming Li
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
8
Strauss H, Vigl J, Jacobsen PO, Bayer M, Talamini F, Vigl W, Zangerle E, Zentner M. The Emotion-to-Music Mapping Atlas (EMMA): A systematically organized online database of emotionally evocative music excerpts. Behav Res Methods 2024;56:3560-3577. [PMID: 38286947; PMCID: PMC11133078; DOI: 10.3758/s13428-024-02336-0]
Abstract
Selecting appropriate musical stimuli to induce specific emotions represents a recurring challenge in music and emotion research. Most existing stimuli have been categorized according to taxonomies derived from general emotion models (e.g., basic emotions, affective circumplex), have been rated for perceived emotions, and are rarely defined in terms of interrater agreement. To redress these limitations, we present research that served in the development of a new interactive online database, including an initial set of 364 music excerpts from three different genres (classical, pop, and hip/hop) that were rated for felt emotion using the Geneva Emotion Music Scale (GEMS), a music-specific emotion scale. The sample comprised 517 English- and German-speaking participants and each excerpt was rated by an average of 28.76 participants (SD = 7.99). Data analyses focused on research questions that are of particular relevance for musical database development, notably the number of raters required to obtain stable estimates of emotional effects of music and the adequacy of the GEMS as a tool for describing music-evoked emotions across three prominent music genres. Overall, our findings suggest that 10-20 raters are sufficient to obtain stable estimates of emotional effects of music excerpts in most cases, and that the GEMS shows promise as a valid and comprehensive annotation tool for music databases.
Affiliation(s)
- Hannah Strauss
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
- Julia Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Peer-Ole Jacobsen
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Martin Bayer
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Francesca Talamini
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Wolfgang Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Eva Zangerle
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria.
9
Brooks JA, Kim L, Opara M, Keltner D, Fang X, Monroy M, Corona R, Tzirakis P, Baird A, Metrick J, Taddesse N, Zegeye K, Cowen AS. Deep learning reveals what facial expressions mean to people in different cultures. iScience 2024;27:109175. [PMID: 38433918; PMCID: PMC10906517; DOI: 10.1016/j.isci.2024.109175]
Abstract
Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries. Using a mimicry paradigm, 5,833 participants formed facial expressions found in 4,659 naturalistic images, resulting in 423,193 participant-generated facial expressions. In their own language, participants also rated each expression in terms of 48 emotions and mental states. A deep neural network tasked with predicting the culture-specific meanings people attributed to facial movements while ignoring physical appearance and context discovered 28 distinct dimensions of facial expression, with 21 dimensions showing strong evidence of universality and the remainder showing varying degrees of cultural specificity. These results capture the underlying dimensions of the meanings of facial expressions within and across cultures in unprecedented detail.
Affiliation(s)
- Jeffrey A. Brooks
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Lauren Kim
- Research Division, Hume AI, New York, NY 10010, USA
- Dacher Keltner
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Xia Fang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang, China
- Maria Monroy
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Rebecca Corona
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Alice Baird
- Research Division, Hume AI, New York, NY 10010, USA
- Alan S. Cowen
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
10
Zhang Z, Fort JM, Giménez Mateu L. Decoding emotional responses to AI-generated architectural imagery. Front Psychol 2024;15:1348083. [PMID: 38533213; PMCID: PMC10963507; DOI: 10.3389/fpsyg.2024.1348083]
Abstract
Introduction
The integration of AI in architectural design represents a significant shift toward creating emotionally resonant spaces. This research investigates AI's ability to evoke specific emotional responses through architectural imagery and examines the impact of professional training on emotional interpretation.
Methods
We utilized Midjourney AI software to generate images based on direct and metaphorical prompts across two architectural settings: home interiors and museum exteriors. A survey was designed to capture participants' emotional responses to these images, employing a scale that rated their immediate emotional reaction. The study involved 789 university students, categorized into architecture majors (Group A) and non-architecture majors (Group B), to explore differences in emotional perception attributable to educational background.
Results
Findings revealed that AI is particularly effective in depicting joy, especially in interior settings. However, it struggles to accurately convey negative emotions, indicating a gap in AI's emotional range. Architecture students exhibited a greater sensitivity to emotional nuances in the images compared to non-architecture students, suggesting that architectural training enhances emotional discernment. Notably, the study observed minimal differences in the perception of emotions between direct and metaphorical prompts among architecture students, indicating a consistent emotional interpretation across prompt types.
Conclusion
AI holds significant promise in creating spaces that resonate on an emotional level, particularly in conveying positive emotions like joy. The study contributes to the understanding of AI's role in architectural design, emphasizing the importance of emotional intelligence in creating spaces that reflect human experiences. Future research should focus on expanding AI's emotional range and further exploring the impact of architectural training on emotional perception.
Affiliation(s)
- Josep M. Fort
- Escola Tècnica Superior d'Arquitectura de Barcelona, Universitat Politècnica de Catalunya, Barcelona, Spain
11
Curwen C, Timmers R, Schiavio A. Action, emotion, and music-colour synaesthesia: an examination of sensorimotor and emotional responses in synaesthetes and non-synaesthetes. Psychol Res 2024;88:348-362. [PMID: 37453940; PMCID: PMC10857979; DOI: 10.1007/s00426-023-01856-2]
Abstract
Synaesthesia has been conceptualised as a joining of sensory experiences. Taking a holistic, embodied perspective, we investigate in this paper the role of action and emotion, testing the hypotheses that (1) changes to action-related qualities of a musical stimulus affect the resulting synaesthetic experience; (2) a comparable relationship exists between music, sensorimotor, and emotional responses in synaesthetes and the general population; and (3) sensorimotor responses are more strongly associated with synaesthesia than emotion. 29 synaesthetes and 33 non-synaesthetes listened to 12 musical excerpts performed on a musical instrument they had first-hand experience playing, on an instrument they had never played before, or in a deadpan performance generated by notation software, i.e., a performance without expression. They evaluated the intensity of their experience of the music using a list of dimensions relating to sensorimotor, emotional, or synaesthetic sensations. Results demonstrated that the intensity of listeners' responses was most strongly influenced by whether or not the music was performed by a human, more so than by familiarity with a particular instrument. Furthermore, our findings reveal a shared relationship between emotional and sensorimotor responses among both synaesthetes and non-synaesthetes. Yet it was sensorimotor intensity that was shown to be fundamentally associated with the intensity of the synaesthetic response. Overall, the research argues for, and gives first evidence of, a key role of action in shaping the experiences of music-colour synaesthesia.
Affiliation(s)
- Caroline Curwen
- Department of Music, The University of Sheffield, Jessop Building, 34 Leavygreave Road, Sheffield, S3 7RD, UK.
- Renee Timmers
- Department of Music, The University of Sheffield, Jessop Building, 34 Leavygreave Road, Sheffield, S3 7RD, UK
- Andrea Schiavio
- School of Arts and Creative Technologies, University of York, Sally Baldwin Building D, York, YO10 5DD, UK
12
Putkinen V, Zhou X, Gan X, Yang L, Becker B, Sams M, Nummenmaa L. Bodily maps of musical sensations across cultures. Proc Natl Acad Sci U S A 2024;121:e2308859121. [PMID: 38271338; PMCID: PMC10835118; DOI: 10.1073/pnas.2308859121]
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.
Affiliation(s)
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku 20520, Finland
- Turku Institute for Advanced Studies, Department of Psychology, University of Turku, Turku 20014, Finland
- Xinqi Zhou
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
- Xianyang Gan
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
- MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Linyu Yang
- College of Mathematics, Sichuan University, Chengdu 610064, China
- Benjamin Becker
- State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong, China
- Department of Psychology, The University of Hong Kong, Hong Kong, China
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo 00076, Finland
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku 20520, Finland
- Department of Psychology, University of Turku, Turku 20520, Finland
13
Monno Y, Nawa NE, Yamagishi N. Duration of mood effects following a Japanese version of the mood induction task. PLoS One 2024;19:e0293871. [PMID: 38180997; PMCID: PMC10769078; DOI: 10.1371/journal.pone.0293871]
Abstract
Researchers have employed a variety of methodologies to induce positive and negative mood states in study participants to investigate the influence that mood has on psychological, physiological, and cognitive processes both in health and illness. Here, we investigated the effectiveness and the duration of mood effects following the mood induction task (MIT), a protocol that combines mood-inducing sentences, auditory stimuli, and autobiographical memory recall in a cohort of healthy Japanese adult individuals. In Study 1, we translated and augmented the mood-inducing sentences originally proposed by Velten in 1968 and verified that people perceived the translations as being largely congruent with the valence of the original sentences. In Study 2, we developed a Japanese version of the mood induction task (J-MIT) and examined its effectiveness using an online implementation. Results based on data collected immediately after induction showed that the J-MIT was able to modulate the mood in the intended direction. However, mood effects were not observed during the subsequent performance of a cognitive task, the Tower of London task, suggesting that the effects did not persist long enough. Overall, the current results show that mood induction procedures such as the J-MIT can alter the mood of study participants in the short term; however, at the same time, they highlight the need to further examine how mood effects evolve and persist through time to better understand how mood induction protocols can be used to study affective processes more effectively.
Affiliation(s)
- Yasunaga Monno
- Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Ibaraki, Osaka, Japan
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Norberto Eiji Nawa
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Noriko Yamagishi
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- College of Global Liberal Arts, Ritsumeikan University, Ibaraki, Osaka, Japan
14
Parada-Cabaleiro E, Batliner A, Zentner M, Schedl M. Exploring emotions in Bach chorales: a multi-modal perceptual and data-driven study. R Soc Open Sci 2023; 10:230574. [PMID: 38126059 PMCID: PMC10731325 DOI: 10.1098/rsos.230574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/30/2023] [Accepted: 11/20/2023] [Indexed: 12/23/2023]
Abstract
The relationship between music and emotion has been addressed within several disciplines, from more historico-philosophical and anthropological ones, such as musicology and ethnomusicology, to others that are traditionally more empirical and technological, such as psychology and computer science. Yet, understanding of the link between music and emotion is limited by the scarce interconnections between these disciplines. Trying to narrow this gap, this data-driven exploratory study aims to assess the relationship between linguistic, symbolic, and acoustic features (extracted from lyrics, music notation, and audio recordings) and the perception of emotion. Employing a listening experiment, statistical analysis, and unsupervised machine learning, we investigate how a data-driven multi-modal approach can be used to explore the emotions conveyed by eight Bach chorales. Through a feature selection strategy based on a set of more than 300 Bach chorales and a transdisciplinary methodology integrating approaches from psychology, musicology, and computer science, we aim to initiate an efficient dialogue between disciplines, one able to promote a more integrative and holistic understanding of emotions in music.
Affiliation(s)
- Emilia Parada-Cabaleiro
- Institute of Computational Perception, Johannes Kepler University Linz, Linz, Austria
- Human-Centered AI Group, AI Laboratory, Linz Institute of Technology (LIT), Linz, Austria
- Department of Music Pedagogy, Nuremberg University of Music, Nuremberg, Germany
- Anton Batliner
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Markus Schedl
- Institute of Computational Perception, Johannes Kepler University Linz, Linz, Austria
- Human-Centered AI Group, AI Laboratory, Linz Institute of Technology (LIT), Linz, Austria
15
Hou Y, Ren Q, Zhang H, Mitchell A, Aletta F, Kang J, Botteldooren D. AI-based soundscape analysis: Jointly identifying sound sources and predicting annoyance. J Acoust Soc Am 2023; 154:3145-3157. [PMID: 37966335 DOI: 10.1121/10.0022408] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/22/2023] [Accepted: 10/31/2023] [Indexed: 11/16/2023]
Abstract
Soundscape studies typically attempt to capture the perception and understanding of sonic environments by surveying users. However, for long-term monitoring or assessing interventions, sound-signal-based approaches are required. To this end, most previous research focused on psycho-acoustic quantities or automatic sound recognition. Few attempts were made to include appraisal (e.g., in circumplex frameworks). This paper proposes an artificial intelligence (AI)-based dual-branch convolutional neural network with cross-attention-based fusion (DCNN-CaF) for automatic soundscape characterization, including sound recognition and appraisal. Using the DeLTA dataset, which contains human-annotated sound source labels and perceived annoyance ratings, the DCNN-CaF performs sound source classification (SSC) and human-perceived annoyance rating prediction (ARP). Experimental findings indicate that (1) the proposed DCNN-CaF using loudness and Mel features outperforms the DCNN-CaF using only one of them; (2) the proposed DCNN-CaF with cross-attention fusion outperforms other typical AI-based models and soundscape-related traditional machine learning methods on the SSC and ARP tasks; (3) correlation analysis reveals that the relationship between sound sources and annoyance is similar for humans and the proposed AI-based DCNN-CaF model; and (4) generalization tests show that the proposed model's ARP in the presence of model-unknown sound sources is consistent with expert expectations and can explain previous findings from the literature on soundscape augmentation.
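The abstract gives only a high-level description of the cross-attention fusion between the loudness and Mel branches. As an illustrative sketch only (not the authors' implementation; all names, dimensions, and the random projections here are hypothetical), bidirectional cross-attention fusion of two branch feature sequences can be written as:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat, seed=0):
    """Scaled dot-product cross-attention: positions of one branch (queries)
    attend over the feature sequence of the other branch (keys/values)."""
    d = q_feat.shape[1]
    rng = np.random.default_rng(seed)
    # stand-ins for learned projection matrices
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = q_feat @ Wq, kv_feat @ Wk, kv_feat @ Wv
    weights = softmax(Q @ K.T / np.sqrt(d))   # (Tq, Tk), each row sums to 1
    return weights @ V                        # (Tq, d)

def fuse_branches(loudness_feat, mel_feat):
    """Bidirectional cross-attention, then mean-pooling into one joint
    embedding that shared SSC and ARP heads could consume."""
    l2m = cross_attention(loudness_feat, mel_feat, seed=0)
    m2l = cross_attention(mel_feat, loudness_feat, seed=1)
    return np.concatenate([l2m.mean(axis=0), m2l.mean(axis=0)])
```

In the real DCNN-CaF the projections are learned end-to-end rather than sampled; the sketch only shows how the two branches exchange information before the classification and regression heads.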
Affiliation(s)
- Yuanbo Hou
- Wireless, Acoustics, Environmental, and Expert Systems Research Group, Department of Information Technology, Ghent University, Gent, 9052, Belgium
- Qiaoqiao Ren
- AI and Robotics, Internet Technology and Data Science Lab, Department of Electronics and Information Systems, Interuniversity Microelectronics Centre, Ghent University, Gent, 9052, Belgium
- Huizhong Zhang
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Andrew Mitchell
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Francesco Aletta
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Jian Kang
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Dick Botteldooren
- Wireless, Acoustics, Environmental, and Expert Systems Research Group, Department of Information Technology, Ghent University, Gent, 9052, Belgium
16
Korsmit IR, Montrey M, Wong-Min AYT, McAdams S. A comparison of dimensional and discrete models for the representation of perceived and induced affect in response to short musical sounds. Front Psychol 2023; 14:1287334. [PMID: 38023037 PMCID: PMC10644370 DOI: 10.3389/fpsyg.2023.1287334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 10/09/2023] [Indexed: 12/01/2023] Open
Abstract
Introduction: In musical affect research, there is considerable discussion on the best method to represent affective response. This discussion mainly revolves around the dimensional (valence, tension arousal, energy arousal) and discrete (anger, fear, sadness, happiness, tenderness) models of affect. Here, we compared these models' ability to capture self-reported affect in response to short, affectively ambiguous sounds. Methods: In two online experiments (n1 = 263, n2 = 152), participants rated perceived and induced affect in response to single notes (Exp 1) and chromatic scales (Exp 2), which varied across instrument family and pitch register. Additionally, participants completed questionnaires measuring pre-existing mood, trait empathy, Big-Five personality, musical sophistication, and musical preferences. Results: Rater consistency and agreement were high across all affect scales. Correlation and principal component analyses showed that two dimensions or two affect categories captured most of the variation in affective response. Canonical correlation and regression analyses also showed that energy arousal varied in a manner that was not captured by discrete affect ratings. Furthermore, all sources of individual differences were moderately correlated with all affect scales, particularly pre-existing mood and dimensional affect. Discussion: We conclude that when it comes to single notes and chromatic scales, the dimensions of valence and energy arousal best capture the perceived and induced affective response to affectively ambiguous sounds, although the role of individual differences should also be considered.
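The claim that two components capture most of the variation comes from principal component analysis. A minimal sketch of that check (illustrative only; the function name is hypothetical and the authors' exact preprocessing is not reproduced here) computes the variance explained via the singular values of the centred rating matrix:

```python
import numpy as np

def pc_variance_explained(ratings, n_components=2):
    """Fraction of total variance captured by the first n principal
    components of a participants-by-affect-scales rating matrix."""
    X = ratings - ratings.mean(axis=0)       # centre each affect scale
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    var = s ** 2                             # variance along each component
    return var[:n_components].sum() / var.sum()
```

A value near 1 for `n_components=2` is what the abstract's "two dimensions captured most of the variation" amounts to.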
Affiliation(s)
- Iza Ray Korsmit
- Music Research Department, Schulich School of Music, McGill University, Montreal, QC, Canada
- Marcel Montrey
- Department of Psychology, McGill University, Montreal, QC, Canada
- Stephen McAdams
- Music Research Department, Schulich School of Music, McGill University, Montreal, QC, Canada
17
Xiao X, Tan J, Liu X, Zheng M. The dual effect of background music on creativity: perspectives of music preference and cognitive interference. Front Psychol 2023; 14:1247133. [PMID: 37868605 PMCID: PMC10588669 DOI: 10.3389/fpsyg.2023.1247133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 09/19/2023] [Indexed: 10/24/2023] Open
Abstract
Music, an influential environmental factor, significantly shapes cognitive processing and everyday experiences, thus rendering its effects on creativity a dynamic topic within the field of cognitive science. However, debates continue about whether music bolsters, obstructs, or exerts a dual influence on individual creativity. Among the points of contention is the impact of contrasting musical emotions-both positive and negative-on creative tasks. In this study, we focused on traditional Chinese music, drawn from a culture known for its 'preference for sadness,' as our selected emotional stimulus and background music. This choice, underrepresented in previous research, was based on its uniqueness. We examined the effects of differing music genres (including vocal and instrumental), each characterized by a distinct emotional valence (positive or negative), on performance in the Alternative Uses Task (AUT). To conduct this study, we utilized an affective arousal paradigm, with a quiet background serving as a neutral control setting. A total of 114 participants were randomly assigned to three distinct groups after completing a music preference questionnaire: instrumental, vocal, and silent. Our findings showed that when compared to a quiet environment, both instrumental and vocal music as background stimuli significantly affected AUT performance. Notably, music with a negative emotional charge bolstered individual originality in creative performance. These results lend support to the dual role of background music in creativity, with instrumental music appearing to enhance creativity through factors such as emotional arousal, cognitive interference, music preference, and psychological restoration. This study challenges conventional understanding that only positive background music boosts creativity and provides empirical validation for the two-path model (positive and negative) of emotional influence on creativity.
Affiliation(s)
- Xinyao Xiao
- China Institute of Music Mental Health, Chongqing, China
- School of Music, Southwest University, Chongqing, China
- Junying Tan
- Guizhou University of Finance and Economics, Guiyang, China
- Xiaolin Liu
- China Institute of Music Mental Health, Chongqing, China
- School of Psychology, Southwest University, Chongqing, China
- Maoping Zheng
- China Institute of Music Mental Health, Chongqing, China
- School of Music, Southwest University, Chongqing, China
18
Yurdum L, Singh M, Glowacki L, Vardy T, Atkinson QD, Hilton CB, Sauter D, Krasnow MM, Mehr SA. Universal interpretations of vocal music. Proc Natl Acad Sci U S A 2023; 120:e2218593120. [PMID: 37676911 PMCID: PMC10500275 DOI: 10.1073/pnas.2218593120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Accepted: 06/21/2023] [Indexed: 09/09/2023] Open
Abstract
Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.
Affiliation(s)
- Lidya Yurdum
- Child Study Center, Yale University, New Haven, CT 06520
- Department of Psychology, University of Amsterdam, Amsterdam 1018 WT, Netherlands
- Manvir Singh
- Department of Anthropology, University of California, Davis, Davis, CA 95616
- Luke Glowacki
- Department of Anthropology, Boston University, Boston, MA 02215
- Thomas Vardy
- School of Psychology, University of Auckland, Auckland 1010, New Zealand
- Disa Sauter
- Department of Psychology, University of Amsterdam, Amsterdam 1018 WT, Netherlands
- Max M. Krasnow
- Division of Continuing Education, Harvard University, Cambridge, MA 02138
- Samuel A. Mehr
- Child Study Center, Yale University, New Haven, CT 06520
- School of Psychology, University of Auckland, Auckland 1010, New Zealand
19
Wang X, Huang W. Determining the role of music attitude and its precursors in stimulating the psychological wellbeing of immigrants during COVID quarantine - a moderated mediation approach. Front Psychol 2023; 14:1121180. [PMID: 37519375 PMCID: PMC10382205 DOI: 10.3389/fpsyg.2023.1121180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2022] [Accepted: 06/07/2023] [Indexed: 08/01/2023] Open
Abstract
Based on social cognitive theory (SCT), the purpose of this study is to examine the role of music attitude and its essential precursors in stimulating the psychological wellbeing of immigrants in isolation (quarantine) during the COVID pandemic. This study employed quantitative methodology; an online survey was administered to collect sufficient data from 300 immigrants who traveled to China during the pandemic. Data were collected from five centralized quarantine centers situated in different cities in China. Additionally, the valid data set was analyzed using structural equation modeling (SEM) via AMOS 24 and SPSS 24. The results indicate that cognitive (music experience, MEX), environmental (social media peer influence, SPI), and cultural (native music, NM) factors have a direct, significant, and positive effect on music attitude (MA), which further influences immigrants' psychological wellbeing (PW) during their quarantine period. Moreover, in the presence of the mediator (MA), the mediating relationships between MEX and PW, and NM and PW, are positive, significant, and regarded as partial mediation. However, the moderated mediation effects of music type (MT) on MEX-MA-PW and NM-MA-PW were not statistically significant and the corresponding hypotheses were unsupported. This study contributes to the literature on the effectiveness of individuals' music attitude and its associated outcomes, focusing on mental health care in lonely situations such as quarantine during the COVID pandemic. More importantly, this study has raised awareness about music, music attitude, and their beneficial outcomes, such as mental calmness and peacefulness for the general public, particularly during social distancing, isolation, and quarantine in the COVID pandemic situation.
Affiliation(s)
- Xiaokang Wang
- College of Music and Dance, Guizhou Minzu University, Guiyang, Guizhou, China
20
Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nat Rev Psychol 2023; 2:333-346. [PMID: 38143935 PMCID: PMC10745197 DOI: 10.1038/s44159-023-00182-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/30/2023] [Indexed: 12/26/2023]
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh
- Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr
- Yale Child Study Center, Yale University, New Haven, CT, USA
- School of Psychology, University of Auckland, Auckland, New Zealand
21
Kurzom N, Lorenzi I, Mendelsohn A. Increasing the complexity of isolated musical chords benefits concurrent associative memory formation. Sci Rep 2023; 13:7563. [PMID: 37161040 PMCID: PMC10169783 DOI: 10.1038/s41598-023-34345-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Accepted: 04/27/2023] [Indexed: 05/11/2023] Open
Abstract
The effects of background music on learning and memory are inconsistent, partially due to the intrinsic complexity and diversity of music, as well as variability in music perception and preference. By stripping down musical harmony to its building blocks, namely discrete chords, we explored their effects on memory formation of unfamiliar word-image associations. Chords, defined as two or more simultaneously played notes, differ in the number of tones and inter-tone intervals, yielding varying degrees of harmonic complexity, which translate into a continuum of consonance to dissonance percepts. In the current study, participants heard four different types of musical chords (major, minor, medium complex, and high complex chords) while they learned new word-image pairs of a foreign language. One day later, their memory for the word-image pairs was tested, along with a chord rating session, in which they were required to assess the musical chords in terms of perceived valence, tension, and the extent to which the chords grabbed their attention. We found that musical chords containing dissonant elements were associated with higher memory performance for the word-image pairs compared with consonant chords. Moreover, tension positively mediated the relationship between roughness (a key feature of complexity) and memory, while valence negatively mediated this relationship. The reported findings are discussed in light of the effects that basic musical features have on tension and attention, in turn affecting cognitive processes of associative learning.
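The reported mediation (tension mediating the roughness-memory link, valence mediating it negatively) rests on a product-of-coefficients logic. A toy ordinary-least-squares sketch of that logic (illustrative only; the function names are hypothetical and the authors' actual mediation estimators are not specified here):

```python
import numpy as np

def ols_coefs(X, y):
    """Least-squares coefficients for y ~ X (X already includes an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation: a is the x->m path, b the m->y path
    controlling for x; the indirect effect is a*b, the direct effect c'."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    ones = np.ones_like(x)
    a = ols_coefs(np.column_stack([ones, x]), m)[1]
    _, c_prime, b = ols_coefs(np.column_stack([ones, x, m]), y)
    return {"a": a, "b": b, "indirect": a * b, "direct": c_prime}
```

In the study's terms, `x` would be chord roughness, `m` the rated tension (or valence), and `y` the memory score; a non-zero indirect effect with a small direct effect is the signature of mediation.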
Affiliation(s)
- Nawras Kurzom
- Sagol Department of Neurobiology, University of Haifa, Haifa, Israel
- The Institute of Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel
- Ilaria Lorenzi
- Sagol Department of Neurobiology, University of Haifa, Haifa, Israel
- The Institute of Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel
- Department of Biology, University of Pisa, Pisa, Italy
- Avi Mendelsohn
- Sagol Department of Neurobiology, University of Haifa, Haifa, Israel
- The Institute of Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel
22
Plate RC, Jones C, Zhao S, Flum MW, Steinberg J, Daley G, Corbett N, Neumann C, Waller R. "But not the music": psychopathic traits and difficulties recognising and resonating with the emotion in music. Cogn Emot 2023; 37:748-762. [PMID: 37104122 DOI: 10.1080/02699931.2023.2205105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 12/23/2022] [Accepted: 04/05/2023] [Indexed: 04/28/2023]
Abstract
Recognising and responding appropriately to emotions is critical to adaptive psychological functioning. Psychopathic traits (e.g. callous, manipulative, impulsive, antisocial) are related to differences in recognition and response when emotion is conveyed through facial expressions and language. Use of emotional music stimuli represents a promising approach to improve our understanding of the specific emotion processing difficulties underlying psychopathic traits because it decouples recognition of emotion from cues directly conveyed by other people (e.g. facial signals). In Experiment 1, participants listened to clips of emotional music and identified the emotional content (Sample 1, N = 196) or reported on their feelings elicited by the music (Sample 2, N = 197). Participants accurately recognised (t(195) = 32.78, p < .001, d = 4.69) and reported feelings consistent with (t(196) = 7.84, p < .001, d = 1.12) the emotion conveyed in the music. However, psychopathic traits were associated with reduced emotion recognition accuracy (F(1, 191) = 19.39, p < .001) and reduced likelihood of feeling the emotion (F(1, 193) = 35.45, p < .001), particularly for fearful music. In Experiment 2, we replicated findings for broad difficulties with emotion recognition (Sample 3, N = 179) and emotional resonance (Sample 4, N = 199) associated with psychopathic traits. Results offer new insight into emotion recognition and response difficulties that are associated with psychopathic traits.
Affiliation(s)
- R C Plate
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- C Jones
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- S Zhao
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- M W Flum
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- J Steinberg
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- G Daley
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- N Corbett
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- C Neumann
- Department of Psychology, University of North Texas, Denton, TX, USA
- R Waller
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
23
The EEG microstate representation of discrete emotions. Int J Psychophysiol 2023; 186:33-41. [PMID: 36773887 DOI: 10.1016/j.ijpsycho.2023.02.002] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 02/03/2023] [Accepted: 02/07/2023] [Indexed: 02/11/2023]
Abstract
Understanding how human emotions are represented in our brain is a central question in the field of affective neuroscience. While previous studies have mainly adopted a modular and static perspective on the neural representation of emotions, emerging research suggests that emotions may rely on a distributed and dynamic representation. The present study aimed to explore the EEG microstate representations for nine discrete emotions (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy and Tenderness). Seventy-eight participants were recruited to watch emotion eliciting videos with their EEGs recorded. Multivariate analysis revealed that different emotions had distinct EEG microstate features. By using the EEG microstate features in the Neutral condition as the reference, the coverage of C, duration of C and occurrence of B were found to be the top-contributing microstate features for the discrete positive and negative emotions. The emotions of Disgust, Fear and Joy were found to be most effectively represented by EEG microstate. The present study provided the first piece of evidence of EEG microstate representation for discrete emotions, highlighting a whole-brain, dynamical representation of human emotions.
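The microstate features named above (coverage, occurrence, duration) are standard summary statistics of a label sequence. A minimal sketch of how they are computed (illustrative only; it assumes a 1-D sequence of backfitted microstate labels sampled at a fixed rate, not the study's actual EEG pipeline):

```python
import numpy as np

def microstate_stats(labels, sfreq):
    """Per-state coverage (fraction of samples), occurrence (segments per
    second), and mean duration (seconds) of a microstate label sequence."""
    labels = np.asarray(labels)
    n = len(labels)
    change = np.flatnonzero(np.diff(labels)) + 1   # indices where the state switches
    starts = np.concatenate(([0], change))          # first sample of each segment
    ends = np.concatenate((change, [n]))            # one past the last sample
    total_time = n / sfreq
    stats = {}
    for state in np.unique(labels):
        runs = (ends - starts)[labels[starts] == state]  # segment lengths for this state
        stats[state] = {
            "coverage": runs.sum() / n,
            "occurrence": len(runs) / total_time,
            "duration": runs.mean() / sfreq,
        }
    return stats
```

Features such as "coverage of C" or "occurrence of B" in the abstract are exactly these quantities computed for the canonical microstate classes.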
24
Tervaniemi M. The neuroscience of music – towards ecological validity. Trends Neurosci 2023; 46:355-364. [PMID: 37012175 DOI: 10.1016/j.tins.2023.03.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 01/28/2023] [Accepted: 03/02/2023] [Indexed: 04/03/2023]
Abstract
Studies in the neuroscience of music gained momentum in the 1990s as an integrated part of the well-controlled experimental research tradition. However, during the past two decades, these studies have moved toward more naturalistic, ecologically valid paradigms. Here, I introduce this move in three frameworks: (i) sound stimulation and empirical paradigms, (ii) study participants, and (iii) methods and contexts of data acquisition. I wish to provide a narrative historical overview of the development of the field and, in parallel, to stimulate innovative thinking to further advance the ecological validity of the studies without overlooking experimental rigor.
Affiliation(s)
- Mari Tervaniemi
- Centre of Excellence in Music, Mind, Body, and Brain, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
25
Lévêque Y, Schellenberg EG, Fornoni L, Bouchet P, Caclin A, Tillmann B. Individuals with congenital amusia remember music they like. Cogn Affect Behav Neurosci 2023:10.3758/s13415-023-01084-6. [PMID: 36949277 DOI: 10.3758/s13415-023-01084-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 02/22/2023] [Indexed: 03/24/2023]
Abstract
Music is better recognized when it is liked. Does this association remain evident when music perception and memory are severely impaired, as in congenital amusia? We tested 11 amusic and 11 matched control participants, asking whether liking of a musical excerpt influences subsequent recognition. In an initial exposure phase, participants (unaware that their recognition would be tested subsequently) listened to 24 musical excerpts and judged how much they liked each excerpt. In the test phase that followed, participants rated whether they recognized the previously heard excerpts, which were intermixed with an equal number of foils matched for mode, tempo, and musical genre. As expected, recognition was in general impaired for amusic participants compared with control participants. For both groups, however, recognition was better for excerpts that were liked, and the liking enhancement did not differ between groups. These results contribute to a growing body of research that examines the complex interplay between emotions and cognitive processes. More specifically, they extend previous findings related to amusics' impairments to a new memory paradigm and suggest that (1) amusic individuals are sensitive to an aesthetic and subjective dimension of the music-listening experience, and (2) emotions can support memory processes even in a population with impaired music perception and memory.
Affiliation(s)
- Yohana Lévêque
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- E Glenn Schellenberg
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Department of Psychology, University of Toronto Mississauga, Mississauga, Canada
- Lesly Fornoni
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Patrick Bouchet
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Anne Caclin
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
26
Nummenmaa L, Hari R. Bodily feelings and aesthetic experience of art. Cogn Emot 2023:1-14. [PMID: 36912601 DOI: 10.1080/02699931.2023.2183180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/14/2023]
Abstract
Humans all around the world are drawn to creating and consuming art due to its capability to evoke emotions, but the mechanisms underlying art-evoked feelings remain poorly characterised. Here we show how embodiment contributes to the emotions evoked by a large set of visual art pieces (n = 336). In four experiments, we mapped the subjective feeling space of art-evoked emotions (n = 244), quantified "bodily fingerprints" of these emotions (n = 615), and recorded the subjects' interest annotations (n = 306) and eye movements (n = 21) while viewing the art. We show that art evokes a wide spectrum of feelings, and that the bodily fingerprints triggered by art are central to these feelings, especially in artworks where human figures are salient. Altogether these results support the model that bodily sensations are central to the aesthetic experience.
Affiliation(s)
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku, Finland; Department of Psychology, University of Turku, Turku, Finland; Turku University Hospital, University of Turku, Turku, Finland
- Riitta Hari
- Department of Art and Media, Aalto University, Espoo, Finland
27
van Rijn P, Larrouy-Maestri P. Modelling individual and cross-cultural variation in the mapping of emotions to speech prosody. Nat Hum Behav 2023; 7:386-396. [PMID: 36646838 PMCID: PMC10038802 DOI: 10.1038/s41562-022-01505-5]
Abstract
The existence of a mapping between emotions and speech prosody is commonly assumed. We propose a Bayesian modelling framework to analyse this mapping. Our models are fitted to a large collection of intended emotional prosody, yielding more than 3,000 minutes of recordings. Our descriptive study reveals that the mapping within corpora is relatively constant, whereas the mapping varies across corpora. To account for this heterogeneity, we fit a series of increasingly complex models. Model comparison reveals that models taking into account mapping differences across countries, languages, sexes and individuals outperform models that only assume a global mapping. Further analysis shows that differences across individuals, cultures and sexes contribute more to the model prediction than a shared global mapping. Our models, which can be explored in an online interactive visualization, offer a description of the mapping between acoustic features and emotions in prosody.
Affiliation(s)
- Pol van Rijn
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany.
- Pauline Larrouy-Maestri
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck-NYU Center for Language, Music, and Emotion, New York, NY, USA
28
Abstract
How do experiences in nature or in spiritual contemplation or in being moved by music or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes-shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning-that benefit well-being. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
Affiliation(s)
- Maria Monroy
- Department of Psychology, University of California, Berkeley
29
Stamkou E, Brummelman E, Dunham R, Nikolic M, Keltner D. Awe Sparks Prosociality in Children. Psychol Sci 2023; 34:455-467. [PMID: 36745740 DOI: 10.1177/09567976221150616]
Abstract
Rooted in the novel and the mysterious, awe is a common experience in childhood, but research is almost silent with respect to the import of this emotion for children. Awe makes individuals feel small, thereby shifting their attention to the social world. Here, we studied the effects of art-elicited awe on children's prosocial behavior toward an out-group and its unique physiological correlates. In two preregistered studies (Study 1: N = 159, Study 2: N = 353), children between 8 and 13 years old viewed movie clips that elicited awe, joy, or a neutral (control) response. Children who watched the awe-eliciting clip were more likely to spend their time on an effortful task (Study 1) and to donate their experimental earnings (Studies 1 and 2), all toward benefiting refugees. They also exhibited increased respiratory sinus arrhythmia, an index of parasympathetic nervous system activation associated with social engagement. We discuss implications for fostering prosociality by reimagining children's environments to inspire awe at a critical age.
Affiliation(s)
- Eddie Brummelman
- Research Institute of Child Development and Education, University of Amsterdam
- Rohan Dunham
- Department of Psychology, University of Amsterdam
- Milica Nikolic
- Research Institute of Child Development and Education, University of Amsterdam
- Dacher Keltner
- Department of Psychology, University of California, Berkeley
30
Brooks JA, Tzirakis P, Baird A, Kim L, Opara M, Fang X, Keltner D, Monroy M, Corona R, Metrick J, Cowen AS. Deep learning reveals what vocal bursts express in different cultures. Nat Hum Behav 2023; 7:240-250. [PMID: 36577898 DOI: 10.1038/s41562-022-01489-2]
Abstract
Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called 'vocal bursts'. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail.
Affiliation(s)
- Jeffrey A Brooks
- Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
- Alice Baird
- Research Division, Hume AI, New York, NY, USA
- Lauren Kim
- Research Division, Hume AI, New York, NY, USA
- Xia Fang
- Zhejiang University, Hangzhou, China
- Dacher Keltner
- Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
- Maria Monroy
- University of California, Berkeley, Berkeley, CA, USA
- Alan S Cowen
- Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
31
Papatzikis E, Agapaki M, Selvan RN, Pandey V, Zeba F. Quality standards and recommendations for research in music and neuroplasticity. Ann N Y Acad Sci 2023; 1520:20-33. [PMID: 36478395 DOI: 10.1111/nyas.14944]
Abstract
Research on how music influences brain plasticity has gained momentum in recent years. Considering, however, the nonuniform methodological standards implemented, the findings end up being nonreplicable and less generalizable. To address the need for a standardized baseline of research quality, we gathered all the studies in the music and neuroplasticity field in 2019 and appraised their methodological rigor systematically and critically. The aim was to provide a preliminary and, at the minimum, acceptable quality threshold (and, ipso facto, suggested recommendations) whereupon further discussion and development may take place. Quality appraisal was performed on 89 articles by three independent raters, following a standardized scoring system. The raters' scoring was cross-referenced following an inter-rater reliability measure, and further studied by performing multiple ratings comparisons and matrix analyses. The results for methodological quality were at a quite good level (quantitative articles: mean = 0.737, SD = 0.084; qualitative articles: mean = 0.677, SD = 0.144), following a moderate but statistically significant level of agreement between the raters (W = 0.44, χ2 = 117.249, p = 0.020). We conclude that the standards for implementation and reporting are of high quality; however, certain improvements are needed to reach the stringent levels presumed for such an influential interdisciplinary scientific field.
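The agreement statistic reported in this abstract, Kendall's coefficient of concordance (W), is straightforward to compute. Below is a minimal sketch in Python with NumPy; the ratings matrix (three raters scoring five articles) is invented for illustration and is not the study's data:

```python
import numpy as np

def average_ranks(x):
    """Rank the values in x, assigning tied values their average rank."""
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    i = 0
    while i < len(x):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return ranks

def kendalls_w(ratings):
    """Kendall's W for an (m raters x n items) array (no tie correction)."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    rank_matrix = np.vstack([average_ranks(r) for r in ratings])
    rank_sums = rank_matrix.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical raters scoring five articles on a 0-1 quality scale.
scores = [[0.9, 0.7, 0.5, 0.8, 0.6],
          [0.8, 0.6, 0.4, 0.9, 0.5],
          [0.9, 0.8, 0.3, 0.7, 0.4]]
print(round(kendalls_w(scores), 3))
```

W ranges from 0 (no agreement) to 1 (perfect agreement); the study's reported W = 0.44 would fall in the moderate range of this scale.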
Affiliation(s)
- Efthymios Papatzikis
- Department of Early Childhood Education and Care, Oslo Metropolitan University, Oslo, Norway
- Maria Agapaki
- Department of Early Childhood Education and Care, Oslo Metropolitan University, Oslo, Norway
- Rosari Naveena Selvan
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany; Department of Psychology, University of Münster, Münster, Germany
- Fathima Zeba
- School of Humanities and Social Sciences, Manipal Academy of Higher Education Dubai, Dubai, United Arab Emirates
32
Liew K, Uchida Y, Domae H, Koh AHQ. Energetic music is used for anger downregulation: A cross-cultural differentiation of intensity from rhythmic arousal. J Appl Soc Psychol 2022. [DOI: 10.1111/jasp.12951]
Affiliation(s)
- Kongmeng Liew
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
- Yukiko Uchida
- Institute for the Future of Human Society, Kyoto University, Kyoto, Japan
- Hiina Domae
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
- Alethea H. Q. Koh
- Institute for the Future of Human Society, Kyoto University, Kyoto, Japan
33
Gómez-Cañón JS, Gutiérrez-Páez N, Porcaro L, Porter A, Cano E, Herrera-Boyer P, Gkiokas A, Santos P, Hernández-Leo D, Karreman C, Gómez E. TROMPA-MER: an open dataset for personalized music emotion recognition. J Intell Inf Syst 2022. [DOI: 10.1007/s10844-022-00746-0]
Abstract
We present a platform and a dataset to help research on Music Emotion Recognition (MER). We developed the Music Enthusiasts platform aiming to improve the gathering and analysis of the so-called "ground truth" needed as input to MER systems. Firstly, our platform engages participants using citizen science strategies to generate music emotion annotations: it presents didactic information and musical recommendations as incentivization, and collects data regarding demographics, mood, and language from each participant. Participants annotated each music excerpt with single free-text emotion words (in their native language), distinct forced-choice emotion categories, preference, and familiarity. Additionally, participants stated the reasons for each annotation, including those distinctive of emotion perception and emotion induction. Secondly, our dataset was created for personalized MER and contains information from 181 participants, 4721 annotations, and 1161 music excerpts. To showcase the use of the dataset, we present a methodology for personalization of MER models based on active learning. The experiments show evidence that using the judgment of the crowd as prior knowledge for active learning allows for more effective personalization of MER systems for this particular dataset. Our dataset is publicly available and we invite researchers to use it for testing MER systems.
34
Martins MDJD, Baumard N. How to Develop Reliable Instruments to Measure the Cultural Evolution of Preferences and Feelings in History? Front Psychol 2022; 13:786229. [PMID: 35923745 PMCID: PMC9340072 DOI: 10.3389/fpsyg.2022.786229]
Abstract
While we cannot directly measure the psychological preferences of individuals, and the moral, emotional, and cognitive tendencies of people from the past, we can use cultural artifacts as a window to the zeitgeist of societies in particular historical periods. At present, an increasing number of digitized texts spanning several centuries is available for computerized analysis. In addition, developments from historical economics have enabled increasingly precise estimations of sociodemographic realities of the past. Crossing these datasets offers a powerful tool to test how the environment changes psychology and vice versa. However, designing appropriate proxies of relevant psychological constructs is not trivial. The gold standard for measuring psychological constructs in modern texts, the Linguistic Inquiry and Word Count (LIWC), has been validated by psychometric experimentation with modern participants. However, as a tool to investigate the psychology of the past, the LIWC is limited in two main aspects: (1) it does not cover the entire range of relevant psychological dimensions and (2) the meaning, spelling, and pragmatic use of certain words depend on the historical period from which the fiction work is sampled. These limitations make the design of custom tools inevitable. However, without psychometric validation, there is uncertainty regarding what exactly is being measured. To overcome these pitfalls, we suggest several internal and external validation procedures, to be conducted prior to diachronic analyses. First, the semantic adequacy of search terms in bags-of-words approaches should be verified by training semantic vector spaces on the historical text corpus using tools like word2vec. Second, we propose factor analyses to evaluate the internal consistency between distinct bags-of-words proxying the same underlying psychological construct. Third, these proxies can be externally validated using prior knowledge on the differences between genres or other literary dimensions. Finally, while the LIWC is limited in the analysis of historical documents, it can be used as a sanity check for external validation of custom measures. This procedure allows a robust estimation of psychological constructs and how they change throughout history. Together with historical economics, it also increases our power to test the relationship between environmental change and the expression of psychological traits in the past.
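The first validation step this abstract proposes (checking the semantic adequacy of search terms in a vector space trained on the corpus itself) can be illustrated with a toy count-based model. The four-sentence corpus, the window size, and the word choices below are all invented for illustration; a real analysis would train word2vec on the historical texts:

```python
import numpy as np

# Toy "historical" corpus; a real analysis would use the digitized texts.
corpus = [
    "the king ruled the land with honour",
    "the queen ruled the realm with grace",
    "merchants traded goods in the market",
    "the market sold goods and wares",
]
window = 2  # symmetric co-occurrence window

sentences = [s.split() for s in corpus]
vocab = sorted({w for sent in sentences for w in sent})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-word co-occurrence counts serve as crude semantic vectors.
counts = np.zeros((len(vocab), len(vocab)))
for sent in sentences:
    for i, w in enumerate(sent):
        lo, hi = max(0, i - window), min(len(sent), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[index[w], index[sent[j]]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Sanity check for a candidate search term: "king" should sit closer to
# "queen" than to an unrelated term such as "market".
sim_related = cosine(counts[index["king"]], counts[index["queen"]])
sim_unrelated = cosine(counts[index["king"]], counts[index["market"]])
print(sim_related > sim_unrelated)
```

If a candidate search term's nearest neighbours in the corpus-trained space do not match its intended construct, the term's period-specific meaning has likely drifted and it should be revised.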
35
Keltner D, Sauter D, Tracy JL, Wetchler E, Cowen AS. How emotions, relationships, and culture constitute each other: advances in social functionalist theory. Cogn Emot 2022; 36:388-401. [PMID: 35639090 DOI: 10.1080/02699931.2022.2047009]
Abstract
Social Functionalist Theory (SFT) emerged 20 years ago to orient emotion science to the social nature of emotion. Here we expand upon SFT and make the case for how emotions, relationships, and culture constitute one another. First, we posit that emotions enable the individual to meet six "relational needs" within social interactions: security, commitment, status, trust, fairness, and belongingness. Building upon this new theorising, we detail four principles concerning emotional experience, cognition, expression, and the cultural archiving of emotion. We conclude by considering the bidirectional influences between culture, relationships, and emotion, outlining areas of future inquiry.
Affiliation(s)
- Dacher Keltner
- Psychology Department, University of California at Berkeley, Berkeley, CA, USA
- Disa Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Everett Wetchler
- Psychology Department, University of California at Berkeley, Berkeley, CA, USA
- Alan S Cowen
- Psychology Department, University of California at Berkeley, Berkeley, CA, USA
36
Dieterich-Hartwell R, Gilman A, Hecker V. Music in the Practice of Dance/Movement Therapy. Arts Psychother 2022. [DOI: 10.1016/j.aip.2022.101938]
37
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057 DOI: 10.1038/s41583-022-00578-5]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark.
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
38
From aesthetics to ethics: Testing the link between an emotional experience of awe and the motive of quixoteism on (un)ethical behavior. Motiv Emot 2022; 46:508-520. [PMID: 35340283 PMCID: PMC8935891 DOI: 10.1007/s11031-022-09935-4]
Abstract
According to the awe-quixoteism hypothesis, an experience of awe may lead to engagement in challenging actions aimed at increasing the welfare of the world. However, what if the action involves harming one individual? Across four experiments (N = 876), half of the participants were induced to feel awe and the other half a different (pleasant, activating, or neutral-control) emotion, and then decided whether to pursue a prosocial goal (local vs. global). In the first three experiments this decision was assessed through a dilemma that involved sacrificing one individual's life; additionally, in Experiments 2 and 3 we varied the quality of the action (ordinary vs. challenging). In Experiment 4, participants decided whether to perform a real helping action. Overall, in line with the awe-quixoteism hypothesis, the results showed that previously induced awe enhanced the willingness to sacrifice someone (Experiments 1, 2, and 3) or the acceptance to help (Experiment 4) when the decision involved engaging in challenges aimed at improving the welfare of the world.
39
Liu W, Zheng WL, Li Z, Wu SY, Gan L, Lu BL. Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French people. J Neural Eng 2022; 19. [PMID: 35272271 DOI: 10.1088/1741-2552/ac5c8d]
Abstract
OBJECTIVE: Cultures have essential influences on emotions. However, most studies on cultural influences on emotions are in the areas of psychology and neuroscience, while the existing affective models are mostly built with data from the same culture. In this paper, we identify the similarities and differences among Chinese, German, and French individuals in emotion recognition with electroencephalogram (EEG) and eye movements from an affective computing perspective.
APPROACH: Three experimental settings were designed: intraculture subject dependent, intraculture subject independent, and cross-culture subject independent. EEG and eye movements are acquired simultaneously from Chinese, German, and French subjects while watching positive, neutral, and negative movie clips. The affective models for Chinese, German, and French subjects are constructed by using machine learning algorithms. A systematic analysis is performed from four aspects: affective model performance, neural patterns, complementary information from different modalities, and cross-cultural emotion recognition.
MAIN RESULTS: From emotion recognition accuracies, we find that EEG and eye movements can adapt to Chinese, German, and French cultural diversities and that a cultural in-group advantage phenomenon does exist in emotion recognition with EEG. From the topomaps of EEG, we find that the gamma and beta bands exhibit decreasing activities for Chinese, while for German and French, theta and alpha bands exhibit increasing activities. From confusion matrices and attentional weights, we find that EEG and eye movements have complementary characteristics. From a cross-cultural emotion recognition perspective, we observe that German and French people share more similarities in topographical patterns and attentional weight distributions than Chinese people, while the data from Chinese are a good fit for test data but not suitable as training data for the other two cultures.
SIGNIFICANCE: Our experimental results provide concrete evidence of the in-group advantage phenomenon, cultural influences on emotion recognition, and different neural patterns among Chinese, German, and French individuals.
Affiliation(s)
- Wei Liu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Wei-Long Zheng
- Massachusetts General Hospital, Boston, MA, USA
- Ziyi Li
- Shanghai Jiao Tong University, Shanghai 200240, China
- Si-Yuan Wu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lu Gan
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Bao-Liang Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
40
Margulis EH, Wong PCM, Turnbull C, Kubit BM, McAuley JD. Narratives imagined in response to instrumental music reveal culture-bounded intersubjectivity. Proc Natl Acad Sci U S A 2022; 119:e2110406119. [PMID: 35064081 PMCID: PMC8795501 DOI: 10.1073/pnas.2110406119]
Abstract
The scientific literature sometimes considers music an abstract stimulus, devoid of explicit meaning, and at other times considers it a universal language. Here, individuals in three geographically distinct locations spanning two cultures performed a highly unconstrained task: they provided free-response descriptions of stories they imagined while listening to instrumental music. Tools from natural language processing revealed that listeners provide highly similar stories to the same musical excerpts when they share an underlying culture, but when they do not, the generated stories show limited overlap. These results paint a more complex picture of music's power: music can generate remarkably similar stories in listeners' minds, but the degree to which these imagined narratives are shared depends on the degree to which culture is shared across listeners. Thus, music is neither an abstract stimulus nor a universal language but has semantic affordances shaped by culture, requiring more sustained attention from psychology.
Affiliation(s)
- Patrick C M Wong
- Department of Linguistics and Modern Languages, Chinese University of Hong Kong, Hong Kong SAR, China
- Cara Turnbull
- Department of Music, Princeton University, Princeton, NJ 08544
- Benjamin M Kubit
- Department of Psychology, Princeton University, Princeton, NJ 08544
- J Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI 48824
41
Lange EB, Fünderich J, Grimm H. Multisensory integration of musical emotion perception in singing. Psychol Res 2022; 86:2099-2114. [PMID: 35001181 PMCID: PMC9470688 DOI: 10.1007/s00426-021-01637-9]
Abstract
We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness was applied as a control: the uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers' orofacial movements are dominated by sound production, their facial expressions can communicate emotions composed into the music, and observers do not rely on audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.
Affiliation(s)
- Elke B Lange
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany.
- Jens Fünderich
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany; University of Erfurt, Erfurt, Germany
- Hartmut Grimm
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany
42
Wang X, Wei Y, Heng L, McAdams S. A Cross-Cultural Analysis of the Influence of Timbre on Affect Perception in Western Classical Music and Chinese Music Traditions. Front Psychol 2021; 12:732865. [PMID: 34659045 PMCID: PMC8511703 DOI: 10.3389/fpsyg.2021.732865]
Abstract
Timbre is one of the psychophysical cues that has a great impact on affect perception, although it has not been the subject of much cross-cultural research. Our aim is to investigate the influence of timbre on the perception of affect conveyed by Western and Chinese classical music using a cross-cultural approach. Four listener groups (Western musicians, Western nonmusicians, Chinese musicians, and Chinese nonmusicians; 40 per group) were presented with 48 musical excerpts, which included two musical excerpts (one piece of Chinese and one piece of Western classical music) per affect quadrant from the valence-arousal space, representing angry, happy, peaceful, and sad emotions and played with six different instruments (erhu, dizi, pipa, violin, flute, and guitar). Participants reported ratings of valence, tension arousal, energy arousal, preference, and familiarity on continuous scales ranging from 1 to 9. ANOVA reveals that participants' cultural backgrounds have a greater impact on affect perception than their musical backgrounds, and musicians more clearly distinguish between a perceived measure (valence) and a felt measure (preference) than do nonmusicians. We applied linear partial least squares regression to explore the relation between affect perception and acoustic features. The results show that the important acoustic features for valence and energy arousal are similar, relating mostly to spectral variation, the shape of the temporal envelope, and the dynamic range. The important acoustic features for tension arousal describe the shape of the spectral envelope, noisiness, and the shape of the temporal envelope. The similarity of perceived affect ratings between instruments is explained by similar acoustic features arising from the physical characteristics of specific instruments and performing techniques.
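The partial least squares regression step this abstract describes can be sketched with a generic single-response NIPALS implementation. Everything below is illustrative: the data are synthetic, the six "acoustic descriptors" and the coefficients are invented, and this is not the study's actual model:

```python
import numpy as np

def pls1_coefficients(X, y, n_components):
    """Single-response PLS (NIPALS); returns regression coefficients
    that apply to centered X and predict centered y."""
    Xk = X - X.mean(axis=0)
    yk = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)       # weight vector
        t = Xk @ w                   # score vector
        tt = t @ t
        p = Xk.T @ t / tt            # X loading
        c = (yk @ t) / tt            # y loading
        Xk = Xk - np.outer(t, p)     # deflate X
        yk = yk - c * t              # deflate y
        W.append(w)
        P.append(p)
        q.append(c)
    W, P, q = np.column_stack(W), np.column_stack(P), np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
# 48 excerpts x 6 hypothetical acoustic descriptors (e.g., spectral
# variation, temporal-envelope shape, dynamic range) -- synthetic here.
X = rng.normal(size=(48, 6))
# Toy valence ratings driven by the first two descriptors plus noise.
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=48)

B = pls1_coefficients(X, y, n_components=2)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 4))
```

PLS is useful here because acoustic descriptors are typically collinear: it regresses on a few latent components that maximize covariance with the rating rather than on the raw, correlated features.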
Affiliation(s)
- Xin Wang
- School of Music and Recording Art, Communication University of China, Beijing, China
- Yujia Wei
- School of Music and Recording Art, Communication University of China, Beijing, China
- Lena Heng
- Schulich School of Music, McGill University, Montreal, QC, Canada
- Stephen McAdams
- Schulich School of Music, McGill University, Montreal, QC, Canada
43
Thompson-Bell J, Martin A, Hobkinson C. ‘Unusual ingredients’: Developing a cross-domain model for multisensory artistic practice linking food and music. INTERNATIONAL JOURNAL OF FOOD DESIGN 2021. [DOI: 10.1386/ijfd_00032_1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022] Open
Abstract
This article explores linkages between sensory experiences of food and music in light of recent research from gastrophysics, 4E cognition (i.e. embodied, embedded, extended and enactive) and ecological perception theory. Drawing on these research disciplines, it outlines a model for multisensory artistic practice, and a taxonomy of cross-domain creative strategies, based on the identification of sensory affordances between the domains of food and music. Food objects are shown to ‘afford’ cross-domain interrelationships with sound stimuli based on our capacity to sense their material characteristics, and to make sense of them through prior experience and contextual association. We propose that multisensory artistic works can themselves afford extended forms of sensory awareness by synthesizing and mediating stimuli across the selected domains, forming novel or unexpected sensory linkages. These ideas are explored with reference to an ongoing artistic research project entitled ‘Unusual ingredients’, which creates new music to complement and enhance the characteristics of selected food.
Affiliation(s)
- Caroline Hobkinson
- Independent Artist and Fellow, Royal Anthropological Institute
44
Wang X, Wei Y, Yang D. Cross‐cultural analysis of the correlation between musical elements and emotion. COGNITIVE COMPUTATION AND SYSTEMS 2021. [DOI: 10.1049/ccs2.12032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Affiliation(s)
- Xin Wang
- School of Music and Recording Art, Communication University of China, Beijing, China
- Yujia Wei
- School of Music and Recording Art, Communication University of China, Beijing, China
- Dasheng Yang
- School of Music and Recording Art, Communication University of China, Beijing, China
45
Liu X, Liu Y, Shi H, Zheng M. Effects of Mindfulness Meditation on Musical Aesthetic Emotion Processing. Front Psychol 2021; 12:648062. [PMID: 34366968 PMCID: PMC8334183 DOI: 10.3389/fpsyg.2021.648062] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2020] [Accepted: 06/21/2021] [Indexed: 11/21/2022] Open
Abstract
Mindfulness meditation is a form of self-regulatory training for the mind and the body. The relationship between mindfulness meditation and musical aesthetic emotion processing (MAEP) remains unclear. This study aimed to explore the effect of temporary mindfulness meditation on MAEP while listening to Chinese classical folk instrumental works. A 2 (group: mindfulness meditation group [MMG] vs. control group [CG]) × 3 (music emotion: calm, happy, or sad music) mixed experimental design and a convenience sample of university students were used to test our hypotheses, which were based on the premise that temporary mindfulness meditation may affect MAEP (MMG vs. CG). Sixty-seven non-musically trained participants (65.7% female, age range: 18–22 years) were randomly assigned to the two groups. Participants in the MMG received a single 10-min recorded mindfulness meditation training before and while listening to music. Psychological measures comprised the Five Facet Mindfulness Questionnaire (FFMQ) and the Positive and Negative Affect Schedule (PANAS). Self-report results showed no significant between-group differences for the PANAS or for the scores of four FFMQ subscales (p > 0.05 throughout), with the exception of the non-judging of inner experience subscale. Temporary mindfulness meditation training decreased negative emotional experiences of happy and sad music and positive emotional experiences of calm music during recognition and experience, and promoted beautiful musical experiences in individuals with no musical training. Maintaining a state of mindfulness while listening to music enhanced body awareness and led to experiencing a faster passage of musical time. In addition, the Chinese classical folk instrumental works effectively induced aesthetic emotion and produced multidimensional aesthetic experiences among non-musically trained adults. This study provides new insights into the relationship between mindfulness and music emotion.
Affiliation(s)
- Xiaolin Liu
- Key Laboratory of Cognition and Personality (Ministry of Education), Southwest University, Chongqing, China; School of Psychology, Southwest University, Chongqing, China; Research Institute of Aesthetics Psychology of Chinese Classical Music and Basic Theory of Music Performance, Chongqing Institute of Foreign Studies, Chongqing, China
- Yong Liu
- Key Laboratory of Cognition and Personality (Ministry of Education), Southwest University, Chongqing, China; School of Psychology, Southwest University, Chongqing, China
- Huijuan Shi
- Research Institute of Aesthetics Psychology of Chinese Classical Music and Basic Theory of Music Performance, Chongqing Institute of Foreign Studies, Chongqing, China
- Maoping Zheng
- Key Laboratory of Cognition and Personality (Ministry of Education), Southwest University, Chongqing, China; School of Music, Southwest University, Chongqing, China
46
Tervaniemi M, Putkinen V, Nie P, Wang C, Du B, Lu J, Li S, Cowley BU, Tammi T, Tao S. Improved Auditory Function Caused by Music Versus Foreign Language Training at School Age: Is There a Difference? Cereb Cortex 2021; 32:63-75. [PMID: 34265850 PMCID: PMC8634570 DOI: 10.1093/cercor/bhab194] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 05/28/2021] [Accepted: 05/28/2021] [Indexed: 12/03/2022] Open
Abstract
In adults, music and speech share many neurocognitive functions, but how do they interact in a developing brain? We compared the effects of music and foreign language training on auditory neurocognition in Chinese children aged 8–11 years. We delivered group-based training programs in music and foreign language using a randomized controlled trial; a passive control group was also included. Before and after these year-long extracurricular programs, auditory event-related potentials were recorded (n = 123 and 85 before and after the program, respectively). Through these recordings, we probed early auditory predictive brain processes. To our surprise, the language program facilitated the children’s early auditory predictive brain processes significantly more than did the music program. This facilitation was most evident in pitch encoding when the experimental paradigm was musically relevant. When these processes were probed by a paradigm more focused on basic sound features, we found early predictive pitch encoding to be facilitated by music training. Thus, a foreign language program can foster auditory and music neurocognition, at least in tonal-language speakers, in a manner comparable to that of a music program. Our results support a tight coupling of musical and linguistic brain functions in the developing brain as well.
Affiliation(s)
- Mari Tervaniemi
- Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, China
- Vesa Putkinen
- Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Turku PET Centre, University of Turku, Turku, Finland
- Peixin Nie
- Cicero Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Cuicui Wang
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Bin Du
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jing Lu
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Shuting Li
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Benjamin Ultan Cowley
- Faculty of Educational Sciences, University of Helsinki, Finland; Cognitive Science, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Finland
- Tuisku Tammi
- Cognitive Science, Department of Digital Humanities, Faculty of Arts, University of Helsinki, Finland
- Sha Tao
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
47
Leahy J, Kim SG, Wan J, Overath T. An Analytical Framework of Tonal and Rhythmic Hierarchy in Natural Music Using the Multivariate Temporal Response Function. Front Neurosci 2021; 15:665767. [PMID: 34335154 PMCID: PMC8322238 DOI: 10.3389/fnins.2021.665767] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 06/24/2021] [Indexed: 11/13/2022] Open
Abstract
Even without formal training, humans experience a wide range of emotions in response to changes in musical features, such as tonality and rhythm, during music listening. While many studies have investigated how isolated elements of tonal and rhythmic properties are processed in the human brain, it remains unclear whether these findings with such controlled stimuli are generalizable to complex stimuli in the real world. In the current study, we present an analytical framework of a linearized encoding analysis based on a set of music information retrieval features to investigate the rapid cortical encoding of tonal and rhythmic hierarchies in natural music. We applied this framework to a public domain EEG dataset (OpenMIIR) to deconvolve overlapping EEG responses to various musical features in continuous music. In particular, the proposed framework investigated the EEG encoding of the following features: tonal stability, key clarity, beat, and meter. This analysis revealed a differential spatiotemporal neural encoding of beat and meter, but not of tonal stability and key clarity. The results demonstrate that this framework can uncover associations of ongoing brain activity with relevant musical features, which could be further extended to other relevant measures such as time-resolved emotional responses in future studies.
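At its core, the linearized (TRF) encoding analysis is a time-lagged ridge regression of the neural signal on stimulus features. Below is a minimal single-feature sketch on simulated data; the study's actual analysis uses the multivariate mTRF over several music-information-retrieval features and real EEG, so all names and parameters here are illustrative:

```python
import numpy as np

def fit_trf(stimulus, response, lags, alpha=1.0):
    """Ridge-regress response[t] on stimulus[t - lag] for each lag,
    yielding the temporal response function (TRF) weights."""
    n = len(stimulus)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]   # shift forward in time
        else:
            X[:lag, j] = stimulus[-lag:]      # negative (acausal) lag
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)),
                           X.T @ response)

# Simulate "EEG" as a known kernel convolved with a random stimulus.
rng = np.random.default_rng(1)
stim = rng.normal(size=2000)
kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])   # response at lags 0..4
eeg = np.convolve(stim, kernel)[:2000] + rng.normal(scale=0.1, size=2000)

w = fit_trf(stim, eeg, lags=range(5))
# w approximately recovers the generating kernel.
```

Practical TRF toolboxes wrap essentially this estimator, adding cross-validated selection of the regularization strength and support for multiple simultaneous stimulus features.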
Affiliation(s)
- Jasmine Leahy
- Department of Psychology and Neuroscience, Duke University, Durham, NC, United States
- Seung-Goo Kim
- Department of Psychology and Neuroscience, Duke University, Durham, NC, United States
- Jie Wan
- Department of Psychology and Neuroscience, Duke University, Durham, NC, United States; Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, United States
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, NC, United States; Duke Institute for Brain Sciences, Duke University, Durham, NC, United States; Center for Cognitive Neuroscience, Duke University, Durham, NC, United States
48
Jonauskaite D, Sutton A, Cristianini N, Mohr C. English colour terms carry gender and valence biases: A corpus study using word embeddings. PLoS One 2021; 16:e0251559. [PMID: 34061875 PMCID: PMC8168888 DOI: 10.1371/journal.pone.0251559] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 04/29/2021] [Indexed: 11/19/2022] Open
Abstract
In Western societies, the stereotype prevails that pink is for girls and blue is for boys. A third possible gendered colour is red: while liked by women, it represents power, stereotypically a masculine characteristic. Empirical studies have confirmed such gendered connotations when testing colour-emotion associations or colour preferences in males and females. They have also demonstrated that pink is a positive colour, blue is mainly a positive colour, and red is both a positive and a negative colour. Here, we assessed whether the same valence and gender connotations appear in widely available written texts (Wikipedia and newswire articles). Using a word embedding method (GloVe), we extracted gender and valence biases for blue, pink, and red, as well as for the remaining basic colour terms, from a large English-language corpus containing six billion words. We confirmed that pink was biased towards femininity and positivity, and that blue was biased towards positivity. We found no strong gender bias for blue, and no strong gender or valence biases for red. Among the remaining colour terms, only green, white, and brown were positively biased. Our finding on pink shows that writers of widely available English texts use this colour term to convey femininity, and this gendered communication reinforces the notion that results from research studies find their analogue in real-world phenomena. Other findings were either consistent or inconsistent with results from research studies. We argue that widely available written texts carry biases of their own, because they have been filtered according to context, time, and what is appropriate to report.
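One common way to score such biases from embeddings is to compare a colour term's cosine similarity to gendered anchor words. A toy sketch with made-up 4-dimensional vectors follows; real use would load pretrained GloVe vectors and typically average over sets of anchor and valence words, so every vector and anchor below is an illustrative assumption:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up low-dimensional embeddings standing in for pretrained GloVe.
emb = {
    "she":  np.array([ 1.0, 0.1, 0.0, 0.2]),
    "he":   np.array([-1.0, 0.1, 0.0, 0.2]),
    "pink": np.array([ 0.8, 0.3, 0.1, 0.0]),
    "blue": np.array([ 0.0, 0.9, 0.2, 0.1]),
}

def gender_bias(word):
    # Positive values lean feminine, negative lean masculine.
    return cosine(emb[word], emb["she"]) - cosine(emb[word], emb["he"])

print(f"pink: {gender_bias('pink'):+.2f}")  # positive: feminine lean
print(f"blue: {gender_bias('blue'):+.2f}")  # near zero: no gender lean
```

The paper's actual analysis uses high-dimensional GloVe vectors trained on the 6-billion-word corpus and derives bias scores from word sets rather than single anchors, but the projection logic is the same in spirit.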
Affiliation(s)
- Adam Sutton
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Nello Cristianini
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
- Christine Mohr
- Institute of Psychology, University of Lausanne, Lausanne, Switzerland