51. Jiam NT, Caldwell M, Deroche ML, Chatterjee M, Limb CJ. Voice emotion perception and production in cochlear implant users. Hear Res 2017;352:30-39. [PMID: 28088500] [DOI: 10.1016/j.heares.2017.01.006]
Abstract
Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant (CI) users are often forced to interface with highly degraded prosodic cues owing to device constraints in extraction, processing, and transmission. As a result, individuals with cochlear implants frequently have significant difficulty recognizing voice emotions compared with their normal-hearing counterparts. CI-mediated perception and production of voice emotion is an important but relatively understudied area of research, yet a rich understanding of how voice emotion is processed by the auditory system offers opportunities to improve CI design and to develop training programs that benefit CI performance. In this review, we address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users.
Affiliation(s)
- N T Jiam: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M Caldwell: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M L Deroche: Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- M Chatterjee: Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- C J Limb: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
52. Psychophysiological effects of music on acute recovery from high-intensity interval training. Physiol Behav 2016;170:106-114. [PMID: 27989717] [DOI: 10.1016/j.physbeh.2016.12.017]
Abstract
Numerous studies have examined the multifarious effects of music applied during exercise, but few have assessed the efficacy of music as an aid to recovery. Music might facilitate physiological recovery via the entrainment of respiratory rhythms with music tempo. High-intensity exercise training is not typically associated with positive affective responses, so ways of assuaging negative affect warrant further exploration. This study assessed the psychophysiological effects of music on acute recovery, and the prevalence of entrainment, between bouts of high-intensity exercise. Thirteen male runners (mean age = 20.2 ± 1.9 years; BMI = 21.7 ± 1.7; V̇O2max = 61.6 ± 6.1 mL·kg⁻¹·min⁻¹) completed three exercise sessions comprising 5 × 5-min bouts of high-intensity intervals interspersed with 3-min periods of passive recovery. During recovery, participants were administered positively-valenced music of slow tempo (55-65 bpm) or fast tempo (125-135 bpm), or a no-music control. A range of measures, including affective responses, ratings of perceived exertion (RPE), cardiorespiratory indices (gas exchange and pulmonary ventilation), and music tempo-respiratory entrainment, were recorded during exercise and recovery. Fast-tempo, positively-valenced music resulted in higher Feeling Scale scores throughout recovery periods (p < 0.01, ηp² = 0.38). There were significant differences in heart rate during initial recovery periods (p < 0.05, ηp² = 0.16), but no other music-moderated differences in cardiorespiratory responses. In conclusion, fast-tempo, positively-valenced music applied during recovery periods engenders a more pleasant experience, but there is limited evidence that music expedites cardiorespiratory recovery between bouts of high-intensity exercise. These findings have implications for athletic training strategies and for individuals seeking to make high-intensity exercise sessions more pleasant.
53. Common modulation of limbic network activation underlies musical emotions as they unfold. Neuroimage 2016;141:517-529. [DOI: 10.1016/j.neuroimage.2016.07.002]
54. Daydreams and trait affect: The role of the listener's state of mind in the emotional response to music. Conscious Cogn 2016;46:27-35. [PMID: 27677051] [DOI: 10.1016/j.concog.2016.09.014]
Abstract
Music creates room for mind wandering, mental time travel, and departures into more fantastical worlds. We examined the mediating role of daydreams and the moderating function of personality differences in the emotional response to music using a moderated-mediation approach. The results showed that the valence of daydreams mediated the reaction to the musical experience: happy music was related to more positive daydreams, which in turn were associated with greater relaxation during the happy music and greater liking of it. Furthermore, trait negative affect moderated the direct effect of sad vs. happy music on liking: individuals with high negative-affect scores preferred sad music. The results are discussed with regard to the interplay of general and personality-specific processes, which is relevant for better understanding the effects music can have on listeners.
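The mediation logic reported here can be illustrated with a regression-based sketch. This is a minimal illustration rather than the authors' analysis: the input file and the column names (happy_music, daydream_valence, liking, neg_affect) are hypothetical, and the indirect effect is bootstrapped with a simple percentile interval.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: happy_music (0 = sad excerpt, 1 = happy excerpt),
# daydream_valence (mediator), liking (outcome), neg_affect (trait moderator).
df = pd.read_csv("listening_study.csv")  # hypothetical file

def indirect_effect(d):
    # a-path: music condition -> daydream valence
    a = smf.ols("daydream_valence ~ happy_music", d).fit().params["happy_music"]
    # b-path: daydream valence -> liking, controlling for condition
    b = (smf.ols("liking ~ happy_music + daydream_valence", d)
            .fit().params["daydream_valence"])
    return a * b

# Percentile bootstrap for the indirect (mediated) effect.
boot = [indirect_effect(df.sample(len(df), replace=True)) for _ in range(2000)]
print("indirect effect:", indirect_effect(df))
print("95% CI:", np.percentile(boot, [2.5, 97.5]))
# Moderated mediation would additionally enter a happy_music:neg_affect
# interaction into the outcome model.
```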
55. Fernández-Sotos A, Fernández-Caballero A, Latorre JM. Influence of Tempo and Rhythmic Unit in Musical Emotion Regulation. Front Comput Neurosci 2016;10:80. [PMID: 27536232] [PMCID: PMC4971092] [DOI: 10.3389/fncom.2016.00080]
Abstract
This article is based on the assumption that music has the power to change the listener's mood. The paper reports two experiments on the regulation of emotional states in participants who listened to different musical selections. The research focuses on note value, an important musical cue related to rhythm, and analyzes separately, then discusses together, the influence of two concepts linked to note value: tempo and rhythmic unit. Participants were asked to label music fragments using opposite meaningful words belonging to four semantic scales, namely "Tension" (ranging from Relaxing to Stressing), "Expressiveness" (Expressionless to Expressive), "Amusement" (Boring to Amusing), and "Attractiveness" (Pleasant to Unpleasant). Participants also indicated how strongly they felt certain basic emotions ("Happiness," "Surprise," and "Sadness") while listening to each music excerpt. The study makes it possible to draw some interesting conclusions about the associations between note value and emotions.
Affiliation(s)
- Antonio Fernández-Caballero: Departamento de Sistemas Informáticos, Instituto de Investigación en Informática de Albacete, Universidad de Castilla-La Mancha, Albacete, Spain
- José M. Latorre: Facultad de Medicina de Albacete, Universidad de Castilla-La Mancha, Albacete, Spain
56. Khatchatourov A, Pachet F, Rowe V. Action Identity in Style Simulation Systems: Do Players Consider Machine-Generated Music As of Their Own Style? Front Psychol 2016;7:474. [PMID: 27199788] [PMCID: PMC4859091] [DOI: 10.3389/fpsyg.2016.00474]
Abstract
The generation of musical material in a given style has been the subject of many studies as artificial intelligence models of musical style have grown increasingly sophisticated. In this paper we address a question of primary importance for artificial intelligence and music psychology: can such systems generate music that users indeed consider as corresponding to their own style? We address this question through an experiment involving both performance and recognition tasks with musically naïve school-age children. We asked 56 children to perform a free-form improvisation, from which two kinds of music excerpt were created. One was a mere recording of the original performances. The other was created by a software program designed to simulate each participant's style, based on their original performances. Two hours after the performance task, the children completed the recognition task in two conditions, one with the original excerpts and one with machine-generated music. Results indicate that the success rate was practically equivalent in the two conditions: children tended to attribute the excerpts correctly to themselves or to others whether the music was human-produced or machine-generated (mean accuracy = 0.75 and 0.71, respectively). We discuss this equivalence in accuracy for machine-generated and human-produced music in light of the literature on memory effects and action identity, which addresses the recognition of one's own production.
Affiliation(s)
- Victoria Rowe: College of Social Science and International Studies, University of Exeter, Exeter, UK
57. Rogenmoser L, Zollinger N, Elmer S, Jäncke L. Independent component processes underlying emotions during natural music listening. Soc Cogn Affect Neurosci 2016;11:1428-1439. [PMID: 27217116] [DOI: 10.1093/scan/nsw048]
Abstract
The aim of this study was to investigate the brain processes underlying emotions during natural music listening. To address this, we recorded high-density electroencephalography (EEG) from 22 subjects while presenting a set of individually matched whole musical excerpts varying in valence and arousal. Independent component analysis was applied to decompose the EEG data into functionally distinct brain processes. A k-means cluster analysis, calculated on a combination of spatial (scalp topography and dipole location mapped onto the Montreal Neurological Institute brain template) and functional (spectral) characteristics, revealed 10 clusters referring to brain areas typically involved in music and emotion processing, namely in the proximity of thalamic-limbic and orbitofrontal regions as well as at frontal, fronto-parietal, parietal, parieto-occipital, temporo-occipital, and occipital areas. The analysis revealed that arousal was associated with a suppression of power in the alpha frequency range, whereas valence was associated with an increase in theta power in response to excerpts inducing happiness compared with sadness. These findings are partly compatible with the model proposed by Heller, which argues that the frontal lobe is involved in modulating valenced experiences (the left frontal hemisphere for positive emotions) whereas the right parieto-temporal region contributes to emotional arousal.
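The decomposition pipeline described here (ICA on continuous EEG, then clustering of component topographies and spectra) can be sketched with MNE-Python and scikit-learn. This is a minimal single-subject illustration under assumed parameters (file name, 30 components, k = 10); the published analysis pooled components across subjects and also used dipole locations.

```python
import numpy as np
import mne
from mne.preprocessing import ICA
from scipy.signal import welch
from sklearn.cluster import KMeans

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # hypothetical file
raw.filter(1.0, 40.0)

# Decompose the recording into independent component processes.
ica = ICA(n_components=30, random_state=0)
ica.fit(raw)

# Spatial features: component scalp topographies (n_channels x n_components).
maps = ica.get_components()

# Functional features: log power spectra of the component time courses.
sources = ica.get_sources(raw).get_data()
freqs, psd = welch(sources, fs=raw.info["sfreq"], nperseg=1024)
log_psd = np.log(psd[:, freqs <= 40])

def zscore_rows(a):
    return (a - a.mean(axis=1, keepdims=True)) / a.std(axis=1, keepdims=True)

# Combine normalized spatial and spectral features, then cluster.
feats = np.hstack([zscore_rows(maps.T), zscore_rows(log_psd)])
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(feats)
print(labels)
```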
Affiliation(s)
- Lars Rogenmoser: Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland; Neuroimaging and Stroke Recovery Laboratory, Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA 02215, USA; Neuroscience Center Zurich, University of Zurich and ETH Zurich, 8050 Zurich, Switzerland
- Nina Zollinger: Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland
- Stefan Elmer: Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland
- Lutz Jäncke: Division of Neuropsychology, Institute of Psychology, University of Zurich, 8050 Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich; International Normal Aging and Plasticity Imaging Center (INAPIC), University of Zurich; University Research Priority Program (URPP) "Dynamic of Healthy Aging," University of Zurich; Department of Special Education, King Abdulaziz University, 21589 Jeddah, Saudi Arabia
58. Hausmann M, Hodgetts S, Eerola T. Music-induced changes in functional cerebral asymmetries. Brain Cogn 2016;104:58-71. [PMID: 26970942] [DOI: 10.1016/j.bandc.2016.03.001]
Abstract
After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. Using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata K. 448 and Beethoven's Moonlight Sonata were rated as the most happy and most sad excerpts, respectively. Participants either listened to one emotional excerpt or sat in silence before completing an emotional chimeric faces task (Experiment 1), a visual line bisection task (Experiment 2), or a dichotic listening task (Experiments 3 and 4). Listening to happy music resulted in a reduced right-hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2), and an increased left-hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses showed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4, in which the tempo of the happy excerpt was manipulated while controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in emotional state.
Affiliation(s)
- Markus Hausmann: Department of Psychology, Durham University, Durham, United Kingdom
- Sophie Hodgetts: Department of Psychology, Durham University, Durham, United Kingdom
- Tuomas Eerola: Department of Music, Durham University, Durham, United Kingdom
59. Shirvani S, Jafari Z, Motasaddi Zarandi M, Jalaie S, Mohagheghi H, Tale MR. Emotional Perception of Music in Children With Bimodal Fitting and Unilateral Cochlear Implant. Ann Otol Rhinol Laryngol 2015;125:470-477. [PMID: 26681623] [DOI: 10.1177/0003489415619943]
Abstract
OBJECTIVE: Biological, structural, and acoustical constraints faced by cochlear implant (CI) users can alter the perception of music. Bimodal fitting not only provides bilateral hearing but can also improve auditory skills. This study was conducted to assess the impact of this amplification style on the emotional perception of music among children with hearing loss (HL).
METHODS: Twenty-five children with congenital severe to profound HL and unilateral CIs, 20 children with bimodal fitting, and 30 children with normal hearing participated in this study. Their emotional perception of music was measured by having children indicate happy or sad feelings induced by music, pointing to pictures of faces showing these emotions.
RESULTS: Children with bimodal fitting obtained significantly higher mean scores than children with unilateral CIs for both happy and sad music items and in overall test scores (P < .001). Both groups with HL obtained significantly lower scores than children with normal hearing (P < .001).
CONCLUSIONS: Bimodal fitting results in better emotional perception of music compared to unilateral CI use. Given the influence of music on neurological and linguistic development and on social interactions, it is important to evaluate the possible benefits of bimodal fitting prescriptions for individuals with unilateral CIs.
Affiliation(s)
- Sareh Shirvani: Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Zahra Jafari: Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran; Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, Alberta, Canada
- Masoud Motasaddi Zarandi: Cochlear Implant Research Center, AmirAlam Hospital, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Shohre Jalaie: Department of Physiotherapy, School of Rehabilitation, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Hamed Mohagheghi: Department of Audiology, School of Rehabilitation Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
60. Poon M, Schutz M. Cueing musical emotions: An empirical analysis of 24-piece sets by Bach and Chopin documents parallels with emotional speech. Front Psychol 2015;6:1419. [PMID: 26578990] [PMCID: PMC4629484] [DOI: 10.3389/fpsyg.2015.01419]
Abstract
Acoustic cues such as pitch height and timing are effective at communicating emotion in both music and speech. Numerous experiments altering musical passages have shown that higher and faster melodies generally sound "happier" than lower and slower melodies, findings consistent with corpus analyses of emotional speech. However, equivalent corpus analyses of complex time-varying cues in music are less common, due in part to the challenges of assembling an appropriate corpus. Here, we describe a novel, score-based exploration of the use of pitch height and timing in a set of "balanced" major and minor key compositions. Our analysis included all 24 Preludes and 24 Fugues from Bach's Well-Tempered Clavier (book 1), as well as all 24 of Chopin's Preludes for piano. These three sets are balanced with respect to both modality (major/minor) and key chroma ("A," "B," "C," etc.). Consistent with predictions derived from speech, we found major-key (nominally "happy") pieces to be two semitones higher in pitch height and 29% faster than minor-key (nominally "sad") pieces. This demonstrates that our balanced corpus of major and minor key pieces uses low-level acoustic cues for emotion in a manner consistent with speech. A series of post hoc analyses illustrate interesting trade-offs, with sets featuring greater emphasis on timing distinctions between modalities exhibiting the least pitch distinction, and vice versa. We discuss these findings in the broader context of speech-music research, as well as recent scholarship exploring the historical evolution of cue use in Western music.
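A score-based analysis of this kind reduces each piece to a few features before comparing modes. The sketch below assumes a hypothetical per-piece feature table (corpus_features.csv with a mode label, a duration-weighted mean MIDI pitch, and an attack rate in notes per second); it reproduces the comparison logic rather than the authors' exact pipeline.

```python
import pandas as pd

# Hypothetical per-piece features extracted from the scores:
# columns: set (WTC preludes / WTC fugues / Chopin preludes),
# mode ("major"/"minor"), mean_pitch (MIDI), attack_rate (notes/s).
df = pd.read_csv("corpus_features.csv")

summary = df.groupby("mode")[["mean_pitch", "attack_rate"]].mean()
print(summary)
print("pitch-height gap (semitones):",
      summary.loc["major", "mean_pitch"] - summary.loc["minor", "mean_pitch"])
print("timing ratio (major/minor):",
      summary.loc["major", "attack_rate"] / summary.loc["minor", "attack_rate"])
```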
Affiliation(s)
- Matthew Poon: Music, Acoustics, Perception and Learning Lab, McMaster Institute for Music and the Mind, School of the Arts, McMaster University, Hamilton, ON, Canada
- Michael Schutz: Music, Acoustics, Perception and Learning Lab, McMaster Institute for Music and the Mind, School of the Arts, McMaster University, Hamilton, ON, Canada
61. Sena Moore K, Hanson-Abromeit D. Theory-guided Therapeutic Function of Music to facilitate emotion regulation development in preschool-aged children. Front Hum Neurosci 2015;9:572. [PMID: 26528171] [PMCID: PMC4604312] [DOI: 10.3389/fnhum.2015.00572]
Abstract
Emotion regulation (ER) is an umbrella term for the interactive, goal-dependent, explicit and implicit processes that help an individual manage and shift an emotional experience. The primary window for ER development occurs during the infant, toddler, and preschool years. Atypical ER development is considered a risk factor for mental health problems and has been implicated as a primary mechanism underlying childhood pathologies. Current treatments are predominantly verbal- and behavioral-based and lack the opportunity to practice in-the-moment management of emotionally charged situations; they also tend to omit caregiver-child interaction. Based on behavioral and neural support for music as a therapeutic mechanism, the incorporation of intentional music experiences, facilitated by a music therapist, may be one way to address these limitations. Musical Contour Regulation Facilitation (MCRF) is an interactive therapist-child music-based intervention for ER development practice in preschoolers. The MCRF intervention uses the deliberate contour and temporal structure of a music therapy session to mirror the changing flow of the caregiver-child interaction through the alternation of high-arousal and low-arousal music experiences. The purpose of this paper is to describe the Therapeutic Function of Music (TFM), a theory-based description of the structural characteristics of a music-based stimulus intended to facilitate developmentally appropriate high-arousal and low-arousal in-the-moment ER experiences. The TFM analysis is based on a review of the music theory, music neuroscience, and music development literature, and provides a preliminary model of the structural characteristics of the music as a core component of the MCRF intervention.
Affiliation(s)
- Kimberly Sena Moore: Department of Music Education and Music Therapy, Frost School of Music, University of Miami, Coral Gables, FL, USA
- Deanna Hanson-Abromeit: Division of Music Education and Music Therapy, School of Music, University of Kansas, Lawrence, KS, USA
62. Marin MM, Thompson WF, Gingras B, Stewart L. Affective evaluation of simultaneous tone combinations in congenital amusia. Neuropsychologia 2015;78:207-220. [DOI: 10.1016/j.neuropsychologia.2015.10.004]
63. Sensitivity to musical emotions in congenital amusia. Cortex 2015;71:171-182. [DOI: 10.1016/j.cortex.2015.06.022]
64. Siu TSC, Cheung H. Emotional experience in music fosters 18-month-olds' emotion-action understanding: a training study. Dev Sci 2015;19:933-946. [PMID: 26355193] [DOI: 10.1111/desc.12348]
Abstract
We examined whether emotional experiences induced via music-making promote infants' use of emotional cues to predict others' actions. Fifteen-month-olds were randomly assigned to three months of interactive emotion training either with or without musical engagement. Both groups were then tested with two violation-of-expectation paradigms assessing, respectively, their sensitivity to expressive features in music and their understanding of the link between emotion and behaviour in simple action sequences. The infants who had participated in the musical training, but not those who had not, were surprised by music-face inconsistent displays and were able to interpret an agent's action as guided by her expressed emotion. The findings suggest a privileged role of musical experience in prompting infants to form emotional representations, which support their understanding of the association between affective states and action.
Affiliation(s)
- Tik Sze Carrey Siu: Department of Psychology, The Chinese University of Hong Kong, Hong Kong
- Him Cheung: Department of Psychology, The Chinese University of Hong Kong, Hong Kong
65. Omigie D. Basic, specific, mechanistic? Conceptualizing musical emotions in the brain. J Comp Neurol 2015;524:1676-1686. [PMID: 26172307] [DOI: 10.1002/cne.23854]
Abstract
The number of studies investigating music processing in the human brain continues to increase, and a large proportion of them focus on the correlates of so-called musical emotions. The current review highlights the recent development whereby such studies are no longer concerned only with basic emotions such as happiness and sadness but also with music-specific or "aesthetic" emotions such as nostalgia and wonder. It also highlights how mechanisms seen as inducing musical emotions, such as expectancy and empathy, are receiving ever-increasing investigation and substantiation with physiological and neuroimaging methods. It is proposed that a combination of these approaches, namely, investigation of the precise mechanisms through which music-specific or aesthetic emotions may arise, will provide the most important advances in our understanding of the unique nature of musical experience.
Affiliation(s)
- Diana Omigie: Music Department, Max Planck Institute for Empirical Aesthetics, 60322 Frankfurt am Main, Germany
66. Creel SC. Ups and Downs in Auditory Development: Preschoolers' Sensitivity to Pitch Contour and Timbre. Cogn Sci 2015;40:373-403. [PMID: 25846115] [DOI: 10.1111/cogs.12237]
Abstract
Much research has explored developing sound representations in language, but less work addresses developing representations of other sound patterns. This study examined preschool children's musical representations using two different tasks: discrimination and sound-picture association. Melodic contour, a musically relevant property, and instrumental timbre, which is (arguably) less musically relevant, were tested. In Experiment 1, children failed to associate cartoon characters with melodies that had maximally different pitch contours, with no advantage for melody preexposure. Experiment 2 also used different-contour melodies and found good discrimination, whereas association was at chance. Experiment 3 replicated Experiment 2 but with a large timbre change instead of a contour change; here, both discrimination and association were excellent. Preschool-aged children may have stronger or more durable representations of timbre than of contour, particularly in more difficult tasks. Reasons for weaker association of contour than timbre information are discussed, along with implications for auditory development.
Affiliation(s)
- Sarah C Creel: Department of Cognitive Science, University of California San Diego
67.
Abstract
Music is universal at least partly because it expresses emotion and regulates affect. Associations between music and emotion have been examined regularly by music psychologists. Here, we review recent findings in three areas: (a) the communication and perception of emotion in music, (b) the emotional consequences of music listening, and (c) predictors of music preferences.
68. Hopyan T, Manno FAM III, Papsin BC, Gordon KA. Sad and happy emotion discrimination in music by children with cochlear implants. Child Neuropsychol 2015;22:366-380. [DOI: 10.1080/09297049.2014.992400]
69. Livingstone SR, Thompson WF, Wanderley MM, Palmer C. Common cues to emotion in the dynamic facial expressions of speech and song. Q J Exp Psychol (Hove) 2014;68:952-970. [PMID: 25424388] [PMCID: PMC4440649] [DOI: 10.1080/17470218.2014.971034]
Abstract
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and across speech and song. Vocalists' emotional movements extended beyond the vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions were poorly identified in voice-only singing but accurately identified in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent to it in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, along with differences in perception and acoustic-motor production.
70. McPherson MJ, Lopez-Gonzalez M, Rankin SK, Limb CJ. The role of emotion in musical improvisation: an analysis of structural features. PLoS One 2014;9:e105144. [PMID: 25144200] [PMCID: PMC4140734] [DOI: 10.1371/journal.pone.0105144]
Abstract
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety, or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys and to have faster tempos, higher key-press velocities, and more staccato notes than negative improvisations, there was a wide distribution for each emotion, with components that directly violated these primary associations. The finding that musicians often combine disparate features to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.
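Structural features of this kind (tempo, key-press velocity, articulation) are straightforward to pull from MIDI recordings of improvisations. The sketch below uses pretty_midi under stated assumptions: the file names are hypothetical, and the 0.12-s staccato cutoff is an arbitrary illustration rather than the paper's criterion.

```python
import pretty_midi

def features(path):
    """Extract a few coarse structural features from one improvisation."""
    pm = pretty_midi.PrettyMIDI(path)
    notes = [n for inst in pm.instruments for n in inst.notes]
    durations = [n.end - n.start for n in notes]
    return {
        "tempo": pm.estimate_tempo(),
        "mean_velocity": sum(n.velocity for n in notes) / len(notes),
        # 0.12 s is an arbitrary illustrative staccato threshold.
        "prop_staccato": sum(d < 0.12 for d in durations) / len(durations),
    }

# Hypothetical file names, one MIDI recording per emotional cue.
print("happy:", features("improv_happy_01.mid"))
print("sad:  ", features("improv_sad_01.mid"))
```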
Affiliation(s)
- Malinda J. McPherson: Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Monica Lopez-Gonzalez: Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America; Peabody Conservatory of The Johns Hopkins University, Baltimore, Maryland, United States of America
- Summer K. Rankin: Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America
- Charles J. Limb: Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States of America; Peabody Conservatory of The Johns Hopkins University, Baltimore, Maryland, United States of America
71. Volkova A, Trehub SE, Schellenberg EG, Papsin BC, Gordon KA. Children's identification of familiar songs from pitch and timing cues. Front Psychol 2014;5:863. [PMID: 25147537] [PMCID: PMC4123732] [DOI: 10.3389/fpsyg.2014.00863]
Abstract
The goal of the present study was to ascertain whether children with normal hearing and prelingually deaf children with cochlear implants could use pitch or timing cues, alone or in combination, to identify familiar songs. Children 4-7 years of age were required to identify the theme songs of familiar TV shows in a simple task with excerpts that preserved (1) the relative pitch and timing cues of the melody but not the original instrumentation, (2) the timing cues only (rhythm, meter, and tempo), or (3) the relative pitch cues only (pitch contour and intervals). Children with normal hearing performed at high levels and comparably across the three conditions. The performance of child implant users was well above chance levels when both pitch and timing cues were available, marginally above chance with timing cues only, and at chance with pitch cues only. This is the first demonstration that children can identify familiar songs from monotonic versions (timing cues but no pitch cues) and from isochronous versions (pitch cues but no timing cues). The study also indicates that, in the context of a very simple task, young implant users readily identify songs from melodic versions that preserve pitch and timing cues.
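The two degraded versions described here are simple transformations of a symbolic melody. A minimal sketch, assuming a purely illustrative (pitch, onset, duration) note encoding:

```python
# A melody as (midi_pitch, onset_s, duration_s) triples - hypothetical encoding.
melody = [(67, 0.0, 0.4), (69, 0.5, 0.4), (71, 1.0, 0.9), (67, 2.0, 0.4)]

def monotonic(notes, pitch=67):
    """Timing cues only: every note mapped onto a single fixed pitch."""
    return [(pitch, onset, dur) for _, onset, dur in notes]

def isochronous(notes, ioi=0.5):
    """Pitch cues only: equal inter-onset intervals and equal durations."""
    return [(p, i * ioi, 0.8 * ioi) for i, (p, _, _) in enumerate(notes)]

print(monotonic(melody))    # rhythm/meter preserved, contour removed
print(isochronous(melody))  # contour/intervals preserved, rhythm removed
```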
Affiliation(s)
- Anna Volkova: Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- Sandra E Trehub: Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- E Glenn Schellenberg: Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- Blake C Papsin: Department of Otolaryngology, University of Toronto, Toronto, ON, Canada
- Karen A Gordon: Department of Otolaryngology, University of Toronto, Toronto, ON, Canada
72. The impact of cerebellar disorders on musical ability. J Neurol Sci 2014;343:76-81. [DOI: 10.1016/j.jns.2014.05.036]
73. van Vugt FT, Furuya S, Vauth H, Jabusch HC, Altenmüller E. Playing beautifully when you have to be fast: spatial and temporal symmetries of movement patterns in skilled piano performance at different tempi. Exp Brain Res 2014;232:3555-3567. [PMID: 25059908] [DOI: 10.1007/s00221-014-4036-4]
Abstract
Humans are capable of learning a variety of motor skills, such as playing the piano. Performance of these skills is subject to multiple constraints, such as musical phrasing or speed requirements, and these constraints vary from one context to another. To understand how the brain controls highly skilled movements, we investigated pianists playing musical scales with their left or right hand at various speeds. Pianists showed systematic temporal deviations away from regularity. At slow tempi, pianists slowed down at the beginning and end of the movement (which we call the phrasal template). At fast tempi, temporal deviation traces consisted of three peak delays caused by the thumb-under manoeuvre (which we call the neuromuscular template). At intermediate tempi, deviations were a linear trade-off between these two templates. We introduce and cross-validate a simple four-parameter model that predicted the timing deviation of each individual note across tempi (R² = 0.70). The model can be fitted to the data of individual pianists, providing a novel quantification of expert performance. The present study shows that the motor system can generate complex movements through a dynamic combination of simple movement templates, providing insight into how the motor system flexibly adapts to varying contextual constraints.
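The template-mixture idea can be sketched as a small optimization problem: each note's timing deviation is modeled as a tempo-weighted blend of a phrasal and a neuromuscular template. Everything below is illustrative; the template shapes, the simulated data, and the linear tempo weighting are assumptions standing in for the paper's fitted quantities.

```python
import numpy as np
from scipy.optimize import curve_fit

n_notes = 15
idx = np.arange(n_notes)
# Assumed template shapes (placeholders for empirically derived ones):
phrasal = np.cos(2 * np.pi * idx / (n_notes - 1))  # slowing at both ends
neuro = np.zeros(n_notes)
neuro[[3, 7, 11]] = 1.0                            # thumb-under delay peaks

def model(X, w0, w1, v0, v1):
    """Deviation = (w0 + w1*tempo)*phrasal + (v0 + v1*tempo)*neuromuscular."""
    i, tempo = X
    return (w0 + w1 * tempo) * phrasal[i.astype(int)] \
         + (v0 + v1 * tempo) * neuro[i.astype(int)]

# Simulate noisy deviations across a range of tempi, then recover the params.
tempos = np.repeat(np.linspace(0.0, 1.0, 9), n_notes)  # normalized tempo
notes = np.tile(idx, 9)
true = model((notes, tempos), 8.0, -8.0, 0.0, 6.0)
y = true + np.random.default_rng(0).normal(0, 0.5, true.size)

params, _ = curve_fit(model, (notes, tempos), y, p0=[1, 0, 1, 0])
print(params)  # approximately [8, -8, 0, 6]
```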
Affiliation(s)
- Floris T van Vugt: Institute of Music Physiology and Musicians' Medicine, University of Music, Drama, and Media, Emmichplatz 1, 30175 Hanover, Germany
74. Russell K, Meeuwisse W, Nettel-Aguirre A, Emery CA, Gushue S, Wishart J, Romanow N, Rowe BH, Goulet C, Hagel BE. Listening to a personal music player is associated with fewer but more serious injuries among snowboarders in a terrain park: a case-control study. Br J Sports Med 2014;49:62-66. [DOI: 10.1136/bjsports-2014-093487]
75. Flaig NK, Large EW. Dynamic musical communication of core affect. Front Psychol 2014;5:72. [PMID: 24672492] [PMCID: PMC3956121] [DOI: 10.3389/fpsyg.2014.00072]
Abstract
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified "scene" that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.
Affiliation(s)
- Nicole K Flaig: Music Dynamics Lab, Department of Psychology, University of Connecticut, Storrs, CT, USA
- Edward W Large: Music Dynamics Lab, Department of Psychology, University of Connecticut, Storrs, CT, USA
76. Livingstone SR, Choi DH, Russo FA. The influence of vocal training and acting experience on measures of voice quality and emotional genuineness. Front Psychol 2014;5:156. [PMID: 24639659] [PMCID: PMC3945712] [DOI: 10.3389/fpsyg.2014.00156]
Abstract
Vocal training through singing and acting lessons is known to modify acoustic parameters of the voice. While the effects of singing training have been well documented, the role of acting experience on the singing voice remains unclear. In two experiments, we used linear mixed models to examine the relationships between the relative amounts of acting and singing experience and the acoustics and perception of the male singing voice. In Experiment 1, 12 male vocalists were recorded while singing with five different emotions, each at two intensities. Acoustic measures of pitch accuracy, jitter, and harmonics-to-noise ratio (HNR) were examined. Decreased pitch accuracy and increased jitter, indicative of lower "voice quality," were associated with more years of acting experience, while increased pitch accuracy was associated with more years of singing lessons. We hypothesized that the acoustic deviations exhibited by more experienced actors were an intentional technique to increase the genuineness or truthfulness of their emotional expressions. In Experiment 2, listeners rated vocalists' emotional genuineness. Vocalists with more years of acting experience were rated as more genuine than vocalists with less acting experience; no such relationship was observed for singing training. Increased genuineness was associated with decreased pitch accuracy, increased jitter, and a higher HNR. These effects may represent a shifting of priorities by male vocalists with acting experience to emphasize emotional genuineness over pitch accuracy or voice quality in their singing performances.
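A linear mixed model of the kind used here regresses an acoustic measure on the two experience predictors while treating the vocalist as a random-effect grouping factor. A minimal statsmodels sketch, with a hypothetical input file and column names:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per sung phrase, with columns
# vocalist, jitter, hnr, pitch_error, years_acting, years_singing.
df = pd.read_csv("singing_acoustics.csv")

# Random intercept per vocalist; fixed effects for the two experience terms.
model = smf.mixedlm("jitter ~ years_acting + years_singing",
                    data=df, groups=df["vocalist"])
result = model.fit()
print(result.summary())
```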
Affiliation(s)
- Steven R Livingstone: Department of Psychology, Ryerson University, Toronto, ON, Canada; Toronto Rehabilitation Institute, Toronto, ON, Canada
- Deanna H Choi: Department of Psychology, Queen's University, Kingston, ON, Canada
- Frank A Russo: Department of Psychology, Ryerson University, Toronto, ON, Canada; Toronto Rehabilitation Institute, Toronto, ON, Canada
77. Lin YP, Duann JR, Feng W, Chen JH, Jung TP. Revealing spatio-spectral electroencephalographic dynamics of musical mode and tempo perception by independent component analysis. J Neuroeng Rehabil 2014;11:18. [PMID: 24581119] [PMCID: PMC3941612] [DOI: 10.1186/1743-0003-11-18]
Abstract
Background: Music conveys emotion by manipulating musical structure, particularly musical mode and tempo. The neural correlates of musical mode and tempo perception revealed by electroencephalography (EEG) have not been adequately addressed in the literature.
Method: This study used independent component analysis (ICA) to systematically assess spatio-spectral EEG dynamics associated with changes in musical mode and tempo.
Results: Compared to minor-mode music, major-mode music augmented delta-band activity over the right sensorimotor cortex, suppressed theta activity over the superior parietal cortex, and moderately suppressed beta activity over the medial frontal cortex, whereas fast-tempo music engaged significant alpha suppression over the right sensorimotor cortex.
Conclusion: The resultant EEG brain sources were comparable with those of previous studies obtained with other neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET). In conjunction with advanced dry and mobile EEG technology, these results might facilitate the translation from laboratory-oriented research to real-life applications of music therapy, training, and entertainment in naturalistic environments.
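The band-specific contrasts reported here reduce, at their simplest, to comparing spectral power in a frequency band between conditions. The sketch below computes alpha-band power with Welch's method; the sampling rate and the random arrays standing in for sensorimotor-channel recordings are placeholders.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean power spectral density in [lo, hi) Hz via Welch's method."""
    f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
    return pxx[(f >= lo) & (f < hi)].mean()

# Hypothetical: one channel over the right sensorimotor area, recorded
# during a fast-tempo and a slow-tempo listening condition.
fs = 250.0
fast = np.random.randn(int(60 * fs))  # placeholder for a real recording
slow = np.random.randn(int(60 * fs))  # placeholder for a real recording

alpha_fast = band_power(fast, fs, 8, 13)
alpha_slow = band_power(slow, fs, 8, 13)
print("alpha ratio (fast/slow):", alpha_fast / alpha_slow)  # <1 = suppression
```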
Affiliation(s)
- Tzyy-Ping Jung: Institute for Neural Computation and Institute of Engineering in Medicine, University of California, San Diego, La Jolla, CA, USA
78. Kerer M, Marksteiner J, Hinterhuber H, Kemmler G, Bliem HR, Weiss EM. Happy and Sad Judgements in Dependence on Mode and Note Density in Patients with Mild Cognitive Impairment and Early-Stage Alzheimer's Disease. Gerontology 2014;60:402-412. [DOI: 10.1159/000358010]
79. Loui P, Bachorik JP, Li HC, Schlaug G. Effects of voice on emotional arousal. Front Psychol 2013;4:675. [PMID: 24101908] [PMCID: PMC3787249] [DOI: 10.3389/fpsyg.2013.00675]
Abstract
Music is a powerful medium capable of eliciting a broad range of emotions. Although the relationship between language and music is well documented, relatively little is known about the effects of lyrics and the voice on the emotional processing of music and on listeners' preferences. In the present study, we investigated the effects of vocals in music on participants' perceived valence and arousal of songs. Participants (N = 50) made valence and arousal ratings for familiar songs that were presented with and without the voice. We observed robust effects of vocal content on perceived arousal. Furthermore, we found that the arousal-enhancing effect of the voice is independent of the familiarity of the song and differs across gender and age: females were more influenced by vocals than males, and these gender effects were enhanced among older adults. The results highlight the effects of gender and aging on emotion perception and are discussed in terms of the social roles of music.
Affiliation(s)
- Psyche Loui: Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA; Department of Psychology, Wesleyan University, Middletown, CT, USA
- Justin P. Bachorik: Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
- H. Charles Li: Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
- Gottfried Schlaug: Department of Neurology, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
80. Chubb C, Dickson CA, Dean T, Fagan C, Mann DS, Wright CE, Guan M, Silva AE, Gregersen PK, Kowalsky E. Bimodal distribution of performance in discriminating major/minor modes. J Acoust Soc Am 2013;134:3067-3078. [PMID: 24116441] [DOI: 10.1121/1.4816546]
Abstract
This study investigated the abilities of listeners to classify various sorts of musical stimuli as major vs minor. All stimuli combined four pure tones: low and high tonics (G5 and G6), dominant (D), and either a major third (B) or a minor third (B♭). Especially interesting results were obtained using tone-scrambles, randomly ordered sequences of pure tones presented at ≈15 per second. All tone-scrambles tested comprised 16 G's (G5's + G6's), 8 D's, and either 8 B's or 8 B♭'s. The distribution of proportion correct across 275 listeners tested over the course of three experiments was strikingly bimodal, with one mode very close to chance performance and the other very close to perfect performance. Testing with tone-scrambles thus sorts listeners fairly cleanly into two subpopulations. Listeners in subpopulation 1 are sufficiently sensitive to major vs minor to classify tone-scrambles nearly perfectly; listeners in subpopulation 2 (comprising roughly 70% of the population) have very little sensitivity to major vs minor. Skill in classifying major vs minor tone-scrambles shows a modest correlation of around 0.5 with years of musical training.
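A tone-scramble of this kind is easy to synthesize. The sketch below follows the stimulus recipe in the abstract under a few assumptions: an even G5/G6 split for the 16 G's, D6 and B5/B♭5 as the octaves of the dominant and third, and 5-ms onset/offset ramps (none of these details are specified above).

```python
import random
import numpy as np
from scipy.io import wavfile

FS = 44100
RATE = 15                       # tones per second
DUR = 1.0 / RATE                # ~67 ms per tone
FREQ = {"G5": 783.99, "G6": 1567.98, "D6": 1174.66,
        "B5": 987.77, "Bb5": 932.33}   # assumed octaves for D and the third

def tone(freq):
    t = np.arange(int(FS * DUR)) / FS
    y = np.sin(2 * np.pi * freq * t)
    ramp = np.minimum(np.arange(len(t)) / (0.005 * FS), 1.0)
    return y * ramp * ramp[::-1]        # 5-ms on/off ramps to avoid clicks

def tone_scramble(mode="major"):
    third = "B5" if mode == "major" else "Bb5"
    # 16 G's (assumed 8 G5 + 8 G6), 8 D's, 8 thirds, in random order.
    names = ["G5"] * 8 + ["G6"] * 8 + ["D6"] * 8 + [third] * 8
    random.shuffle(names)
    return np.concatenate([tone(FREQ[n]) for n in names])

wavfile.write("scramble_major.wav", FS,
              tone_scramble("major").astype(np.float32))
```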
Affiliation(s)
- Charles Chubb: Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697-5100
81. Paquette S, Peretz I, Belin P. The "Musical Emotional Bursts": a validated set of musical affect bursts to investigate auditory affective processing. Front Psychol 2013;4:509. [PMID: 23964255] [PMCID: PMC3741467] [DOI: 10.3389/fpsyg.2013.00509]
Abstract
The Musical Emotional Bursts (MEB) consist of 80 brief musical performances expressing basic emotional states (happiness, sadness, and fear) and neutrality. These musical bursts were designed to be the musical analog of the Montreal Affective Voices (MAV), a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or imitations of a given MAV stimulus, played on a violin (10 stimuli × 4 [3 emotions + neutral]) or a clarinet (10 stimuli × 4 [3 emotions + neutral]). The MEB arguably represent a primitive form of musical emotional expression, just as the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists and then evaluated by 60 participants. Participants evaluated 240 stimuli [30 stimuli × 4 (3 emotions + neutral) × 2 instruments] by performing a forced-choice emotion categorization task, a valence rating task, or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n = 80) was lower than for the MAVs but still very high, with an average percent-correct recognition score of 80.4%. The highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or to test affective perception in patients with communication problems.
Affiliation(s)
- Sébastien Paquette: Department of Psychology, International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, University of Montreal, Montreal, QC, Canada
82. Bowling DL. A vocal basis for the affective character of musical mode in melody. Front Psychol 2013;4:464. [PMID: 23914179] [PMCID: PMC3728488] [DOI: 10.3389/fpsyg.2013.00464]
Abstract
Why does major music sound happy and minor music sound sad? The idea that different musical modes are best suited to the expression of different emotions has been prescribed by composers, music theorists, and natural philosophers for millennia. However, the reason we associate musical modes with emotions remains a matter of debate. On one side there is considerable evidence that mode-emotion associations arise through exposure to the conventions of a particular musical culture, suggesting a basis in lifetime learning. On the other, cross-cultural comparisons suggest that the particular associations we make are supported by musical similarities to the prosodic characteristics of the voice in different affective states, indicating a basis in the biology of emotional expression. Here, I review developmental and cross-cultural studies on the affective character of musical modes, concluding that while learning clearly plays a role, the emotional associations we make are (1) not arbitrary, and (2) best understood by also taking into account the physical characteristics and biological purposes of vocalization.
Affiliation(s)
- Daniel L Bowling: Department of Cognitive Biology, University of Vienna, Vienna, Austria
83. Eerola T, Friberg A, Bresin R. Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Front Psychol 2013;4:487. [PMID: 23908642] [PMCID: PMC3726864] [DOI: 10.3389/fpsyg.2013.00487]
Abstract
The aim of this study was to manipulate musical cues systematically to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or interactive fashion, and whether their effects across cue levels can be characterized as linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression: mode was the most important cue, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77-89% of the variance in ratings; quadratic encoding of the cues led to minor but significant increases in model fit (0-8%). Finally, interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
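The linearity test described here amounts to comparing a regression on the cue levels with one that adds quadratic terms. A minimal sketch with statsmodels; the input file and column names are hypothetical, and only two cues are given quadratic terms for brevity.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per rated example, with numeric cue levels
# (tempo, register, dynamics, ...), a categorical mode, and mean ratings.
df = pd.read_csv("cue_ratings.csv")

linear = smf.ols(
    "scary ~ mode + tempo + register + dynamics + articulation + timbre",
    data=df).fit()

# Quadratic terms test for non-linear contributions of the cue levels.
quadratic = smf.ols(
    "scary ~ mode + tempo + I(tempo**2) + register + I(register**2) + "
    "dynamics + articulation + timbre", data=df).fit()

print("linear R2:", linear.rsquared)
print("with quadratic terms R2:", quadratic.rsquared)
```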
Affiliation(s)
- Tuomas Eerola
- Department of Music, University of Jyväskylä, Jyväskylä, Finland
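The cue-regression logic of this entry lends itself to a compact illustration. The sketch below is not the authors' analysis: it simulates factorial cue levels and ratings, fits an additive linear model, and then tests how much explanatory power quadratic cue coding adds, mirroring the linear-versus-non-linear comparison described in the abstract.

```python
# Illustrative sketch, not the authors' analysis: simulate factorial cue
# levels and ratings, fit an additive linear model, then test how much
# explanatory power quadratic cue coding adds.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Six primary cues (mode, tempo, dynamics, articulation, timbre, register),
# each coded on an ordinal three-level scale (-1, 0, +1).
n_examples = 200
cues = rng.integers(-1, 2, size=(n_examples, 6)).astype(float)

# Simulated "happiness" ratings from additive linear effects plus noise;
# the weights are invented for illustration, not estimates from the study.
weights = np.array([1.2, 0.9, 0.5, 0.4, 0.3, 0.2])
ratings = cues @ weights + rng.normal(0, 0.5, n_examples)

linear = LinearRegression().fit(cues, ratings)
r2_linear = linear.score(cues, ratings)

# Quadratic encoding: add squared cue terms but no interaction terms,
# preserving the additive structure the paper reports.
quad = np.hstack([cues, cues ** 2])
quadratic = LinearRegression().fit(quad, ratings)
r2_quad = quadratic.score(quad, ratings)

print(f"linear R^2 = {r2_linear:.2f}, gain from quadratic coding = {r2_quad - r2_linear:.2f}")
```

Because the simulated ratings are built additively, the quadratic gain hovers near zero, consistent with the small (0–8%) improvement the paper reports.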
84
Volkova A, Trehub SE, Schellenberg EG, Papsin BC, Gordon KA. Children with bilateral cochlear implants identify emotion in speech and music. Cochlear Implants Int 2013; 14:80-91. [DOI: 10.1179/1754762812y.0000000004] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
85
Hopyan T, Gordon KA, Papsin BC. Identifying emotions in music through electrical hearing in deaf children using cochlear implants. Cochlear Implants Int 2013; 12:21-6. [DOI: 10.1179/146701010x12677899497399] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
86
Tian Y, Ma W, Tian C, Xu P, Yao D. Brain oscillations and electroencephalography scalp networks during tempo perception. Neurosci Bull 2013; 29:731-6. [PMID: 23852557 DOI: 10.1007/s12264-013-1352-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2012] [Accepted: 11/15/2012] [Indexed: 10/26/2022] Open
Abstract
In the current study we used electroencephalography (EEG) to investigate the relation between musical tempo perception and oscillatory activity in specific brain regions, as well as scalp EEG networks in the theta, alpha, and beta bands. The results showed that theta power at the frontal midline decreased as the tempo-related arousal level increased. The alpha power induced by original music at the bilateral occipital-parietal regions was stronger than that induced by tempo-transformed music. Beta power did not change with tempo. At the network level, the alpha network related to the original music showed high global efficiency and an optimal path length. This study was the first to use EEG to investigate multiple oscillatory activities in this context, and the data support the tempo-specific timing hypothesis.
Affiliation(s)
- Yin Tian
- Bio-information College, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
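For readers unfamiliar with this analysis style, the sketch below illustrates, under simulated data and assumed parameter choices, the two quantities the study combines: band-limited spectral power per channel, and the global efficiency of a thresholded scalp connectivity graph. It is a generic reconstruction, not the authors' pipeline.

```python
# Generic reconstruction under simulated data and assumed parameters, not
# the authors' pipeline: band-limited power per channel, then the global
# efficiency of a thresholded scalp connectivity graph.
import numpy as np
import networkx as nx
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 250                                   # assumed sampling rate (Hz)
source = rng.normal(size=fs * 60)          # shared source so channels correlate
eeg = 0.5 * source + rng.normal(size=(32, fs * 60))   # 32 channels, 60 s

# Theta-band (4-8 Hz) power per channel via Welch's method.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
theta_mask = (freqs >= 4) & (freqs <= 8)
theta_power = psd[:, theta_mask].mean(axis=1)
print("mean theta power across channels:", theta_power.mean())

# Scalp network: threshold absolute channel-pair correlations, then compute
# global efficiency (mean inverse shortest-path length across node pairs).
corr = np.corrcoef(eeg)
adjacency = (np.abs(corr) > 0.15) & ~np.eye(32, dtype=bool)
graph = nx.from_numpy_array(adjacency.astype(int))
print("global efficiency:", nx.global_efficiency(graph))
```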
87
Trochidis K, Bigand E. Investigation of the Effect of Mode and Tempo on Emotional Responses to Music Using EEG Power Asymmetry. J Psychophysiol 2013. [DOI: 10.1027/0269-8803/a000099] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The combined effects of mode and tempo on emotional responses to music were investigated using both self-reports and electroencephalogram (EEG) activity. A musical excerpt was performed in three different modes and tempi. Participants rated the emotional content of the resulting nine stimuli while their EEG activity was recorded. Musical mode influenced the valence of emotion, with the major mode evaluated as happier and more serene than the minor and Locrian modes. In frontal EEG activity, the major mode was associated with increased alpha activation in the left hemisphere compared to the minor and Locrian modes, which, in turn, induced increased activation in the right hemisphere. Tempo modulated the arousal value of emotion, with faster tempi associated with stronger feelings of happiness and anger; in the EEG, this effect corresponded to increased frontal activation in the left hemisphere. By contrast, slow tempi induced decreased frontal activation in the left hemisphere. Some interactive effects were found between mode and tempo: an increase in tempo modulated emotion differently depending on the mode of the piece.
Affiliation(s)
- Konstantinos Trochidis
- McMaster University, Department of Psychology, Neuroscience and Behavior, Hamilton, Canada
- Emmanuel Bigand
- University of Burgundy, Department of Cognitive Psychology, Dijon, France
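The frontal asymmetry measure at the heart of this literature can be stated concretely. The sketch below computes a conventional alpha-asymmetry index, ln(right) minus ln(left) alpha power, on simulated F3/F4 signals; the band limits and interpretation follow common convention and are assumptions, not details taken from this paper.

```python
# Common-convention sketch on simulated data, not the paper's code: the
# frontal alpha-asymmetry index ln(right) - ln(left), where relatively
# less left alpha is read as relatively greater left-frontal activation.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(2)
fs = 256
f3 = rng.normal(size=fs * 30)     # left-frontal channel (F3), 30 s simulated
f4 = rng.normal(size=fs * 30)     # right-frontal channel (F4)

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density within the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Positive values (more right-hemisphere alpha) indicate relatively greater
# left-hemisphere activation, under the alpha-inverse-activation assumption.
asymmetry = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"frontal alpha asymmetry: {asymmetry:+.3f}")
```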
88
Brattico E, Bogert B, Jacobsen T. Toward a neural chronometry for the aesthetic experience of music. Front Psychol 2013; 4:206. [PMID: 23641223 PMCID: PMC3640187 DOI: 10.3389/fpsyg.2013.00206] [Citation(s) in RCA: 91] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2012] [Accepted: 04/02/2013] [Indexed: 01/06/2023] Open
Abstract
Music is often studied as a cognitive domain alongside language. The emotional aspects of music have also been shown to be important, but views on their nature diverge. For instance, the specific emotions that music induces and how they relate to emotional expression are still under debate. Here we propose a mental and neural chronometry of the aesthetic experience of music initiated and mediated by external and internal contexts such as intentionality, background mood, attention, and expertise. The initial stages necessary for an aesthetic experience of music are feature analysis, integration across modalities, and cognitive processing on the basis of long-term knowledge. These stages are common to individuals belonging to the same musical culture. The initial emotional reactions to music include the startle reflex, core "liking," and arousal. Subsequently, discrete emotions are perceived and induced. Presumably somatomotor processes synchronizing the body with the music also come into play here. The subsequent stages, in which cognitive, affective, and decisional processes intermingle, require controlled cross-modal neural processes to result in aesthetic emotions, aesthetic judgments, and conscious liking. These latter aesthetic stages often require attention, intentionality, and expertise for their full actualization.
Affiliation(s)
- Elvira Brattico
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Jyväskylä, Finland
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, Helsinki, Finland
- Brigitte Bogert
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Finnish Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Jyväskylä, Finland
- Thomas Jacobsen
- Experimental Psychology Unit, Faculty of Humanities and Social Sciences, Helmut Schmidt University/University of the Federal Armed Forces Hamburg, Hamburg, Germany
89
Maes PJ, Leman M. The influence of body movements on children's perception of music with an ambiguous expressive character. PLoS One 2013; 8:e54682. [PMID: 23358805 PMCID: PMC3554646 DOI: 10.1371/journal.pone.0054682] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2012] [Accepted: 12/14/2012] [Indexed: 11/19/2022] Open
Abstract
The theory of embodied music cognition states that the perception and cognition of music is firmly, although not exclusively, linked to action patterns associated with that music. In this regard, the focus lies mostly on how music promotes certain action tendencies (e.g., dance, entrainment). Only recently have studies started to devote attention to the reciprocal effects that people's body movements may exert on how people perceive certain aspects of music and sound (e.g., pitch, meter, musical preference). The present study positions itself in this line of research. The central research question is whether expressive body movements, systematically paired with music, can modulate children's perception of musical expressiveness. We present a behavioral experiment in which different groups of children (7–8 years, N = 46) either repeatedly performed a happy or a sad choreography in response to expressively ambiguous music or merely listened to that music. The results show that children's perception of musical expressiveness is indeed modulated in accordance with the expressive character of the dance choreography performed to the music. This finding supports theories that claim a strong connection between action and perception, although further research is needed to uncover the details of this connection.
90
Olsen KN, Stevens CJ. Psychophysiological Response to Acoustic Intensity Change in a Musical Chord. J Psychophysiol 2013. [DOI: 10.1027/0269-8803/a000082] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
This paper investigates psychological and psychophysiological components of arousal and emotional response to a violin chord stimulus comprising continuous increases (up-ramp) or decreases (down-ramp) of intensity. A factorial experiment manipulated the direction of intensity change (60–90 dB SPL up-ramp, 90–60 dB SPL down-ramp) and duration (1.8 s, 3.6 s) within subjects (N = 45). Dependent variables were ratings of emotional arousal, valence, and loudness change, and a fine-grained analysis of the event-related skin conductance response (SCR). As hypothesized, relative to down-ramps, musical up-ramps elicited significantly higher ratings of emotional arousal and loudness change, with marginally longer SCR rise times. However, SCR magnitude was greater in response to musical down-ramps. The implications of acoustic intensity change for music-induced emotion and auditory warning perception are discussed.
Affiliation(s)
- Kirk N. Olsen
- MARCS Institute and School of Social Sciences and Psychology, University of Western Sydney, Penrith, NSW, Australia
- Catherine J. Stevens
- MARCS Institute and School of Social Sciences and Psychology, University of Western Sydney, Penrith, NSW, Australia
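The up-ramp/down-ramp manipulation can be made concrete with a few lines of signal generation. The sketch below applies a linear 60-to-90 dB trajectory (or its reverse) to a sine-tone stand-in for the violin chord; the reference level and carrier are assumptions for illustration, not the study's actual materials.

```python
# Illustrative stimulus construction, not the study's materials: apply a
# linear 60->90 dB SPL up-ramp (or its reverse) to a sine-tone stand-in
# for the violin chord. Reference level and carrier are assumptions.
import numpy as np

fs = 44100
duration = 1.8                                  # one of the two study durations (s)
t = np.linspace(0, duration, int(fs * duration), endpoint=False)
carrier = np.sin(2 * np.pi * 440 * t)           # stand-in for the chord

def ramp(signal, db_start, db_end, db_ref=90.0):
    """Scale a signal along a linear dB trajectory, relative to db_ref."""
    db = np.linspace(db_start, db_end, signal.size)
    return signal * 10 ** ((db - db_ref) / 20)  # dB -> amplitude factor

up_ramp = ramp(carrier, 60, 90)                 # continuously rising intensity
down_ramp = ramp(carrier, 90, 60)               # continuously falling intensity

rms = lambda x: np.sqrt(np.mean(x ** 2))
print("up-ramp RMS, first vs last 100 ms:",
      rms(up_ramp[:fs // 10]), rms(up_ramp[-fs // 10:]))
```

A 30 dB span corresponds to an amplitude ratio of about 31.6, which the printed RMS values reflect.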
91
Brandt A, Gebrian M, Slevc LR. Music and early language acquisition. Front Psychol 2012; 3:327. [PMID: 22973254 PMCID: PMC3439120 DOI: 10.3389/fpsyg.2012.00327] [Citation(s) in RCA: 61] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2012] [Accepted: 08/15/2012] [Indexed: 11/13/2022] Open
Abstract
Language is typically viewed as fundamental to human intelligence. Music, while recognized as a human universal, is often treated as an ancillary ability - one dependent on or derivative of language. In contrast, we argue that it is more productive from a developmental perspective to describe spoken language as a special type of music. A review of existing studies presents a compelling case that musical hearing and ability are essential to language acquisition. In addition, we challenge the prevailing view that music cognition matures more slowly than language and is more difficult; instead, we argue that music learning matches the speed and effort of language acquisition. We conclude that music merits a central place in our understanding of human development.
Affiliation(s)
- Anthony Brandt
- Shepherd School of Music, Rice University, Houston, TX, USA
- Molly Gebrian
- Shepherd School of Music, Rice University, Houston, TX, USA
- L. Robert Slevc
- Psychology, Language and Music Cognition Lab, University of Maryland, College Park, MD, USA
92
93
Virtala P, Huotilainen M, Putkinen V, Makkonen T, Tervaniemi M. Musical training facilitates the neural discrimination of major versus minor chords in 13-year-old children. Psychophysiology 2012; 49:1125-32. [PMID: 22681183 DOI: 10.1111/j.1469-8986.2012.01386.x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2011] [Accepted: 03/17/2012] [Indexed: 11/28/2022]
Abstract
Music practice since childhood affects the development of hearing skills. An important classification in Western music is the major-minor dichotomy of chords. Its preattentive auditory discrimination was studied here using a mismatch negativity (MMN) paradigm in 13-year-olds with active hobbies, either music-related (music group) or other (control group). In a context of root major chords, root minor chords and inverted major chords were presented infrequently. The interval structure of inverted majors differs more from root majors than does that of root minors; however, the identity of the chords is the same in inverted and root majors (major) but different in root minors. The deviant chords introduced no new frequencies to the paradigm, preventing an MMN caused by purely physical deviance. An MMN was elicited by the minor chords but not by the inverted majors, and its amplitude was larger in the music group than in the control group. Thus, these conceptual discrimination skills are already present at the preattentive processing level of the auditory cortex, and musical training can advance them.
Affiliation(s)
- P Virtala
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland.
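For concreteness, the sketch below generates a pseudorandom oddball sequence of the kind such MMN paradigms use, with root-major standards and two infrequent deviants (root minor, inverted major). The deviant probability and the no-consecutive-deviants constraint are assumed conventions, not parameters reported in the paper.

```python
# Assumed-convention sketch, not the authors' code: a pseudorandom oddball
# sequence with root-major standards and two infrequent deviant types.
# The deviant rate and no-consecutive-deviants rule are assumptions.
import random

random.seed(3)
N_TRIALS = 600
P_DEVIANT = 0.15                     # assumed probability per deviant type

sequence = []
for _ in range(N_TRIALS):
    r = random.random()
    if r < P_DEVIANT:
        sequence.append("root_minor")       # deviant: different chord identity
    elif r < 2 * P_DEVIANT:
        sequence.append("inverted_major")   # deviant: same identity, new interval structure
    else:
        sequence.append("root_major")       # standard

# MMN designs typically forbid back-to-back deviants; enforce that here.
for i in range(1, N_TRIALS):
    if sequence[i] != "root_major" and sequence[i - 1] != "root_major":
        sequence[i] = "root_major"

print({chord: sequence.count(chord) for chord in set(sequence)})
```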
94
Moghimi S, Kushki A, Power S, Guerguerian AM, Chau T. Automatic detection of a prefrontal cortical response to emotionally rated music using multi-channel near-infrared spectroscopy. J Neural Eng 2012; 9:026022. [PMID: 22419117 DOI: 10.1088/1741-2560/9/2/026022] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
95
Abstract
An overview of the computational prediction of emotional responses to music is presented. The communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlational design. The construction of the computational model is divided into four separate phases, each with a different focus for evaluation: the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and on the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain most of the variation in listeners' self-reports of the emotions expressed by music, and they show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed.
Affiliation(s)
- Tuomas Eerola
- Department of Music, Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Jyväskylä, Finland.
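The model-construction phases described above reduce, at their core, to regressing continuous emotion ratings onto candidate audio features. The sketch below shows that skeleton with simulated features and ratings; the feature names, data, and the ridge-regression choice are illustrative assumptions, not the chapter's actual model.

```python
# Hypothetical skeleton, not the chapter's model: regress continuous emotion
# ratings onto candidate audio features. Feature names, data, and the ridge
# choice are all assumptions for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Stand-ins for audio-derived features (tempo, mode strength, brightness,
# articulation, RMS energy); real pipelines extract these with a toolbox
# such as librosa or MIRtoolbox.
n_clips = 110
features = rng.normal(size=(n_clips, 5))

# Simulated mean valence ratings for each soundtrack excerpt.
valence = features @ np.array([0.6, 0.8, 0.3, 0.2, 0.4]) + rng.normal(0, 0.4, n_clips)

model = RidgeCV(alphas=np.logspace(-3, 3, 13))
scores = cross_val_score(model, features, valence, cv=10, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```

Cross-validated R² is the natural yardstick here, since the chapter's final phase is an overall evaluation of how well the model generalizes.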
96
Abstract
Creating emotionally sensitive machines will significantly enhance the interaction between humans and machines. In this chapter we focus on enabling this ability for music. Music is extremely powerful at inducing emotions, so if machines can apprehend the emotions in music, they gain a relevant competence for communicating with humans. We review theories of music and emotion and detail different representations of musical emotions from the literature, together with related musical features. We then focus on techniques to detect emotion in music from audio content and, as a proof of concept, detail a machine learning method for building such a system. We also review current state-of-the-art results, provide evaluations, and give some insights into possible applications and future trends of these techniques.
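As a minimal stand-in for the machine learning method the chapter details, the sketch below trains a generic supervised classifier to map audio-derived feature vectors to categorical emotion labels. Everything here - the features, labels, and the choice of a random forest - is an assumption for illustration, not the chapter's implementation.

```python
# Generic stand-in, not the chapter's implementation: a supervised classifier
# mapping audio-derived feature vectors to categorical emotion labels.
# Features, labels, and the random-forest choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

# Simulated feature vectors per excerpt (e.g., MFCC means, tempo, spectral
# statistics) and simulated emotion labels.
X = rng.normal(size=(400, 20))
y = rng.integers(0, 4, size=400)      # 0=happy, 1=sad, 2=peaceful, 3=angry

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```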
97
Brattico E, Alluri V, Bogert B, Jacobsen T, Vartiainen N, Nieminen S, Tervaniemi M. A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics. Front Psychol 2011; 2:308. [PMID: 22144968 PMCID: PMC3227856 DOI: 10.3389/fpsyg.2011.00308] [Citation(s) in RCA: 110] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2011] [Accepted: 10/13/2011] [Indexed: 12/12/2022] Open
Abstract
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
Affiliation(s)
- Elvira Brattico
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki, Helsinki, Finland
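Spectral centroid, the acoustic feature this study links to perceptual brightness, has a standard definition: the amplitude-weighted mean frequency of the spectrum. The sketch below computes it for a synthetic two-partial frame; the signal is an assumption for illustration, not the study's stimuli.

```python
# Standard definition, not the study's exact analysis: spectral centroid as
# the amplitude-weighted mean frequency of a frame's spectrum. The synthetic
# two-partial signal is an assumption for illustration.
import numpy as np

fs = 22050
t = np.arange(fs) / fs
frame = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)

spectrum = np.abs(np.fft.rfft(frame))
freqs = np.fft.rfftfreq(frame.size, d=1 / fs)

centroid = (freqs * spectrum).sum() / spectrum.sum()
print(f"spectral centroid: {centroid:.1f} Hz")  # well above 220 Hz: the bright partial pulls it up
```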
98
Pereira CS, Teixeira J, Figueiredo P, Xavier J, Castro SL, Brattico E. Music and emotions in the brain: familiarity matters. PLoS One 2011; 6:e27241. [PMID: 22110619 PMCID: PMC3217963 DOI: 10.1371/journal.pone.0027241] [Citation(s) in RCA: 198] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2011] [Accepted: 10/12/2011] [Indexed: 11/22/2022] Open
Abstract
The importance of music in our daily life has given rise to an increased number of studies addressing the brain regions involved in its appreciation. Some of these studies controlled only for the familiarity of the stimuli, while others relied on pleasantness ratings, and others still on musical preferences. With a listening test and a functional magnetic resonance imaging (fMRI) experiment, we wished to clarify the role of familiarity in the brain correlates of music appreciation by controlling, in the same study, for both familiarity and musical preferences. First, we conducted a listening test in which participants rated the familiarity and liking of song excerpts from the pop/rock repertoire, allowing us to select a personalized set of stimuli per subject. Then, we used a passive listening paradigm in fMRI to study music appreciation in a naturalistic condition with increased ecological validity. Brain activation data revealed that broad emotion-related limbic and paralimbic regions, as well as the reward circuitry, were significantly more active for familiar relative to unfamiliar music. Smaller regions in the cingulate cortex and frontal lobe, including the motor cortex and Broca's area, were more active in response to liked music than to disliked music. Hence, familiarity seems to be a crucial factor in making listeners emotionally engaged with music, as revealed by the fMRI data.
Affiliation(s)
- Carlos Silva Pereira
- Institute for Biomedical Sciences Abel Salazar (ICBAS), University of Porto, Porto, Portugal.
99
Hunter PG, Glenn Schellenberg E, Stalinski SM. Liking and identifying emotionally expressive music: Age and gender differences. J Exp Child Psychol 2011; 110:80-93. [DOI: 10.1016/j.jecp.2011.04.001] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2010] [Revised: 03/29/2011] [Accepted: 04/01/2011] [Indexed: 10/18/2022]
100
Expressiveness in musical emotions. Psychol Res 2011; 76:641-53. [DOI: 10.1007/s00426-011-0361-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2010] [Accepted: 06/23/2011] [Indexed: 10/18/2022]