1. Valentin O, Lehmann A, Nguyen D, Paquette S. Integrating Emotion Perception in Rehabilitation Programs for Cochlear Implant Users: A Call for a More Comprehensive Approach. J Speech Lang Hear Res 2024; 67:1635-1642. [PMID: 38619441 DOI: 10.1044/2024_jslhr-23-00660]
Abstract
PURPOSE Postoperative rehabilitation programs for cochlear implant (CI) recipients primarily emphasize enhancing speech perception. However, effective communication in everyday social interactions necessitates consideration of diverse verbal social cues to facilitate language comprehension. Failure to discern emotional expressions may lead to maladjusted social behavior, underscoring the importance of integrating social cues perception into rehabilitation initiatives to enhance CI users' well-being. After conventional rehabilitation, CI users demonstrate varying levels of emotion perception abilities. This disparity notably impacts young CI users, whose emotion perception deficit can extend to social functioning, encompassing coping strategies and social competence, even when relying on nonauditory cues such as facial expressions. Knowing that emotion perception abilities generally decrease with age, acknowledging emotion perception impairments in aging CI users is crucial, especially since a direct correlation between quality-of-life scores and vocal emotion recognition abilities has been observed in adult CI users. After briefly reviewing the scope of CI rehabilitation programs and summarizing the mounting evidence on CI users' emotion perception deficits and their impact, we will present our recommendations for embedding emotional training as part of enriched and standardized evaluation/rehabilitation programs that can improve CI users' social integration and quality of life. CONCLUSIONS Evaluating all aspects, including emotion perception, in CI rehabilitation programs is crucial because it ensures a comprehensive approach that enhances speech comprehension and the emotional dimension of communication, potentially improving CI users' social interaction and overall well-being. The development of emotion perception training holds promises for CI users and individuals grappling with various forms of hearing loss and sensory deficits. Ultimately, adopting such a comprehensive approach has the potential to significantly elevate the overall quality of life for a broad spectrum of patients.
Affiliation(s)
- Olivier Valentin
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Don Nguyen
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Psychology, Faculty of Arts and Science, Trent University, Peterborough, Ontario, Canada

2. Paquette S, Gouin S, Lehmann A. Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals. BMC Neurol 2024; 24:115. [PMID: 38589815 PMCID: PMC11000345 DOI: 10.1186/s12883-024-03616-0]
Abstract
BACKGROUND Although cochlear implants can restore auditory inputs to deafferented auditory cortices, the quality of the sound signal transmitted to the brain is severely degraded, limiting functional outcomes in terms of speech perception and emotion perception. The latter deficit negatively impacts cochlear implant users' social integration and quality of life; however, emotion perception is not currently part of rehabilitation. Developing rehabilitation programs incorporating emotional cognition requires a deeper understanding of cochlear implant users' residual emotion perception abilities. METHODS To identify the neural underpinnings of these residual abilities, we investigated whether machine learning techniques could be used to identify emotion-specific patterns of neural activity in cochlear implant users. Using existing electroencephalography data from 22 cochlear implant users, we employed a random forest classifier to establish if we could model and subsequently predict from participants' brain responses the auditory emotions (vocal and musical) presented to them. RESULTS Our findings suggest that consistent emotion-specific biomarkers exist in cochlear implant users, which could be used to develop effective rehabilitation programs incorporating emotion perception training. CONCLUSIONS This study highlights the potential of machine learning techniques to improve outcomes for cochlear implant users, particularly in terms of emotion perception.
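The classification pipeline summarized above can be illustrated with a minimal sketch: a scikit-learn random forest evaluated with cross-validation on trial-wise EEG feature vectors. The array shapes, feature counts, and emotion labels below are placeholder assumptions for illustration, not the authors' actual data or pipeline.

```python
# Minimal sketch of emotion decoding from EEG features with a random forest.
# Shapes, features, and labels are illustrative placeholders only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)

# Hypothetical data: 22 participants x 60 trials, each trial summarized by
# 64 channels x 5 frequency-band power values = 320 features.
n_trials, n_features = 22 * 60, 64 * 5
X = rng.normal(size=(n_trials, n_features))               # EEG feature matrix (placeholder)
y = rng.choice(["happy", "sad", "fear"], size=n_trials)   # auditory emotion labels (placeholder)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Cross-validated decoding accuracy; above-chance scores would suggest
# emotion-specific patterns detectable in the EEG responses.
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Mean decoding accuracy: {scores.mean():.3f} (chance ~ 0.333)")
```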
Affiliation(s)
- Sebastien Paquette
- Psychology Department, Faculty of Arts and Science, Trent University, Peterborough, ON, Canada.
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Samir Gouin
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
- Alexandre Lehmann
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada

3. Wu D, Jia X, Rao W, Dou W, Li Y, Li B. Construction of a Chinese traditional instrumental music dataset: A validated set of naturalistic affective music excerpts. Behav Res Methods 2024; 56:3757-3778. [PMID: 38702502 PMCID: PMC11133124 DOI: 10.3758/s13428-024-02411-6]
Abstract
Music is omnipresent among human cultures and moves us both physically and emotionally. The perception of emotions in music is influenced by both psychophysical and cultural factors. Chinese traditional instrumental music differs significantly from Western music in cultural origin and music elements. However, previous studies on music emotion perception are based almost exclusively on Western music. Therefore, the construction of a dataset of Chinese traditional instrumental music is important for exploring the perception of music emotions in the context of Chinese culture. The present dataset included 273 10-second naturalistic music excerpts. We provided rating data for each excerpt on ten variables: familiarity, dimensional emotions (valence and arousal), and discrete emotions (anger, gentleness, happiness, peacefulness, sadness, solemnness, and transcendence). The excerpts were rated by a total of 168 participants on a seven-point Likert scale for the ten variables. Three labels for the excerpts were obtained: familiarity, discrete emotion, and cluster. Our dataset demonstrates good reliability, and we believe it could contribute to cross-cultural studies on emotional responses to music.
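As a rough illustration of the kind of reliability check such a rating dataset invites, the sketch below computes Cronbach's alpha over a simulated excerpts-by-raters matrix for one rating dimension. The matrix is a placeholder; the published dataset's actual reliability analysis may differ.

```python
# Sketch: internal-consistency (Cronbach's alpha) check for Likert ratings.
# The matrix is simulated; it stands in for a 273 excerpts x 168 raters table
# of 7-point ratings on one dimension (e.g., valence).
import numpy as np

def cronbach_alpha(matrix):
    """matrix: 2-D array, rows = rated items (excerpts), columns = raters."""
    matrix = np.asarray(matrix, dtype=float)
    k = matrix.shape[1]                              # number of raters
    rater_vars = matrix.var(axis=0, ddof=1).sum()    # sum of per-rater variances
    total_var = matrix.sum(axis=1).var(ddof=1)       # variance of excerpt totals
    return k / (k - 1) * (1.0 - rater_vars / total_var)

rng = np.random.default_rng(1)
ratings = rng.integers(1, 8, size=(273, 168))        # placeholder 7-point ratings
print(f"Cronbach's alpha (simulated valence ratings): {cronbach_alpha(ratings):.2f}")
```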
Affiliation(s)
- Di Wu
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Xi Jia
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Wenxin Rao
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Wenjie Dou
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Yangping Li
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- School of Foreign Studies, Xi'an Jiaotong University, Xi'an, 710049, China
- Baoming Li
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China.
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China.

4. Paquette S, Deroche MLD, Goffi-Gomez MV, Hoshino ACH, Lehmann A. Predicting emotion perception abilities for cochlear implant users. Int J Audiol 2023; 62:946-954. [PMID: 36047767 DOI: 10.1080/14992027.2022.2111611]
Abstract
OBJECTIVE In daily life, failure to perceive emotional expressions can result in maladjusted behaviour. For cochlear implant users, perceiving emotional cues in sounds remains challenging, and the factors explaining the variability in patients' sensitivity to emotions are currently poorly understood. Understanding how these factors relate to auditory proficiency is a major challenge of cochlear implant research and is critical in addressing patients' limitations. DESIGN To fill this gap, we evaluated different auditory perception aspects in implant users (pitch discrimination, music processing and speech intelligibility) and correlated them to their performance in an emotion recognition task. STUDY SAMPLE Eighty-four adults (18-76 years old) participated in our investigation; 42 cochlear implant users and 42 controls. Cochlear implant users performed worse than their controls on all tasks, and emotion perception abilities were correlated to their age and their clinical outcome as measured in the speech intelligibility task. RESULTS As previously observed, emotion perception abilities declined with age (here by about 2-3% in a decade). Interestingly, even when emotional stimuli were musical, CI users' skills relied more on processes underlying speech intelligibility. CONCLUSIONS These results suggest that speech processing remains a clinical priority even when one is interested in affective skills.
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- M L D Deroche
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- Laboratory for Hearing and Cognition, Psychology Department, Concordia University, Montreal, Canada
- M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A Lehmann
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada

5. Ross P, Williams E, Herbert G, Manning L, Lee B. Turn that music down! Affective musical bursts cause an auditory dominance in children recognizing bodily emotions. J Exp Child Psychol 2023; 230:105632. [PMID: 36731279 DOI: 10.1016/j.jecp.2023.105632]
Abstract
Previous work has shown that different sensory channels are prioritized across the life course, with children preferentially responding to auditory information. The aim of the current study was to investigate whether the mechanism that drives this auditory dominance in children occurs at the level of encoding (overshadowing) or when the information is integrated to form a response (response competition). Given that response competition is dependent on a modality integration attempt, a combination of stimuli that could not be integrated was used so that if children's auditory dominance persisted, this would provide evidence for the overshadowing over the response competition mechanism. Younger children (≤7 years), older children (8-11 years), and adults (18+ years) were asked to recognize the emotion (happy or fearful) in either nonvocal auditory musical emotional bursts or human visual bodily expressions of emotion in three conditions: unimodal, congruent bimodal, and incongruent bimodal. We found that children performed significantly worse at recognizing emotional bodies when they heard (and were told to ignore) musical emotional bursts. This provides the first evidence for auditory dominance in both younger and older children when presented with modally incongruent emotional stimuli. The continued presence of auditory dominance, despite the lack of modality integration, was taken as supportive evidence for the overshadowing explanation. These findings are discussed in relation to educational considerations, and future sensory dominance investigations and models are proposed.
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham DH1 3LE, UK.
- Ella Williams
- Department of Psychology, Durham University, Durham DH1 3LE, UK; Oxford Neuroscience, University of Oxford, Oxford OX3 9DU, UK
- Gemma Herbert
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Manning
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Becca Lee
- Department of Psychology, Durham University, Durham DH1 3LE, UK

6. Donhauser PW, Klein D. Audio-Tokens: A toolbox for rating, sorting and comparing audio samples in the browser. Behav Res Methods 2023; 55:508-515. [PMID: 35297013 PMCID: PMC10027774 DOI: 10.3758/s13428-022-01803-w]
Abstract
Here we describe a JavaScript toolbox to perform online rating studies with auditory material. The main feature of the toolbox is that audio samples are associated with visual tokens on the screen that control audio playback and can be manipulated depending on the type of rating. This allows the collection of single- and multidimensional feature ratings, as well as categorical and similarity ratings. The toolbox (github.com/pwdonh/audio_tokens) can be used via a plugin for the widely used jsPsych, as well as using plain JavaScript for custom applications. We expect the toolbox to be useful in psychological research on speech and music perception, as well as for the curation and annotation of datasets in machine learning.
Affiliation(s)
- Peter W Donhauser
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, H3A 2B4, Canada.
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany.
- Denise Klein
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, H3A 2B4, Canada.
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, H3G 2A8, Canada.

7. Harding EE, Gaudrain E, Hrycyk IJ, Harris RL, Tillmann B, Maat B, Free RH, Başkent D. Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content. Trends Hear 2023; 27:23312165221141142. [PMID: 36628512 PMCID: PMC9837297 DOI: 10.1177/23312165221141142]
Abstract
While previous research investigating music emotion perception of cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music - often not well perceived by CI users - reportedly conveys emotional valence (positive, negative), it remains unclear how the quality of spectral content contributes to valence perception. Therefore, the current study used vocoders to vary temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied with two carriers (sinewave or noise; primarily modulating temporal information), and two filter orders (low or high; primarily modulating spectral information). Results indicated that emotion categorization was above-chance in vocoded excerpts but poorer than in a non-vocoded control condition. Among vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect while better spectral content (high filter order) improved it with a small effect. Arousal features were comparably transmitted in non-vocoded and vocoded conditions, indicating that lower temporal content successfully conveyed emotional arousal. Valence feature transmission steeply declined in vocoded conditions, revealing that valence perception was difficult for both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the CI user signal may immediately benefit their music emotion perception.
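For readers unfamiliar with vocoding, the sketch below shows one common way such stimuli are constructed: the signal is band-pass filtered into channels, each channel's temporal envelope is extracted, and the envelope is re-imposed on a noise or sinewave carrier. The channel count, frequency range, cutoffs, and filter orders are arbitrary placeholders, not the parameters used in the study.

```python
# Minimal channel-vocoder sketch (illustrative parameters, not the study's).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def vocode(signal, fs, n_channels=8, carrier="noise", order=4, env_cutoff=50.0):
    """Re-synthesize `signal` from per-channel envelopes on noise/sine carriers."""
    # Log-spaced channel edges between 100 Hz and 6 kHz (placeholder range).
    edges = np.logspace(np.log10(100.0), np.log10(6000.0), n_channels + 1)
    env_sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")
    t = np.arange(len(signal)) / fs
    out = np.zeros(len(signal))
    rng = np.random.default_rng(0)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        # Rectify and low-pass filter to obtain the temporal envelope.
        env = np.clip(sosfiltfilt(env_sos, np.abs(band)), 0.0, None)
        if carrier == "noise":
            carrier_wave = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        else:  # sinewave carrier at the channel's geometric centre frequency
            carrier_wave = np.sin(2 * np.pi * np.sqrt(lo * hi) * t)
        out += env * carrier_wave
    return out

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
demo = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))  # toy input
vocoded_sine = vocode(demo, fs, carrier="sinewave")
vocoded_noise = vocode(demo, fs, carrier="noise")
```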
Affiliation(s)
- Eleanor E. Harding
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Correspondence: Eleanor E. Harding, Department of Otorhinolaryngology, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, France
- Imke J. Hrycyk
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Robert L. Harris
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, France
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Rolien H. Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands

8. Sivathasan S, Dahary H, Burack JA, Quintin EM. Basic emotion recognition of children on the autism spectrum is enhanced in music and typical for faces and voices. PLoS One 2023; 18:e0279002. [PMID: 36630376 PMCID: PMC9833514 DOI: 10.1371/journal.pone.0279002]
Abstract
In contrast with findings of reduced facial and vocal emotional recognition (ER) accuracy, children on the autism spectrum (AS) demonstrate comparable ER skills to those of typically-developing (TD) children using music. To understand the specificity of purported ER differences, the goal of this study was to examine ER from music compared with faces and voices among children on the AS and TD children. Twenty-five children on the AS and 23 TD children (6-13 years) completed an ER task, using categorical (happy, sad, fear) and dimensional (valence, arousal) ratings, of emotions presented via music, faces, or voices. Compared to the TD group, the AS group showed a relative ER strength from music, and comparable performance from faces and voices. Although both groups demonstrated greater vocal ER accuracy, the children on the AS performed equally well with music and faces, whereas the TD children performed better with faces than with music. Both groups performed comparably with dimensional ratings, except for greater variability by the children on the AS in valence ratings for happy emotions. These findings highlight a need to re-examine ER of children on the AS, and to consider how facilitating strengths-based approaches can re-shape our thinking about and support for persons on the AS.
Affiliation(s)
- Shalini Sivathasan
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Center for Research on Music, Brain, and Language, McGill University, Montreal, Quebec, Canada
- Hadas Dahary
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Center for Research on Music, Brain, and Language, McGill University, Montreal, Quebec, Canada
- Jacob A. Burack
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Eve-Marie Quintin
- Department of Educational and Counselling Psychology, McGill University, Montreal, Quebec, Canada
- Azrieli Centre for Autism Research, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Center for Research on Music, Brain, and Language, McGill University, Montreal, Quebec, Canada

9. Martins I, Lima CF, Pinheiro AP. Enhanced salience of musical sounds in singers and instrumentalists. Cogn Affect Behav Neurosci 2022; 22:1044-1062. [PMID: 35501427 DOI: 10.3758/s13415-022-01007-x]
Abstract
Music training has been linked to facilitated processing of emotional sounds. However, most studies have focused on speech, and less is known about musicians' brain responses to other emotional sounds and in relation to instrument-specific experience. The current study combined behavioral and EEG methods to address two novel questions related to the perception of auditory emotional cues: whether and how long-term music training relates to a distinct emotional processing of nonverbal vocalizations and music; and whether distinct training profiles (vocal vs. instrumental) modulate brain responses to emotional sounds from early to late processing stages. Fifty-eight participants completed an EEG implicit emotional processing task, in which musical and vocal sounds differing in valence were presented as nontarget stimuli. After this task, participants explicitly evaluated the same sounds regarding the emotion being expressed, their valence, and arousal. Compared with nonmusicians, musicians displayed enhanced salience detection (P2), attention orienting (P3), and elaborative processing (Late Positive Potential) of musical (vs. vocal) sounds in event-related potential (ERP) data. The explicit evaluation of musical sounds also was distinct in musicians: accuracy in the emotional recognition of musical sounds was similar across valence types in musicians, who also judged musical sounds to be more pleasant and more arousing than nonmusicians. Specific profiles of music training (singers vs. instrumentalists) did not relate to differences in the processing of vocal vs. musical sounds. Together, these findings reveal that music has a privileged status in the auditory system of long-term musically trained listeners, irrespective of their instrument-specific experience.
Affiliation(s)
- Inês Martins
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisbon, Portugal
- Ana P Pinheiro
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, 1649-013, Lisbon, Portugal.

10. Wołoszyn K, Hohol M, Kuniecki M, Winkielman P. Restricting movements of lower face leaves recognition of emotional vocalizations intact but introduces a valence positivity bias. Sci Rep 2022; 12:16101. [PMID: 36167865 PMCID: PMC9515079 DOI: 10.1038/s41598-022-18888-0]
Abstract
Blocking facial mimicry can disrupt recognition of emotion stimuli. Many previous studies have focused on facial expressions, and it remains unclear whether this generalises to other types of emotional expressions. Furthermore, by emphasizing categorical recognition judgments, previous studies neglected the role of mimicry in other processing stages, including dimensional (valence and arousal) evaluations. In the study presented herein, we addressed both issues by asking participants to listen to brief non-verbal vocalizations of four emotion categories (anger, disgust, fear, happiness) and neutral sounds under two conditions. One of the conditions included blocking facial mimicry by creating constant tension on the lower face muscles, in the other condition facial muscles remained relaxed. After each stimulus presentation, participants evaluated sounds' category, valence, and arousal. Although the blocking manipulation did not influence emotion recognition, it led to higher valence ratings in a non-category-specific manner, including neutral sounds. Our findings suggest that somatosensory and motor feedback play a role in the evaluation of affect vocalizations, perhaps introducing a directional bias. This distinction between stimulus recognition, stimulus categorization, and stimulus evaluation is important for understanding what cognitive and emotional processing stages involve somatosensory and motor processes.
Affiliation(s)
- Kinga Wołoszyn
- Institute of Psychology, Jagiellonian University, Kraków, Poland.
- Mateusz Hohol
- Copernicus Center for Interdisciplinary Studies, Jagiellonian University, Kraków, Poland
- Michał Kuniecki
- Institute of Psychology, Jagiellonian University, Kraków, Poland
- Piotr Winkielman
- Department of Psychology, University of California San Diego, La Jolla, USA.

11. Kuttenreich AM, von Piekartz H, Heim S. Is There a Difference in Facial Emotion Recognition after Stroke with vs. without Central Facial Paresis? Diagnostics (Basel) 2022; 12:1721. [PMID: 35885625 PMCID: PMC9325259 DOI: 10.3390/diagnostics12071721]
Abstract
The Facial Feedback Hypothesis (FFH) states that facial emotion recognition is based on the imitation of facial emotional expressions and the processing of physiological feedback. In the light of limited and contradictory evidence, this hypothesis is still being debated. Therefore, in the present study, emotion recognition was tested in patients with central facial paresis after stroke. Performance in facial vs. auditory emotion recognition was assessed in patients with vs. without facial paresis. The accuracy of objective facial emotion recognition was significantly lower in patients with vs. without facial paresis and also in comparison to healthy controls. Moreover, for patients with facial paresis, the accuracy measure for facial emotion recognition was significantly worse than that for auditory emotion recognition. Finally, in patients with facial paresis, the subjective judgements of their own facial emotion recognition abilities differed strongly from their objective performances. This pattern of results demonstrates a specific deficit in facial emotion recognition in central facial paresis and thus provides support for the FFH and points out certain effects of stroke.
Affiliation(s)
- Anna-Maria Kuttenreich
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany;
- Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Department of Otorhinolaryngology, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Center of Rare Diseases Jena, Jena University Hospital, Am Klinikum 1, 07747 Jena, Germany
- Correspondence: Tel.: +49-3641-9329398
- Harry von Piekartz
- Department of Physical Therapy and Rehabilitation Science, Osnabrück University of Applied Sciences, Albrechtstr. 30, 49076 Osnabrück, Germany;
- Stefan Heim
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany;
- Department of Neurology, Medical Faculty, RWTH Aachen University, Pauwelsstr. 30, 52074 Aachen, Germany
- Institute of Neuroscience and Medicine (INM−1), Forschungszentrum Jülich, Leo-Brand-Str. 5, 52428 Jülich, Germany

12. Self-prioritization with unisensory and multisensory stimuli in a matching task. Atten Percept Psychophys 2022; 84:1666-1688. [PMID: 35538291 PMCID: PMC9232425 DOI: 10.3758/s13414-022-02498-z]
Abstract
A shape-label matching task is commonly used to examine the self-advantage in motor reaction-time responses (the Self-Prioritization Effect; SPE). In the present study, auditory labels were introduced, and, for the first time, responses to unisensory auditory, unisensory visual, and multisensory object-label stimuli were compared across block-type (i.e., trials blocked by sensory modality type, and intermixed trials of unisensory and multisensory stimuli). Auditory stimulus intensity was presented at either 50 dB (Group 1) or 70 dB (Group 2). The participants in Group 2 also completed a multisensory detection task, making simple speeded motor responses to the shape and sound stimuli and their multisensory combinations. In the matching task, the SPE was diminished in intermixed trials, and in responses to the unisensory auditory stimuli as compared with the multisensory (visual shape+auditory label) stimuli. In contrast, the SPE did not differ in responses to the unisensory visual and multisensory (auditory object+visual label) stimuli. The matching task was associated with multisensory ‘costs’ rather than gains, but response times to self- versus stranger-associated stimuli were differentially affected by the type of multisensory stimulus (auditory object+visual label or visual shape+auditory label). The SPE was thus modulated both by block-type and the combination of object and label stimulus modalities. There was no SPE in the detection task. Taken together, these findings suggest that the SPE with unisensory and multisensory stimuli is modulated by both stimulus- and task-related parameters within the matching task. The SPE does not transfer to a significant motor speed gain when the self-associations are not task-relevant.

13. A Preliminary Investigation on Frequency Dependant Cues for Human Emotions. Acoustics 2022. [DOI: 10.3390/acoustics4020028]
Abstract
The recent advances in Human-Computer Interaction and Artificial Intelligence have significantly increased the importance of identifying human emotions from different sensory cues. Hence, understanding the underlying relationships between emotions and sensory cues has become a subject of study in many fields including Acoustics, Psychology, Psychiatry, Neuroscience and Biochemistry. This work is a preliminary step towards investigating cues for human emotion on a fundamental level by aiming to establish relationships between tonal frequencies of sound and emotions. For that, an online perception test is conducted, in which participants are asked to rate the perceived emotions corresponding to each tone. The results show that a crossover point for four primary emotions lies in the frequency range of 417–440 Hz, thus consolidating the hypothesis that the frequency range of 432–440 Hz is neutral from a human emotion perspective. It is also observed that the frequency-dependant relationships between the emotion pairs Happy-Sad and Anger-Calm are approximately mirror-symmetric in nature.
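A hypothetical sketch of how pure-tone stimuli for such a rating test could be generated is given below; the frequency list, duration, and amplitude are assumptions loosely based on the ranges mentioned in the abstract.

```python
# Sketch: generate pure tones at candidate frequencies for an emotion-rating test.
# Frequencies span the ranges discussed above; duration and level are assumptions.
import numpy as np

fs = 44100                      # sample rate (Hz)
duration = 2.0                  # seconds per tone (assumed)
frequencies = [262, 330, 392, 417, 432, 440, 494, 523]  # Hz, illustrative set

t = np.arange(int(fs * duration)) / fs
ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.02)   # 20-ms onset/offset ramps

tones = {f: 0.5 * np.sin(2 * np.pi * f * t) * ramp for f in frequencies}
# Each entry in `tones` could be written to a WAV file and presented for
# Likert ratings of perceived emotion (e.g., happy, sad, angry, calm).
```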

14. Sharma V, Prakash NR, Kalra P. Depression status identification using autoencoder neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103568]

15.
Abstract
OBJECTIVE Discrepancies exist in reports of social cognition deficits in individuals with premanifest Huntington's disease (HD); however, the reason for this variability has not been investigated. The aims of this study were to (1) evaluate group- and individual-level social cognitive performance and (2) examine intra-individual variability (dispersion) across social cognitive domains in individuals with premanifest HD. METHOD Theory of mind (ToM), social perception, empathy, and social connectedness were evaluated in 35 individuals with premanifest HD and 29 healthy controls. Cut-off values beneath the median and 1.5 × the interquartile range below the 25th percentile (P25 - 1.5 × IQR) of healthy controls for each variable were established for a profiling method. Dispersion between social cognitive domains was also calculated. RESULTS Compared to healthy controls, individuals with premanifest HD performed worse on all social cognitive domains except empathy. Application of the profiling method revealed a large proportion of people with premanifest HD fell below healthy control median values across ToM (>80%), social perception (>57%), empathy (>54%), and social behaviour (>40%), with a percentage of these individuals displaying more pronounced impairments in empathy (20%) and ToM (22%). Social cognition dispersion did not differ between groups. No significant correlations were found between social cognitive domains and mood, sleep, and neurocognitive outcomes. CONCLUSIONS Significant group-level social cognition deficits were observed in the premanifest HD cohort. However, our profiling method showed that only a small percentage of these individuals experienced marked difficulties in social cognition, indicating the importance of individual-level assessments, particularly regarding future personalised treatments.

16. Macoir J, Tremblay MP, Wilson MA, Laforce R, Hudon C. The Importance of Being Familiar: The Role of Semantic Knowledge in the Activation of Emotions and Factual Knowledge from Music in the Semantic Variant of Primary Progressive Aphasia. J Alzheimers Dis 2021; 85:115-128. [PMID: 34776446 DOI: 10.3233/jad-215083]
Abstract
BACKGROUND The role of semantic knowledge in emotion recognition remains poorly understood. The semantic variant of primary progressive aphasia (svPPA) is a degenerative disorder characterized by progressive loss of semantic knowledge, while other cognitive abilities remain spared, at least in the early stages of the disease. The syndrome is therefore a reliable clinical model of semantic impairment allowing for testing the propositions made in theoretical models of emotion recognition. OBJECTIVE The main goal of this study was to investigate the role of semantic memory in the recognition of basic emotions conveyed by music in individuals with svPPA. METHODS The performance of 9 individuals with svPPA was compared to that of 32 control participants in tasks designed to investigate the ability: a) to differentiate between familiar and non-familiar musical excerpts, b) to associate semantic concepts to musical excerpts, and c) to recognize basic emotions conveyed by music. RESULTS Results revealed that individuals with svPPA showed preserved abilities to recognize familiar musical excerpts but impaired performance on the two other tasks. Moreover, recognition of basic emotions and association of musical excerpts with semantic concepts was significantly better for familiar than non-familiar musical excerpts in participants with svPPA. CONCLUSION Results of this study have important implications for theoretical models of emotion recognition and music processing. They suggest that impairment of semantic memory in svPPA affects both the activation of emotions and factual knowledge from music and that this impairment is modulated by familiarity with musical tunes.
Affiliation(s)
- Joël Macoir
- Département de Réadaptation, Faculté de Médecine, Université Laval, Québec, QC, Canada
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- Marie-Pier Tremblay
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- École de Psychologie, Université Laval, Québec, QC, Canada
- Maximiliano A Wilson
- Département de Réadaptation, Faculté de Médecine, Université Laval, Québec, QC, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
- Robert Laforce
- Clinique Interdisciplinaire de Mémoire (CIME) du CHU de Québec, Département des sciences neurologiques, Québec, QC, Canada
- Département de Médecine, Faculté de Médecine, Université Laval, Québec, QC, Canada
- Research Chair on Primary Progressive Aphasia - Fondation Famille Lemaire, Québec, QC, Canada
- Carol Hudon
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- École de Psychologie, Université Laval, Québec, QC, Canada

17. Putkinen V, Nazari-Farsani S, Seppälä K, Karjalainen T, Sun L, Karlsson HK, Hudson M, Heikkilä TT, Hirvonen J, Nummenmaa L. Decoding Music-Evoked Emotions in the Auditory and Motor Cortex. Cereb Cortex 2021; 31:2549-2560. [PMID: 33367590 DOI: 10.1093/cercor/bhaa373]
Abstract
Music can induce strong subjective experience of emotions, but it is debated whether these responses engage the same neural circuits as emotions elicited by biologically significant events. We examined the functional neural basis of music-induced emotions in a large sample (n = 102) of subjects who listened to emotionally engaging (happy, sad, fearful, and tender) pieces of instrumental music while their hemodynamic brain activity was measured with functional magnetic resonance imaging (fMRI). Ratings of the four categorical emotions and liking were used to predict hemodynamic responses in general linear model (GLM) analysis of the fMRI data. Multivariate pattern analysis (MVPA) was used to reveal discrete neural signatures of the four categories of music-induced emotions. To map neural circuits governing non-musical emotions, the subjects were scanned while viewing short emotionally evocative film clips. The GLM revealed that most emotions were associated with activity in the auditory, somatosensory, and motor cortices, cingulate gyrus, insula, and precuneus. Fear and liking also engaged the amygdala. In contrast, the film clips strongly activated limbic and cortical regions implicated in emotional processing. MVPA revealed that activity in the auditory cortex and primary motor cortices reliably discriminated the emotion categories. Our results indicate that different music-induced basic emotions have distinct representations in regions supporting auditory processing, motor control, and interoception but do not strongly rely on limbic and medial prefrontal regions critical for emotions with survival value.
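The MVPA step can be illustrated with a generic, hypothetical decoding sketch: cross-validated classification of the four emotion categories from trial-wise activity patterns in a region of interest, with participants held out as groups to avoid leakage. The data shapes and classifier are placeholders, not the authors' pipeline.

```python
# Generic MVPA sketch: decode emotion category from ROI activity patterns.
# Data are simulated placeholders; real analyses would use per-trial beta maps.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, GroupKFold

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_voxels = 20, 16, 300
X = rng.normal(size=(n_subjects * trials_per_subject, n_voxels))   # ROI patterns
y = rng.choice(["happy", "sad", "fearful", "tender"], size=len(X))  # emotion labels
groups = np.repeat(np.arange(n_subjects), trials_per_subject)       # subject IDs

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# GroupKFold keeps all trials from one participant in the same fold,
# the usual safeguard against subject-level leakage in decoding.
scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(f"Mean decoding accuracy: {scores.mean():.3f} (chance ~ 0.25)")
```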
Affiliation(s)
- Vesa Putkinen
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Sanaz Nazari-Farsani
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Kerttu Seppälä
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Tomi Karjalainen
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Lihua Sun
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Henry K Karlsson
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Matthew Hudson
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- National College of Ireland, D01 K6W2, Dublin, Ireland
- Timo T Heikkilä
- Department of Psychology, University of Turku, FI-20014, Turku, Finland
- Jussi Hirvonen
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Department of Radiology, Turku University Hospital, 20520, Turku, Finland
- Lauri Nummenmaa
- Turku PET Centre, and Turku University Hospital, University of Turku, 20520, Turku, Finland
- Department of Psychology, University of Turku, FI-20014, Turku, Finland

18. Ross P, Atkins B, Allison L, Simpson H, Duffell C, Williams M, Ermolina O. Children cannot ignore what they hear: Incongruent emotional information leads to an auditory dominance in children. J Exp Child Psychol 2021; 204:105068. [PMID: 33434707 DOI: 10.1016/j.jecp.2020.105068]
Abstract
Effective emotion recognition is imperative to successfully navigating social situations. Research suggests differing developmental trajectories for the recognition of bodily and vocal emotion, but emotions are usually studied in isolation and rarely considered as multimodal stimuli in the literature. When adults are presented with basic multimodal sensory stimuli, the Colavita effect suggests that they have a visual dominance, whereas more recent research finds that an auditory sensory dominance may be present in children under 8 years of age. However, it is not currently known whether this phenomenon holds for more complex multimodal social stimuli. Here we presented children and adults with multimodal social stimuli consisting of emotional bodies and voices, asking them to recognize the emotion in one modality while ignoring the other. We found that adults can perform this task with no detrimental effects on performance regardless of whether the ignored emotion was congruent or not. However, children find it extremely challenging to recognize bodily emotion while trying to ignore incongruent vocal emotional information. In several instances, they performed below chance level, indicating that the auditory modality actively informs their choice of bodily emotion. Therefore, this is the first evidence, to our knowledge, of an auditory dominance in children when presented with emotionally meaningful stimuli.
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham DH1 3LE, UK.
- Beth Atkins
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Allison
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Holly Simpson
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Matthew Williams
- Department of Psychology, Durham University, Durham DH1 3LE, UK; Department of Psychology, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Olga Ermolina
- Department of Psychology, Durham University, Durham DH1 3LE, UK

19. Paquette S, Rigoulot S, Grunewald K, Lehmann A. Temporal decoding of vocal and musical emotions: Same code, different timecourse? Brain Res 2020; 1741:146887. [PMID: 32422128 DOI: 10.1016/j.brainres.2020.146887]
Abstract
From a baby's cry to a piece of music, we perceive emotions from our auditory environment every day. Many theories bring forward the concept of common neural substrates for the perception of vocal and musical emotions. It has been proposed that, for us to perceive emotions, music recruits emotional circuits that evolved for the processing of biologically relevant vocalizations (e.g., screams, laughs). Although some studies have found similarities between voice and instrumental music in terms of acoustic cues and neural correlates, little is known about their processing timecourse. To further understand how vocal and instrumental emotional sounds are perceived, we used EEG to compare the neural processing timecourse of both stimulus types, expressed with varying degrees of complexity (vocal/musical affect bursts and emotion-embedded speech/music). Vocal stimuli in general, as well as musical/vocal bursts, were associated with a more concise sensory trace at initial stages of analysis (smaller N1), although vocal bursts had shorter latencies than the musical ones. As for the P2, vocal affect bursts and emotion-embedded musical stimuli were associated with earlier P2s. These results support the idea that emotional vocal stimuli are differentiated early from other sources and provide insight into the common neurobiological underpinnings of auditory emotions.
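To make the N1/P2 latency comparison concrete, the sketch below extracts peak latencies from simulated averaged ERP waveforms within fixed search windows. The sampling rate, windows, and waveforms are illustrative assumptions rather than the study's parameters.

```python
# Sketch: N1/P2 peak latency extraction from averaged ERPs (simulated data).
import numpy as np

fs = 500                                   # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.6, 1 / fs)       # epoch from -100 ms to 600 ms

def simulate_erp(n1_lat=0.10, p2_lat=0.20):
    """Toy ERP: a negative deflection (N1) followed by a positive one (P2)."""
    erp = (-2.0 * np.exp(-((times - n1_lat) ** 2) / (2 * 0.015 ** 2))
           + 3.0 * np.exp(-((times - p2_lat) ** 2) / (2 * 0.030 ** 2)))
    return erp + np.random.default_rng(0).normal(0, 0.1, times.size)

def peak_latency(erp, window, polarity):
    """Latency (s) of the minimum (N1) or maximum (P2) within a search window."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask]) if polarity == "neg" else np.argmax(erp[mask])
    return times[mask][idx]

erp_vocal, erp_musical = simulate_erp(0.09, 0.18), simulate_erp(0.11, 0.21)
for name, erp in [("vocal", erp_vocal), ("musical", erp_musical)]:
    n1 = peak_latency(erp, (0.05, 0.15), "neg")
    p2 = peak_latency(erp, (0.15, 0.30), "pos")
    print(f"{name}: N1 {n1 * 1000:.0f} ms, P2 {p2 * 1000:.0f} ms")
```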
Affiliation(s)
- S Paquette
- Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada.
- S Rigoulot
- Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; Department of Psychology, Université du Québec à Trois-Rivières, Trois-Rivières, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- K Grunewald
- Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada
- A Lehmann
- Department of Otolaryngology - Head and Neck Surgery, McGill University, Montreal, Canada; Center for Research on Brain, Language, and Music, McGill University, Montreal, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montreal, Canada

20. Revuelta P, Ortiz T, Lucía MJ, Ruiz B, Sánchez-Pena JM. Limitations of Standard Accessible Captioning of Sounds and Music for Deaf and Hard of Hearing People: An EEG Study. Front Integr Neurosci 2020; 14:1. [PMID: 32132904 PMCID: PMC7040021 DOI: 10.3389/fnint.2020.00001]
Abstract
Captioning is the process of transcribing speech and acoustical information into text to help deaf and hard of hearing people access the auditory track of audiovisual media. In addition to the verbal transcription, it includes information such as sound effects, speaker identification, or music tagging. However, it takes into account only a limited spectrum of the acoustic information available in the soundtrack, and hence, a substantial amount of emotional information is lost when attending only to standard-compliant captions. In this article, it is shown, by means of behavioral and EEG measurements, how emotional information related to sounds and music used by the creator in the audiovisual work is perceived differently by the normal-hearing and hearing-disabled groups when standard captioning is applied. Audio and captions activate similar processing areas, respectively, in each group, although not with the same intensity. Moreover, captions require higher activation of voluntary attentional circuits, as well as language-related areas. Captions transcribing musical information increase attentional activity rather than emotional processing.
Affiliation(s)
- Pablo Revuelta
- Department of Computer Science, Oviedo University, Oviedo, Spain
- Tomás Ortiz
- Department of Psychiatry, Complutense University of Madrid, Madrid, Spain
- María J Lucía
- Spanish Center for Captioning and Audiodescription, Carlos III University of Madrid, Leganés, Spain
- Department of Computer Science, Carlos III University of Madrid, Leganés, Spain
- Belén Ruiz
- Spanish Center for Captioning and Audiodescription, Carlos III University of Madrid, Leganés, Spain
- José Manuel Sánchez-Pena
- Spanish Center for Captioning and Audiodescription, Carlos III University of Madrid, Leganés, Spain

21. Ross P, Atkinson AP. Expanding Simulation Models of Emotional Understanding: The Case for Different Modalities, Body-State Simulation Prominence, and Developmental Trajectories. Front Psychol 2020; 11:309. [PMID: 32194476 PMCID: PMC7063097 DOI: 10.3389/fpsyg.2020.00309]
Abstract
Recent models of emotion recognition suggest that when people perceive an emotional expression, they partially activate the respective emotion in themselves, providing a basis for the recognition of that emotion. Much of the focus of these models and of their evidential basis has been on sensorimotor simulation as a basis for facial expression recognition - the idea, in short, that coming to know what another feels involves simulating in your brain the motor plans and associated sensory representations engaged by the other person's brain in producing the facial expression that you see. In this review article, we argue that simulation accounts of emotion recognition would benefit from three key extensions. First, that fuller consideration be given to simulation of bodily and vocal expressions, given that the body and voice are also important expressive channels for providing cues to another's emotional state. Second, that simulation of other aspects of the perceived emotional state, such as changes in the autonomic nervous system and viscera, might have a more prominent role in underpinning emotion recognition than is typically proposed. Sensorimotor simulation models tend to relegate such body-state simulation to a subsidiary role, despite the plausibility of body-state simulation being able to underpin emotion recognition in the absence of typical sensorimotor simulation. Third, that simulation models of emotion recognition be extended to address how embodied processes and emotion recognition abilities develop through the lifespan. It is not currently clear how this system of sensorimotor and body-state simulation develops and in particular how this affects the development of emotion recognition ability. We review recent findings from the emotional body recognition literature and integrate recent evidence regarding the development of mimicry and interoception to significantly expand simulation models of emotion recognition.
Collapse
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham, United Kingdom
| | | |
Collapse
|
22
|
Proverbio AM, Camporeale E, Brusa A. Multimodal Recognition of Emotions in Music and Facial Expressions. Front Hum Neurosci 2020; 14:32. [PMID: 32116613 PMCID: PMC7027335 DOI: 10.3389/fnhum.2020.00032] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 01/23/2020] [Indexed: 01/24/2023] Open
Abstract
The aim of the study was to investigate the neural processing of congruent vs. incongruent affective audiovisual information (facial expressions and music) by means of ERP (event-related potential) recordings. Stimuli were 200 infant faces displaying Happiness, Relaxation, Sadness, or Distress, and 32 piano musical pieces conveying the same emotional states (as specifically assessed). Music and faces were presented simultaneously, and paired so that in half of the cases they were emotionally congruent, and incongruent in the other half. Twenty subjects were told to pay attention and respond to infrequent targets (adult neutral faces) while their EEG was recorded from 128 channels. The face-related N170 (160-180 ms) component was the earliest response affected by the emotional content of faces (particularly by distress), while visual P300 (250-450 ms) and auditory N400 (350-550 ms) responses were specifically modulated by the emotional content of both facial expressions and musical pieces. Face/music emotional incongruence elicited a wide N400 negativity indicating the detection of a mismatch in the expressed emotion. A swLORETA inverse solution applied to the N400 (difference wave Incong. - Cong.) showed the crucial role of Inferior and Superior Temporal Gyri in the multimodal representation of emotional information extracted from faces and music. Furthermore, the prefrontal cortex (superior and medial, BA 10) was also strongly active, possibly supporting working memory. The data hint at a common system for representing emotional information derived from social cognition and music processing, including uncus and cuneus.
Collapse
|
23
|
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. [PMID: 30832292 PMCID: PMC6468545 DOI: 10.3390/brainsci9030053] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 02/18/2019] [Accepted: 02/26/2019] [Indexed: 11/24/2022] Open
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory-rich stimuli, at the level of both production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music, and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music. In order to answer this question, this paper elaborates on the following topics: (i) the relationship between speech and music, with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound, with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Collapse
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, KU Leuven–University of Leuven, 3000 Leuven, Belgium; IPEM–Department of Musicology, Ghent University, 9000 Ghent, Belgium.
| | - Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland.
| |
Collapse
|
24
|
Paquette S, Ahmed GD, Goffi-Gomez MV, Hoshino ACH, Peretz I, Lehmann A. Musical and vocal emotion perception for cochlear implants users. Hear Res 2018; 370:272-282. [PMID: 30181063 DOI: 10.1016/j.heares.2018.08.009] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/25/2018] [Revised: 08/18/2018] [Accepted: 08/22/2018] [Indexed: 10/28/2022]
Abstract
Cochlear implants can successfully restore hearing in profoundly deaf individuals and enable speech comprehension. However, the acoustic signal provided is severely degraded and, as a result, many important acoustic cues for perceiving emotion in voices and music are unavailable. The deficit of cochlear implant users in auditory emotion processing has been clearly established. Yet the extent of this deficit and the specific cues that remain available to cochlear implant users are unknown, due to several confounding factors. Here we assessed the recognition of the most basic forms of auditory emotion and aimed to identify which acoustic cues are most relevant for recognizing emotions through cochlear implants. To do so, we used stimuli that allowed vocal and musical auditory emotions to be comparatively assessed while controlling for confounding factors. These stimuli were used to evaluate emotion perception in cochlear implant users (Experiment 1) and to investigate emotion perception in natural versus cochlear implant hearing in the same participants with a validated cochlear implant simulation approach (Experiment 2). Our results showed that vocal and musical fear was not accurately recognized by cochlear implant users. Interestingly, both experiments found that timbral acoustic cues (energy and roughness) correlate with participant ratings for both vocal and musical emotion bursts in the cochlear implant simulation condition. This suggests that specific attention should be given to these cues (especially energy and roughness) in the design of cochlear implant processors and rehabilitation protocols. For instance, music-based interventions focused on timbre could improve emotion perception and regulation, and thus improve social functioning, in children with cochlear implants during development.
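The cochlear implant simulation mentioned in Experiment 2 belongs to the family of noise-band vocoders. The exact parameters of the validated simulation used by the authors are not reproduced here; the sketch below is a generic, hypothetical noise vocoder in Python, where the channel count, filter orders, and cutoff frequencies are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=8000.0, env_cutoff=50.0):
    """Generic noise-band vocoder sketch (illustrative CI simulation only).

    Splits the signal into logarithmically spaced bands, extracts each band's
    amplitude envelope, and uses it to modulate band-limited noise.
    All parameter values are assumptions, not those of the cited study.
    """
    edges = np.geomspace(lo, hi, n_channels + 1)                # band edges (Hz)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(signal, dtype=float)
    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)                    # analysis band
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))       # smoothed envelope
        carrier = sosfiltfilt(band_sos, np.random.randn(len(signal)))  # noise carrier
        out += np.clip(env, 0, None) * carrier                  # re-modulated band
    return out / (np.max(np.abs(out)) + 1e-12)                  # normalize output

```

Processing emotional bursts through such a degraded channel is what makes it possible to ask, as the study does, which timbral cues (such as energy and roughness) survive the transformation and still predict listeners' emotion ratings.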
Collapse
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Québec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, MA, USA.
| | - G D Ahmed
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Québec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
| | - M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, SP, Brazil
| | - A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, SP, Brazil
| | - I Peretz
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Québec, Canada
| | - A Lehmann
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Québec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Québec, Canada
| |
Collapse
|
25
|
DAVID: An open-source platform for real-time transformation of infra-segmental emotional cues in running speech. Behav Res Methods 2018; 50:323-343. [PMID: 28374144 PMCID: PMC5809549 DOI: 10.3758/s13428-017-0873-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
We present an open-source software platform that transforms emotional cues expressed by speech signals using audio effects like pitch shifting, inflection, vibrato, and filtering. The emotional transformations can be applied to any audio file, but can also run in real time, using live input from a microphone, with less than 20-ms latency. We anticipate that this tool will be useful for the study of emotions in psychology and neuroscience, because it enables a high level of control over the acoustical and emotional content of experimental stimuli in a variety of laboratory situations, including real-time social situations. We present here the results of a series of validation experiments aiming to position the tool against several methodological requirements: that the transformed emotions be recognized at above-chance levels, be valid in several languages (French, English, Swedish, and Japanese), and have a naturalness comparable to that of natural speech.
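DAVID itself is distributed as a real-time audio tool with calibrated presets, none of which are reproduced here. Purely to illustrate one of the effect families named in the abstract, the hypothetical Python sketch below applies a simple offline vibrato: a slowly modulated delay line that produces small periodic pitch deviations. The rate and depth values are arbitrary assumptions, not DAVID's presets.

```python
import numpy as np

def vibrato(x, fs, rate_hz=6.0, depth_ms=0.4):
    """Offline vibrato sketch: a sinusoidally modulated read position acts as a
    time-varying delay, yielding small periodic pitch deviations.
    Values are illustrative only."""
    n = np.arange(len(x))
    depth = depth_ms * 1e-3 * fs                                  # depth in samples
    delay = depth * (1.0 + np.sin(2 * np.pi * rate_hz * n / fs))  # non-negative delay
    read_pos = np.clip(n - delay, 0, len(x) - 1)
    lo = np.floor(read_pos).astype(int)                           # linear interpolation
    hi = np.minimum(lo + 1, len(x) - 1)
    frac = read_pos - lo
    return (1 - frac) * x[lo] + frac * x[hi]
```

Chaining several such micro-effects (pitch shift, onset inflection, vibrato, spectral filtering) is the general strategy the abstract describes; the platform's contribution lies in doing this in real time, with low latency, and with emotion-validated presets.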
Collapse
|
26
|
Sachs ME, Habibi A, Damasio A, Kaplan JT. Decoding the neural signatures of emotions expressed through sound. Neuroimage 2018; 174:1-10. [DOI: 10.1016/j.neuroimage.2018.02.058] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 02/23/2018] [Accepted: 02/27/2018] [Indexed: 12/15/2022] Open
|
27
|
Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018; 13:e0196391. [PMID: 29768426 PMCID: PMC5955500 DOI: 10.1371/journal.pone.0196391] [Citation(s) in RCA: 175] [Impact Index Per Article: 29.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 04/12/2018] [Indexed: 11/19/2022] Open
Abstract
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
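For readers who download the database, each recording encodes its condition as seven two-digit fields in the filename. The mapping below follows the naming convention commonly documented with the dataset, restated here from memory as an assumption; verify the codes against the official documentation before relying on them.

```python
from pathlib import Path

# Field codes per the RAVDESS filename convention (assumed; check the dataset docs).
MODALITY = {"01": "full-AV", "02": "video-only", "03": "audio-only"}
CHANNEL = {"01": "speech", "02": "song"}
EMOTION = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
           "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}

def parse_ravdess(path):
    """Parse a RAVDESS filename such as '03-01-06-01-02-01-12.wav'."""
    modality, channel, emotion, intensity, statement, repetition, actor = Path(path).stem.split("-")
    return {
        "modality": MODALITY.get(modality, modality),
        "channel": CHANNEL.get(channel, channel),
        "emotion": EMOTION.get(emotion, emotion),
        "intensity": "strong" if intensity == "02" else "normal",
        "statement": statement,
        "repetition": repetition,
        "actor": int(actor),
        "actor_sex": "female" if int(actor) % 2 == 0 else "male",
    }

print(parse_ravdess("03-01-06-01-02-01-12.wav"))  # audio-only speech, fearful, actor 12
```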
Collapse
Affiliation(s)
- Steven R. Livingstone
- Department of Psychology, Ryerson University, Toronto, Canada
- Department of Computer Science and Information Systems, University of Wisconsin-River Falls, WI, United States of America
| | - Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Canada
| |
Collapse
|
28
|
Paquette S, Takerkart S, Saget S, Peretz I, Belin P. Cross-classification of musical and vocal emotions in the auditory cortex. Ann N Y Acad Sci 2018; 1423:329-337. [PMID: 29741242 DOI: 10.1111/nyas.13666] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Revised: 02/05/2018] [Accepted: 02/13/2018] [Indexed: 12/17/2022]
Abstract
Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated. Yet neuroimaging studies do not provide a clear picture, mainly due to a lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains-the Montreal Affective Voices and the Musical Emotional Bursts-which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing are performed within the same timbre category. More importantly, classifier performance generalized well across timbres in cross-classification schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, with possibly a cost for the voice due to its evolutionary significance.
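The cross-timbre scheme described here amounts to training a multivariate classifier on activation patterns from one timbre and testing it on another. The fragment below is only a schematic of that logic, written with scikit-learn on placeholder arrays; it is not the authors' pipeline (ROI/searchlight definition, classifier choice, and cross-validation details are simplified assumptions).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Placeholder data: trial-wise patterns (trials x voxels) for two timbres,
# each trial labelled with one of four emotion categories.
X_voice, y_voice = rng.standard_normal((60, 500)), rng.integers(0, 4, 60)
X_violin, y_violin = rng.standard_normal((60, 500)), rng.integers(0, 4, 60)

clf = make_pipeline(StandardScaler(), LinearSVC())

# Cross-timbre direction: train on vocal bursts, test on violin bursts.
# (A within-timbre baseline would additionally use cross-validation.)
clf.fit(X_voice, y_voice)
cross_acc = clf.score(X_violin, y_violin)
print(f"cross-timbre accuracy: {cross_acc:.2f}")  # ~0.25 (chance) on random data
```

Above-chance accuracy in both cross-classification directions is what licenses the abstract's conclusion of a partially shared neural code for vocal and musical emotions.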
Collapse
Affiliation(s)
- Sébastien Paquette
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
| | - Sylvain Takerkart
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
| | - Shinji Saget
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
| | - Isabelle Peretz
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
| | - Pascal Belin
- Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
| |
Collapse
|
29
|
Ahmed DG, Paquette S, Zeitouni A, Lehmann A. Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation. Clin EEG Neurosci 2018; 49:143-151. [PMID: 28958161 DOI: 10.1177/1550059417733386] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. Electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency for the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implants' limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
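The P50, N100, and P200 measures reported here are conventionally obtained by averaging epochs per condition and then reading peak amplitude and latency within predefined time windows. The sketch below illustrates only that peak-picking step, on a synthetic averaged waveform built with NumPy; the window boundaries are generic textbook values and not necessarily those used in the study.

```python
import numpy as np

fs = 500                                  # sampling rate (Hz), illustrative
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 ms to 500 ms

# Synthetic evoked response standing in for a condition average:
# a small positive P50, a negative N100, and a positive P200.
evoked = (0.8 * np.exp(-((t - 0.055) / 0.010) ** 2)
          - 2.0 * np.exp(-((t - 0.105) / 0.015) ** 2)
          + 1.5 * np.exp(-((t - 0.190) / 0.025) ** 2))

def peak(signal, times, tmin, tmax, polarity):
    """Return (latency_ms, amplitude) of the most extreme point in a window."""
    mask = (times >= tmin) & (times <= tmax)
    idx = np.argmax(polarity * signal[mask])
    return times[mask][idx] * 1000, signal[mask][idx]

print("P50 :", peak(evoked, t, 0.03, 0.08, +1))   # assumed windows
print("N100:", peak(evoked, t, 0.08, 0.15, -1))
print("P200:", peak(evoked, t, 0.15, 0.25, +1))
```

Comparing these latency and amplitude values between the original and CI-simulated conditions is, in schematic form, how findings such as the prolonged P50 latency and reduced N100-P200 amplitude are quantified.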
Collapse
Affiliation(s)
- Duha G Ahmed
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
| | - Sebastian Paquette
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Anthony Zeitouni
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
| | - Alexandre Lehmann
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
| |
Collapse
|
30
|
Picou EM, Singh G, Goy H, Russo F, Hickson L, Oxenham AJ, Buono GH, Ricketts TA, Launer S. Hearing, Emotion, Amplification, Research, and Training Workshop: Current Understanding of Hearing Loss and Emotion Perception and Priorities for Future Research. Trends Hear 2018; 22:2331216518803215. [PMID: 30270810 PMCID: PMC6168729 DOI: 10.1177/2331216518803215] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Revised: 08/18/2018] [Accepted: 09/03/2018] [Indexed: 12/19/2022] Open
Abstract
The question of how hearing loss and hearing rehabilitation affect patients' momentary emotional experiences is one that has received little attention but has considerable potential to affect patients' psychosocial function. This article is a product of the Hearing, Emotion, Amplification, Research, and Training workshop, which was convened to develop a consensus document describing research on emotion perception relevant to hearing research. This article outlines conceptual frameworks for the investigation of emotion in hearing research; available subjective, objective, neurophysiologic, and peripheral physiologic data acquisition research methods; the effects of age and hearing loss on emotion perception; potential rehabilitation strategies; priorities for future research; and implications for clinical audiologic rehabilitation. More broadly, this article aims to increase awareness about emotion perception research in audiology and to stimulate additional research on the topic.
Collapse
Affiliation(s)
- Erin M. Picou
- Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Gurjit Singh
- Phonak Canada, Mississauga, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, ON, Canada
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Frank Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Louise Hickson
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
| | | | | | | | | |
Collapse
|
31
|
Ackerley R, Aimonetti JM, Ribot-Ciscar E. Emotions alter muscle proprioceptive coding of movements in humans. Sci Rep 2017; 7:8465. [PMID: 28814736 PMCID: PMC5559453 DOI: 10.1038/s41598-017-08721-4] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2016] [Accepted: 07/18/2017] [Indexed: 12/29/2022] Open
Abstract
Emotions can evoke strong reactions that have profound influences, from gross changes in our internal environment to small fluctuations in facial muscles, and reveal our feelings overtly. Muscles contain proprioceptive afferents, informing us about our movements and regulating motor activities. Their firing reflects changes in muscle length, yet their sensitivity can be modified by the fusimotor system, as found in animals. In humans, the sensitivity of muscle afferents is modulated by cognitive processes, such as attention; however, it is unknown if emotional processes can modulate muscle feedback. Presently, we explored whether muscle afferent sensitivity adapts to the emotional situation. We recorded from single muscle afferents in the leg, using microneurography, and moved the ankle joint of participants, while they listened to evocative classical music to induce sad, neutral, or happy emotions, or sat passively (no music). We further monitored their physiological responses using skin conductance, heart rate, and electromyography measures. We found that muscle afferent firing was modified by the emotional context, especially for sad emotions, where the muscle spindle dynamic response increased. We suggest that this allows us to prime movements, where the emotional state prepares the body for consequent behaviour-appropriate reactions.
Collapse
Affiliation(s)
- Rochelle Ackerley
- Aix Marseille Univ, CNRS, LNIA, FR3C, Marseille, France; Department of Physiology, University of Gothenburg, 40530 Göteborg, Sweden
| | | | | |
Collapse
|
32
|
Jafari Z, Esmaili M, Delbari A, Mehrpour M, Mohajerani MH. Post-stroke acquired amusia: A comparison between right- and left-brain hemispheric damages. NeuroRehabilitation 2017; 40:233-241. [DOI: 10.3233/nre-161408] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Zahra Jafari
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran
- Department of Neuroscience, Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, AB, Canada
- Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran
| | - Mahdiye Esmaili
- Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran
| | - Ahmad Delbari
- Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran
| | - Masoud Mehrpour
- Department of Neurology, Firouzgar Hospital, Iran University of Medical Sciences (IUMS), Tehran, Iran
| | - Majid H. Mohajerani
- Department of Neuroscience, Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, AB, Canada
| |
Collapse
|
33
|
Lehmann A, Paquette S. Cross-domain processing of musical and vocal emotions in cochlear implant users. Front Neurosci 2015; 9:343. [PMID: 26441512 PMCID: PMC4585154 DOI: 10.3389/fnins.2015.00343] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Accepted: 09/10/2015] [Indexed: 01/08/2023] Open
Affiliation(s)
- Alexandre Lehmann
- Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
| | - Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
| |
Collapse
|
34
|
Bhatara A, Laukka P, Levitin DJ. Expression of emotion in music and vocal communication: Introduction to the research topic. Front Psychol 2014; 5:399. [PMID: 24829557 PMCID: PMC4017128 DOI: 10.3389/fpsyg.2014.00399] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2014] [Accepted: 04/15/2014] [Indexed: 11/13/2022] Open
Affiliation(s)
- Anjali Bhatara
- Sorbonne Paris Cité, Université Paris Descartes, Paris, France; Laboratoire Psychologie de la Perception, CNRS, UMR 8242, Paris, France
| | - Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
| | - Daniel J Levitin
- Department of Psychology, McGill University, Montreal, QC, Canada
| |
Collapse
|