1
Nussbaum C, Schirmer A, Schweinberger SR. Musicality - Tuned to the melody of vocal emotions. Br J Psychol 2024;115:206-225. PMID: 37851369. DOI: 10.1111/bjop.12684.
Abstract
Musicians outperform non-musicians in vocal emotion perception, likely because of increased sensitivity to acoustic cues, such as fundamental frequency (F0) and timbre. Yet, how musicians make use of these acoustic cues to perceive emotions, and how they might differ from non-musicians, is unclear. To address these points, we created vocal stimuli that conveyed happiness, fear, pleasure or sadness, either in all acoustic cues, or selectively in either F0 or timbre only. We then compared vocal emotion perception performance between professional/semi-professional musicians (N = 39) and non-musicians (N = 38), all socialized in Western music culture. Compared to non-musicians, musicians classified vocal emotions more accurately. This advantage was seen in the full and F0-modulated conditions, but was absent in the timbre-modulated condition, indicating that musicians excel at perceiving the melody (F0), but not the timbre, of vocal emotions. Further, F0 seemed more important than timbre for the recognition of all emotional categories. Additional exploratory analyses revealed a link between time-varying F0 perception in music and voices that was independent of musical training. Together, these findings suggest that musicians are particularly tuned to the melody of vocal emotions, presumably due to a natural predisposition to exploit melodic patterns.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Annett Schirmer
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
- Institute of Psychology, University of Innsbruck, Innsbruck, Austria
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
2
Tang L, Xu Y, Yang S, Meng X, Du B, Sun C, Liu L, Dong Q, Nan Y. Mandarin-Speaking Amusics' Online Recognition of Tone and Intonation. J Speech Lang Hear Res 2024;67:1107-1116. PMID: 38470842. DOI: 10.1044/2024_jslhr-23-00520.
Abstract
PURPOSE Congenital amusia is a neurogenetic disorder of musical pitch processing. Its linguistic consequences have been examined separately for speech intonations and lexical tones. However, in a tonal language such as Chinese, the processing of intonations and lexical tones interacts during online speech perception. Whether and how the musical pitch disorder might affect linguistic pitch processing during online speech perception remains unknown. METHOD We investigated this question with intonation (question vs. statement) and lexical tone (rising Tone 2 vs. falling Tone 4) identification tasks using the same set of sentences, comparing behavioral and event-related potential measurements between Mandarin-speaking amusics and matched controls. We specifically focused on the amusics without behavioral lexical tone deficits (the majority, i.e., pure amusics). RESULTS Results showed that, despite relatively normal performance on a word-level lexical tone test, pure amusics recognized sentence tones and intonations less accurately than controls. Compared to controls, pure amusics also had larger N400 amplitudes for question stimuli during the tone task and smaller P600 amplitudes during the intonation task. CONCLUSION These data indicate that the musical pitch disorder affects both tone and intonation processing during sentence processing, even for pure amusics, whose lexical tone processing was intact when tested with words.
Affiliation(s)
- Lirong Tang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Yangxiaoxue Xu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Shiting Yang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Xiangyun Meng
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Boqi Du
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Chen Sun
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Li Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Qi Dong
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Yun Nan
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, China
3
Ma W, Bowers L, Behrend D, Hellmuth Margulis E, Forde Thompson W. Child word learning in song and speech. Q J Exp Psychol (Hove) 2024;77:343-362. PMID: 37073951. DOI: 10.1177/17470218231172494.
Abstract
Listening to sung words rather than spoken words can facilitate word learning and memory in adults and school-aged children. To explore the development of this effect in young children, this study examined word learning (assessed as forming word-object associations) in 1- to 2-year-olds and 3- to 4-year-olds, and word long-term memory (LTM) in 4- to 5-year-olds several days after the initial learning. In an intermodal preferential looking paradigm, children were taught one pair of words using adult-directed speech (ADS) and another pair using song. Word learning performance was better with sung words than with ADS words in 1- to 2-year-olds (Experiments 1a and 1b), 3- to 4-year-olds (Experiment 1a), and 4- to 5-year-olds (Experiment 2b), revealing a benefit of song for word learning in all age ranges recruited. We also examined whether children successfully learned the words by comparing their performance against chance. The 1- to 2-year-olds learned only the sung words, whereas the 3- to 4-year-olds learned both sung and ADS words, suggesting that the reliance on musical features in word learning observed at ages 1-2 decreased with age. Furthermore, song facilitated the word mapping-recognition processes. Results on children's LTM performance showed that the 4- to 5-year-olds' LTM performance did not differ between sung and ADS words. However, the 4- to 5-year-olds reliably recalled sung words but not spoken words. The reliable LTM of sung words arose from hearing sung words during the initial learning rather than at test. Finally, the benefit of song on word learning and the reliable LTM of sung words observed at ages 3-5 cannot be explained as an attentional effect.
Affiliation(s)
- Weiyi Ma
- School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Lisa Bowers
- Department of Rehabilitation, Human Resources and Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Douglas Behrend
- Department of Psychological Science, University of Arkansas, Fayetteville, AR, USA
4
Bowling DL. Biological principles for music and mental health. Transl Psychiatry 2023;13:374. PMID: 38049408. PMCID: PMC10695969. DOI: 10.1038/s41398-023-02671-4.
Abstract
Efforts to integrate music into healthcare systems and wellness practices are accelerating but the biological foundations supporting these initiatives remain underappreciated. As a result, music-based interventions are often sidelined in medicine. Here, I bring together advances in music research from neuroscience, psychology, and psychiatry to bridge music's specific foundations in human biology with its specific therapeutic applications. The framework I propose organizes the neurophysiological effects of music around four core elements of human musicality: tonality, rhythm, reward, and sociality. For each, I review key concepts, biological bases, and evidence of clinical benefits. Within this framework, I outline a strategy to increase music's impact on health based on standardizing treatments and their alignment with individual differences in responsivity to these musical elements. I propose that an integrated biological understanding of human musicality-describing each element's functional origins, development, phylogeny, and neural bases-is critical to advancing rational applications of music in mental health and wellness.
Affiliation(s)
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University, School of Medicine, Stanford, CA, USA.
- Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, School of Humanities and Sciences, Stanford, CA, USA.
5
Nussbaum C, Schirmer A, Schweinberger SR. Electrophysiological Correlates of Vocal Emotional Processing in Musicians and Non-Musicians. Brain Sci 2023;13:1563. PMID: 38002523. PMCID: PMC10670383. DOI: 10.3390/brainsci13111563.
Abstract
Musicians outperform non-musicians in vocal emotion recognition, but the underlying mechanisms are still debated. Behavioral measures highlight the importance of auditory sensitivity towards emotional voice cues. However, it remains unclear whether and how this group difference is reflected at the brain level. Here, we compared event-related potentials (ERPs) to acoustically manipulated voices between musicians (n = 39) and non-musicians (n = 39). We used parameter-specific voice morphing to create and present vocal stimuli that conveyed happiness, fear, pleasure, or sadness, either in all acoustic cues or selectively in either pitch contour (F0) or timbre. Although the fronto-central P200 (150-250 ms) and N400 (300-500 ms) components were modulated by pitch and timbre, differences between musicians and non-musicians appeared only for a centro-parietal late positive potential (500-1000 ms). Thus, this study does not support an early auditory specialization in musicians but suggests instead that musicality affects the manner in which listeners use acoustic voice cues during later, controlled aspects of emotion evaluation.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Annett Schirmer
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany
- Institute of Psychology, University of Innsbruck, 6020 Innsbruck, Austria
- Stefan R. Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University, 07743 Jena, Germany
- Voice Research Unit, Friedrich Schiller University, 07743 Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
6
Paquette S, Deroche MLD, Goffi-Gomez MV, Hoshino ACH, Lehmann A. Predicting emotion perception abilities for cochlear implant users. Int J Audiol 2023;62:946-954. PMID: 36047767. DOI: 10.1080/14992027.2022.2111611.
Abstract
OBJECTIVE In daily life, failure to perceive emotional expressions can result in maladjusted behaviour. For cochlear implant users, perceiving emotional cues in sounds remains challenging, and the factors explaining the variability in patients' sensitivity to emotions are currently poorly understood. Understanding how these factors relate to auditory proficiency is a major challenge of cochlear implant research and is critical in addressing patients' limitations. DESIGN To fill this gap, we evaluated different aspects of auditory perception in implant users (pitch discrimination, music processing and speech intelligibility) and correlated them with performance in an emotion recognition task. STUDY SAMPLE Eighty-four adults (18-76 years old) participated in our investigation: 42 cochlear implant users and 42 controls. Cochlear implant users performed worse than controls on all tasks, and their emotion perception abilities were correlated with their age and with their clinical outcome as measured in the speech intelligibility task. RESULTS As previously observed, emotion perception abilities declined with age (here by about 2-3% per decade). Interestingly, even when the emotional stimuli were musical, CI users' skills relied more on the processes underlying speech intelligibility. CONCLUSIONS These results suggest that speech processing remains a clinical priority even when one is interested in affective skills.
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- M L D Deroche
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- Laboratory for Hearing and Cognition, Psychology Department, Concordia University, Montreal, Canada
- M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A Lehmann
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
7
Tillmann B, Graves JE, Talamini F, Lévêque Y, Fornoni L, Hoarau C, Pralus A, Ginzburg J, Albouy P, Caclin A. Auditory cortex and beyond: Deficits in congenital amusia. Hear Res 2023;437:108855. PMID: 37572645. DOI: 10.1016/j.heares.2023.108855.
Abstract
Congenital amusia is a neuro-developmental disorder of music perception and production, with the observed deficits contrasting with the sophisticated music processing reported for the general population. Musical deficits within amusia have been hypothesized to arise from altered pitch processing, with impairments in pitch discrimination and, notably, short-term memory. Here, we review research investigating its behavioral and neural correlates, in particular the impairments at encoding, retention, and recollection of pitch information, as well as how these impairments extend to the processing of pitch cues in speech and emotion. The impairments have been related to altered brain responses in a distributed fronto-temporal network, which can also be observed at rest. Neuroimaging studies revealed changes in connectivity patterns within this network and beyond, shedding light on the brain dynamics underlying auditory cognition. Interestingly, some studies revealed spared implicit pitch processing in congenital amusia, showing the power of implicit cognition in the music domain. Building on these findings, together with audiovisual integration and other beneficial mechanisms, we outline perspectives for training and rehabilitation and the future directions of this research domain.
Affiliation(s)
- Barbara Tillmann
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Laboratory for Research on Learning and Development, LEAD - CNRS UMR5022, Université Bourgogne Franche-Comté, Pôle AAFE, 11 Esplanade Erasme, 21000 Dijon, France
- Jackson E Graves
- Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, Paris 75005, France
- Yohana Lévêque
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Lesly Fornoni
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Caliani Hoarau
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Agathe Pralus
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Jérémie Ginzburg
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
- Philippe Albouy
- CERVO Brain Research Center, School of Psychology, Laval University, Québec, G1J 2G3, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), CRBLM, Montreal, QC, H2V 2J2, Canada
- Anne Caclin
- CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL, Université Claude Bernard Lyon 1, UMR5292, U1028, F-69500, Bron, France
8
Singing ability is related to vocal emotion recognition: Evidence for shared sensorimotor processing across speech and music. Atten Percept Psychophys 2023;85:234-243. PMID: 36380148. DOI: 10.3758/s13414-022-02613-0.
Abstract
The ability to recognize emotion in speech is a critical skill for social communication. Motivated by previous work showing that vocal emotion recognition accuracy varies with musical ability, the current study addressed this relationship using a behavioral measure of musical ability (i.e., singing) that relies on the same effector system used for vocal prosody production. Participants completed a musical production task that involved singing four-note novel melodies. To measure pitch perception, we used a simple pitch discrimination task in which participants indicated whether a target pitch was higher or lower than a comparison pitch. We also used self-report measures to assess language and musical background. We report that singing ability, but neither self-reported musical experience nor pitch discrimination ability, was a unique predictor of vocal emotion recognition accuracy. These results support a relationship between processes involved in vocal production and vocal perception, and suggest that sensorimotor processing of the vocal system is recruited for processing vocal prosody.
9
Marin MM, Rathgeber I. Darwin's sexual selection hypothesis revisited: Musicality increases sexual attraction in both sexes. Front Psychol 2022;13:971988. PMID: 36092107. PMCID: PMC9453251. DOI: 10.3389/fpsyg.2022.971988.
Abstract
A number of theories about the origins of musicality have incorporated biological and social perspectives. Darwin argued that musicality evolved by sexual selection, functioning as a courtship display in reproductive partner choice. Darwin did not regard musicality as a sexually dimorphic trait, paralleling evidence that both sexes produce and enjoy music. A novel research strand examines the effect of musicality on sexual attraction by acknowledging the importance of facial attractiveness. We previously demonstrated that music varying in emotional content increases the perceived attractiveness and dating desirability of opposite-sex faces only in females, compared to a silent control condition. Here, we built upon this approach by presenting the person depicted (target) as the performer of the music (prime), thus establishing a direct link. We hypothesized that musical priming would increase sexual attraction, with high-arousing music inducing the largest effect. Musical primes (25 s, piano solo music) varied in arousal and pleasantness, and targets were photos of other-sex faces of average attractiveness and with neutral expressions (2 s). Participants were 35 females and 23 males (heterosexual psychology students, single, and not using hormonal contraception) matched for musical background, mood, and liking for the music used in the experiment. After musical priming, females' ratings of attractiveness and dating desirability increased significantly. In males, only dating desirability was significantly increased by musical priming. No specific effects of music-induced pleasantness and arousal were observed. Our results, together with other recent empirical evidence, corroborate the sexual selection hypothesis for the evolution of human musicality.
Affiliation(s)
- Manuela M. Marin
- Department of Cognition, Emotion and Methods in Psychology, University of Vienna, Vienna, Austria
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Correspondence: Manuela M. Marin
- Ines Rathgeber
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
10
Lee CY, Zhang C, Wang WSY, Waye MMY. Editorial: Relationship of language and music, ten years after: Neural organization, cross-domain transfer and evolutionary origins. Front Psychol 2022;13:990857. PMID: 35967615. PMCID: PMC9371976. DOI: 10.3389/fpsyg.2022.990857.
Affiliation(s)
- Chao-Yang Lee
- Division of Communication Sciences and Disorders, Ohio University, Athens, OH, United States
- Correspondence: Chao-Yang Lee
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- William Shi-Yuan Wang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Mary Miu Yee Waye
- The Nethersole School of Nursing, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China
11
Zhang G, Shao J, Zhang C, Wang L. The Perception of Lexical Tone and Intonation in Whispered Speech by Mandarin-Speaking Congenital Amusics. J Speech Lang Hear Res 2022;65:1331-1348. PMID: 35377182. DOI: 10.1044/2021_jslhr-21-00345.
Abstract
PURPOSE A fundamental feature of human speech is variation, including the manner of phonation, as exemplified in the case of whispered speech. In this study, we employed whispered speech to examine an unresolved issue about congenital amusia, a neurodevelopmental disorder of musical pitch processing, which also affects speech pitch processing such as lexical tone and intonation perception. The controversy concerns whether amusia is a pitch-processing disorder or can affect speech processing beyond pitch. METHOD We examined lexical tone and intonation recognition in 19 Mandarin-speaking amusics and 19 matched controls in phonated and whispered speech, where fundamental frequency (fo) information is either present or absent. RESULTS The results revealed that the performance of congenital amusics was inferior to that of controls in lexical tone identification in both phonated and whispered speech. These impairments were also detected in identifying intonation (statements/questions) in phonated and whispered modes. Across the experiments, regression models revealed that fo and non-fo (duration, intensity, and formant frequency) acoustic cues predicted tone and intonation recognition in phonated speech, whereas non-fo cues predicted tone and intonation recognition in whispered speech. There were significant differences between amusics and controls in the use of both fo and non-fo cues. CONCLUSION The results provided the first evidence that the impairments of amusics in lexical tone and intonation identification prevail into whispered speech and support the hypothesis that the deficits of amusia extend beyond pitch processing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.19302275.
Affiliation(s)
- Gaoyuan Zhang
- Department of Chinese Language and Literature, Peking University, Beijing, China
- Jing Shao
- Department of English Language and Literature, Hong Kong Baptist University, Hong Kong SAR, China
- Caicai Zhang
- Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Lan Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
12
Beyond the Language Module: Musicality as a Stepping Stone Towards Language Acquisition. Evolutionary Psychology 2022. DOI: 10.1007/978-3-030-76000-7_12.
13
Bedoya D, Arias P, Rachman L, Liuni M, Canonne C, Goupil L, Aucouturier JJ. Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music. Philos Trans R Soc Lond B Biol Sci 2021;376:20200396. PMID: 34719254. PMCID: PMC8558776. DOI: 10.1098/rstb.2020.0396.
Abstract
A wealth of theoretical and empirical arguments have suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but have done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to the human voice. Here, we take advantage of recently available computational models to simulate three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed for speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this applied not only to singing voice with and without musical background, but also to purely instrumental material. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- D Bedoya
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- P Arias
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- Department of Cognitive Science, Lund University, Lund, Sweden
- L Rachman
- Faculty of Medical Sciences, University of Groningen, Groningen, The Netherlands
- M Liuni
- Alta Voce SAS, Houilles, France
- C Canonne
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- L Goupil
- BabyDevLab, University of East London, London, UK
- J-J Aucouturier
- FEMTO-ST Institute, Université de Bourgogne Franche-Comté/CNRS, Besançon, France
14
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users. J Clin Med 2021;10:2093. PMID: 34068067. PMCID: PMC8152499. DOI: 10.3390/jcm10102093.
Abstract
In the case of hearing loss, cochlear implants (CI) allow for the restoration of hearing. Despite the advantages of CIs for speech perception, CI users still complain about their poor perception of their auditory environment. Aiming to assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. In order to test the potential benefit of visual cues for pitch processing, the three pitch tests included half of the trials with visual indications to perform the task. We tested 10 normal-hearing (NH) participants with material being presented as original and vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants had reduced scores for the detection of small pitch differences, and reduced emotion recognition and streaming abilities compared to the original sounds. Similarly, the CI users had deficits for small differences in the pitch change detection task and emotion recognition, as well as a decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives about how to enhance pitch perception capacities using visual cues.
15
Impaired face recognition is associated with abnormal gray matter volume in the posterior cingulate cortex in congenital amusia. Neuropsychologia 2021; 156:107833. [PMID: 33757844 DOI: 10.1016/j.neuropsychologia.2021.107833] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 03/12/2021] [Accepted: 03/18/2021] [Indexed: 11/21/2022]
Abstract
Congenital amusia is a neurodevelopmental disorder primarily defined by impairments in pitch discrimination and pitch memory. Interestingly, individuals with congenital amusia have also been reported to exhibit deficits in face recognition (prosopagnosia). One explanation for this comorbidity is that the neural substrates of pitch recognition and face recognition may be similar. To test this hypothesis, face recognition ability was assessed using the Cambridge Face Memory Test (CFMT) and gray matter volume was determined through voxel-based morphometry (VBM) in participants with and without congenital amusia. As expected, participants with amusia performed worse on the CFMT and showed reduced gray matter volume (GMV) in the middle temporal gyrus (MTG), the superior temporal gyrus (STG), and the posterior cingulate cortex (PCC) in the right hemisphere compared with matched controls. Furthermore, correlation analyses demonstrated that the CFMT score was positively related to MTG, STG, and PCC GMV in all participants, while separate analyses of each group found a positive correlation between CFMT score and PCC GMV in amusics. These findings suggest that face recognition is associated with a widely distributed microstructural network in the human brain and that the PCC plays an important role in both pitch recognition and face recognition in amusics. In addition, neurodevelopmental disorders such as congenital amusia and prosopagnosia may share a common neural substrate.
16
Kogan VV, Reiterer SM. Eros, Beauty, and Phon-Aesthetic Judgements of Language Sound. We Like It Flat and Fast, but Not Melodious. Comparing Phonetic and Acoustic Features of 16 European Languages. Front Hum Neurosci 2021; 15:578594. [PMID: 33708080 PMCID: PMC7940689 DOI: 10.3389/fnhum.2021.578594] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2020] [Accepted: 01/12/2021] [Indexed: 11/13/2022] Open
Abstract
This article concerns aesthetic preferences for the sound of European foreign languages. We investigated the phonetic-acoustic dimension of linguistic aesthetic pleasure to describe the "music" found in European languages. The Romance languages French, Italian, and Spanish take the lead when people talk about melodious languages - languages with music-like effects (a.k.a. phonetic chill). At the other end of the melodiousness spectrum are German and Arabic, which are often considered to sound harsh and unattractive. Despite the public interest, limited research has been conducted on phonaesthetics, i.e., the subfield of phonetics concerned with the aesthetic properties of speech sounds (Crystal, 2008). Our goal is to fill this research gap by identifying the acoustic features that drive the auditory perception of language sound beauty. What is so music-like in a language that makes people say "it is music to my ears"? We had 45 central European participants listen to 16 auditorily presented European languages and rate each language on 22 binary characteristics (e.g., beautiful - ugly and funny - boring), as well as indicate their language familiarity, L2 background, liking of the speaker's voice, demographics, and musicality level. Findings revealed that several factors in complex interplay each explained a portion of the variance: familiarity and expertise in foreign languages, speaker voice characteristics, phonetic complexity, musical acoustic properties, and finally the musical expertise of the listener. The most important discovery was the trade-off between speech tempo and so-called linguistic melody (pitch variance): the faster the language, the flatter/more atonal it is in terms of pitch (speech melody), making it highly appealing acoustically (sounding beautiful and sexy), but not so melodious in a "musical" sense.
Affiliation(s)
- Vita V Kogan
- School of European Culture and Languages, University of Kent, Kent, United Kingdom
- Susanne M Reiterer
- Department of Linguistics, University of Vienna, Vienna, Austria; Teacher Education Centre, University of Vienna, Vienna, Austria
17
Cheung YL, Zhang C, Zhang Y. Emotion processing in congenital amusia: the deficits do not generalize to written emotion words. CLINICAL LINGUISTICS & PHONETICS 2021; 35:101-116. [PMID: 31986915 DOI: 10.1080/02699206.2020.1719209] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Revised: 01/10/2020] [Accepted: 01/17/2020] [Indexed: 06/10/2023]
Abstract
Congenital amusia is a lifelong impairment in musical ability. Individuals with amusia have been found to show reduced sensitivity to emotion recognition in speech prosody and silent facial expressions, implying a possible cross-modal emotion-processing deficit. However, it is not clear whether the observed deficits are primarily confined to socio-emotional contexts, where visual cues (facial expression) often co-occur with auditory cues (emotional prosody) to express intended emotions, or extend to linguistic emotion processing. To better understand the mechanism underlying the emotion-processing deficit in individuals with amusia, we examined whether reduced sensitivity to emotional processing extends to the recognition of the emotion category and valence of written words. Twenty Cantonese speakers with amusia and 17 controls were tested in three experiments: (1) emotion prosody rating, in which participants rated on 7-point scales how strongly each spoken sentence expressed each of four emotions; (2) written word emotion recognition, in which participants recognized the emotion of written emotion words; and (3) written word valence judgment, in which participants judged the valence of written words. Results showed that participants with amusia performed significantly less accurately than controls in emotion prosody recognition; in contrast, the two groups showed no significant difference in accuracy on either written word task (emotion recognition and valence judgment). The results indicate that the impairment of individuals with amusia in emotion processing may not generalize to linguistic emotion processing of written words, implying that the emotion deficit is likely restricted to socio-emotional contexts.
Affiliation(s)
- Yi Lam Cheung
- School of Management, Cranfield University , Cranfield, UK
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University , Hong Kong, SAR, China
- Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University , Hong Kong, SAR, China
- Yubin Zhang
- Department of Linguistics, University of Southern California , Los Angeles, California, USA
18
Fernandez NB, Vuilleumier P, Gosselin N, Peretz I. Influence of Background Musical Emotions on Attention in Congenital Amusia. Front Hum Neurosci 2021; 14:566841. [PMID: 33568976 PMCID: PMC7868440 DOI: 10.3389/fnhum.2020.566841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 11/30/2020] [Indexed: 11/13/2022] Open
Abstract
Congenital amusia in its most common form is a disorder characterized by a deficit in musical pitch processing. Although pitch is involved in conveying emotion in music, the implications of pitch deficits for musical emotion judgements are still under debate. Relatedly, both limited and spared musical emotion recognition have been reported in amusia in conditions where emotion cues were not determined by musical mode or dissonance. Additionally, assumed links between musical abilities and visuo-spatial attention processes need further investigation in congenital amusics. Hence, we tested here to what extent musical emotions can influence attentional performance. Fifteen congenital amusic adults and fifteen healthy controls matched for age and education were assessed in three attentional conditions: executive control (distractor inhibition), alerting, and orienting (spatial shift), while music expressing either joy, tenderness, sadness, or tension was presented. Visual target detection was in the normal range for both accuracy and response times in the amusic relative to the control participants. Moreover, in both groups, music exposure produced facilitating effects on selective attention that appeared to be driven by the arousal dimension of the musical emotional content, with faster correct target detection during joyful compared to sad music. These findings corroborate the idea that the pitch processing deficits related to congenital amusia do not impede other cognitive domains, particularly visual attention. Furthermore, our study uncovers an intact influence of music and its emotional content on the attentional abilities of amusic individuals. The results highlight the domain-selectivity of the pitch disorder in congenital amusia, which largely spares the development of visual attention and affective systems.
Affiliation(s)
- Natalia B Fernandez
- Laboratory of Behavioral Neurology and Imaging of Cognition, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland; Swiss Center of Affective Sciences, Department of Psychology, University of Geneva, Geneva, Switzerland
- Patrik Vuilleumier
- Laboratory of Behavioral Neurology and Imaging of Cognition, Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland; Swiss Center of Affective Sciences, Department of Psychology, University of Geneva, Geneva, Switzerland
- Nathalie Gosselin
- International Laboratory for Brain, Music and Sound Research, University of Montreal, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research, University of Montreal, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
19
Liao X, Sun J, Jin Z, Wu D, Liu J. Cortical Morphological Changes in Congenital Amusia: Surface-Based Analyses. Front Psychiatry 2021; 12:721720. [PMID: 35095585 PMCID: PMC8794692 DOI: 10.3389/fpsyt.2021.721720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Accepted: 12/07/2021] [Indexed: 11/25/2022] Open
Abstract
Background: Congenital amusia (CA) is a rare disorder characterized by deficits in pitch perception, and many structural and functional magnetic resonance imaging studies have been conducted to better understand its neural bases. However, a structural magnetic resonance imaging analysis using a surface-based morphometry method to identify regions with cortical feature abnormalities at the vertex level has not yet been performed. Methods: Fifteen participants with CA and 13 healthy controls underwent structural magnetic resonance imaging. A surface-based morphometry method was used to identify anatomical abnormalities. The mean values of the surface parameters were then extracted from the clusters showing statistically significant between-group differences and compared. Finally, Pearson's correlation analysis was used to assess the correlation between Montreal Battery of Evaluation of Amusia (MBEA) scores and the surface parameters. Results: The CA group had significantly lower MBEA scores than the healthy controls (p < 0.001). The CA group exhibited a significantly higher fractal dimension in the right caudal middle frontal gyrus and a lower sulcal depth in the right pars triangularis gyrus (p < 0.05; false discovery rate-corrected at the cluster level) compared to healthy controls. There were negative correlations between the mean fractal dimension values in the right caudal middle frontal gyrus and the MBEA scores, including the mean MBEA score (r = -0.5398, p = 0.0030), scale score (r = -0.5712, p = 0.0015), contour score (r = -0.4662, p = 0.0124), interval score (r = -0.4564, p = 0.0146), rhythmic score (r = -0.5133, p = 0.0052), meter score (r = -0.3937, p = 0.0382), and memory score (r = -0.3879, p = 0.0414). There were significant positive correlations between the mean sulcal depth in the right pars triangularis gyrus and the MBEA scores, including the mean score (r = 0.5130, p = 0.0052), scale score (r = 0.5328, p = 0.0035), interval score (r = 0.4059, p = 0.0321), rhythmic score (r = 0.5733, p = 0.0014), meter score (r = 0.5061, p = 0.0060), and memory score (r = 0.4001, p = 0.0349). Conclusion: Individuals with CA exhibit cortical morphological changes in the right hemisphere. These findings may indicate that the neural basis of the speech perception and memory impairments in individuals with CA is associated with abnormalities in the right pars triangularis gyrus and middle frontal gyrus, and that these cortical abnormalities may be a neural marker of CA.
Affiliation(s)
- Xuan Liao
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China
- Junjie Sun
- Department of Radiology, The Sir Run Run Shaw Hospital Affiliated to Zhejiang University School of Medicine, Hangzhou, China
- Zhishuai Jin
- Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
- DaXing Wu
- Medical Psychological Center, The Second Xiangya Hospital of Central South University, Changsha, China
- Jun Liu
- Department of Radiology, The Second Xiangya Hospital of Central South University, Changsha, China; Clinical Research Center for Medical Imaging in Hunan Province, Changsha, China; Department of Radiology Quality Control Center, The Second Xiangya Hospital of Central South University, Changsha, China
20
Filippi P. Emotional Voice Intonation: A Communication Code at the Origins of Speech Processing and Word-Meaning Associations? JOURNAL OF NONVERBAL BEHAVIOR 2020. [DOI: 10.1007/s10919-020-00337-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
The aim of the present work is to investigate the facilitating effect of vocal emotional intonation on the evolution of the following processes involved in language: (a) identifying and producing phonemes, (b) processing compositional rules underlying vocal utterances, and (c) associating vocal utterances with meanings. To this end, firstly, I examine research on the presence of these abilities in animals, and the biologically ancient nature of emotional vocalizations. Secondly, I review research attesting to the facilitating effect of emotional voice intonation on these abilities in humans. Thirdly, building on these studies in animals and humans, and through taking an evolutionary perspective, I provide insights for future empirical work on the facilitating effect of emotional intonation on these three processes in animals and preverbal humans. In this work, I highlight the importance of a comparative approach to investigate language evolution empirically. This review supports Darwin’s hypothesis, according to which the ability to express emotions through voice modulation was a key step in the evolution of spoken language.
21
Shao J, Zhang C. Dichotic Perception of Lexical Tones in Cantonese-Speaking Congenital Amusics. Front Psychol 2020; 11:1411. [PMID: 32733321 PMCID: PMC7358218 DOI: 10.3389/fpsyg.2020.01411] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Accepted: 05/26/2020] [Indexed: 11/23/2022] Open
Abstract
Congenital amusia is an inborn neurogenetic disorder of musical pitch processing, which also induces impairment in lexical tone perception. However, it has not previously been examined how the brain specialization for lexical tone perception is affected in amusics. The current study adopted the dichotic listening paradigm to examine this issue, testing 18 Cantonese-speaking amusics and 18 matched controls on pitch/lexical tone identification and discrimination in three conditions: non-speech tone, low syllable variation, and high syllable variation. For typical listeners, discrimination accuracy was higher, with shorter RTs, in the left ear regardless of the stimulus type, suggesting a left-ear advantage (LEA) in discrimination. When the demand on phonological processing increased, as in the identification task, shorter RTs were still obtained in the left ear; however, identification accuracy revealed a bilateral pattern. Taken together, the results of the identification task revealed a reduced LEA, or a shift from right-hemisphere to bilateral processing, in identification. Amusics exhibited overall poorer performance in both identification and discrimination tasks, indicating that pitch/lexical tone processing in dichotic listening settings was impaired, but there was no evidence that amusics showed an ear preference different from that of controls. These findings provide preliminary evidence that although amusics demonstrate deficient neural mechanisms for pitch/lexical tone processing, their ear preference patterns might not be affected. These results broaden the understanding of the nature of the pitch and lexical tone processing deficits in amusia.
Affiliation(s)
- Jing Shao
- School of Humanities, Shanghai Jiao Tong University, Shanghai, China; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China; Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China
22
Lo CY, Looi V, Thompson WF, McMahon CM. Music Training for Children With Sensorineural Hearing Loss Improves Speech-in-Noise Perception. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:1990-2015. [PMID: 32543961 DOI: 10.1044/2020_jslhr-19-00391] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose A growing body of evidence suggests that long-term music training provides benefits to auditory abilities for typical-hearing adults and children. The purpose of this study was to evaluate how music training may provide perceptual benefits (such as speech-in-noise, spectral resolution, and prosody) for children with hearing loss. Method Fourteen children aged 6-9 years with prelingual sensorineural hearing loss using bilateral cochlear implants, bilateral hearing aids, or bimodal configuration participated in a 12-week music training program, with nine participants completing the full testing requirements of the music training. Activities included weekly group-based music therapy and take-home music apps three times a week. The design was a pseudorandomized, longitudinal study (half the cohort was wait-listed, initially serving as a passive control group prior to music training). The test battery consisted of tasks related to music perception, music appreciation, and speech perception. As a comparison, 16 age-matched children with typical hearing also completed this test battery, but without participation in the music training. Results There were no changes for any outcomes for the passive control group. After music training, perception of speech-in-noise, question/statement prosody, musical timbre, and spectral resolution improved significantly, as did measures of music appreciation. There were no benefits for emotional prosody or pitch perception. Conclusion The findings suggest even a modest amount of music training has benefits for music and speech outcomes. These preliminary results provide further evidence that music training is a suitable complementary means of habilitation to improve the outcomes for children with hearing loss.
Affiliation(s)
- Chi Yhun Lo
- Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
- The HEARing CRC, Melbourne, Victoria, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Sydney, New South Wales, Australia
- Valerie Looi
- SCIC Cochlear Implant Program-An RIDBC Service, Sydney, New South Wales, Australia
- William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
- Catherine M McMahon
- Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
- The HEARing CRC, Melbourne, Victoria, Australia
23
Music processing deficits in Landau-Kleffner syndrome: Four case studies in adulthood. Cortex 2020; 129:99-111. [PMID: 32442777 DOI: 10.1016/j.cortex.2020.03.025] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Revised: 12/23/2019] [Accepted: 03/05/2020] [Indexed: 11/20/2022]
Abstract
Verbal-auditory agnosia and aphasia are the most prominent symptoms in Landau-Kleffner syndrome (LKS), a childhood epilepsy that can have sustained long-term effects on language processing. The present study provides the first objective investigation of music perception skills in four adult patients with a diagnosis of LKS during childhood, covering the spectrum of severity of the syndrome from mild to severe. Pitch discrimination, short-term memory for melodic, rhythmic and verbal information, as well as emotion recognition in music and speech prosody were assessed with listening tests, and subjective attitude to music with a questionnaire. We observed amusia in 3 out of 4 patients, with elevated pitch discrimination thresholds and poor short-term memory for melody and rhythm. The two patients with the most severe LKS had impairments in music and prosody emotion recognition, but normal perception of emotional intensity of music. Overall, performance in music processing tasks was proportional to the severity of the syndrome. Nonetheless, the four patients reported that they enjoyed music, felt musical emotions, and used music in their daily life. These new data support the hypothesis that, beyond verbal impairments, cerebral networks involved in sound processing and encoding are deeply altered by the epileptic activity in LKS, well after electrophysiological normalization.
24
Shao J, Wang L, Zhang C. Talker Processing in Mandarin-Speaking Congenital Amusics. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:1361-1375. [PMID: 32343927 DOI: 10.1044/2020_jslhr-19-00209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose The ability to recognize individuals from their vocalizations is an important trait of human beings. In the current study, we aimed to examine how congenital amusia, an inborn pitch-processing disorder, affects the discrimination and identification of talkers' voices. Method Twenty Mandarin-speaking amusics and 20 controls were tested on talker discrimination and identification in four types of contexts that varied in the degree of language familiarity: Mandarin real words, Mandarin pseudowords, Arabic words, and reversed Mandarin speech. Results The language familiarity effect was more evident in the talker identification task than in the discrimination task for both participant groups, and talker identification accuracy decreased as native phonological representations were removed from the stimuli. Importantly, amusics demonstrated degraded performance both in the native speech conditions, which contained phonological/linguistic information to facilitate talker identification, and in the nonnative conditions, where talker voice processing relied primarily on phonetic cues, including pitch. Moreover, performance in talker processing could be predicted by the participants' musical ability and phonological memory capacity. Conclusions The results provide a first set of behavioral evidence that individuals with amusia are impaired in human voice identification. Furthermore, amusia appears to be not only a pitch disorder but one that likely affects the phonological processing of speech, in terms of using phonological information in native speech to analyze a talker's identity. These findings expand the understanding of the nature and scope of congenital amusia. Supplemental Material https://doi.org/10.23641/asha.12170379.
Affiliation(s)
- Jing Shao
- School of Humanities, Shanghai Jiao Tong University, China
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lan Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, China
25
Li M, Tang W, Liu C, Nan Y, Wang W, Dong Q. Vowel and Tone Identification for Mandarin Congenital Amusics: Effects of Vowel Type and Semantic Content. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:4300-4308. [PMID: 31805240 DOI: 10.1044/2019_jslhr-s-18-0440] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose This study aimed to explore the effects of Mandarin congenital amusia with or without a lexical tone deficit (i.e., tone agnosia and pure amusia) on Mandarin vowel and tone identification across vowel types (monophthongs, diphthongs, and triphthongs) embedded in consonant-vowel contexts with and without semantic content. Method Thirteen pure amusics (i.e., amusics with normal lexical processing), 5 tone agnosics (i.e., amusics with a lexical tone deficit), and 12 controls were screened with the Montreal Battery of Evaluation of Amusia and lexical tone tests (Nan et al., 2010; Peretz et al., 2003). Vowel-plus-tone identification tasks with the factors of vowel type and syllables with and without semantic content (i.e., real and nonsense words) were administered to the 3 groups, and identification scores were calculated in 3 formats: vowel-plus-tone identification, vowel identification, and tone identification. Results Tone agnosics performed significantly more poorly than pure amusics and controls on the identification of vowels, tones, and vowels plus tones across monophthongs, diphthongs, and triphthongs in both real and nonsense words. Their deficits were similar across the 3 vowel types, while the deficit in vowel-plus-tone identification was more severe in nonsense words than in real words. On the other hand, pure amusics performed similarly to controls across all these conditions. Conclusions Tone agnosia might affect both musical pitch and phonological processing, resulting in deficits in lexical tone and vowel perception. In contrast, the deficit in pure amusia primarily concerns musical pitch perception, with no lexical tone or phonemic deficit. Vowel type did not modulate the speech deficits of tone agnosics, although they relied more on semantic content as compensation.
Affiliation(s)
- Mingshuang Li
- Department of Communication Sciences and Disorders, University of Texas at Austin
- Wei Tang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Chang Liu
- Department of Communication Sciences and Disorders, University of Texas at Austin
- Yun Nan
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Wenjing Wang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
- Qi Dong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, China
26
Tao W, Huang H, Haponenko H, Sun HJ. Face recognition and memory in congenital amusia. PLoS One 2019; 14:e0225519. [PMID: 31790454 PMCID: PMC6886812 DOI: 10.1371/journal.pone.0225519] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Accepted: 11/06/2019] [Indexed: 11/19/2022] Open
Abstract
Congenital amusia, commonly known as tone deafness, is a lifelong impairment of music perception and production. It remains a matter of debate whether the impairments in the musical domain observed in congenital amusia are paralleled in non-musical perceptual abilities. Using behavioral measures in two experiments, the current study explored face perception and memory in congenital amusics. Both congenital amusics and matched controls performed a face perception task (Experiment 1) and an old/novel object memory task (for both faces and houses; Experiment 2). The results showed that the congenital amusic group had significantly slower reaction times than the matched control group when identifying whether two faces presented together were the same or different. For different face-pairs, the deficit was greater for upright faces than for inverted faces. In the object memory task, the congenital amusic group also showed worse memory performance than the control group. The results of the present study suggest that the impairment attributed to congenital amusia is not limited to music, but extends to the visual perception and visual memory domains.
Affiliation(s)
- Weidong Tao
- Department of Psychology, School of Teacher Education, Huzhou Normal University, Huzhou, Zhejiang, China
- Huayan Huang
- Department of Psychology, School of Education, Lingnan Normal University, Zhanjiang, Guangdong, China
- Hanna Haponenko
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- Hong-jin Sun
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
27
Graves JE, Pralus A, Fornoni L, Oxenham AJ, Caclin A, Tillmann B. Short- and long-term memory for pitch and non-pitch contours: Insights from congenital amusia. Brain Cogn 2019; 136:103614. [PMID: 31546175 PMCID: PMC6953621 DOI: 10.1016/j.bandc.2019.103614] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Revised: 09/11/2019] [Accepted: 09/13/2019] [Indexed: 11/25/2022]
Abstract
Congenital amusia is a neurodevelopmental disorder characterized by deficits in music perception, including discriminating and remembering melodies and melodic contours. As non-amusic listeners can perceive contours in dimensions other than pitch, such as loudness and brightness, our present study investigated whether amusics' pitch contour deficits also extend to these other auditory dimensions. Amusic and control participants performed an identification task for ten familiar melodies and a short-term memory task requiring the discrimination of changes in the contour of novel four-tone melodies. For both tasks, melodic contour was defined by pitch, brightness, or loudness. Amusic participants showed some ability to extract contours in all three dimensions. For familiar melodies, amusic participants showed impairment in all conditions, perhaps reflecting the fact that the long-term memory representations of the familiar melodies were defined in pitch. In the contour discrimination task with novel melodies, amusic participants exhibited less impairment for loudness-based melodies than for pitch- or brightness-based melodies, suggesting some specificity of the deficit for spectral changes, if not for pitch alone. The results suggest pitch and brightness may not be processed by the same mechanisms as loudness, and that short-term memory for loudness contours may be spared to some degree in congenital amusia.
Affiliation(s)
- Jackson E Graves
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France; Department of Psychology, University of Minnesota, Minneapolis, MN, USA; Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Agathe Pralus
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
- Lesly Fornoni
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Anne Caclin
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center (CRNL), CNRS, UMR 5292, Inserm U1028, Université Lyon 1, Lyon, France
28
Pralus A, Fornoni L, Bouet R, Gomot M, Bhatara A, Tillmann B, Caclin A. Emotional prosody in congenital amusia: Impaired and spared processes. Neuropsychologia 2019; 134:107234. [DOI: 10.1016/j.neuropsychologia.2019.107234] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 08/12/2019] [Accepted: 10/16/2019] [Indexed: 12/15/2022]
29
Zhou L, Liu F, Jiang J, Jiang C. Impaired emotional processing of chords in congenital amusia: Electrophysiological and behavioral evidence. Brain Cogn 2019; 135:103577. [DOI: 10.1016/j.bandc.2019.06.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Revised: 06/04/2019] [Accepted: 06/04/2019] [Indexed: 10/26/2022]
30
Sihvonen AJ, Särkämö T, Rodríguez-Fornells A, Ripollés P, Münte TF, Soinila S. Neural architectures of music - Insights from acquired amusia. Neurosci Biobehav Rev 2019; 107:104-114. [PMID: 31479663 DOI: 10.1016/j.neubiorev.2019.08.023] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 08/27/2019] [Accepted: 08/29/2019] [Indexed: 12/27/2022]
Abstract
The ability to perceive and produce music is a quintessential element of human life, present in all known cultures. Modern functional neuroimaging has revealed that music listening activates a large-scale bilateral network of cortical and subcortical regions in the healthy brain. Yet even the most accurate structural studies do not reveal which brain areas are critical and causally linked to music processing. Such questions may be answered by analysing the effects of focal brain lesions on patients' ability to perceive music. In this sense, acquired amusia after stroke provides a unique opportunity to investigate the neural architectures crucial for normal music processing. Based on the first large-scale longitudinal studies of stroke-induced amusia using modern multimodal magnetic resonance imaging (MRI) techniques, such as advanced lesion-symptom mapping, grey and white matter morphometry, tractography and functional connectivity, we discuss the neural structures critical for music processing, consider music processing in light of the dual-stream model in the right hemisphere, and propose a neural model for acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen
- Department of Neurosciences, University of Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, University of Helsinki, Finland
- Antoni Rodríguez-Fornells
- Department of Cognition, University of Barcelona, Cognition & Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Pablo Ripollés
- Department of Psychology, New York University and Music and Audio Research Laboratory, New York University, USA
- Thomas F Münte
- Department of Neurology and Institute of Psychology II, University of Lübeck, Germany
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, and Department of Neurology, University of Turku, Finland
31
Leo V, Sihvonen AJ, Linnavalli T, Tervaniemi M, Laine M, Soinila S, Särkämö T. Cognitive and neural mechanisms underlying the mnemonic effect of songs after stroke. NEUROIMAGE-CLINICAL 2019; 24:101948. [PMID: 31419766 PMCID: PMC6706631 DOI: 10.1016/j.nicl.2019.101948] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 04/05/2019] [Accepted: 07/19/2019] [Indexed: 01/28/2023]
Abstract
Sung melody provides a mnemonic cue that can enhance the acquisition of novel verbal material in healthy subjects. Recent evidence suggests that stroke patients, too, especially those with mild aphasia, can learn and recall novel narrative stories better when they are presented in sung rather than spoken format. Extending this finding, the present study explored the cognitive mechanisms underlying this effect by determining whether learning and recall of novel sung vs. spoken stories show a differential pattern of serial position effects (SPEs) and chunking effects in non-aphasic and aphasic stroke patients (N = 31) studied 6 months post-stroke. The structural neural correlates of these effects were also explored using voxel-based morphometry (VBM) and deterministic tractography (DT) analyses of structural MRI data. Non-aphasic patients showed more stable recall with reduced SPEs in the sung than spoken task, which was coupled with greater volume and integrity (indicated by fractional anisotropy, FA) of the left arcuate fasciculus. In contrast, compared to non-aphasic patients, the aphasic patients showed a larger recency effect (better recall of the last vs. middle part of the story) and enhanced chunking (larger units of correctly recalled consecutive items) in the sung than spoken task. In aphasics, the enhanced chunking and better recall of the middle verse in the sung vs. spoken task also correlated with a better ability to perceive emotional prosody in speech. Neurally, the sung > spoken recency effect in aphasic patients was coupled with greater grey matter volume in a bilateral network of temporal, frontal, and parietal regions, and also greater volume of the right inferior fronto-occipital fasciculus (IFOF). These results provide novel cognitive and neurobiological insight into how a repetitive sung melody can function as a verbal mnemonic aid after stroke.
- Non-aphasic stroke patients show more stable recall of sung than spoken stories.
- Aphasic patients show larger recency and chunking effects for sung vs. spoken stories.
- The left dorsal pathway mediates better recall of sung stories in non-aphasics.
- The right ventral pathway mediates better recall of sung stories in aphasics.
- Large-scale bilateral cortical networks are linked to musical mnemonics in aphasia.
Affiliation(s)
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Neurosciences, Faculty of Medicine, University of Helsinki, Finland
- Tanja Linnavalli
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Mari Tervaniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; CICERO Learning, University of Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
32
Oechslin MS, Gschwind M, James CE. Tracking Training-Related Plasticity by Combining fMRI and DTI: The Right Hemisphere Ventral Stream Mediates Musical Syntax Processing. Cereb Cortex 2019; 28:1209-1218. [PMID: 28203797 DOI: 10.1093/cercor/bhx033] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2015] [Accepted: 01/25/2017] [Indexed: 12/25/2022] Open
Abstract
Neuroimaging studies have evidenced the involvement of right prefrontal regions in musical syntax processing, a functional homolog of left-hemispheric syntax processing in language, but the underlying white matter connectivity has so far remained unexplored. In the current experiment, we investigated the underlying pathway architecture in subjects with three levels of musical expertise. Employing diffusion tensor imaging tractography, departing from seeds from our previous functional magnetic resonance imaging study on music syntax processing in the same participants, we identified a pathway in the right ventral stream that connects the middle temporal lobe with the inferior frontal cortex via the extreme capsule and corresponds to the left-hemisphere ventral stream classically attributed to syntax processing in language comprehension. Additional morphometric consistency analyses allowed us to dissociate the tract core from more dispersed fiber portions. Musical expertise was related to higher tract consistency of the right ventral stream pathway. Specifically, tract consistency in this pathway predicted sensitivity to musical syntax violations. We conclude that enduring musical practice sculpts ventral stream architecture. Our results suggest that training-related pathway plasticity facilitates right-hemisphere ventral stream information transfer, supporting an improved sound-to-meaning mapping in music.
Affiliation(s)
- Mathias S Oechslin
- Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211 Geneva, Switzerland; Department of Education and Culture of the Canton of Thurgau, CH-8500 Frauenfeld, Switzerland
- Markus Gschwind
- Department of Neurology, Geneva University Hospitals, CH-1211 Geneva, Switzerland; Department of Neuroscience, Campus Biotech, University of Geneva, CH-1202 Geneva, Switzerland
- Clara E James
- Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211 Geneva, Switzerland; Geneva Neuroscience Center, University of Geneva, CH-1211 Geneva, Switzerland; HES-SO University of Applied Sciences and Arts Western Switzerland, School of Health Sciences, CH-1206 Geneva, Switzerland
33
Nordström H, Laukka P. The time course of emotion recognition in speech and music. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:3058. [PMID: 31153307 DOI: 10.1121/1.5108601] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/02/2018] [Accepted: 04/25/2019] [Indexed: 06/09/2023]
Abstract
The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 instead used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music vs speech. This, in turn, suggests that acoustic cues that develop over time also play a role for emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
Affiliation(s)
- Henrik Nordström
- Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
34
Shao J, Zhang C. Talker normalization in typical Cantonese-speaking listeners and congenital amusics: Evidence from event-related potentials. Neuroimage Clin 2019; 23:101814. [PMID: 30978657 PMCID: PMC6458432 DOI: 10.1016/j.nicl.2019.101814] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2018] [Revised: 03/20/2019] [Accepted: 04/02/2019] [Indexed: 12/01/2022]
Abstract
Despite the lack of invariance in the mapping between the acoustic signal and phonological representation, typical listeners are capable of using information about a talker's vocal characteristics to recognize phonemes, a process known as "talker normalization". The current study investigated the time course of talker normalization in typical listeners and individuals with congenital amusia, a neurodevelopmental disorder of refined pitch processing. We examined the event-related potentials (ERPs) underlying lexical tone processing in 24 Cantonese-speaking amusics and 24 typical listeners (controls) in two conditions: blocked-talker and mixed-talker conditions. The results demonstrated that for typical listeners, effects of talker variability can be observed as early as the N1 time window (100-150 ms), with the N1 amplitude reduced in the mixed-talker condition. Significant effects were also found in later components: the N2b/c peaked significantly earlier, and the P3a and P3b amplitudes were enhanced, in the blocked-talker condition relative to the mixed-talker condition, especially for the tone pair that is more difficult to discriminate. These results suggest that the blocked-talker mode of stimulus presentation probably facilitates auditory processing and requires less attentional effort, with easier speech categorization than in the mixed-talker condition, providing neural evidence for the "active control theory". On the other hand, amusics exhibited N1 amplitude comparable to controls in both conditions but deviated from controls in later components: they demonstrated an overall later N2b/c peak latency, significantly reduced P3a amplitude in the blocked-talker condition, and reduced P3b amplitude irrespective of talker condition.
These results suggest that the amusic brain is intact in the auditory processing stage of talker normalization, as reflected by the comparable N1 amplitude, but exhibits a reduced automatic attentional switch to tone changes in the blocked-talker condition, as captured by the reduced P3a amplitude, which presumably underlies a previously reported perceptual "anchoring" deficit in amusics. Altogether, these findings reveal the time course of talker normalization in typical listeners and extend the finding that conscious pitch processing is impaired in the amusic brain.
Affiliation(s)
- Jing Shao
- The Hong Kong Polytechnic University, Department of Chinese and Bilingual Studies, Hong Kong, China; Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China
- Caicai Zhang
- The Hong Kong Polytechnic University, Department of Chinese and Bilingual Studies, Hong Kong, China; Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China
35
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. [PMID: 30832292 PMCID: PMC6468545 DOI: 10.3390/brainsci9030053] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 02/18/2019] [Accepted: 02/26/2019] [Indexed: 11/24/2022] Open
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory-rich stimuli, at the level of both production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as too restrictive. A broader conception argues for an action-oriented, embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered the meeting point between speech and music, and the question can be raised as to the components shared between the interpretation of sound in the domains of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) the relationship between speech and music, with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound, with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, KU Leuven–University of Leuven, 3000 Leuven, Belgium; IPEM–Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland
36
Abstract
The Montreal Battery for the Evaluation of Amusia (MBEA; Peretz, Champod, & Hyde Annals of the New York Academy of Sciences, 999, 58-75, 2003) is an empirically grounded quantitative tool that is widely used to identify individuals with congenital amusia. The use of such a standardized measure ensures that the individuals tested will conform to a specific neuropsychological profile, allowing for comparisons across studies and research groups. Recently, a number of researchers have published credible critiques of the usefulness of the MBEA as a diagnostic tool for amusia. Here we argue that the MBEA and its online counterpart, the AMUSIA tests (Peretz et al. Music Perception, 25, 331-343, 2008), should be considered steps in a screening process for amusia, rather than standalone diagnostic tools. The goal of this article is to present, in detailed and easily replicable format, the full protocol through which congenital amusics should be identified. In providing information that has often gone unreported in published articles, we aim to clarify the strengths and limitations of the MBEA and to make recommendations for its continued use by the research community as part of the Montreal Protocol for Identification of Amusia.
37
Shao J, Lau RYM, Tang POC, Zhang C. The Effects of Acoustic Variation on the Perception of Lexical Tone in Cantonese-Speaking Congenital Amusics. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:190-205. [PMID: 30950752 DOI: 10.1044/2018_jslhr-h-17-0483] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Purpose: Congenital amusia is an inborn neurogenetic disorder of fine-grained pitch processing. This study attempted to pinpoint the impairment mechanism of speech processing in tonal-language speakers with amusia. We designed a series of perception tasks aiming to selectively probe low-level pitch processing and relatively high-level phonological processing of lexical tones, with the aim of illuminating the deficiency mechanism underlying tone perception in amusia.
Method: Sixteen Cantonese-speaking amusics and 16 matched controls were tested on the effects of acoustic (talker/syllable) variations on the identification and discrimination of Cantonese tones in two conditions. In the low-variation condition, tones were always associated with the same talker or syllable; in the high-variation condition, tones were associated with either different talkers (with the syllable controlled) or different syllables (with the talker controlled).
Results: Largely similar results were obtained in the talker and syllable variation conditions. Amusics exhibited overall poorer performance than controls in tone identification. Although amusics also demonstrated poorer performance in tone discrimination, the group difference was more obvious in the low-variation conditions, where more acoustic constancy was provided. In addition, controls exhibited a greater increase in discrimination sensitivity from high- to low-variation conditions, implying a stronger benefit of acoustic constancy.
Conclusions: The findings suggest that amusics' lexical tone perception abilities, in terms of both low-level pitch processing and high-level phonological processing as measured in the low- and high-variation conditions, are impaired. Importantly, amusics were more impaired in taking advantage of low-acoustic-variation contexts and thus sharpened their perception of tones less efficiently when perceptual anchors in talker/syllable were provided, suggesting a possible "anchoring deficit" in congenital amusia.
Supplemental Material: https://doi.org/10.23641/asha.7616555
Affiliation(s)
- Jing Shao
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
- Rebecca Yick Man Lau
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Phyllis Oi Ching Tang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, China
38
Perception of musical pitch in developmental prosopagnosia. Neuropsychologia 2019; 124:87-97. [PMID: 30625291 DOI: 10.1016/j.neuropsychologia.2018.12.022] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2017] [Revised: 12/19/2018] [Accepted: 12/29/2018] [Indexed: 11/21/2022]
Abstract
Studies of developmental prosopagnosia have often shown that the condition differentially affects human face processing over non-face object processing. However, little consideration has been given to whether it is associated with perceptual or sensorimotor impairments in other modalities. Comorbidities have played a role in theories of other developmental disorders such as dyslexia, but studies of developmental prosopagnosia have often focused on the nature of the visual recognition impairment, despite evidence for widespread neural anomalies that might affect other sensorimotor systems. We studied 12 subjects with developmental prosopagnosia using a battery of auditory tests evaluating pitch and rhythm processing as well as voice perception and recognition. Overall, three subjects were impaired in fine pitch discrimination, a prevalence of 25% that is higher than the estimated 4% prevalence of congenital amusia in the general population. This was a selective deficit, as rhythm perception was unaffected in all 12 subjects. Furthermore, two of the three prosopagnosic subjects who were impaired in pitch discrimination had intact voice perception and recognition, while two of the remaining nine subjects had impaired voice recognition but intact pitch perception. These results indicate that, in some subjects with developmental prosopagnosia, the face recognition deficit is not an isolated impairment but is associated with deficits in other domains, such as auditory perception. These deficits may form part of a broader syndrome, which could be due to distributed microstructural anomalies in various brain networks, possibly with a common theme of right-hemispheric predominance.
39
40
Loutrari A, Tselekidou F, Proios H. Phrase-Final Words in Greek Storytelling Speech: A Study on the Effect of a Culturally-Specific Prosodic Feature on Short-Term Memory. JOURNAL OF PSYCHOLINGUISTIC RESEARCH 2018; 47:947-957. [PMID: 29488146 DOI: 10.1007/s10936-018-9570-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Prosodic patterns of speech appear to make a critical contribution to memory-related processing. We considered the case of a previously unexplored prosodic feature of Greek storytelling and its effect on free recall in thirty typically developing children between the ages of 10 and 12 years, using short, ecologically valid auditory stimuli. The combination of a falling pitch contour and, more notably, extensive final-syllable vowel lengthening, which gives rise to the prosodic feature in question, led to significantly higher performance in comparison with neutral phrase-final prosody. The number of syllables in target words did not substantially affect performance. The current study thus documents a previously unreported, culturally specific prosodic pattern and its effect on short-term memory.
Affiliation(s)
- Ariadne Loutrari
- Department of Applied Linguistics and Communication, Birkbeck, University of London, London, UK
- Freideriki Tselekidou
- Department of Education and Social Policy, University of Macedonia, Thessaloniki, Greece
- Hariklia Proios
- Department of Education and Social Policy, University of Macedonia, Thessaloniki, Greece
41
Pfeifer J, Hamann S. The Nature and Nurture of Congenital Amusia: A Twin Case Study. Front Behav Neurosci 2018; 12:120. [PMID: 29988571 PMCID: PMC6026798 DOI: 10.3389/fnbeh.2018.00120] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2017] [Accepted: 05/31/2018] [Indexed: 12/25/2022] Open
Abstract
In this article, we report the first documented case of congenital amusia in dizygotic twins. The female twin pair was 27 years old at the time of testing, with normal hearing and above-average intelligence. Both had formal music lessons from the ages of 8 to 12 and were exposed to music in their childhood. Using the Montreal Battery of Evaluation of Amusia (Peretz et al., 2003), one twin was diagnosed as amusic, with a pitch perception as well as a rhythm perception deficit, while the other twin had normal pitch and rhythm perception. We conducted a large battery of tests assessing the twins' performance in music, pitch perception and memory, language perception and spatial processing. Both showed an identical, albeit low, pitch memory span of 3.5 tones and impaired performance on a beat alignment task, yet the non-amusic twin outperformed the amusic twin in three other musical tasks and all language-related tasks. The twins also differed significantly in their performance on one of two spatial tasks (visualization), with the non-amusic twin outperforming the amusic twin (83% vs. 20% correct). The twins' performance is also compared to normative samples of normal and amusic participants from other studies. This twin case study highlights that congenital amusia is not due to insufficient exposure to music in childhood: the twins' exposure to music was as comparable as it can be for two individuals. This study also indicates that there is an association between amusia and a spatial processing deficit (see Douglas and Bilkey, 2007; contra Tillmann et al., 2010; Williamson et al., 2011) and that more research is needed in this area.
Affiliation(s)
- Jasmin Pfeifer
- Phonetics Laboratory, Amsterdam Center for Language and Communication, University of Amsterdam, Amsterdam, Netherlands; Institute for Language and Information, Heinrich-Heine University, Düsseldorf, Germany
- Silke Hamann
- Phonetics Laboratory, Amsterdam Center for Language and Communication, University of Amsterdam, Amsterdam, Netherlands
42
Tang W, Wang XJ, Li JQ, Liu C, Dong Q, Nan Y. Vowel and tone recognition in quiet and in noise among Mandarin-speaking amusics. Hear Res 2018. [DOI: 10.1016/j.heares.2018.03.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
43
Normal pre-attentive and impaired attentive processing of lexical tones in Cantonese-speaking congenital amusics. Sci Rep 2018; 8:8420. [PMID: 29849069 PMCID: PMC5976652 DOI: 10.1038/s41598-018-26368-7] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2018] [Accepted: 05/10/2018] [Indexed: 11/08/2022] Open
Abstract
The neural underpinnings of congenital amusia, an innate neurogenetic disorder of musical pitch processing, are not well understood. Previous studies suggest that amusia primarily impairs attentive processing (P300) of small pitch deviations in music, leaving pre-attentive pitch processing (mismatch negativity or MMN) more or less intact. However, it remains unknown whether the same deficient neuro-dynamic mechanism underlies pitch processing in speech, where amusics also often show impairment behaviorally. The current study examined how lexical tones are processed in pre-attentive (MMN) and attentive (P300) conditions in 24 Cantonese-speaking amusics and 24 matched controls. At the pre-attentive level, Cantonese-speaking amusics exhibited normal MMN responses to lexical tone changes, even for tone pairs with small pitch differences (mid level vs. low level tone; high rising vs. low rising tone). However, at the attentive level, amusics exhibited reduced P3a amplitude for all tone pairs, and further reduced P3b amplitude for tone pairs with small pitch differences. These results suggest that the amusic brain detects tone changes normally at the pre-attentive level, but shows impairment in consciously detecting the same tone differences. Consistent with previous findings in nonspeech pitch processing, this finding provides support for a domain-general neuro-dynamic mechanism of deficient attentive pitch processing in amusia.
Collapse
|
44
|
Sun Y, Lu X, Ho HT, Johnson BW, Sammler D, Thompson WF. Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia. NEUROIMAGE-CLINICAL 2018; 19:640-651. [PMID: 30013922 PMCID: PMC6022360 DOI: 10.1016/j.nicl.2018.05.032] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 05/22/2018] [Accepted: 05/23/2018] [Indexed: 11/23/2022]
Abstract
Evidence is accumulating that similar cognitive resources are engaged to process syntactic structure in music and language. Congenital amusia – a neurodevelopmental disorder that primarily affects music perception, including musical syntax – provides a special opportunity to understand the nature of this overlap. Using electroencephalography (EEG), we investigated whether individuals with congenital amusia have parallel deficits in processing language syntax in comparison to control participants. Twelve amusic participants (eight females) and 12 control participants (eight females) were presented melodies in one session, and spoken sentences in another session, both of which had syntactic-congruent and -incongruent stimuli. They were asked to complete a music-related and a language-related task that were irrelevant to the syntactic incongruities. Our results show that amusic participants exhibit impairments in the early stages of both music- and language-syntactic processing. Specifically, we found that two event-related potential (ERP) components – namely Early Right Anterior Negativity (ERAN) and Left Anterior Negativity (LAN), associated with music- and language-syntactic processing respectively, were absent in the amusia group. However, at later processing stages, amusics showed similar brain responses as controls to syntactic incongruities in both music and language. This was reflected in a normal N5 in response to melodies and a normal P600 to spoken sentences. Notably, amusics' parallel music- and language-syntactic impairments were not accompanied by deficits in semantic processing (indexed by normal N400 in response to semantic incongruities). Together, our findings provide further evidence for shared music and language syntactic processing, particularly at early stages of processing. Amusics displayed abnormal brain responses to music-syntactic irregularities. They also exhibited abnormal brain responses to language-syntactic irregularities. 
These impairments affect an early stage of syntactic processing, not a later stage. Music and language involve similar cognitive mechanisms for processing syntax.
Collapse
Affiliation(s)
- Yanan Sun
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia.
| | - Xuejing Lu
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia; CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
| | - Hao Tam Ho
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa 56126, Italy; School of Psychology, University of Sydney, New South Wales 2006, Australia
| | - Blake W Johnson
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
| | - Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia
| |
Collapse
|
45
|
Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018; 13:e0196391. [PMID: 29768426 PMCID: PMC5955500 DOI: 10.1371/journal.pone.0196391] [Citation(s) in RCA: 162] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 04/12/2018] [Indexed: 11/19/2022] Open
Abstract
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender balanced, consisting of 24 professional actors vocalizing lexically-matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further set of 72 participants provided test-retest data. High levels of emotional validity and test-retest intrarater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
Collapse
Affiliation(s)
- Steven R. Livingstone
- Department of Psychology, Ryerson University, Toronto, Canada
- Department of Computer Science and Information Systems, University of Wisconsin-River Falls, River Falls, WI, United States of America
| | - Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Canada
| |
Collapse
|
46
|
Picou EM, Singh G, Goy H, Russo F, Hickson L, Oxenham AJ, Buono GH, Ricketts TA, Launer S. Hearing, Emotion, Amplification, Research, and Training Workshop: Current Understanding of Hearing Loss and Emotion Perception and Priorities for Future Research. Trends Hear 2018; 22:2331216518803215. [PMID: 30270810 PMCID: PMC6168729 DOI: 10.1177/2331216518803215] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Revised: 08/18/2018] [Accepted: 09/03/2018] [Indexed: 12/19/2022] Open
Abstract
The question of how hearing loss and hearing rehabilitation affect patients' momentary emotional experiences has received little attention, despite its considerable potential to influence patients' psychosocial function. This article is a product of the Hearing, Emotion, Amplification, Research, and Training workshop, which was convened to develop a consensus document describing research on emotion perception relevant for hearing research. This article outlines conceptual frameworks for the investigation of emotion in hearing research; available subjective, objective, neurophysiologic, and peripheral physiologic data acquisition research methods; the effects of age and hearing loss on emotion perception; potential rehabilitation strategies; priorities for future research; and implications for clinical audiologic rehabilitation. More broadly, this article aims to increase awareness about emotion perception research in audiology and to stimulate additional research on the topic.
Collapse
Affiliation(s)
- Erin M. Picou
- Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Gurjit Singh
- Phonak Canada, Mississauga, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, ON, Canada
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Frank Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Louise Hickson
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
| |
Collapse
|
47
|
Tracting the neural basis of music: Deficient structural connectivity underlying acquired amusia. Cortex 2017; 97:255-273. [DOI: 10.1016/j.cortex.2017.09.028] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 06/08/2017] [Accepted: 09/29/2017] [Indexed: 11/17/2022]
|
48
|
Zhang C, Shao J, Huang X. Deficits of congenital amusia beyond pitch: Evidence from impaired categorical perception of vowels in Cantonese-speaking congenital amusics. PLoS One 2017; 12:e0183151. [PMID: 28829808 PMCID: PMC5568739 DOI: 10.1371/journal.pone.0183151] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 07/31/2017] [Indexed: 11/30/2022] Open
Abstract
Congenital amusia is a lifelong disorder of fine-grained pitch processing in music and speech. However, it remains unclear whether amusia is a pitch-specific deficit, or whether it affects frequency/spectral processing more broadly, such as the perception of formant frequency in vowels, apart from pitch. In this study, in order to illuminate the scope of the deficits, we compared the performance of 15 Cantonese-speaking amusics and 15 matched controls on the categorical perception of sound continua in four stimulus contexts: lexical tone, pure tone, vowel, and voice onset time (VOT). Whereas the lexical tone, pure tone and vowel continua rely on frequency/spectral processing, the VOT continuum depends on duration/temporal processing. We found that the amusic participants performed similarly to controls in identification in all stimulus contexts, in terms of the across-category boundary location and boundary width. However, the amusic participants performed systematically worse than controls when discriminating stimuli in the three contexts that depend on frequency/spectral processing (lexical tone, pure tone and vowel), whereas they performed normally when discriminating duration differences (VOT). These findings suggest that the deficit of amusia is probably not pitch specific, but affects frequency/spectral processing more broadly. Furthermore, there appeared to be differences in the impairment of frequency/spectral discrimination between speech and nonspeech contexts. The amusic participants benefited less than controls from between-category distinctions in speech contexts (lexical tone and vowel), suggesting reduced categorical perception; on the other hand, they performed worse than controls across the board, in both between- and within-category discriminations, in the nonspeech context (pure tone), suggesting impaired general auditory processing. These differences imply that the frequency/spectral-processing deficit may be manifested differently in speech and nonspeech contexts in amusics: as a deficit of higher-level phonological processing for speech sounds, and as a deficit of lower-level auditory processing for nonspeech sounds.
Collapse
Affiliation(s)
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Jing Shao
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
| | - Xunan Huang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
| |
Collapse
|
49
|
Preserved appreciation of aesthetic elements of speech and music prosody in an amusic individual: A holistic approach. Brain Cogn 2017; 115:1-11. [PMID: 28371645 PMCID: PMC5434247 DOI: 10.1016/j.bandc.2017.03.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2016] [Revised: 03/23/2017] [Accepted: 03/24/2017] [Indexed: 11/21/2022]
Abstract
An amusic individual was given novel tasks of speech and music prosody. Intact processing of holistic aesthetic aspects of prosody was demonstrated. Examination of speech and music prosodic phenomena adds to our understanding of amusia.
We present a follow-up study on the case of a Greek amusic adult, B.Z., whose impaired performance on scale, contour, interval, and meter was reported by Paraskevopoulos, Tsapkini, and Peretz in 2010, employing a culturally-tailored version of the Montreal Battery of Evaluation of Amusia. In the present study, we administered a novel set of perceptual judgement tasks designed to investigate the ability to appreciate holistic prosodic aspects of ‘expressiveness’ and emotion in phrase-length music and speech stimuli. Our results show that, although diagnosed as a congenital amusic, B.Z. scored as well as healthy controls (N = 24) on judging ‘expressiveness’ and emotional prosody in both speech and music stimuli. These findings suggest that the ability to make perceptual judgements about such prosodic qualities may be preserved in individuals who demonstrate difficulties perceiving basic musical features such as melody or rhythm. B.Z.’s case yields new insights into amusia and the processing of speech and music prosody through a holistic approach. The novel stimuli developed for this study, with relatively few non-naturalistic manipulations, may be a useful tool for revealing unexplored aspects of music and speech cognition, and may make it possible to further investigate the perception of acoustic streams under more authentic auditory conditions.
Collapse
|
50
|
Jafari Z, Esmaili M, Delbari A, Mehrpour M, Mohajerani MH. Post-stroke acquired amusia: A comparison between right- and left-brain hemispheric damages. NeuroRehabilitation 2017; 40:233-241. [DOI: 10.3233/nre-161408] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Affiliation(s)
- Zahra Jafari
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran
- Department of Neuroscience, Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, AB, Canada
- Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran
| | - Mahdiye Esmaili
- Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran
| | - Ahmad Delbari
- Iranian Research Center on Aging, University of Social Welfare and Rehabilitation Sciences (USWR), Tehran, Iran
| | - Masoud Mehrpour
- Department of Neurology, Firouzgar Hospital, Iran University of Medical Sciences (IUMS), Tehran, Iran
| | - Majid H. Mohajerani
- Department of Neuroscience, Canadian Center for Behavioral Neuroscience (CCBN), University of Lethbridge, Lethbridge, AB, Canada
| |
Collapse
|