1
Brown S, Phillips E. The vocal origin of musical scales: the Interval Spacing model. Front Psychol 2023; 14:1261218. [PMID: 37868594] [PMCID: PMC10587400] [DOI: 10.3389/fpsyg.2023.1261218]
Affiliation(s)
- Steven Brown
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
2
Zhang X, Talifu Z, Li J, Li X, Yu F. Melodic intonation therapy for non-fluent aphasia after stroke: A clinical pilot study on behavioral and DTI findings. iScience 2023; 26:107453. [PMID: 37744405] [PMCID: PMC10517365] [DOI: 10.1016/j.isci.2023.107453]
Abstract
Music-based melodic intonation therapy (MIT) has shown promise as a treatment for non-fluent aphasia after stroke. This trial compared the efficacy of music-based MIT and speech therapy (ST) in aphasia, focusing on structural connectivity of the arcuate fasciculus and language ability scores. A total of 62 patients were enrolled, of whom 40 completed the trial. The experimental group received MIT for 30 min/day, five days per week for four weeks, while the control group received ST at the same dose. The Boston Diagnostic Aphasia Examination (BDAE) and fMRI-DTI were performed at T0 and T1. The music-based MIT group demonstrated better language performance. DTI showed that fractional anisotropy (FA), FN, and path length in the right hemisphere were significantly increased in the MIT group. Music-based MIT had positive effects on the reorganization and activation of the arcuate fasciculus in aphasia after stroke. This research was funded by NSFC No. T2341003 and No. 2020CZ-10. Clinical trial registration: ChiCTR2000037871. Ethics approval number: 2020-013-1.
Affiliation(s)
- Xiaoying Zhang
- School of Rehabilitation Medicine, Capital Medical University, Beijing 100068, China
- Department of Music Artificial Intelligence and Music Information Technology, Central Conservatory of Music, Beijing 100038, China
- Institute of Advanced Science and Technology, Xi’an Jiaotong University, Xi’an 710079, Shanxi, China
- Music Therapy Center, China Rehabilitation Research Center, Beijing 100068, China
- Department of Neurorehabilitation, China Rehabilitation Research Center, Beijing 100068, China
- Zuliyaer Talifu
- School of Rehabilitation Medicine, Capital Medical University, Beijing 100068, China
- Music Therapy Center, China Rehabilitation Research Center, Beijing 100068, China
- Department of Neurorehabilitation, China Rehabilitation Research Center, Beijing 100068, China
- Jianjun Li
- School of Rehabilitation Medicine, Capital Medical University, Beijing 100068, China
- Music Therapy Center, China Rehabilitation Research Center, Beijing 100068, China
- Department of Neurorehabilitation, China Rehabilitation Research Center, Beijing 100068, China
- Xiaobing Li
- Department of Music Artificial Intelligence and Music Information Technology, Central Conservatory of Music, Beijing 100038, China
- Institute of Advanced Science and Technology, Xi’an Jiaotong University, Xi’an 710079, Shanxi, China
- Feng Yu
- Department of Music Artificial Intelligence and Music Information Technology, Central Conservatory of Music, Beijing 100038, China
- Institute of Advanced Science and Technology, Xi’an Jiaotong University, Xi’an 710079, Shanxi, China
3
Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784] [DOI: 10.1016/j.neuroimage.2022.119310]
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read) whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by examining proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a left-hemisphere bias for poem processing and a right-hemisphere bias for song processing. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner were expected to modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground, providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but also for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
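For readers who want to experiment with the PSA idea, a minimal Python sketch follows, assuming a pitch contour has already been extracted (one value per analysis frame, NaN for unvoiced frames). The function name, lag range, and white-noise significance bound are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def proportion_significant_autocorrelations(pitch_hz, max_lag=50):
    """Proportion of autocorrelation lags whose magnitude exceeds a
    95% white-noise bound -- a rough stand-in for the PSA measure
    described in the abstract."""
    x = np.asarray(pitch_hz, dtype=float)
    x = x[~np.isnan(x)]            # keep voiced frames only
    x = x - x.mean()               # remove the mean before correlating
    n = len(x)
    max_lag = min(max_lag, n - 1)  # guard against short contours
    denom = np.sum(x * x)
    acf = np.array([np.sum(x[:n - k] * x[k:]) / denom
                    for k in range(1, max_lag + 1)])
    bound = 1.96 / np.sqrt(n)      # large-sample 95% bound for white noise
    return float(np.mean(np.abs(acf) > bound))

# Toy pitch contour in Hz (NaN marks unvoiced frames); real input would be
# one pitch value per analysis frame of a poem or song recording.
contour = np.array([220, 222, 225, np.nan, 230, 228, 224, 221, 219, 218])
print(proportion_significant_autocorrelations(contour, max_lag=5))
```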
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany.
- Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
4
Boutsen F, Park E, Dvorak JD. Reading Warm-Up, Reading Skill, and Reading Prosody When Reading the My Grandfather Passage: An Exploratory Study Born Out of the Motor Planning Theory of Prosody and Reading Prosody Research. J Speech Lang Hear Res 2022; 65:2047-2063. [PMID: 35640099] [DOI: 10.1044/2022_jslhr-21-00615]
Abstract
PURPOSE: The Motor Planning Theory of Prosody and reading prosody research indicate that "out of the blue" oral reading, as practiced in clinical and research settings, invokes surface rather than covert prosody, particularly when readers are recorded, less skilled, and/or speech impaired. Warm-up is typically not considered in passage reading for motor-speech assessment. We report a preliminary study that investigated the effect of warm-up on reading prosody in two conditions: silent reading alone, and "out of the blue" oral reading followed by silent reading. A secondary aim was to examine the effect of reading skill on reading prosody. METHOD: Twenty-one monolingual, English-speaking volunteers were recorded reading the My Grandfather Passage (GP) while their eye movements were tracked. Participants were randomly assigned to one of two reading conditions: (a) silent-oral (SO) and (b) oral-silent-oral (OSO). In the SO condition, participants read the GP silently as a warm-up for the subsequent oral reading. In the OSO condition, participants first read the GP aloud ("out of the blue") and then read the same passage silently in preparation for a second oral reading. Reading skill was quantified using eye-voice span and the Wide Range Achievement Test-Fourth Edition. Reading prosody was evaluated using pause indexes, the Acoustic Multidimensional Prosody Index, and speech rate. CONCLUSIONS: An oral reading before a silent reading, but not a silent reading alone before an oral reading, was shown to affect reading prosody. In terms of reading skill, predictive associations patterned differently across the reading conditions explored, suggesting different underlying skill sets.
Affiliation(s)
- Frank Boutsen
- Department of Communication Disorders, New Mexico State University, Las Cruces
- Eunsun Park
- Department of Communication Disorders and Sciences, William Paterson University, Wayne, NJ
- Justin D Dvorak
- Hudson College of Public Health, University of Oklahoma Health Sciences Center, Oklahoma City
5
Zhang XY, Yu WY, Teng WJ, Lu MY, Wu XL, Yang YQ, Chen C, Liu LX, Liu SH, Li JJ. Effectiveness of Melodic Intonation Therapy in Chinese Mandarin on Non-fluent Aphasia in Patients After Stroke: A Randomized Control Trial. Front Neurosci 2021; 15:648724. [PMID: 34366768] [PMCID: PMC8344357] [DOI: 10.3389/fnins.2021.648724]
Abstract
Melodic intonation therapy (MIT) positively impacts the speech function of patients with aphasia after stroke. The fixed-pitch melodies and phrases formulated in MIT provide a key to the target language, helping to open the language pathway. This randomized controlled trial compared the effects of music therapy-based MIT and speech therapy on patients with non-fluent aphasia; the former proved more effective for the recovery of language function. Forty-two participants were enrolled in the study, and 40 patients were registered. The participants were randomly assigned to two groups: the intervention group (n = 20; 16 males, 4 females; 52.90 ± 9.08 years), which received MIT, and the control group (n = 20; 15 males, 5 females; 54.05 ± 10.81 years), which received speech therapy. The intervention group received MIT for 30 min/day, five times a week for 8 weeks, and the control group received identical sessions of speech therapy for 30 min/day, five times a week for 8 weeks. Each participant was assessed with the Boston Diagnostic Aphasia Examination (BDAE) at baseline (t1, before the start of the experiment) and after 8 weeks (t2, at the end of the experiment). The Hamilton Anxiety Scale (HAMA) and Hamilton Depression Scale (HAMD) were also administered at both time points. Standard medical care was the same in both groups. A two-way analysis of variance (ANOVA) was used for the data analysis. For spontaneous speech (information), listening comprehension (right or wrong, word recognition, and sequential order), and repetition, the intervention group scored significantly higher than the control group, both for the cumulative effect of time and for the between-group difference after 8 weeks. The intervention group showed a significant time effect for fluency, but the results after 8 weeks were not significantly different from those of the control group. In terms of naming, the intervention group performed much better than the control group in spontaneous naming. For object naming, reaction naming, and sentence completion, the intervention group showed a strong cumulative time effect, but the results after 8 weeks were not significantly different from those of the control group. These results indicate that, compared with speech therapy, music therapy-based MIT is a more effective intervention and is valuable for the recovery of speech function in patients with non-fluent aphasia. As a specialized, non-invasive treatment delivered by qualified music therapists, MIT calls for closer cooperation between physicians and music therapists to improve the rehabilitation of patients with aphasia. The Ethics Committee of the China Rehabilitation Research Center approved this study (Approval No. 2020-013-1, April 1, 2020), and the trial was registered with the Chinese Clinical Trial Registry (Registration number: ChiCTR2000037871) on September 3, 2020.
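As a rough illustration of the group-by-time design reported above, the sketch below runs a mixed (between-subjects by within-subjects) ANOVA on a toy long-format table. The library (pingouin), the column names, and the scores are illustrative assumptions only; the actual analysis used the full n = 40 sample and the BDAE subscales.

```python
import pandas as pd
import pingouin as pg  # assumed available; statsmodels would also work

# Hypothetical long-format data: one row per patient per time point.
# "score" stands in for any BDAE subscale; all values are invented.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":   ["MIT"] * 6 + ["ST"] * 6,
    "time":    ["t1", "t2"] * 6,
    "score":   [42, 58, 40, 55, 44, 60, 41, 47, 39, 45, 43, 48],
})

# Group (between-subjects) x time (within-subjects) ANOVA, mirroring
# the two-way design mentioned in the abstract.
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group")
print(aov.round(3))
```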
Affiliation(s)
- Xiao-Ying Zhang
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; China Rehabilitation Science Institute, Beijing, China; Beijing Key Laboratory of Neural Injury and Rehabilitation, Beijing, China; Center of Neural Injury and Repair, Beijing Institute for Brain Disorders, Beijing, China; Music Therapy Center, Department of Psychology, China Rehabilitation Research Center, Beijing, China
- Wei-Yong Yu
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Department of Imaging, China Rehabilitation Research Center, Beijing, China
- Wen-Jia Teng
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Music Therapy Center, Department of Psychology, China Rehabilitation Research Center, Beijing, China
- Meng-Yang Lu
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Music Therapy Center, Department of Psychology, China Rehabilitation Research Center, Beijing, China
- Xiao-Li Wu
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Department of Neurorehabilitation, China Rehabilitation Research Center, Beijing, China
- Yu-Qi Yang
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Department of Neurorehabilitation, China Rehabilitation Research Center, Beijing, China
- Chen Chen
- Department of Music Education, Xinghai Conservatory of Music, Guangzhou, China
- Li-Xu Liu
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Department of Neurorehabilitation, China Rehabilitation Research Center, Beijing, China
- Song-Huai Liu
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; Music Therapy Center, Department of Psychology, China Rehabilitation Research Center, Beijing, China
- Jian-Jun Li
- School of Rehabilitation Medicine, Capital Medical University, Beijing, China; China Rehabilitation Science Institute, Beijing, China; Beijing Key Laboratory of Neural Injury and Rehabilitation, Beijing, China; Center of Neural Injury and Repair, Beijing Institute for Brain Disorders, Beijing, China
6
Lin RZ, Marsh EB. Abnormal singing can identify patients with right hemisphere cortical strokes at risk for impaired prosody. Medicine (Baltimore) 2021; 100:e26280. [PMID: 34115027] [PMCID: PMC8202571] [DOI: 10.1097/md.0000000000026280]
Abstract
Despite lacking the aphasia seen with left hemisphere (LH) infarcts involving the middle cerebral artery territory, right hemisphere (RH) strokes can result in significant difficulties with affective prosody. These impairments may be more difficult to identify but lead to significant communication problems. We determined whether evaluation of singing can accurately identify stroke patients with cortical RH infarcts at risk for prosodic impairment who may benefit from rehabilitation. A prospective cohort of 36 patients evaluated for acute ischemic stroke was recruited. Participants underwent an experimental battery evaluating their singing, prosody comprehension, and prosody production. Singing samples were rated by 2 independent reviewers as subjectively "normal" or "abnormal," and analyzed for properties of the fundamental frequency. Relationships between infarct location, singing, and prosody performance were evaluated using t tests and chi-squared analysis. Eighty percent of participants with LH cortical strokes were unable to complete any of the tasks due to severe aphasia. For the remainder, singing ratings corresponded to stroke location for 68% of patients. Patients with RH cortical strokes demonstrated a lower mean fundamental frequency while singing than those with subcortical infarcts (176.8 vs 130.4, P = 0.02). They also made more errors on tasks of prosody comprehension (28.6 vs 16.0, P < 0.001) and production (40.4 vs 18.4, P < 0.001). Patients with RH cortical infarcts are more likely to exhibit impaired prosody comprehension and production and to demonstrate poorer variation of tone when singing compared with patients with subcortical infarcts. A simple singing screen can successfully identify patients with cortical lesions and potential prosodic deficits.
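The fundamental-frequency comparison above can be approximated in a few lines. The sketch below is a minimal illustration assuming librosa is available for F0 tracking; the function name, pitch range, and file names are hypothetical and not the authors' actual pipeline.

```python
import numpy as np
import librosa  # assumed available; any F0 tracker would serve

def mean_f0_hz(path):
    """Return the mean fundamental frequency (Hz) over voiced frames of a
    singing recording -- a rough stand-in for the F0 summary in the study."""
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, voiced_flag, _ = librosa.pyin(
        y,
        fmin=librosa.note_to_hz("C2"),   # ~65 Hz
        fmax=librosa.note_to_hz("C6"),   # ~1047 Hz
        sr=sr,
    )
    return float(np.nanmean(f0))         # f0 is NaN on unvoiced frames

# Hypothetical usage: compare one cortical and one subcortical sample.
# print(mean_f0_hz("rh_cortical_singing.wav"), mean_f0_hz("subcortical_singing.wav"))
```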
Affiliation(s)
- Rebecca Z. Lin
- Department of Cognitive Science, Johns Hopkins University
- Elisabeth B. Marsh
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, USA
7
Kogan VV, Reiterer SM. Eros, Beauty, and Phon-Aesthetic Judgements of Language Sound. We Like It Flat and Fast, but Not Melodious. Comparing Phonetic and Acoustic Features of 16 European Languages. Front Hum Neurosci 2021; 15:578594. [PMID: 33708080] [PMCID: PMC7940689] [DOI: 10.3389/fnhum.2021.578594]
Abstract
This article concerns sound-aesthetic preferences for European foreign languages. We investigated the phonetic-acoustic dimension of linguistic aesthetic pleasure to describe the "music" found in European languages. The Romance languages French, Italian, and Spanish take the lead when people talk about melodious language - the music-like effects in language (a.k.a. phonetic chill). At the other end of the melodiousness spectrum are German and Arabic, which are often considered to sound harsh and unattractive. Despite the public interest, limited research has been conducted on phonaesthetics, i.e., the subfield of phonetics concerned with the aesthetic properties of speech sounds (Crystal, 2008). Our goal is to fill this research gap by identifying the acoustic features that drive the auditory perception of language sound beauty. What is so music-like in a language that makes people say "it is music to my ears"? Forty-five central European participants listened to 16 auditorily presented European languages and rated each language on 22 binary characteristics (e.g., beautiful - ugly and funny - boring), and also reported their language familiarity, L2 background, liking of the speaker's voice, demographics, and musicality. Findings revealed that several factors, in complex interplay, each explained a portion of the variance: familiarity and expertise in foreign languages, speaker voice characteristics, phonetic complexity, musical acoustic properties, and the musical expertise of the listener. The most important discovery was a trade-off between speech tempo and so-called linguistic melody (pitch variance): the faster the language, the flatter and more atonal its pitch (speech melody), making it highly appealing acoustically (sounding beautiful and sexy) but not particularly melodious in a "musical" sense.
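The tempo/melody trade-off reported above amounts to a negative correlation across languages between speech rate and pitch variability. The sketch below illustrates the computation only; the per-language numbers are invented placeholders, not the study's data, and the variable names are mine.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative per-language summaries: syllables per second and the
# standard deviation of the F0 contour in semitones (all values made up;
# the study analyzed recordings of 16 European languages).
speech_rate = np.array([5.2, 6.1, 6.8, 7.4, 4.9, 5.8, 6.5, 7.0])
pitch_sd_st = np.array([3.1, 2.6, 2.2, 1.8, 3.4, 2.8, 2.3, 2.0])

# A negative r would reflect the reported pattern: faster speech, flatter melody.
r, p = pearsonr(speech_rate, pitch_sd_st)
print(f"tempo vs. pitch variability: r = {r:.2f}, p = {p:.3f}")
```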
Affiliation(s)
- Vita V Kogan
- School of European Culture and Languages, University of Kent, Kent, United Kingdom
- Susanne M Reiterer
- Department of Linguistics, University of Vienna, Vienna, Austria; Teacher Education Centre, University of Vienna, Vienna, Austria