1. Ziegler W, Staiger A, Schölderle T. Profiles of Dysarthria: Clinical Assessment and Treatment. Brain Sci 2023;14:11. PMID: 38248226; PMCID: PMC10813547; DOI: 10.3390/brainsci14010011.
Abstract
In recent decades, we have witnessed a wealth of theoretical work and proof-of-principle studies on dysarthria, including descriptions and classifications of dysarthric speech patterns, new and refined assessment methods, and innovative experimental intervention trials [...].
Affiliation(s)
- Wolfram Ziegler
- Clinical Neuropsychology Research Group (EKN), Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, 80799 Munich, Germany
2. Haley KL, Jacks A, Richardson JD, Harmon TG, Lacey EH, Turkeltaub P. Do People With Apraxia of Speech and Aphasia Improve or Worsen Across Repeated Sequential Word Trials? J Speech Lang Hear Res 2023;66:1240-1251. PMID: 36917782; PMCID: PMC10187966; DOI: 10.1044/2022_jslhr-22-00438.
Abstract
PURPOSE During motor speech examinations for suspected apraxia of speech (AOS), clients are routinely asked to repeat words several times in sequence. The purpose of this study was to understand the task in terms of the relationship among consecutive attempts. We asked to what extent phonemic accuracy changes across trials and whether the change is predicted by AOS diagnosis and sound production severity. METHOD One hundred thirty-three participants were assigned to four diagnostic groups based on quantitative metrics (aphasia plus AOS, aphasia only, and two borderline speech profiles). Each participant produced four multisyllabic words 5 times consecutively. These productions were audio-recorded, transcribed phonetically, and then summarized as the proportion of target phonemes that were produced accurately. Nonparametric statistics were used to analyze percent change in accuracy from the first to the last production as a function of diagnostic group and a broad measure of speech sound accuracy. RESULTS Accuracy on the repeated words deteriorated across trials for all groups, with reduced accuracy from the first to the last repetition for 62% of participants. Although the diagnostic groups differed on the broad measure of speech sound accuracy, severity classification based on this measure did not determine the degree of deterioration on the repeated words task. DISCUSSION Responding to a request to say multisyllabic words 5 times in sequence is challenging for people with aphasia, with and without AOS, so performance is prone to errors even when impairment is mild. For most, the task does not encourage self-correction. Instead, it promotes errors, regardless of diagnosis, and is therefore useful for screening purposes.
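The accuracy metric and the first-to-last percent change described above reduce to simple arithmetic; the following is a minimal sketch with hypothetical data (the study itself scored phonetic transcriptions of audio recordings):

```python
# Sketch of the per-trial accuracy metric described above (hypothetical data;
# the study's own scoring used phonetic transcription of audio recordings).

def phoneme_accuracy(produced_correct: int, n_target_phonemes: int) -> float:
    """Proportion of target phonemes produced accurately in one trial."""
    return produced_correct / n_target_phonemes

def percent_change(first: float, last: float) -> float:
    """Percent change in accuracy from the first to the last repetition."""
    return 100.0 * (last - first) / first

# Example: a 10-phoneme word repeated 5 times; 9 correct on trial 1, 7 on trial 5.
trials = [9, 8, 8, 7, 7]
acc = [phoneme_accuracy(c, 10) for c in trials]
print(percent_change(acc[0], acc[-1]))  # about -22%: accuracy deteriorated
```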
Affiliation(s)
- Katarina L. Haley
- Division of Speech and Hearing Sciences, Department of Allied Health Sciences, The University of North Carolina at Chapel Hill
- Adam Jacks
- Division of Speech and Hearing Sciences, Department of Allied Health Sciences, The University of North Carolina at Chapel Hill
- Jessica D. Richardson
- Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
- Tyson G. Harmon
- Department of Communication Disorders, Brigham Young University, Provo, UT
- Elizabeth H. Lacey
- Department of Neurology, Georgetown University Medical Center, and MedStar National Rehabilitation Hospital, Washington, DC
- Peter Turkeltaub
- Department of Neurology, Georgetown University Medical Center, and MedStar National Rehabilitation Hospital, Washington, DC
3. Lu Y, Wiltshire CEE, Watkins KE, Chiew M, Goldstein L. Characteristics of articulatory gestures in stuttered speech: A case study using real-time magnetic resonance imaging. J Commun Disord 2022;97:106213. PMID: 35397388; DOI: 10.1016/j.jcomdis.2022.106213.
Abstract
INTRODUCTION Most previous articulatory studies of stuttering have focused on the fluent speech of people who stutter. However, to better understand what causes the actual moments of stuttering, it is necessary to probe articulatory behaviors during stuttered speech. We examined the supralaryngeal articulatory characteristics of stuttered speech using real-time structural magnetic resonance imaging (RT-MRI) and investigated how articulatory gestures differ between stuttered and fluent speech of the same speaker. METHODS Vocal tract movements of an adult man who stutters were recorded with RT-MRI during a pseudoword reading task. Four regions of interest (ROIs) were defined on the RT-MRI image sequences around the lips, tongue tip, tongue body, and velum. The variation of pixel intensity in each ROI over time provided an estimate of the movement of these four articulators. RESULTS All disfluencies occurred on syllable-initial consonants. Three articulatory patterns were identified. Pattern 1 showed smooth gestural formation and release, as in fluent speech. Patterns 2 and 3 showed delayed release of gestures due to articulator fixation or oscillation, respectively. Blocks and prolongations corresponded to either pattern 1 or 2; repetitions corresponded to pattern 3 or a mix of patterns. Gestures for disfluent consonants typically exhibited a greater constriction than fluent gestures, which was rarely corrected during disfluencies. Gestures for the upcoming vowel were initiated and executed during these consonant disfluencies, achieving a tongue body position similar to the fluent counterpart. CONCLUSION Different perceptual types of disfluency did not necessarily arise from distinct articulatory patterns, highlighting the importance of collecting articulatory data on stuttering. Disfluencies on syllable-initial consonants were related to the delayed release and overshoot of consonant gestures rather than to delayed initiation of vowel gestures. This suggests that stuttering arises not from problems with planning the vowel gestures but from difficulty releasing overly constricted consonant gestures.
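The ROI analysis described in the methods reduces to tracking mean pixel intensity inside fixed image regions over time; the sketch below illustrates the idea with assumed array shapes and placeholder ROI coordinates, not the study's actual code:

```python
# Sketch of the ROI pixel-intensity analysis described above (assumed array
# shapes; the study defined four ROIs on the RT-MRI image sequence).
import numpy as np

def roi_timeseries(frames: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Mean pixel intensity within an ROI for each frame.

    frames: (T, H, W) image sequence; mask: (H, W) boolean ROI.
    More tissue inside the ROI raises intensity, so the time series
    serves as a proxy for articulator movement into and out of the ROI.
    """
    return frames[:, mask].mean(axis=1)

# Hypothetical usage: one of four ROIs (lips, tongue tip, tongue body, velum).
T, H, W = 300, 84, 84
frames = np.random.rand(T, H, W)     # stand-in for reconstructed RT-MRI frames
lips_mask = np.zeros((H, W), dtype=bool)
lips_mask[40:55, 10:25] = True       # placeholder ROI location
lips_signal = roi_timeseries(frames, lips_mask)
```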
Affiliation(s)
- Yijing Lu
- Department of Linguistics, University of Southern California, United States
- Charlotte E. E. Wiltshire
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, United Kingdom
- Kate E. Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, United Kingdom
- Mark Chiew
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
- Louis Goldstein
- Department of Linguistics, University of Southern California, United States
4. Lim Y, Toutios A, Bliesener Y, Tian Y, Lingala SG, Vaz C, Sorensen T, Oh M, Harper S, Chen W, Lee Y, Töger J, Monteserin ML, Smith C, Godinez B, Goldstein L, Byrd D, Nayak KS, Narayanan SS. A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images. Sci Data 2021;8:187. PMID: 34285240; PMCID: PMC8292336; DOI: 10.1038/s41597-021-00976-x.
Abstract
Real-time magnetic resonance imaging (RT-MRI) of human speech production is enabling significant advances in speech science, linguistics, bio-inspired speech technology development, and clinical applications. Access to RT-MRI, however, remains limited, and comprehensive datasets with broad access are needed to catalyze research across numerous domains. Imaging the rapidly moving articulators and dynamic airway shaping during speech demands high spatiotemporal resolution and robust reconstruction methods. Further, while reconstructed images have been published, to date there is no open dataset providing raw multi-coil RT-MRI data from an optimized speech production experimental setup. Such datasets could enable new and improved methods for dynamic image reconstruction, artifact correction, feature extraction, and direct extraction of linguistically relevant biomarkers. The present dataset offers a unique corpus of 2D sagittal-view RT-MRI videos with synchronized audio for 75 participants performing linguistically motivated speech tasks, alongside the corresponding public-domain raw RT-MRI data. The dataset also includes 3D volumetric vocal tract MRI during sustained speech sounds and high-resolution static anatomical T2-weighted upper airway MRI for each participant.
Affiliation(s)
- Yongwan Lim
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Asterios Toutios
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Yannick Bliesener
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Ye Tian
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Sajan Goud Lingala
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Colin Vaz
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Tanner Sorensen
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Miran Oh
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Sarah Harper
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Weiyi Chen
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Yoonjeong Lee
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Johannes Töger
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Mairym Lloréns Monteserin
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Caitlin Smith
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Bianca Godinez
- Department of Linguistics, California State University Long Beach, Long Beach, California, USA
- Louis Goldstein
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Dani Byrd
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
- Krishna S Nayak
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Shrikanth S Narayanan
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, USA
- Department of Linguistics, Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, California, USA
5. Wiltshire CEE, Chiew M, Chesters J, Healy MP, Watkins KE. Speech Movement Variability in People Who Stutter: A Vocal Tract Magnetic Resonance Imaging Study. J Speech Lang Hear Res 2021;64:2438-2452. PMID: 34157239; PMCID: PMC8323486; DOI: 10.1044/2021_jslhr-20-00507.
Abstract
Purpose People who stutter (PWS) have less stable speech motor systems than people who are typically fluent (PWTF). Here, we used real-time magnetic resonance imaging (MRI) of the vocal tract to assess the variability and duration of movements of different articulators in PWS and PWTF during fluent speech production. Method The vocal tracts of 28 adults with moderate to severe stuttering and 20 PWTF were scanned using MRI while they repeated simple and complex pseudowords. Midsagittal images of the vocal tract from lips to larynx were reconstructed at 33.3 frames per second. For each participant, we measured the variability and duration of movements across multiple repetitions of the pseudowords in three selected articulators: the lips, tongue body, and velum. Results PWS showed significantly greater speech movement variability than PWTF during fluent repetitions of pseudowords. The group difference was most evident for measurements of lip aperture, as reported previously, but here we report that movements of the tongue body and velum were also affected during the same utterances. Variability was not affected by phonological complexity and was unrelated to stuttering severity within the PWS group. PWS also showed longer speech movement durations than PWTF for fluent repetitions of multisyllabic pseudowords, and this group difference became more evident as complexity increased. Conclusions Using real-time MRI of the vocal tract, we found that PWS produced more variable movements than PWTF even during fluent productions of simple pseudowords. PWS also took longer than PWTF to produce multisyllabic words, particularly when the words were more complex. This indicates general, trait-level differences in articulatory control between PWS and PWTF. Supplemental Material https://doi.org/10.23641/asha.14782092.
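One common way to quantify across-repetition movement variability of this kind is to time-normalize each repetition's trajectory and aggregate the across-repetition standard deviation (a spatiotemporal-index-style measure). The sketch below shows that generic approach; it is not necessarily the paper's exact metric:

```python
# Generic sketch of across-repetition movement variability, in the spirit of a
# spatiotemporal index: time-normalize each repetition, z-score the amplitude,
# then sum the across-repetition SD. Not necessarily the paper's exact metric.
import numpy as np

def variability_index(reps: list, n_points: int = 100) -> float:
    """reps: list of 1-D articulator trajectories (e.g., lip aperture over
    time), possibly of different durations. Returns the summed SD across
    repetitions after linear time-normalization and amplitude z-scoring."""
    norm = []
    for r in reps:
        r = np.asarray(r, dtype=float)
        t = np.linspace(0.0, 1.0, len(r))            # original time axis
        ti = np.linspace(0.0, 1.0, n_points)         # normalized time axis
        ri = np.interp(ti, t, r)                     # time-normalize
        norm.append((ri - ri.mean()) / ri.std())     # amplitude-normalize
    return float(np.vstack(norm).std(axis=0, ddof=1).sum())
```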
Affiliation(s)
- Charlotte E. E. Wiltshire
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
- Mark Chiew
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
- Jennifer Chesters
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
- Máiréad P. Healy
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
- Kate E. Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
6. Ruthven M, Miquel ME, King AP. Deep-learning-based segmentation of the vocal tract and articulators in real-time magnetic resonance images of speech. Comput Methods Programs Biomed 2021;198:105814. PMID: 33197740; PMCID: PMC7732702; DOI: 10.1016/j.cmpb.2020.105814.
Abstract
BACKGROUND AND OBJECTIVE Magnetic resonance (MR) imaging is increasingly used in studies of speech because it enables non-invasive visualisation of the vocal tract and articulators, providing information about their shape, size, motion and position. Extraction of this information for quantitative analysis is achieved using segmentation. Methods have been developed to segment the vocal tract; however, none of them also fully segments any articulators. The objective of this work was to develop a method to fully segment multiple groups of articulators as well as the vocal tract in two-dimensional MR images of speech, thus overcoming the limitations of existing methods. METHODS Five speech MR image sets (392 MR images in total), each of a different healthy adult volunteer, were used in this work. A fully convolutional network with an architecture similar to the original U-Net was developed to segment six regions in the image sets: the head, soft palate, jaw, tongue, vocal tract and tooth space. A five-fold cross-validation was performed to investigate the segmentation accuracy and generalisability of the network. Segmentation accuracy was assessed using standard overlap-based metrics (Dice coefficient and general Hausdorff distance) and a novel clinically relevant metric based on velopharyngeal closure. RESULTS The segmentations created by the method had a median Dice coefficient of 0.92 and a median general Hausdorff distance of 5 mm. The method segmented the head most accurately (median Dice coefficient of 0.99), and the soft palate and tooth space least accurately (median Dice coefficients of 0.92 and 0.93, respectively). The segmentations correctly showed 90% (27 out of 30) of the velopharyngeal closures in the MR image sets. CONCLUSIONS An automatic method to fully segment multiple groups of articulators as well as the vocal tract in two-dimensional MR images of speech was successfully developed. The method is intended for use in clinical and non-clinical speech studies that involve quantitative analysis of the shape, size, motion and position of the vocal tract and articulators. In addition, a novel clinically relevant metric for assessing the accuracy of vocal tract and articulator segmentation methods was developed.
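Both overlap metrics named above are straightforward to compute on binary masks; a minimal sketch using NumPy and SciPy follows (the paper's exact evaluation pipeline may differ, e.g., in how region boundaries are extracted):

```python
# Sketch of the overlap metrics named above, computed on boolean segmentation
# masks. The paper's exact evaluation pipeline may differ.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric (general) Hausdorff distance, here computed over all
    foreground pixel coordinates rather than extracted boundaries."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```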
Affiliation(s)
- Matthieu Ruthven
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom
- Marc E Miquel
- Clinical Physics, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, United Kingdom; Centre for Advanced Cardiovascular Imaging, NIHR Barts Biomedical Research Centre, William Harvey Institute, Queen Mary University of London, London EC1M 6BQ, United Kingdom
- Andrew P King
- School of Biomedical Engineering & Imaging Sciences, King's College London, King's Health Partners, St Thomas' Hospital, London SE1 7EH, United Kingdom
7. Namasivayam AK, Coleman D, O’Dwyer A, van Lieshout P. Speech Sound Disorders in Children: An Articulatory Phonology Perspective. Front Psychol 2020;10:2998. PMID: 32047453; PMCID: PMC6997346; DOI: 10.3389/fpsyg.2019.02998.
Abstract
Speech Sound Disorders (SSDs) is a generic term used to describe a range of difficulties producing speech sounds in children (McLeod and Baker, 2017). The foundations of clinical assessment, classification and intervention for children with SSD have been heavily influenced by psycholinguistic theory and procedures, which largely posit a firm boundary between phonological processes and phonetics/articulation (Shriberg, 2010). Thus, in many current SSD classification systems the complex relationships among the etiology (distal), processing deficits (proximal) and the behavioral levels (speech symptoms) are under-specified (Terband et al., 2019a). It is critical to understand the complex interactions between these levels because they have implications for differential diagnosis and treatment planning (Terband et al., 2019a). Some theoretical attempts have been made to understand these interactions (e.g., McAllister Byun and Tessier, 2016), and characterizing speech patterns in children either solely as the product of speech motor performance limitations or purely as a consequence of phonological/grammatical competence has been challenged (Inkelas and Rose, 2007; McAllister Byun, 2012). In the present paper, we aim to reconcile the phonetics-phonology dichotomy and discuss the interconnectedness between these levels and the nature of SSDs using an alternative perspective based on the notion of an articulatory "gesture" within the broader concepts of the Articulatory Phonology model (AP; Browman and Goldstein, 1992). The articulatory "gesture" serves as a unit of phonological contrast and a characterization of the resulting articulatory movements (Browman and Goldstein, 1992; van Lieshout and Goldstein, 2008). We present evidence supporting the notion of articulatory gestures at the level of speech production and as reflected in control processes in the brain, and we discuss how a gesture-based approach can account for articulatory behaviors in typical and disordered speech production (van Lieshout, 2004; Pouplier and van Lieshout, 2016). Specifically, we discuss how the AP model can provide an explanatory framework for understanding SSDs in children. Although other theories may provide alternate explanations for some of the issues we discuss, the AP framework in our view generates a unique scope that covers linguistic (phonology) and motor processes in a unified manner.
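In the Articulatory Phonology / task-dynamics tradition, a gesture is commonly modeled as a critically damped second-order system driving a tract variable toward its target. The sketch below illustrates that standard formulation with illustrative parameter values; it is not code from the paper:

```python
# Illustrative sketch of a gesture in the task-dynamics sense: a critically
# damped second-order system driving a tract variable (e.g., lip aperture)
# toward a target. Parameter values are illustrative, not from the paper.
import numpy as np

def simulate_gesture(x0: float, target: float, k: float = 200.0,
                     dt: float = 0.001, duration: float = 0.3) -> np.ndarray:
    """Integrate x'' = -k (x - target) - b x' with critical damping."""
    b = 2.0 * np.sqrt(k)                 # critical damping: no overshoot
    x, v = x0, 0.0
    out = []
    for _ in range(int(duration / dt)):
        a = -k * (x - target) - b * v    # point-attractor dynamics
        v += a * dt
        x += v * dt
        out.append(x)
    return np.array(out)

trajectory = simulate_gesture(x0=10.0, target=0.0)  # e.g., lip aperture in mm
```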
Affiliation(s)
- Aravind Kumar Namasivayam
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Deirdre Coleman
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Independent Researcher, Surrey, BC, Canada
- Aisling O’Dwyer
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- St. James’s Hospital, Dublin, Ireland
- Pascal van Lieshout
- Oral Dynamics Laboratory, Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Rehabilitation Sciences Institute, University of Toronto, Toronto, ON, Canada
8. Kim YC. Fast upper airway magnetic resonance imaging for assessment of speech production and sleep apnea. Precis Future Med 2018. DOI: 10.23838/pfm.2018.00100.
9. Ramanarayanan V, Tilsen S, Proctor M, Töger J, Goldstein L, Nayak KS, Narayanan S. Analysis of speech production real-time MRI. Comput Speech Lang 2018. DOI: 10.1016/j.csl.2018.04.002.
10. Lingala SG, Sutton BP, Miquel ME, Nayak KS. Recommendations for real-time speech MRI. J Magn Reson Imaging 2016;43:28-44. PMID: 26174802; PMCID: PMC5079859; DOI: 10.1002/jmri.24997.
Abstract
Real-time magnetic resonance imaging (RT-MRI) is increasingly used for speech and vocal production research. Several imaging protocols have emerged based on advances in RT-MRI acquisition, reconstruction, and audio-processing methods. This review summarizes the state of the art, discusses technical considerations, and provides specific guidance for new groups entering this field, including recommendations for performing RT-MRI of the upper airway. It is a consensus statement stemming from the ISMRM-endorsed Speech MRI Summit held in Los Angeles in February 2014. A major unmet need identified at the summit was consensus on protocols that can be easily adopted by researchers equipped with conventional MRI systems. To this end, we discuss the tradeoffs in RT-MRI in terms of acquisition requirements, a priori assumptions, artifacts, computational load, and performance for different speech tasks. We provide four recommended protocols, identify appropriate acquisition and reconstruction tools, and list pointers to open-source software that facilitate implementation. We conclude by discussing current open challenges in the methodological aspects of RT-MRI of speech.
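One of the basic tradeoffs discussed in such recommendations is between the number of acquired readouts per frame and temporal resolution; the arithmetic is simple, as the sketch below shows with illustrative numbers (not the review's specific protocols):

```python
# Back-of-envelope arithmetic relating sequence parameters to temporal
# resolution in RT-MRI (illustrative numbers, not the review's protocols).

def temporal_resolution_ms(tr_ms: float, readouts_per_frame: int) -> float:
    """Nominal time to acquire one frame: TR times readouts per frame."""
    return tr_ms * readouts_per_frame

def frame_rate_fps(tr_ms: float, readouts_per_frame: int) -> float:
    return 1000.0 / temporal_resolution_ms(tr_ms, readouts_per_frame)

# Example: TR = 2 ms and 13 readouts per frame gives 26 ms frames (~38 fps);
# fewer readouts buy speed at the cost of heavier undersampling artifacts.
print(frame_rate_fps(2.0, 13))
```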
Affiliation(s)
- Brad P. Sutton
- University of Illinois at Urbana-Champaign, Urbana-Champaign, Illinois, USA
11. Iltis PW, Frahm J, Voit D, Joseph AA, Schoonderwaldt E, Altenmüller E. High-speed real-time magnetic resonance imaging of fast tongue movements in elite horn players. Quant Imaging Med Surg 2015;5:374-381. PMID: 26029640; DOI: 10.3978/j.issn.2223-4292.2015.03.02.
Abstract
This paper describes the use of high-speed real-time (RT) magnetic resonance imaging (MRI) to quantify very rapid motor function within the oropharyngeal cavity of six elite horn players. Based on simultaneous sound recordings, the efficacy of RT-MRI films at 30 and 100 frames per second (fps) was assessed for tongue movements associated with double-tonguing performance. Serial images with a nominal temporal resolution of 10.0 and 33.3 ms were obtained by highly undersampled radial fast low-angle shot (FLASH) sequences (5 and 17 spokes, respectively) using complementary sets of spokes for successive acquisitions (extending over 9 and 5 frames, respectively). High-speed images were reconstructed by temporally regularized nonlinear inversion (NLINV) as previously described. A customized MATLAB toolkit was developed to extract line profiles from the MRI films and quantify temporal phenomena associated with task performance. The analyses reveal that, for the present setting, which required a temporal median filter to optimize image quality, acquisition rates of 30 fps are inadequate to accurately detect tongue movements during double tonguing, whereas rates of 100 fps allow precise quantification of movement. These data demonstrate, for the first time, the extreme performance of elite horn players. High-speed RT-MRI offers previously unavailable opportunities to study oropharyngeal movements during brass playing, with future potential for teaching and for the treatment of patients suffering from dystonia.
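The study's line-profile analysis used a customized MATLAB toolkit; the sketch below mirrors the general idea in Python with hypothetical coordinates. Sampling image intensity along a fixed line in every frame yields a space-time matrix from which movement timing can be read off:

```python
# Sketch of line-profile extraction from an MRI film, mirroring in Python the
# kind of analysis the study's MATLAB toolkit performed (coordinates are
# hypothetical).
import numpy as np

def line_profile(frames: np.ndarray, p0: tuple, p1: tuple, n: int = 64) -> np.ndarray:
    """Intensity along the line p0 -> p1 (row, col) for each frame.
    frames: (T, H, W). Returns a (T, n) space-time matrix
    (nearest-neighbor sampling along the line)."""
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return frames[:, rows, cols]

# Hypothetical usage: a line crossing the tongue-tip region.
film = np.random.rand(500, 96, 96)        # stand-in for a 100 fps RT-MRI film
profile = line_profile(film, (30, 20), (60, 50))
```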
Affiliation(s)
- Peter W Iltis
- Jens Frahm
- Dirk Voit
- Arun A Joseph
- Erwin Schoonderwaldt
- Eckart Altenmüller
- 1 Department of Kinesiology, Gordon College, Wenham, MA, USA; 2 University of Music, Drama and Media, Hannover, Germany; 3 Biomedizinische NMR Forschungs GmbH am Max-Planck-Institut für biophysikalische Chemie, Göttingen, Germany