1
Benway NR, Preston JL. Artificial Intelligence-Assisted Speech Therapy for /ɹ/: A Single-Case Experimental Study. American Journal of Speech-Language Pathology 2024;33:2461-2486. PMID: 39173110. DOI: 10.1044/2024_ajslp-23-00448.
Abstract
PURPOSE This feasibility trial describes changes in rhotic production in residual speech sound disorder following ten 40-min sessions including artificial intelligence (AI)-assisted motor-based intervention with ChainingAI, a version of Speech Motor Chaining that predicts clinician perceptual judgment using the PERCEPT-R Classifier (Perceptual Error Rating for the Clinical Evaluation of Phonetic Targets). The primary purpose is to evaluate /ɹ/ productions directly after practice with ChainingAI versus directly before ChainingAI and to evaluate how the overall AI-assisted treatment package may lead to perceptual improvement in /ɹ/ productions compared to a no-treatment baseline phase. METHOD Five participants ages 10;7-19;3 (years;months) who were stimulable for /ɹ/ participated in a multiple (no-treatment)-baseline ABA single-case experiment. Prepractice activities were led by a human clinician, and drill-based motor learning practice was automated by ChainingAI. Study outcomes were derived from masked expert listener perceptual ratings of /ɹ/ from treated and untreated utterances recorded during baseline, treatment, and posttreatment sessions. RESULTS Listeners perceived significantly more rhoticity in practiced utterances after 30 min of ChainingAI, without a clinician, than directly before ChainingAI. Three of five participants showed significant generalization of /ɹ/ to untreated words during the treatment phase compared to the no-treatment baseline. All five participants demonstrated statistically significant generalization of /ɹ/ to untreated words from pretreatment to posttreatment. PERCEPT-clinician rater agreement (i.e., F1 score) was largely within the range of human-human agreement for four of five participants. Survey data indicated that parents and participants felt hybrid computerized-clinician service delivery could facilitate at-home practice. 
CONCLUSIONS This study provides evidence of participant improvement for /ɹ/ in untreated words in response to an AI-assisted treatment package. The continued development of AI-assisted treatments may someday mitigate barriers precluding access to sufficiently intense speech therapy for individuals with speech sound disorders. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.26662807.
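The PERCEPT-clinician agreement reported above is an F1 score over paired binary (rhotic/derhotic) judgments. A minimal sketch of that computation follows; the labels are hypothetical examples, not study data.

```python
def f1_score(reference, predicted, positive="rhotic"):
    """F1 = harmonic mean of precision and recall for one positive class,
    computed over paired binary judgments (here: rhotic vs. derhotic)."""
    tp = sum(r == positive and p == positive for r, p in zip(reference, predicted))
    fp = sum(r != positive and p == positive for r, p in zip(reference, predicted))
    fn = sum(r == positive and p != positive for r, p in zip(reference, predicted))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical clinician vs. classifier judgments for six utterances:
clinician  = ["rhotic", "rhotic", "derhotic", "rhotic", "derhotic", "derhotic"]
classifier = ["rhotic", "derhotic", "derhotic", "rhotic", "rhotic", "derhotic"]
print(round(f1_score(clinician, classifier), 2))  # 0.67
```

Treating the clinician as the reference makes F1 directly comparable to human-human agreement computed the same way.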
Affiliation(s)
- Nina R Benway
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD
- Jonathan L Preston
- Department of Communication Sciences and Disorders, Syracuse University, NY
2
Heller Murray E. Conducting high-quality and reliable acoustic analysis: A tutorial focused on training research assistants. The Journal of the Acoustical Society of America 2024;155:2603-2611. PMID: 38629881. PMCID: PMC11026110. DOI: 10.1121/10.0025536.
Abstract
Open science practices have led to an increase in available speech datasets for researchers interested in acoustic analysis. Accurate evaluation of these databases frequently requires manual or semi-automated analysis. The time-intensive nature of these analyses makes them ideally suited for research assistants (RAs) in laboratories focused on speech and voice production. However, the completion of high-quality, consistent, and reliable analyses requires clear rules and guidelines for all RAs to follow. This tutorial provides information on training and mentoring RAs to complete these analyses, covering RA training, ongoing monitoring of data analysis, and the documentation needed for reliable and reproducible findings.
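Two reliability checks commonly applied when training analysts are the mean absolute difference between a trainer's and a trainee's measurements and the percentage of tokens agreeing within a tolerance. A sketch with hypothetical F2 measurements; the tutorial's own criteria may differ.

```python
def mean_absolute_difference(a, b):
    """Average absolute difference between paired measurements (e.g., Hz)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def percent_within(a, b, tolerance):
    """Share (%) of paired measurements that agree within `tolerance`."""
    hits = sum(abs(x - y) <= tolerance for x, y in zip(a, b))
    return 100 * hits / len(a)

# Hypothetical F2 measurements (Hz) from a trainer and a research assistant:
trainer   = [1520, 1710, 1305, 1488, 1602]
assistant = [1535, 1698, 1352, 1480, 1611]
print(mean_absolute_difference(trainer, assistant))  # 18.2
print(percent_within(trainer, assistant, 30))        # 80.0
```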
Affiliation(s)
- Elizabeth Heller Murray
- Department of Communication Sciences and Disorders, Temple University, Philadelphia, Pennsylvania 19122, USA
3
Shields R, Hopf SC. Intervention for residual speech errors in adolescents and adults: A systematised review. Clinical Linguistics & Phonetics 2024;38:203-226. PMID: 36946222. DOI: 10.1080/02699206.2023.2186765.
Abstract
When speech sound errors persist beyond childhood, they are classified as residual speech errors (RSE) and may have detrimental impacts on an individual's social, educational, and employment participation. Despite this, individuals who present with RSE are usually not prioritised on large caseloads. The aim of this literature review was to examine which intervention approaches are available for remediating RSE and how effective they are for adolescents and adults. A systematised review was undertaken. Comprehensive and systematic searching included a search of terms across seven databases, forward and reverse citation searching, and key author contact. Thirty articles underwent critical appraisal before data extraction. Inductive thematic analysis was completed before a narrative review. Twenty-three of the articles (76.6%) were from the US, and most studies involved intervention for 'r' (90%). Intervention approaches for RSE involved traditional articulation therapy, auditory perceptual training, instrumental approaches, and approaches based on principles of motor learning. Twenty-one studies (70%) investigated the use of more than one intervention approach. Measures of intervention efficacy varied between studies; however, any intervention approach tended to be more successful when delivered on a more intensive schedule. A variety of approaches can be used for RSE, but a combination of high-intensity traditional therapy with adjunctive instrumental biofeedback may be most effective, especially with highly motivated individuals. Unfortunately, this usually requires costly equipment and training to implement. More information is required about the best dosage and intensity of intervention for RSE, evaluated for a larger number of phonemes across other languages and dialects.
Affiliation(s)
- Rebecca Shields
- Speech Pathology Department, School of Allied Health, Exercise and Sport Sciences, Charles Sturt University, Albury, Australia
- Suzanne C Hopf
- Speech Pathology Department, School of Allied Health, Exercise and Sport Sciences, Charles Sturt University, Albury, Australia
4
McAllister T, Preston JL, Ochs L, Hill J, Hitchcock ER. Comparing online versus laboratory measures of speech perception in older children and adolescents. PLoS One 2024;19:e0297530. PMID: 38324559. PMCID: PMC10849252. DOI: 10.1371/journal.pone.0297530.
Abstract
Given the increasing prevalence of online data collection, it is important to know how behavioral data obtained online compare to samples collected in the laboratory. This study compares online and in-person measurement of speech perception in older children and adolescents. Speech perception is important for assessment and treatment planning in speech-language pathology; we focus on the American English /ɹ/ sound because of its frequency as a clinical target. Two speech perception tasks were adapted for web presentation using Gorilla: identification of items along a synthetic continuum from rake to wake, and category goodness judgment of English /ɹ/ sounds in words produced by various talkers with and without speech sound disorder. Fifty typical children aged 9-15 completed these tasks online using a standard headset. These data were compared to a previous sample of 98 typical children aged 9-15 who completed the same tasks in the lab setting. For the identification task, participants exhibited smaller boundary widths (suggestive of more acute perception) in the in-person setting relative to the online setting. For the category goodness judgment task, there was no statistically significant effect of modality. The correlation between scores on the two tasks was significant in the online setting but not in the in-person setting, but the difference in correlation strength was not statistically significant. Overall, our findings agree with previous research in suggesting that online and in-person data collection do not yield identical results, but the two contexts tend to support the same broad conclusions. In addition, these results suggest that online data collection can make it easier for researchers to connect with a more representative sample of participants.
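The boundary width reported for the identification task is typically derived from a psychometric function fitted to the proportion of one response category at each continuum step. A minimal sketch, assuming a logistic fit via linear regression on the log-odds (the study's actual fitting procedure may differ, and the proportions below are hypothetical):

```python
import math

def boundary_and_width(steps, prop_wake):
    """Fit a logistic to identification proportions by linear regression
    on the log-odds; return (50% boundary, 25%-75% boundary width).
    Proportions are clipped away from 0/1 so the logit stays finite."""
    eps = 0.01
    clipped = [max(eps, min(1 - eps, p)) for p in prop_wake]
    logits = [math.log(p / (1 - p)) for p in clipped]
    n = len(steps)
    mx = sum(steps) / n
    my = sum(logits) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(steps, logits))
             / sum((x - mx) ** 2 for x in steps))
    intercept = my - slope * mx
    boundary = -intercept / slope        # step where p("wake") = 0.5
    width = 2 * math.log(3) / slope      # distance between p = .25 and p = .75
    return boundary, width

# Hypothetical proportions of "wake" responses on a 7-step rake-wake continuum:
steps = [1, 2, 3, 4, 5, 6, 7]
props = [0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98]
b, w = boundary_and_width(steps, props)
print(round(b, 2), round(w, 2))
```

A smaller width corresponds to a steeper identification function, i.e., more acute perception.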
Affiliation(s)
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, New York, New York, United States of America
- Jonathan L. Preston
- Department of Communication Sciences and Disorders, Syracuse University, Syracuse, New York, United States of America
- Laura Ochs
- Department of Communication Sciences and Disorders, Montclair State University, Montclair, New Jersey, United States of America
- Jennifer Hill
- Department of Applied Statistics, Social Sciences, and Humanities, New York University, New York, New York, United States of America
- Elaine R. Hitchcock
- Department of Communication Sciences and Disorders, Montclair State University, Montclair, New Jersey, United States of America
5
Benway NR, Preston JL, Salekin A, Hitchcock E, McAllister T. Evaluating acoustic representations and normalization for rhoticity classification in children with speech sound disorders. JASA Express Letters 2024;4:025201. PMID: 38299984. PMCID: PMC11522988. DOI: 10.1121/10.0024632.
Abstract
The effects of different acoustic representations and normalizations were compared for classifiers predicting perception of children's rhotic versus derhotic /ɹ/. Formant and Mel frequency cepstral coefficient (MFCC) representations for 350 speakers were z-standardized, either relative to values in the same utterance or age-and-sex data for typical /ɹ/. Statistical modeling indicated age-and-sex normalization significantly increased classifier performances. Clinically interpretable formants performed similarly to MFCCs and were endorsed for deep neural network engineering, achieving mean test-participant-specific F1-score = 0.81 after personalization and replication (σx = 0.10, med = 0.83, n = 48). Shapley additive explanations analysis indicated the third formant most influenced fully rhotic predictions.
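The two normalization schemes compared above, z-standardizing a feature track against its own utterance versus against external age-and-sex norms for typical /ɹ/, can be sketched as follows. The F3 values and norm statistics are hypothetical illustrations, not the study's reference data.

```python
def z_normalize_utterance(features):
    """Standardize a feature track against its own utterance mean/SD."""
    mean = sum(features) / len(features)
    sd = (sum((f - mean) ** 2 for f in features) / len(features)) ** 0.5 or 1.0
    return [(f - mean) / sd for f in features]

def z_normalize_reference(features, ref_mean, ref_sd):
    """Standardize against external norms, e.g., age-and-sex-specific
    mean/SD for correctly produced /ɹ/ (values passed in are hypothetical)."""
    return [(f - ref_mean) / ref_sd for f in features]

# Hypothetical F3 track (Hz) from one child's utterance:
f3_track = [2900, 2850, 2600, 2450, 2400]
print(z_normalize_utterance(f3_track))
# Hypothetical age-and-sex norm for rhotic F3: mean 2700 Hz, SD 250 Hz
print(z_normalize_reference(f3_track, ref_mean=2700, ref_sd=250))
```

Reference normalization preserves how far each token sits from the typical target, which per-utterance standardization discards; this is one plausible reading of why age-and-sex normalization helped the classifiers.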
Affiliation(s)
- Nina R Benway
- Communication Sciences & Disorders, Syracuse University, Syracuse, New York 13244, USA
- Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Jonathan L Preston
- Communication Sciences & Disorders, Syracuse University, Syracuse, New York 13244, USA
- Asif Salekin
- Electrical Engineering and Computer Science, Syracuse University, Syracuse, New York 13244, USA
- Elaine Hitchcock
- Communication Sciences & Disorders, Montclair State University, Montclair, New Jersey 07043, USA
- Tara McAllister
- Communicative Sciences & Disorders, New York University, New York, New York 10007, USA
6
Cao B, Ravi S, Sebkhi N, Bhavsar A, Inan OT, Xu W, Wang J. MagTrack: A Wearable Tongue Motion Tracking System for Silent Speech Interfaces. Journal of Speech, Language, and Hearing Research 2023;66:3206-3221. PMID: 37146629. PMCID: PMC10555459. DOI: 10.1044/2023_jslhr-22-00319.
Abstract
PURPOSE Current electromagnetic tongue tracking devices are not amenable to daily use and are thus not suitable for silent speech interfaces and other applications. We have recently developed MagTrack, a novel wearable electromagnetic articulograph tongue tracking device. This study aimed to validate MagTrack for potential silent speech interface applications. METHOD We conducted two experiments: (a) classification of eight isolated vowels in consonant-vowel-consonant form and (b) continuous silent speech recognition. In these experiments, we used data from healthy adult speakers collected with MagTrack. Vowel classification performance was measured by accuracy, and continuous silent speech recognition by phoneme error rate. Performance was then compared with results using data collected with a commercial electromagnetic articulograph in a prior study. RESULTS Isolated vowel classification using MagTrack achieved an average accuracy of 89.74% when leveraging all MagTrack signals (x, y, z coordinates; orientation; and magnetic signals), which outperformed the accuracy using commercial electromagnetic articulograph data (only y, z coordinates) in our previous study. Continuous silent speech recognition from two subjects using MagTrack achieved phoneme error rates of 73.92% and 66.73%, respectively. For the second of these subjects (66.73% using MagTrack data), the commercial electromagnetic articulograph achieved 64.53%. CONCLUSIONS MagTrack showed results comparable to the commercial electromagnetic articulograph when using the same localized information, and adding raw magnetic signals improved its performance. Our preliminary testing demonstrated MagTrack's potential as a lightweight wearable silent speech interface. This work also lays the foundation for MagTrack's potential in other applications, including visual feedback-based speech therapy and second language learning.
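Phoneme error rate, the recognition metric used above, is conventionally the Levenshtein edit distance between recognized and reference phoneme sequences divided by the reference length. A generic sketch (the example sequences are hypothetical, not MagTrack data):

```python
def phoneme_error_rate(reference, hypothesis):
    """PER (%) = edit distance (substitutions + deletions + insertions)
    between phoneme sequences, divided by reference length, times 100."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return 100 * d[m][n] / m

# Hypothetical decoder output against a reference transcription:
ref = ["s", "p", "ii", "ch"]
hyp = ["s", "b", "ii"]
print(phoneme_error_rate(ref, hyp))  # 50.0
```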
Affiliation(s)
- Beiming Cao
- Department of Electrical and Computer Engineering, The University of Texas at Austin
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin
- Shravan Ravi
- Department of Computer Science, The University of Texas at Austin
- Nordine Sebkhi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta
- Arpan Bhavsar
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta
- Omer T. Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta
- Wen Xu
- Division of Computer Science, Texas Woman's University, Denton
- Jun Wang
- Department of Speech, Language, and Hearing Sciences, The University of Texas at Austin
- Department of Neurology, The University of Texas at Austin
7
Benway NR, Preston JL, Hitchcock E, Rose Y, Salekin A, Liang W, McAllister T. Reproducible Speech Research With the Artificial Intelligence-Ready PERCEPT Corpora. Journal of Speech, Language, and Hearing Research 2023;66:1986-2009. PMID: 37319018. DOI: 10.1044/2023_jslhr-22-00343.
Abstract
BACKGROUND Publicly available speech corpora facilitate reproducible research by providing open-access data for participants who have consented/assented to data sharing among different research teams. Such corpora can also support clinical education, including perceptual training and training in the use of speech analysis tools. PURPOSE In this research note, we introduce the PERCEPT (Perceptual Error Rating for the Clinical Evaluation of Phonetic Targets) corpora, PERCEPT-R (Rhotics) and PERCEPT-GFTA (Goldman-Fristoe Test of Articulation), which together contain over 36 hr of speech audio (> 125,000 syllable, word, and phrase utterances) from children, adolescents, and young adults aged 6-24 years with speech sound disorder (primarily residual speech sound disorders impacting /ɹ/) and age-matched peers. We highlight PhonBank as the repository for the corpora and demonstrate use of the associated speech analysis software, Phon, to query PERCEPT-R. A worked example of research with PERCEPT-R, suitable for clinical education and research training, is included as an appendix. Support for end users and information/descriptive statistics for future releases of the PERCEPT corpora can be found in a dedicated Slack channel. Finally, we discuss the potential for PERCEPT corpora to support the training of artificial intelligence clinical speech technology appropriate for use with children with speech sound disorders, the development of which has historically been constrained by the limited representation of either children or individuals with speech impairments in publicly available training corpora. CONCLUSIONS We demonstrate the use of PERCEPT corpora, PhonBank, and Phon for clinical training and research questions appropriate to child citation speech. Increased use of these tools has the potential to enhance reproducibility in the study of speech development and disorders.
Affiliation(s)
- Nina R Benway
- Department of Communication Sciences & Disorders, Syracuse University, NY
- Jonathan L Preston
- Department of Communication Sciences & Disorders, Syracuse University, NY
- Haskins Laboratories, New Haven, CT
- Elaine Hitchcock
- Department of Communication Sciences and Disorders, Montclair State University, NJ
- Yvan Rose
- Department of Linguistics, Memorial University, St. John's, Newfoundland and Labrador, Canada
- Asif Salekin
- Department of Electrical Engineering and Computer Science, Syracuse University, NY
- Wendy Liang
- Department of Communicative Sciences and Disorders, New York University, NY
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, NY
8
Ayala SA, Eads A, Kabakoff H, Swartz MT, Shiller DM, Hill J, Hitchcock ER, Preston JL, McAllister T. Auditory and Somatosensory Development for Speech in Later Childhood. Journal of Speech, Language, and Hearing Research 2023;66:1252-1273. PMID: 36930986. PMCID: PMC10187971. DOI: 10.1044/2022_jslhr-22-00496.
Abstract
PURPOSE This study collected measures of auditory-perceptual and oral somatosensory acuity in typically developing children and adolescents aged 9-15 years. We aimed to establish reference data that can be used as a point of comparison for individuals with residual speech sound disorder (RSSD), especially for RSSD affecting American English rhotics. We examined concurrent validity between tasks and hypothesized that performance on at least some tasks would show a significant association with age, reflecting ongoing refinement of sensory function in later childhood. We also tested for an inverse relationship between performance on auditory and somatosensory tasks, which would support the hypothesis of a trade-off between sensory domains. METHOD Ninety-eight children completed three auditory-perceptual tasks (identification and discrimination of stimuli from a "rake"-"wake" continuum and category goodness judgment for naturally produced words containing rhotics) and three oral somatosensory tasks (bite block with auditory masking, oral stereognosis, and articulatory awareness, which involved explicit judgments of relative tongue position for different speech sounds). Pairwise associations were examined between tasks within each domain and between task performance and age. Composite measures of auditory-perceptual and somatosensory functions were used to investigate the possibility of a sensory trade-off. RESULTS Statistically significant associations were observed between the identification and discrimination tasks and the bite block and articulatory awareness tasks. In addition, significant associations with age were found for the category goodness and bite block tasks. There was no statistically significant evidence of a trade-off between auditory-perceptual and somatosensory domains. CONCLUSIONS This study provided a multidimensional characterization of speech-related sensory function in older children/adolescents. 
Complete materials to administer all experimental tasks have been shared, along with measures of central tendency and dispersion for scores in two subgroups of age. Ultimately, we hope to apply this information to make customized treatment recommendations for children with RSSD based on sensory profiles.
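The composite-measure and trade-off analyses described above can be sketched with z-scored task composites and a Pearson correlation between domains. All participant scores below are hypothetical; a reliably negative correlation would be the pattern consistent with a sensory trade-off.

```python
def zscores(xs):
    """Standardize a list of scores to mean 0, SD 1."""
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

def pearson_r(xs, ys):
    """Pearson correlation as the mean product of paired z-scores."""
    return sum(a * b for a, b in zip(zscores(xs), zscores(ys))) / len(xs)

# Hypothetical per-participant accuracy (proportions) on each task:
auditory_id       = [0.90, 0.70, 0.80, 0.60, 0.95]
auditory_goodness = [0.85, 0.60, 0.75, 0.65, 0.90]
somato_bite_block = [0.50, 0.80, 0.60, 0.90, 0.55]

# Composite auditory score = mean of z-scored task scores per participant
auditory_composite = [(a + b) / 2 for a, b in
                      zip(zscores(auditory_id), zscores(auditory_goodness))]
r = pearson_r(auditory_composite, somato_bite_block)
print(round(r, 2))
```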
Affiliation(s)
- Samantha A. Ayala
- Department of Communicative Sciences and Disorders, New York University, NY
- Amanda Eads
- Department of Communicative Sciences and Disorders, New York University, NY
- Heather Kabakoff
- Department of Neurology, New York University Grossman School of Medicine, NY
- Michelle T. Swartz
- Department of Speech-Language Pathology, Thomas Jefferson University, Philadelphia, PA
- Douglas M. Shiller
- École d'orthophonie et d'audiologie, Faculté de médecine, Université de Montréal, Québec, Canada
- Jennifer Hill
- Center for Practice and Research at the Intersection of Information, Society, and Methodology, New York University, NY
- Elaine R. Hitchcock
- Department of Communication Sciences and Disorders, Montclair State University, NJ
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, NY
9
Hitchcock ER, Ochs LC, Swartz MT, Leece MC, Preston JL, McAllister T. Tutorial: Using Visual-Acoustic Biofeedback for Speech Sound Training. American Journal of Speech-Language Pathology 2023;32:18-36. PMID: 36623212. PMCID: PMC10023147. DOI: 10.1044/2022_ajslp-22-00142.
Abstract
PURPOSE This tutorial summarizes current practices using visual-acoustic biofeedback (VAB) treatment to improve speech outcomes for individuals with speech sound difficulties. Clinical strategies will focus on residual distortions of /ɹ/. METHOD Summary evidence related to the characteristics of VAB and the populations that may benefit from this treatment are reviewed. Guidelines are provided for clinicians on how to use VAB with clients to identify and modify their productions to match an acoustic representation. The clinical application of a linear predictive coding spectrum is emphasized. RESULTS Successful use of VAB requires several key factors including clinician and client comprehension of the acoustic representation, appropriate acoustic target and template selection, as well as appropriate selection of articulatory strategies, practice schedules, and feedback models to scaffold acquisition of new speech sounds. CONCLUSION Integrating a VAB component in clinical practice offers additional intervention options for individuals with speech sound difficulties and often facilitates improved speech sound acquisition and generalization outcomes. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21817722.
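The linear predictive coding (LPC) spectrum emphasized above is the smooth all-pole envelope that VAB software draws over the speech spectrum. A generic signal-processing sketch of the autocorrelation method with the Levinson-Durbin recursion follows; this is not the implementation of any particular clinical tool.

```python
import cmath
import math

def autocorrelation(x, max_lag):
    """Autocorrelation r[0..max_lag] of a (windowed) signal frame."""
    return [sum(x[n] * x[n - k] for n in range(k, len(x)))
            for k in range(max_lag + 1)]

def levinson_durbin(r, order):
    """Solve the normal equations for forward-predictor coefficients a,
    so that x[n] ≈ sum_k a[k] * x[n - k - 1]; returns (a, residual energy)."""
    a = []
    err = r[0]
    for i in range(1, order + 1):
        k = (r[i] - sum(a[j] * r[i - 1 - j] for j in range(i - 1))) / err
        a = [aj - k * a[i - 2 - j] for j, aj in enumerate(a)] + [k]
        err *= 1 - k * k
    return a, err

def lpc_envelope_db(a, gain, freq_hz, sample_rate):
    """Magnitude (dB) at freq_hz of the all-pole model
    G / (1 - sum_k a[k] z^-(k+1)): the smooth spectral envelope."""
    z = cmath.exp(-2j * math.pi * freq_hz / sample_rate)
    denom = 1 - sum(ak * z ** (k + 1) for k, ak in enumerate(a))
    return 20 * math.log10(abs(gain) / abs(denom))
```

Peaks of the envelope approximate the formants, which is what makes the LPC display a usable visual target for shaping /ɹ/.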
10
McAllister T, Eads A, Kabakoff H, Scott M, Boyce S, Whalen DH, Preston JL. Baseline Stimulability Predicts Patterns of Response to Traditional and Ultrasound Biofeedback Treatment for Residual Speech Sound Disorder. Journal of Speech, Language, and Hearing Research 2022;65:2860-2880. PMID: 35944047. PMCID: PMC9911120. DOI: 10.1044/2022_jslhr-22-00161.
Abstract
PURPOSE This study aimed to identify predictors of response to treatment for residual speech sound disorder (RSSD) affecting English rhotics. Progress was tracked during an initial phase of traditional motor-based treatment and a longer phase of treatment incorporating ultrasound biofeedback. Based on previous literature, we focused on baseline stimulability and sensory acuity as predictors of interest. METHOD Thirty-three individuals aged 9-15 years with residual distortions of /ɹ/ received a course of individual intervention comprising 1 week of intensive traditional treatment and 9 weeks of ultrasound biofeedback treatment. Stimulability for /ɹ/ was probed prior to treatment, after the traditional treatment phase, and after the end of all treatment. Accuracy of /ɹ/ production in each probe was assessed with an acoustic measure: normalized third formant (F3)-second formant (F2) distance. Model-based clustering analysis was applied to these acoustic measures to identify different average trajectories of progress over the course of treatment. The resulting clusters were compared with respect to acuity in auditory and somatosensory domains. RESULTS All but four individuals were judged to exhibit a clinically significant response to the combined course of treatment. Two major clusters were identified. The "low stimulability" cluster was characterized by very low accuracy at baseline, minimal response to traditional treatment, and strong response to ultrasound biofeedback. The "high stimulability" group was more accurate at baseline and made significant gains in both traditional and ultrasound biofeedback phases of treatment. The clusters did not differ with respect to sensory acuity. CONCLUSIONS This research accords with clinical intuition in finding that individuals who are more stimulable at baseline are more likely to respond to traditional intervention, whereas less stimulable individuals may derive greater relative benefit from biofeedback. 
SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.20422236.
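The acoustic accuracy measure named above, normalized F3-F2 distance, exploits the fact that a small F3-F2 gap is the classic hallmark of well-formed English /ɹ/. One plausible normalization is z-scoring against reference statistics; the reference values and tokens below are hypothetical, and the study's exact normalization may differ.

```python
def f3_f2_distance_norm(f3_hz, f2_hz, ref_mean_hz, ref_sd_hz):
    """F3-F2 distance (Hz) z-scored against reference statistics.
    Lower (more negative) values indicate a more rhotic production."""
    return ((f3_hz - f2_hz) - ref_mean_hz) / ref_sd_hz

# Hypothetical tokens: a derhotic attempt vs. a corrected production,
# against a hypothetical reference of mean 1200 Hz, SD 400 Hz:
before = f3_f2_distance_norm(f3_hz=3350, f2_hz=1350, ref_mean_hz=1200, ref_sd_hz=400)
after  = f3_f2_distance_norm(f3_hz=1900, f2_hz=1400, ref_mean_hz=1200, ref_sd_hz=400)
print(before, after)  # 2.0 -1.75
```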
Affiliation(s)
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, NY
- Amanda Eads
- Department of Communicative Sciences and Disorders, New York University, NY
- Heather Kabakoff
- Department of Neurology, Grossman School of Medicine, New York University, NY
- Marc Scott
- Department of Applied Statistics, Social Science, and Humanities, New York University, NY
- Suzanne Boyce
- Haskins Laboratories, New Haven, CT
- Department of Communication Sciences and Disorders, University of Cincinnati, OH
- D. H. Whalen
- Haskins Laboratories, New Haven, CT
- Program in Speech-Language-Hearing Sciences, Graduate School and University Center, City University of New York, NY
- Jonathan L. Preston
- Haskins Laboratories, New Haven, CT
- Department of Communication Sciences and Disorders, Syracuse University, NY
11
Peterson L, Savarese C, Campbell T, Ma Z, Simpson KO, McAllister T. Telepractice Treatment of Residual Rhotic Errors Using App-Based Biofeedback: A Pilot Study. Language, Speech, and Hearing Services in Schools 2022;53:256-274. PMID: 35050705. DOI: 10.1044/2021_lshss-21-00084.
Abstract
PURPOSE Although mobile apps are used extensively by speech-language pathologists, evidence for app-based treatments remains limited in quantity and quality. This study investigated the efficacy of app-based visual-acoustic biofeedback relative to nonbiofeedback treatment using a single-case randomization design. Because of COVID-19, all intervention was delivered via telepractice. METHOD Participants were four children aged 9-10 years with residual errors affecting American English /ɹ/. Using a randomization design, individual sessions were randomly assigned to feature practice with or without biofeedback, all delivered using the speech app Speech Therapist's App for /r/ Treatment. Progress was assessed using blinded listener ratings of word probes administered at baseline, posttreatment, and immediately before and after each treatment session. RESULTS All participants showed a clinically significant response to the overall treatment package, with effect sizes ranging from moderate to very large. One participant showed a significant advantage for biofeedback over nonbiofeedback treatment, although the order of treatment delivery poses a potential confound for interpretation in this case. CONCLUSIONS While larger scale studies are needed, these results suggest that app-based treatment for residual errors can be effective when delivered via telepractice. These results are compatible with previous findings in the motor learning literature regarding the importance of treatment dose and the timing of feedback conditions. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.18461576.
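The single-case randomization design described above supports an exact randomization (permutation) test: the observed condition effect is compared against all possible random assignments of sessions to conditions. A sketch with hypothetical session-level data; the study's actual test statistic and analysis may differ.

```python
import itertools

def randomization_test(scores, labels, n_a):
    """One-sided exact p-value for the mean difference between condition A
    (e.g., biofeedback) and B sessions, over all re-labelings with n_a A's."""
    def mean_diff(lab):
        a = [s for s, l in zip(scores, lab) if l == "A"]
        b = [s for s, l in zip(scores, lab) if l == "B"]
        return sum(a) / len(a) - sum(b) / len(b)

    observed = mean_diff(labels)
    count = total = 0
    for idx in itertools.combinations(range(len(scores)), n_a):
        lab = ["A" if i in idx else "B" for i in range(len(scores))]
        total += 1
        if mean_diff(lab) >= observed:
            count += 1
    return count / total

# Hypothetical per-session accuracy gains (%), 4 biofeedback + 4 non-biofeedback:
scores = [18, 22, 15, 20, 5, 8, 2, 6]
labels = ["A"] * 4 + ["B"] * 4
p = randomization_test(scores, labels, n_a=4)
print(round(p, 4))
```

With 8 sessions there are C(8,4) = 70 possible assignments, so the smallest attainable p-value is 1/70, which is one reason small-n randomization designs pair this test with effect-size reporting.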
Affiliation(s)
- Laura Peterson
- Department of Speech-Language Pathology, Rocky Mountain University of Health Professions, Provo, UT
- Twylah Campbell
- Department of Communicative Sciences and Disorders, New York University, NY
- Zhigong Ma
- Department of Communicative Sciences and Disorders, New York University, NY
- Kenneth O Simpson
- Department of Speech-Language Pathology, Rocky Mountain University of Health Professions, Provo, UT
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, NY