1. Wilder A, Redmond SM. Updates on Clinical Language Sampling Practices: A Survey of Speech-Language Pathologists Practicing in the United States. Lang Speech Hear Serv Sch 2024;55:1151-1166. PMID: 39292921. DOI: 10.1044/2024_lshss-24-00035.
Abstract
PURPOSE: Language sample analysis (LSA) offers many benefits for assessing children with language disorders, identifying therapy goals, and monitoring progress. Despite these widely recognized advantages, previous surveys suggest declining use of LSA by speech-language pathologists (SLPs). This study aimed to provide updates on clinical LSA use following the recent introduction of two new LSA protocols: the Sampling Utterances and Grammatical Analysis Revised (SUGAR) protocol and the Computerized Language Analysis KIDEVAL program.
METHOD: Survey data from SLPs practicing in the United States (N = 337) were used to examine rates of LSA use, methods, and protocols. Factors predicting LSA use and reported facilitators and barriers were also examined.
RESULTS: Results indicated that 60% of SLPs had used LSA in the past year. LSA skill level, training, and serving preschool or elementary school children predicted LSA use, whereas workplace, caseload, and years of experience were not significant predictors. Most SLPs reported using self-designed LSA protocols (62%), followed by the Systematic Analysis of Language Transcripts (23%) and SUGAR (12%) protocols. SLPs who did not use LSA reported limited time (74%), limited resources (59%), and limited expertise (41%) as barriers and identified additional training on LSA computer programs (52%) and access to automatic speech recognition programs (49%) as facilitators to their adoption of LSA.
CONCLUSIONS: Reported rates of LSA use and methods were consistent with previous survey findings. This study's findings highlight the ongoing need for more extensive preprofessional training in LSA.
Affiliation(s)
- Amy Wilder and Sean M Redmond, Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City
2. Oetting JB, Maleki T. Transcription Decisions of Conjoined Independent Clauses Are Equitable Across Dialects but Impact Measurement Outcomes. Lang Speech Hear Serv Sch 2024;55:870-883. PMID: 38758707. PMCID: PMC11253809. DOI: 10.1044/2024_lshss-23-00180.
Abstract
PURPOSE: Transcription of conjoined independent clauses within language samples varies across professionals. Some transcribe these clauses as two separate utterances, whereas others conjoin them within a single utterance. As an inquiry into equitable practice, we examined rates of conjoined independent clauses produced by children and the impact of separating these clauses on measures of mean length of utterance (MLU) by a child's English dialect, clinical status, and age.
METHOD: The data were archival and included 246 language samples from children classified by their dialect (African American English or Southern White English) and clinical status (developmental language disorder [DLD] or typically developing [TD]), with those in the TD group further classified by their age (4 years [TD4] or 6 years [TD6]).
RESULTS: Rates of conjoined independent clauses and the impact of these clauses on MLU varied by clinical status (DLD < TD) and age (TD4 < TD6), but not by dialect. Correlations between the rate of conjoined clauses, MLU, and language test scores were also similar across the two dialects.
CONCLUSIONS: Transcription decisions regarding conjoined independent clauses within language samples lead to equitable measurement outcomes across dialects of English. Nevertheless, transcribing conjoined independent clauses as two separate utterances reduces one's ability to detect syntactic differences between children with and without DLD and to document syntactic growth as children age.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25822675
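Editor's note: the measurement effect at issue is easy to see with a toy calculation. The sketch below is illustrative only; it uses word-based MLU as a stand-in for the morpheme-based MLU analyzed in the study, and the example utterances are invented. It contrasts MLU when a conjoined independent clause is transcribed as one utterance versus split into two.

```python
# Toy illustration: how segmenting conjoined independent clauses changes MLU.
# MLU here is mean length of utterance in words (whitespace tokens), a
# simplification of the morpheme-based MLU used in clinical LSA.

def mlu(utterances):
    """Mean length of utterance in words."""
    return sum(len(u.split()) for u in utterances) / len(utterances)

# The same speech, transcribed two ways.
conjoined = [
    "the dog ran home and he hid under the bed",
    "I like pizza",
]
separated = [
    "the dog ran home",
    "and he hid under the bed",
    "I like pizza",
]

print(f"MLU, conjoined clauses kept together: {mlu(conjoined):.2f}")  # 6.50
print(f"MLU, conjoined clauses split:         {mlu(separated):.2f}")  # 4.33
```

Splitting lowers MLU for exactly the children who produce conjoined clauses, which is why the authors find it blunts the measure's sensitivity to clinical status and age.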
Affiliation(s)
- Janna B. Oetting and Tahmineh Maleki, Department of Communication Disorders and Sciences, Louisiana State University, Baton Rouge
3. Chu CY, Chen PH, Tsai YS, Chen CA, Chan YC, Ciou YJ. Effect of sample length on MLU in Mandarin-speaking hard-of-hearing children. J Deaf Stud Deaf Educ 2024;29:388-395. PMID: 38409766. DOI: 10.1093/deafed/enae007.
Abstract
This study investigated the impact of language sample length on mean length of utterance (MLU) and aimed to determine the minimum number of utterances required for a reliable MLU. Conversations were collected from Mandarin-speaking hard-of-hearing and typical-hearing children aged 16-81 months. MLUs were calculated using sample sizes ranging from 25 to 200 utterances. The results showed that for an MLU between 1.0 and 2.5, 25 and 50 utterances were sufficient for reliable MLU calculations for hard-of-hearing and typical-hearing children, respectively. For an MLU between 2.5 and 3.75, 125 utterances were required for both groups. For an MLU greater than 3.75, 150 and 125 utterances were required for hard-of-hearing and typical-hearing children, respectively. These findings suggest that more utterances are required for a reliable MLU as language complexity increases. Professionals working with hard-of-hearing children should consider collecting different numbers of utterances based on the children's language complexity levels.
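Editor's note: one way to see why sample length matters is to subsample a long transcript and watch MLU converge. The sketch below is a generic illustration with synthetic utterance lengths, not the study's procedure or data.

```python
import random

# Illustration of the sample-length question: how stable is MLU when computed
# from only the first N utterances of a longer sample? Synthetic data,
# word-based MLU; the study used real Mandarin conversational samples.

random.seed(1)

# Fake a 200-utterance sample whose utterance lengths average ~3.75 words.
utterance_lengths = [max(1, round(random.gauss(3.75, 1.5))) for _ in range(200)]

def mlu(lengths):
    return sum(lengths) / len(lengths)

full_mlu = mlu(utterance_lengths)
for n in (25, 50, 100, 125, 150, 200):
    sub_mlu = mlu(utterance_lengths[:n])
    print(f"first {n:3d} utterances: MLU = {sub_mlu:.2f} "
          f"(diff from full sample: {sub_mlu - full_mlu:+.2f})")
```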
Affiliation(s)
- Chia-Ying Chu, Pei-Hua Chen, Yi-Shin Tsai, Chieh-An Chen, Yi-Chih Chan, and Yan-Jhe Ciou, Speech and Hearing Science Research Institute, Children's Hearing Foundation, Taipei City 114, Taiwan
4. Reno EA, McMaster KL. Measuring Linguistic Growth in Sentence-Level Writing Curriculum-Based Measures: Exploring Complementary Scoring Methods. Lang Speech Hear Serv Sch 2024;55:529-544. PMID: 38284915. DOI: 10.1044/2023_lshss-23-00056.
Abstract
PURPOSE: Picture-word writing curriculum-based measures (PW CBM-Ws) are technically sound, formative measures of descriptive, sentence-level writing but cannot estimate underlying linguistic skills. The purpose of this exploratory alternative scoring investigation was to apply metrics from language sample analysis (LSA) to PW CBM-Ws as a complementary measure of underlying language skills in beginning writers' sentence-level writing.
METHOD: LSA metrics were applied to 104 typically developing first through third graders' PW CBM-W samples across fall and spring semesters. Factorial analyses of variance with post hoc Bonferroni pairwise comparisons were applied after obtaining alternate-form reliability and criterion-related validity estimates.
RESULTS: Analyses revealed reliable discrimination between grades and significant growth between fall and spring semesters for three LSA metrics: mean length of T-unit in words, mean length of T-unit in morphemes, and number of different words. While mean length of T-unit in words and morphemes demonstrated evidence of discrimination and growth in first grade only, number of different words showed evidence of reliable discrimination and growth in first and third grades.
CONCLUSIONS: Mean length of T-unit in words, mean length of T-unit in morphemes, and number of different words showed evidence of adequate criterion-related validity, discrimination among grades, and sensitivity to growth when calculated using PW CBM-W samples to gauge underlying linguistic skills in first- and third-grade students. Implications and future directions for research are discussed.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.25050290
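Editor's note: for readers unfamiliar with these metrics, the sketch below computes simplified versions of mean length of T-unit in words and number of different words over pre-segmented T-units. Both are simplifications: real scoring segments T-units by hand and counts morphemes and word roots, which plain string handling cannot do, and the sample sentences are invented.

```python
# Rough versions of two LSA metrics from the study, computed over T-units
# (a main clause plus any attached subordinate clauses). T-unit segmentation
# is assumed done already; morpheme counting is omitted, so only the
# word-based variant of mean length of T-unit is shown.

def mean_length_of_tunit(t_units):
    """Mean length of T-unit in words (MLTU-w)."""
    return sum(len(t.split()) for t in t_units) / len(t_units)

def number_of_different_words(t_units):
    """NDW: count of distinct word types, case-folded."""
    return len({w.lower() for t in t_units for w in t.split()})

sample = [
    "the frog jumped into the pond",
    "he was scared because the dog barked",
    "the boy looked for his frog",
]
print(f"MLTU-w: {mean_length_of_tunit(sample):.2f}")   # 6.33
print(f"NDW:    {number_of_different_words(sample)}")  # 15
```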
Affiliation(s)
- Emily A Reno and Kristen L McMaster, Department of Educational Psychology, University of Minnesota, Twin Cities, Minneapolis
5. Bright R, Ashton E, Mckean C, Wren Y. The development of a digital story-retell elicitation and analysis tool through citizen science data collection, software development and machine learning. Front Psychol 2023;14:989499. PMID: 37287780. PMCID: PMC10243469. DOI: 10.3389/fpsyg.2023.989499.
Abstract
BACKGROUND: To leverage the potential benefits of technology for speech and language therapy assessment, large samples of naturalistic language data must be collected and analysed. These samples enable the development and testing of novel software applications with data relevant to their intended clinical application. However, the collection and analysis of such data can be costly and time-consuming. This paper describes the development of a novel application designed to elicit and analyse young children's story-retell narratives and to provide metrics on the child's use of grammatical structures (micro-structure) and story grammar (macro-structure elements). Key aspects of development were (1) methods to collect story retells and ensure accurate transcription and segmentation of utterances; (2) testing the reliability of the application in analysing micro-structure elements of children's story retells; and (3) development of an algorithm to analyse narrative macro-structure elements.
METHODS: A co-design process was used to design an app for gathering story-retell samples from children using mobile technology. A citizen science approach using mainstream marketing via online channels, the media, and billboard ads was used to encourage participation from children across the United Kingdom. A stratified sampling framework ensured a representative sample across age, gender, and five bands of socio-economic disadvantage, using partial postcodes and the relevant indices of deprivation. Trained research associates (RAs) completed transcription and micro- and macro-structure analysis of the language samples. Methods to improve transcriptions produced by automated speech recognition were developed to enable reliable analysis. RA micro-structure analyses were compared to those generated by the digital application to test its reliability using the intra-class correlation coefficient (ICC). RA macro-structure analyses were used to train an algorithm to produce macro-structure metrics. Finally, results from the macro-structure algorithm were compared against a subset of RA macro-structure analyses not used in training, again using ICC.
RESULTS: A total of 4,517 profiles were created in the data collection app, and from these participants a final set of 599 was drawn that fulfilled the stratified sampling criteria. The story retells ranged from 35.66 s to 251.4 s in length and had word counts ranging from 37 to 496, with a mean of 148.29 words. ICC between the RA and application micro-structure analyses ranged from 0.213 to 1.0, with 41 of 44 comparisons reaching 'good' (0.70-0.90) or 'excellent' (>0.90) levels of reliability. ICC between the RA and application macro-structure analyses was computed for 85 samples not used in training the algorithm and ranged from 0.5577 to 0.939, with 5 of 7 metrics rated 'good' or better.
CONCLUSION: Work to date has demonstrated the potential of semi-automated transcription and linguistic analyses to provide reliable, detailed and informative narrative language analysis for young children, and of citizen science approaches using mobile technologies to collect representative and informative research data. Clinical evaluation of this new app is ongoing, so we do not yet have data documenting its developmental or clinical sensitivity and specificity.
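Editor's note: the reliability statistic used here can be computed from a two-way ANOVA decomposition. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement, after Shrout and Fleiss, 1979). The paper does not state which ICC form was used, so that choice, like the sample scores, is an assumption.

```python
# ICC(2,1): two-way random-effects, absolute-agreement, single-measure ICC.
# Rows = subjects (language samples), columns = raters (e.g., research
# associate vs. the app). Pure-Python implementation.

def icc_2_1(scores):
    """scores: list of [rater1, rater2, ...] rows, one row per subject."""
    n = len(scores)        # subjects
    k = len(scores[0])     # raters
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Hypothetical RA-vs-app scores for one micro-structure metric.
paired = [[12, 11], [8, 9], [15, 15], [6, 7], [10, 12], [14, 13]]
print(f"ICC(2,1) = {icc_2_1(paired):.3f}")
```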
Affiliation(s)
- Elaine Ashton and Cristina Mckean, School of Education, Communication and Language Sciences, Newcastle University, Newcastle upon Tyne, United Kingdom
- Yvonne Wren, North Bristol NHS Trust, Bristol, United Kingdom; Bristol Dental School, University of Bristol, Bristol, United Kingdom; Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, United Kingdom
6. MacFarlane H, Salem AC, Bedrick S, Dolata JK, Wiedrick J, Lawley GO, Finestack LH, Kover ST, Thurman AJ, Abbeduto L, Fombonne E. Consistency and reliability of automated language measures across expressive language samples in autism. Autism Res 2023;16:802-816. PMID: 36722653. PMCID: PMC10123085. DOI: 10.1002/aur.2897.
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder with substantial clinical heterogeneity, especially in language and communication ability. There is a need for validated language outcome measures that show sensitivity to true change for this population. We used natural language processing to analyze expressive language transcripts of 64 highly verbal children and young adults (age: 6-23 years, mean 12.8 years; 78.1% male) with ASD to examine validity across language sampling contexts and test-retest reliability of six previously validated automated language measures (ALMs): mean length of utterance in morphemes, number of distinct word roots, C-units per minute, unintelligible proportion, um rate, and repetition proportion. Three expressive language samples were collected at baseline and again 4 weeks later. These samples comprised interview tasks from the Autism Diagnostic Observation Schedule (ADOS-2) Modules 3 and 4, a conversation task, and a narration task. The influence of language sampling context on each ALM was estimated using either generalized linear mixed-effects models or generalized linear models, adjusted for age, sex, and IQ. Test-retest reliability over 4 weeks was evaluated using Lin's concordance correlation coefficient (CCC). The three sampling contexts were associated with significantly (P < 0.001) different distributions for each ALM. With one exception (repetition proportion), ALMs also showed good test-retest reliability (median CCC: 0.73-0.88) when measured within the same context. Taken in conjunction with our previous work establishing their construct validity, this study demonstrates further critical psychometric properties of ALMs and their promising potential as language outcome measures for ASD research.
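Editor's note: Lin's CCC has a closed form, CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A minimal sketch with invented baseline and retest values:

```python
# Lin's concordance correlation coefficient (CCC) between two measurement
# occasions. Uses population (1/n) variance and covariance, as in Lin (1989).

def lins_ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical MLU-in-morphemes values at baseline and 4-week retest.
baseline = [4.1, 5.3, 3.8, 6.0, 4.7, 5.1]
retest   = [4.3, 5.0, 4.0, 5.8, 4.9, 5.4]
print(f"CCC = {lins_ccc(baseline, retest):.3f}")
```

Unlike the Pearson correlation, the denominator term (mean(x) - mean(y))^2 penalizes systematic shifts between occasions, which is why CCC suits test-retest agreement.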
Affiliation(s)
- Heather MacFarlane, Department of Psychiatry, Oregon Health & Science University, Portland, Oregon, USA
- Alexandra C. Salem, Department of Psychiatry, Oregon Health & Science University, Portland, Oregon, USA
- Steven Bedrick, Department of Medical Informatics and Clinical Epidemiology, Oregon Health & Science University, Portland, Oregon, USA
- Jill K. Dolata, Department of Pediatrics, Oregon Health & Science University, Portland, Oregon, USA; School of Communication Sciences and Disorders, Pacific University, Forest Grove, Oregon, USA
- Jack Wiedrick, Biostatistics & Design Program, Oregon Health & Science University, Portland, Oregon, USA
- Grace O. Lawley, Computer Science and Electrical Engineering, Oregon Health & Science University, Portland, Oregon, USA
- Lizbeth H. Finestack, Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota, USA
- Sara T. Kover, Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, USA
- Angela John Thurman, MIND Institute and Department of Psychiatry and Behavioral Sciences, University of California Davis Health, Sacramento, California, USA
- Leonard Abbeduto, MIND Institute and Department of Psychiatry and Behavioral Sciences, University of California Davis Health, Sacramento, California, USA
- Eric Fombonne, Department of Psychiatry, Oregon Health & Science University, Portland, Oregon, USA
7. Winters KL, Jasso J, Pustejovsky JE, Byrd CT. Investigating Narrative Performance in Children With Developmental Language Disorder: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2022;65:3908-3929. PMID: 36179252. DOI: 10.1044/2022_jslhr-22-00017.
Abstract
PURPOSE: Narrative assessment is one potentially underutilized and inconsistently applied method speech-language pathologists may use when considering a diagnosis of developmental language disorder (DLD). However, narration research encompasses many varied methodologies. This systematic review and meta-analysis aimed to (a) investigate how various narrative assessment types (e.g., macrostructure, microstructure, and internal state language) differentiate children with typical development (TD) from children with DLD, (b) identify specific narrative assessment measures that result in greater group differences, and (c) evaluate participant and sample characteristics that may influence performance differences.
METHOD: Electronic databases (PsycINFO, ERIC, and PubMed) and ASHAWire were searched on July 30, 2019, to locate studies that reported oral narrative language measures for both DLD and TD groups between ages 4 and 12 years; studies focusing only on written narration or on other developmental disorders were excluded. We extracted data related to sample participants, narrative task(s) and assessment measures, and research design. Group differences were quantified using standardized mean differences. Analyses used mixed-effects meta-regression with robust variance estimation to account for effect size dependencies.
RESULTS: Searches identified 37 eligible studies published between 1987 and 2019, including 382 effect sizes. Overall meta-analysis showed that children with DLD had decreased narrative performance relative to TD peers, with an overall average effect of -0.82 SD, 95% confidence interval [-0.99, -0.66]. Effect sizes showed significant heterogeneity both between and within studies, even after accounting for effect size-, sample-, and study-level predictors. Across model specifications, grammatical accuracy (microstructure) and story grammar (macrostructure) yielded the most consistent evidence of TD-DLD group differences.
CONCLUSIONS: Present findings suggest some narrative assessment measures yield significantly different performance between children with and without DLD. However, researchers need to improve consistency of inclusionary criteria, descriptions of sample characteristics, and reporting of correlations between measures to determine which assessment measures reliably distinguish between groups.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21200380
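Editor's note: the pooled effect here is a standardized mean difference. The sketch below computes a per-study Hedges' g from invented group summaries; it does not reproduce the mixed-effects meta-regression with robust variance estimation used in the paper, which pools many such effects while modeling their dependencies.

```python
import math

# Standardized mean difference (Hedges' g) for one hypothetical study
# comparing a narrative measure between DLD and TD groups. Covers only the
# per-study effect size, not the meta-analytic pooling.

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    # Pooled standard deviation across the two groups.
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Hedges' small-sample correction factor J.
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# DLD group scoring lower than TD on a story-grammar measure (made-up numbers).
g = hedges_g(m1=18.2, sd1=4.5, n1=25,   # DLD
             m2=22.6, sd2=4.1, n2=27)   # TD
print(f"Hedges' g = {g:.2f}")           # negative: DLD below TD
```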
Affiliation(s)
- Javier Jasso, The University of Texas at Austin; Widener University, Chester, PA