1. Kurland J, Liu A, Varadharaju V, Stokes P, Cavanaugh R. Reliability of the Brief Assessment of Transactional Success in Communication in Aphasia. Aphasiology 2024; 39:363-384. [PMID: 40160198] [PMCID: PMC11949443] [DOI: 10.1080/02687038.2024.2351029]
Abstract
Background: While many measures exist for assessing discourse in aphasia, manual transcription, editing, and scoring are prohibitively labor intensive, a major obstacle to their widespread use by clinicians (Bryant et al., 2017; Cruice et al., 2020). Many tools also lack rigorous psychometric evidence of reliability and validity (Azios et al., 2022; Carragher et al., 2023). Establishing test reliability is the first step in our long-term goal of automating the Brief Assessment of Transactional Success in aphasia (BATS; Kurland et al., 2021) and making it accessible to clinicians and clinical researchers.
Aims: We evaluated multiple aspects of the reliability of the BATS by examining correlations between human/machine and human/human interrater edited transcripts, raw vs. edited transcripts, interrater scoring of main concepts, and test-retest performance. We hypothesized that automated methods of transcription and discourse analysis would demonstrate sufficient reliability to move forward with test development.
Methods & Procedures: We examined 576 story retelling narratives from a sample of 24 persons with aphasia (PWA) and their familiar and unfamiliar conversation partners (CP). PWA retold stories immediately after watching/listening to short video/audio clips. CP retold stories after six-minute topic-constrained conversations with a PWA in which the dyad co-constructed the stories. We used two macrostructural measures to analyze the automated speech-to-text transcripts of story retells: 1) a modified version of a semi-automated tool for measuring main concepts (mainConcept; Cavanaugh et al., 2021); and 2) an automated natural language processing pipeline to assess topic similarity.
Outcomes & Results: Correlations between raw and edited scores were excellent, and interrater reliability on transcripts and main concept scoring was acceptable. Test-retest reliability on repeated stimuli was also acceptable, especially for the PWA story retellings, which involved true within-subject repeated stimuli.
Conclusions: Results suggest that automated speech-to-text transcription was sufficient in most cases to avoid the time-consuming, labor-intensive step of transcribing and editing discourse. Overall, our results suggest that automated natural language processing methods such as text vectorization and cosine similarity are a fast, efficient way to obtain a measure of topic similarity between two discourse samples. Although test-retest reliability for the semi-automated mainConcept method was generally higher than for automated measures of topic similarity, we found no evidence of a difference between machine-automated and human-reliant scoring.
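The topic-similarity measure named in this abstract rests on a standard NLP pattern: vectorize two texts and compute the cosine of the angle between the vectors. The sketch below illustrates that general pattern only; the actual BATS pipeline is not published in this entry, so the choice of scikit-learn's TfidfVectorizer, the preprocessing, and the example sentences are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of topic-similarity scoring via text vectorization and
# cosine similarity. NOT the BATS pipeline; a generic illustration using
# scikit-learn's TfidfVectorizer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def topic_similarity(retell_a: str, retell_b: str) -> float:
    """Return cosine similarity in [0, 1] between two discourse samples."""
    # Fit on both samples so they share a single vocabulary space.
    vectorizer = TfidfVectorizer(stop_words="english")
    vectors = vectorizer.fit_transform([retell_a, retell_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

# Hypothetical example: a PWA's retell vs. a conversation partner's retell.
pwa_retell = "the man bought flowers and gave them to his wife"
cp_retell = "he got some flowers as a gift for his wife"
print(f"topic similarity: {topic_similarity(pwa_retell, cp_retell):.2f}")
```

A score near 1 indicates heavily overlapping content words; near 0, little shared content. More elaborate pipelines might swap TF-IDF for sentence embeddings, but the cosine-similarity step is the same.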
Affiliation(s)
- Jacquie Kurland
- University of Massachusetts Amherst, Department of Speech, Language, and Hearing Sciences
- Anna Liu
- University of Massachusetts Amherst, Department of Mathematics and Statistics
- Vishnupriya Varadharaju
- University of Massachusetts Amherst, College of Information and Computer Sciences
- Polly Stokes
- University of Massachusetts Amherst, Department of Speech, Language, and Hearing Sciences
2. Gridley K. Standardised data collection from people with dementia over the telephone: A qualitative study of the experience of DETERMIND programme researchers in a pandemic. Dementia 2023; 22:1718-1737. [PMID: 37495232] [PMCID: PMC10372513] [DOI: 10.1177/14713012231190585]
Abstract
There is a notable lack of evidence on what constitutes good practice in remote quantitative data collection from research participants with dementia. During the COVID-19 pandemic, face-to-face research became problematic, especially where participants were older and more at risk of infection. The DETERMIND-C19 study, a large cohort study of people with dementia, switched to telephone data collection over this period. This paper explores the experiences of researchers who collected quantitative data over the telephone from people with dementia during the first COVID-19 lockdowns in England. The aim was to learn from these experiences, share insights, and inform future research practice across disciplines. Seven DETERMIND researchers were interviewed about the processes and challenges of collecting quantitative data from people with dementia over the telephone compared to face-to-face. Data were analysed using reflexive thematic analysis. Two themes were developed: first, the telephone adds an extra layer of confusion to an already cognitively complex interaction; second, researchers found it difficult to recognise over the telephone the subtle cues that signalled participants' rising emotion in time to prevent distress. The researchers employed strategies to support participants which may not have conformed to the strict conventions of structured interviewing, but which were informed by person-oriented principles. Whilst in practice this may be a common approach to balancing the needs of participants and the requirements of quantitative research, it is rare for studies to openly discuss such trade-offs in the literature. Honest, reflective reporting is required if the practice of remote data collection from people with dementia is to progress ethically and with integrity.
Affiliation(s)
- Kate Gridley
- Social Policy Research Unit, University of York, UK
3. Phillips AQ, Campi E, Talbott MR, Baranek GT. Assessment Fidelity of Parents Implementing a Standardized Telehealth Infant Autism Screener. OTJR: Occupation, Participation and Health 2023; 43:360-367. [PMID: 37089013] [PMCID: PMC10330541] [DOI: 10.1177/15394492231164943]
Abstract
Telehealth is effective for service delivery in pediatric occupational therapy across ages and diagnoses, and remote parent coaching provides unique benefits for both parents and infants. As a result of COVID-19, practitioners and researchers pivoted to remote assessment and intervention without much preparation or training, so it is critical to evaluate the quality of these telehealth services. One important component of remote evaluations is assessment fidelity. The objective of this study was to examine the assessment fidelity of a telehealth-delivered observational autism screening tool for infants. An assessment fidelity checklist was applied as the primary outcome measure. Parents conducted assessments with 82% adherence to the fidelity checklist. Implications: A parent coaching telehealth approach may be valid for assessment in pediatric telehealth. Continually monitoring the assessment fidelity of a tool is critical for the valid administration of remote services.
Affiliation(s)
- Emily Campi
- University of Southern California, Los Angeles, USA
4.
Abstract
This chapter is written for the qualified neurologist or related professional working with persons who have had a stroke or other sudden brain injury. It is critical that the presence of aphasia is detected, no matter how mild the presentation, and to support that assertion, this chapter highlights the plight of persons with latent aphasia. At the individual level, the impact of aphasia is devastating, with overwhelming evidence that aphasia negatively impacts psychosocial outcomes. At the global level, sensitive detection and accurate diagnosis of aphasia are critical for accurate characterization and quantification of the global burden of aphasia. The word "LANGUAGE" is leveraged as an acronym to create a useful and memorable checklist to guide navigation of aphasia screening and assessment: it begins with the definition of language (L), followed by the definition and diagnostic criteria for aphasia (A). Then the language abilities and characteristics to be considered in assessment are presented: naming (N); grammar and syntax (G); unintelligible words, jargon, and paraphasias (U); auditory comprehension and repetition (A); graphemic abilities, i.e., reading and writing (G); and everyday communication and discourse (E). Recommendations for improving procedural adherence are provided, and a list of potential brief assessment measures is introduced.
Affiliation(s)
- Jessica D Richardson
- Department of Speech and Hearing Sciences, University of New Mexico, Albuquerque, NM, United States.
| | - Sarah Grace Dalton
- Department of Speech Pathology and Audiology, Marquette University, Milwaukee, WI, United States
5. Tabin M, Diacquenod C, Petitpierre G. Evaluating implementation outcomes of a measure of social vulnerability in adults with intellectual disabilities. Research in Developmental Disabilities 2021; 119:104111. [PMID: 34638029] [DOI: 10.1016/j.ridd.2021.104111]
Abstract
Background: A test identified as valid and accurate in research will not automatically be considered appropriate by those involved in its use, or even be used in the first place. The Social Vulnerability Test-22 items (TV-22) is a measure specially designed for adults with intellectual disabilities (ID). This study aims to evaluate the implementation outcomes of the TV-22: more precisely, its acceptability (e.g., complexity), its appropriateness (e.g., perceived relevance), and assessment fidelity (i.e., adherence to assessment guidelines) among special education practitioners.
Procedures: Thirty-one practitioners (8 psychologists, 11 educators, 12 special education center managers) administered the TV-22 during an interview with an adult with ID. Semi-structured interviews were conducted to collect practitioners' opinions on the acceptability and appropriateness of the TV-22 for their clinical practice. Quantitative analyses were performed to assess the fidelity of the assessments and the influence of some personal factors.
Results: The results indicate good appropriateness and reasonable acceptability, but low assessment fidelity of the TV-22 among some practitioners. Psychologists stood out for more rigorous use of the test.
Implications: The results highlight the importance of evaluating implementation outcomes when a new measure is developed, to ensure its appropriateness and correct use by stakeholders.
Affiliation(s)
- Mireille Tabin
- Department of Special Education, University of Fribourg, Fribourg, Switzerland.
- Cindy Diacquenod
- Department of Special Education, University of Fribourg, Fribourg, Switzerland
6. Stark BC, Dutta M, Murray LL, Bryant L, Fromm D, MacWhinney B, Ramage AE, Roberts A, den Ouden DB, Brock K, McKinney-Bock K, Paek EJ, Harmon TG, Yoon SO, Themistocleous C, Yoo H, Aveni K, Gutierrez S, Sharma S. Standardizing Assessment of Spoken Discourse in Aphasia: A Working Group With Deliverables. American Journal of Speech-Language Pathology 2021; 30:491-502. [PMID: 32585117] [PMCID: PMC9128722] [DOI: 10.1044/2020_ajslp-19-00093]
Abstract
Purpose: The heterogeneous nature of measures, methods, and analyses reported in the aphasia spoken discourse literature precludes comparison of outcomes across studies (e.g., meta-analyses) and inhibits replication. Furthermore, funding and time constraints significantly hinder collecting test-retest data on spoken discourse outcomes. This research note describes the development and structure of a working group designed to address major gaps in the spoken discourse aphasia literature, including a lack of standardization in methodology, analysis, and reporting, as well as nominal data regarding the psychometric properties of spoken discourse outcomes.
Method: The initial initiatives for this working group are to (a) propose recommendations regarding standardization of spoken discourse collection, analysis, and reporting in aphasia, based on the results of an international survey and a systematic literature review, and (b) create a database of test-retest spoken discourse data from individuals with and without aphasia. The survey of spoken discourse collection, analysis, and interpretation procedures was distributed to clinicians and researchers involved in aphasia assessment and rehabilitation from September to November 2019. We will publish the survey results and recommend standards for collecting, analyzing, and reporting spoken discourse in aphasia. A multisite endeavor to collect test-retest spoken discourse data from individuals with and without aphasia will be initiated. This test-retest information will be contributed to a central site for transcription and analysis, and the data will subsequently be openly curated.
Conclusion: The goal of the working group is to create recommendations for field-wide standards in methods, analysis, and reporting of spoken discourse outcomes, as has been done across other related disciplines (e.g., Consolidated Standards of Reporting Trials; Enhancing the Quality and Transparency of Health Research; Committee on Best Practice in Data Analysis and Sharing). Additionally, the creation of a database through our multisite collaboration will allow the identification of psychometrically sound outcome measures and norms that can be used by clinicians and researchers to assess spoken discourse abilities in aphasia.
Affiliation(s)
- Brielle C. Stark
- Department of Speech, Hearing and Language Sciences, Indiana University Bloomington
- Program in Neuroscience, Indiana University Bloomington
- Manaswita Dutta
- Department of Speech, Hearing and Language Sciences, Indiana University Bloomington
- Laura L. Murray
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Lucy Bryant
- Graduate School of Health, University of Technology Sydney, New South Wales, Australia
- Davida Fromm
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA
- Brian MacWhinney
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA
- Amy E. Ramage
- Department of Communication Sciences and Disorders, University of New Hampshire, Durham
- Angela Roberts
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Dirk B. den Ouden
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia
- Kris Brock
- Department of Communication Sciences and Disorders, Idaho State University, Pocatello
- Katy McKinney-Bock
- Center for Spoken Language Understanding, Oregon Health and Science University, Portland
- Eun Jin Paek
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville
- Tyson G. Harmon
- Department of Communication Disorders, Brigham Young University, Provo, UT
- Si On Yoon
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Hyunsoo Yoo
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Katharine Aveni
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Stephanie Gutierrez
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Saryu Sharma
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC
7. Dekhtyar M, Braun EJ, Billot A, Foo L, Kiran S. Videoconference Administration of the Western Aphasia Battery-Revised: Feasibility and Validity. American Journal of Speech-Language Pathology 2020; 29:673-687. [PMID: 32191122] [PMCID: PMC7842871] [DOI: 10.1044/2019_ajslp-19-00023]
Abstract
Purpose: Telepractice is growing rapidly in both clinical and research settings; however, the literature validating the translation of traditional assessment and intervention methods to remote videoconference administration is limited. This is especially true in the field of speech-language pathology, where assessments of language and communication can readily be conducted remotely. The aim of this study was to validate videoconference administration of the Western Aphasia Battery-Revised (WAB-R).
Method: Twenty adults with chronic aphasia completed the assessment both in person and via videoconference, with the order counterbalanced across administrations. Specific modifications to select WAB-R subtests were made to accommodate interaction by computer and Internet.
Results: The two methods of administration were highly correlated and showed no difference in domain scores. Additionally, most participants reported being mostly or very satisfied with the videoconference administration.
Conclusion: These findings suggest that in-person and videoconference administrations of the WAB-R may be used interchangeably in this patient population. Modifications and guidelines are provided to ensure reproducibility and access for other clinicians and scientists interested in remote administration of the WAB-R. Supplemental Material: https://doi.org/10.23641/asha.11977857
Affiliation(s)
- Maria Dekhtyar
- Speech, Language, and Hearing Sciences, Sargent College of Health and Rehabilitation Sciences, Boston University, MA
- Emily J. Braun
- Speech, Language, and Hearing Sciences, Sargent College of Health and Rehabilitation Sciences, Boston University, MA
- Anne Billot
- Speech, Language, and Hearing Sciences, Sargent College of Health and Rehabilitation Sciences, Boston University, MA
- Lindsey Foo
- Speech-Language Pathology, Spaulding Rehabilitation Network, Charlestown, MA
- Swathi Kiran
- Speech, Language, and Hearing Sciences, Sargent College of Health and Rehabilitation Sciences, Boston University, MA
8. Spell LA, Richardson JD, Basilakos A, Stark BC, Teklehaimanot A, Hillis AE, Fridriksson J. Developing, Implementing, and Improving Assessment and Treatment Fidelity in Clinical Aphasia Research. American Journal of Speech-Language Pathology 2020; 29:286-298. [PMID: 31990598] [PMCID: PMC7231909] [DOI: 10.1044/2019_ajslp-19-00126]
Abstract
Purpose: The purpose of this study was to describe the development and implementation of a fidelity program for an ongoing, multifacility aphasia intervention study and to explain how initial fidelity measures are being used to improve study integrity.
Method: A Clinical Core team developed and incorporated a fidelity plan into the study. The aims of the Clinical Core team were to (a) supervise data collection and data management at each clinical site, (b) optimize and monitor assessment fidelity, and (c) optimize and monitor treatment fidelity. Preliminary data are being used to guide ongoing efforts to preserve and improve the fidelity of the intervention study.
Results: Preliminary results show that specific recruitment strategies improve the appropriateness of referrals and that accommodations for participants and their families help maintain excellent retention. A streamlined, centralized training program assures the reliability of assessors and raters for the study's assessment and treatment protocols. Ongoing monitoring of both assessment and treatment tasks helps maintain study integrity. Less-than-optimal interrater reliability among raters of some of the discourse measures prompted the Clinical Core team to address training and coding inconsistencies in a timely manner.
Conclusions: The creation of a Clinical Core team is instrumental in developing and implementing a fidelity plan for improved assessment and treatment fidelity. Intentional planning and assignment of study staff to implement and monitor ongoing fidelity measures assures that clinical data are reliable and valid. Ongoing review of the plan reveals strengths and weaknesses, enabling continuing adjustment and improvement of study fidelity.
Affiliation(s)
- Leigh Ann Spell
- Center for the Study of Aphasia Recovery, University of South Carolina, Columbia
- Alexandra Basilakos
- Center for the Study of Aphasia Recovery, University of South Carolina, Columbia
- Brielle C. Stark
- Department of Speech and Hearing Sciences, Indiana University Bloomington
- Program in Neuroscience, Indiana University Bloomington
- Abeba Teklehaimanot
- Department of Public Health Sciences, Medical University of South Carolina, Charleston
- Argye E. Hillis
- Department of Physical Medicine and Rehabilitation, School of Medicine, Johns Hopkins University, Baltimore, MD
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD
- Julius Fridriksson
- Center for the Study of Aphasia Recovery, University of South Carolina, Columbia
9. Kent-Walsh J, Binger C. Methodological advances, opportunities, and challenges in AAC research. Augmentative and Alternative Communication 2018; 34:93-103. [DOI: 10.1080/07434618.2018.1456560]
Affiliation(s)
- Jennifer Kent-Walsh
- School of Communication Sciences and Disorders, University of Central Florida, Orlando, FL, USA
- Cathy Binger
- Speech and Hearing Sciences, University of New Mexico, Albuquerque, NM, USA
10. Richardson JD, Hudspeth Dalton SG, Fromm D, Forbes M, Holland A, MacWhinney B. The Relationship Between Confrontation Naming and Story Gist Production in Aphasia. American Journal of Speech-Language Pathology 2018; 27:406-422. [PMID: 29497752] [PMCID: PMC6111489] [DOI: 10.1044/2017_ajslp-16-0211]
Abstract
Purpose: The purpose of this study was to examine the relationship between picture naming performance and the ability to communicate the gist, or essential elements, of a story. We also sought to determine whether this relationship varied according to Western Aphasia Battery-Revised (WAB-R; Kertesz, 2007) aphasia subtype.
Method: Demographic information, test scores, and transcripts of 258 individuals with aphasia completing 3 narrative tasks were retrieved from the AphasiaBank database. Narratives were subjected to a main concept analysis to determine gist production. A correlation analysis was used to investigate the relationship between naming scores and main concept production for the whole group of persons with aphasia and for each WAB-R subtype separately.
Results: We found strong correlations between naming test scores and narrative gist production in the large sample of persons with aphasia. However, the strength of the correlations varied by WAB-R subtype.
Conclusions: Picture naming may accurately predict gist production for individuals with Broca's and Wernicke's aphasia, but not for other WAB-R subtypes. Given the current reprioritization of outcome measurement, picture naming may not be an appropriate surrogate measure of functional communication for all persons with aphasia. Supplemental Materials: https://doi.org/10.23641/asha.5851848
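As a minimal illustration of the whole-group and by-subtype correlation analysis this abstract describes, the sketch below computes Pearson correlations between naming scores and main concept scores overall and within each subtype. The column names, scores, and subtype labels are hypothetical placeholders, not AphasiaBank's actual schema or data.

```python
# Minimal sketch of correlating naming scores with main concept production,
# overall and by WAB-R subtype. All data and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "naming_score":       [52, 34, 61, 20, 45, 58, 30, 49],
    "main_concept_score": [41, 25, 55, 12, 38, 50, 22, 40],
    "wab_subtype":        ["Broca", "Broca", "Anomic", "Wernicke",
                           "Anomic", "Broca", "Wernicke", "Anomic"],
})

# Whole-group correlation.
r, p = pearsonr(df["naming_score"], df["main_concept_score"])
print(f"all participants: r = {r:.2f}, p = {p:.3f}")

# Correlation within each subtype (skip groups too small to correlate).
for subtype, group in df.groupby("wab_subtype"):
    if len(group) >= 3:
        r, p = pearsonr(group["naming_score"], group["main_concept_score"])
        print(f"{subtype}: r = {r:.2f}, p = {p:.3f}")
```

Comparing the per-subtype r values against the whole-group r is what reveals whether naming is an adequate surrogate for gist production in some subtypes but not others.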