1. Oosthuizen I, Kumar LMS, Nisha KV, Swanepoel DW, Granberg S, Karlsson E, Manchaiah V. Patient-Reported Outcome Measures for Hearing Aid Benefit and Satisfaction: Content Validity and Readability. J Speech Lang Hear Res 2023;66:4117-4136. [PMID: 37708535] [DOI: 10.1044/2023_jslhr-22-00535]
Abstract
PURPOSE Numerous patient-reported outcome measures (PROMs) are available to measure hearing aid benefit and satisfaction. It is unclear to what extent currently available PROMs on hearing aid outcomes, often developed decades ago, meet current guidelines for good content validity and readability. This study evaluated the content validity and readability of PROMs that focus on perceived hearing aid benefit and/or satisfaction. METHOD A literature review was conducted to identify eligible instruments. Content validity evaluation included mapping extracted questionnaire items to the World Health Organization's International Classification of Functioning, Disability and Health (ICF) framework. In addition, study design in content validity methodology was evaluated using the COnsensus-based Standards for the selection of health Measurement INstruments study design checklist for PROM instruments. Readability was estimated using the Simple Measure of Gobbledygook measure. RESULTS Thirteen questionnaires were identified and evaluated. Item content focused primarily on the components of environmental factors as well as activity limitations and participation restrictions with less emphasis on body functions and personal factors. The content validity methodology analysis revealed an underuse or lack of reporting of a qualitative methodology in assessing patient and professional perspectives. All the included questionnaires exceeded the recommended sixth-grade reading level. CONCLUSIONS The categories covered by hearing aid PROMs vary considerably, with no single instrument comprehensively covering all the key ICF components. Future development of hearing aid outcome measures should consider a mixed methodology approach for improved content validity and ensure an appropriate reading level.
Affiliation(s)
- Ilze Oosthuizen
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Virtual Hearing Lab, Aurora, CO
- De Wet Swanepoel
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Virtual Hearing Lab, Aurora, CO
- Ear Science Institute Australia, Subiaco, Western Australia
- Department of Otolaryngology-Head & Neck Surgery, University of Colorado School of Medicine, Aurora
- Sarah Granberg
- Faculty of Medicine and Health, Örebro University, Sweden
- Elin Karlsson
- Faculty of Medicine and Health, Örebro University, Sweden
- Vinaya Manchaiah
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Virtual Hearing Lab, Aurora, CO
- Department of Otolaryngology-Head & Neck Surgery, University of Colorado School of Medicine, Aurora
- UCHealth Hearing and Balance Clinic, University of Colorado Hospital, Aurora
- Department of Speech and Hearing, Manipal College of Health Professions, Manipal Academy of Higher Education, India
2. Steiner SM, Slavych BK, Zraick RI. Assessment of Online Patient Education Material About Dysphagia. Dysphagia 2022;38:990-1000. [PMID: 36205800] [DOI: 10.1007/s00455-022-10524-3]
Abstract
To examine quality, readability, understandability, and actionability of English-language online educational materials about dysphagia. A Google search of "dysphagia" and related terms was conducted. Web page quality and accountability were measured using HON and URAC certification seals, the DISCERN instrument, and JAMA benchmark criteria. Understandability and actionability were assessed with the Patient Education Materials Assessment Tool for Printed Material (PEMAT-P). Readability was assessed using the Flesch Reading Ease (FRE), Flesch-Kincaid Grade Level (F-KGL), Gunning Fog (FOG), and the Simple Measure of Gobbledygook (SMOG) scores using dedicated readability software. Fifty web pages were analyzed. Seventeen web pages displayed a HON or URAC seal. DISCERN scores ranged from 17 to 50 (Mdn = 25.00; IQR = 32.25-21.00). Of the JAMA benchmark criteria, 88% of web pages met the disclosure criterion, while only 22% met the authorship, 20% met the attribution, and 16% met the currency criteria. PEMAT-P understandability and actionability scores were 69.38% ± 11.14% and 28.58% ± 22.19%, respectively. Readability scores, on average, exceeded the recommended grade reading levels for health information (FRE 46.34 ± 13.59, F-KGL 10.26 ± 2.29, FOG 12.11 ± 2.08, and SMOG 12.38 ± 1.70). Online materials about dysphagia can be improved by obtaining quality certificates and by including content that is more readable and easier to understand and act upon.
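The readability formulas used across these studies (FRE, F-KGL, FOG, SMOG) are simple functions of sentence, word, and syllable counts. As a rough illustration of how such scores are computed (this is not the dedicated software the study used, and it relies on a crude vowel-group syllable counter where real tools use pronunciation dictionaries):

```python
import math
import re

def count_syllables(word):
    # Crude heuristic: count vowel groups, drop a trailing silent 'e'.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syls = [count_syllables(w) for w in words]
    W, S = len(words), len(sentences)
    poly = sum(1 for c in syls if c >= 3)  # words of 3+ syllables
    return {
        "FRE": 206.835 - 1.015 * (W / S) - 84.6 * (sum(syls) / W),
        "F-KGL": 0.39 * (W / S) + 11.8 * (sum(syls) / W) - 15.59,
        "FOG": 0.4 * ((W / S) + 100 * poly / W),
        "SMOG": 1.0430 * math.sqrt(poly * 30 / S) + 3.1291,
    }
```

Higher FRE means easier text; the other three approximate a US school grade, which is why the sixth-grade target used in these studies corresponds to F-KGL, FOG, and SMOG scores of about 6.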
Affiliation(s)
- Sarah M Steiner
- University of Central Florida, 4364 Scorpius St., Suite 101, Orlando, FL, 32816, USA
- Bonnie K Slavych
- University of Central Missouri, 415 E. Clark St, Warrensburg, MO, 64093, USA.
- Missouri State University, 901 S. National Ave, Springfield, MO, 65897, USA.
- Richard I Zraick
- University of Central Florida, 4364 Scorpius St., Suite 101, Orlando, FL, 32816, USA
3. La Scala JD, Zraick RI, Rosa-Lugo LI, Cosby JL. Readability of Cochlear Implant Brochures: A Potential Factor in Parent Decision Making. Am J Audiol 2022;31:1133-1142. [PMID: 36054847] [DOI: 10.1044/2022_aja-22-00048]
Abstract
PURPOSE The purpose of this study was to examine the ease of reading cochlear implant (CI) brochures provided to parents and caregivers who are making informed decisions about the management of their child's hearing loss. METHOD CI brochures from three Food and Drug Administration-approved CI manufacturers were examined: Advanced Bionics, Cochlear Americas, and MED-EL. Reading grade levels and ease of reading were analyzed using a commercially available computer software program, applying six readability formulas commonly used to examine hearing-related patient education materials (PEMs). RESULTS The readability of the CI brochures exceeds the fifth- to sixth-grade reading-level guidelines. The CI brochures may be difficult for the average English-speaking adult to read with ease, as they require at least a 10th-grade reading level. CONCLUSIONS Despite health literacy initiatives, audiology-focused PEMs continue to be created without full consideration of the burden for the reader. Authors of PEMs should consider the average reading level of the reader as a variable potentially influencing the decision-making process. Likewise, clinicians should consider the average reading level needed to understand PEMs when presenting information and resources to parents and caregivers for informed and shared decision making.
Affiliation(s)
- Jennifer D La Scala
- School of Communication Sciences and Disorders, College of Health Professions and Sciences, University of Central Florida, Orlando
- Richard I Zraick
- School of Communication Sciences and Disorders, College of Health Professions and Sciences, University of Central Florida, Orlando
- Linda I Rosa-Lugo
- School of Communication Sciences and Disorders, College of Health Professions and Sciences, University of Central Florida, Orlando
- Janel L Cosby
- School of Communication Sciences and Disorders, College of Health Professions and Sciences, University of Central Florida, Orlando
4. Docimo S, Seeras K, Acho R, Pryor A, Spaniolas K. Academic and community hernia center websites in the United States fail to meet healthcare literacy standards of readability. Hernia 2022;26:779-786. [PMID: 35344107] [DOI: 10.1007/s10029-022-02584-z]
Abstract
BACKGROUND Health literacy is considered the single best predictor of health status. Organizations including the American Medical Association (AMA) and the National Institutes of Health (NIH) have recommended that the readability of patient education materials not exceed the sixth-grade level. Our study focuses on the readability of self-designated hernia center websites at both academic and community organizations across the United States to determine their ability to dispense patient information at an appropriate reading level. METHODS A search was conducted utilizing the Google search engine. The key words "Hernia Center" and "University Hernia Center" were used to identify links to surgical programs within the United States. The following readability tests were conducted using readability software: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG), and Flesch Reading Ease (FRE) score. RESULTS Of 96 websites, zero (0%) fulfilled the recommended reading level on all four tests. The mean test scores for all non-academic centers (n = 50) were as follows: FKGL (11.14 ± 2.68), GFI (14.39 ± 3.07), CLI (9.29 ± 2.48) and SMOG (13.38 ± 2.03). The mean test scores for all academic programs (n = 46) were as follows: FKGL (11.7 ± 2.66), GFI (15.01 ± 2.99), CLI (9.34 ± 1.91) and SMOG (13.71 ± 2.02). A one-sample t test was performed to compare the FKGL, GFI, CLI, and SMOG scores for each hernia center to a value of 6.9 (6.9 or less is considered an acceptable reading level); a p value of 0.001 was noted for all four tests, demonstrating statistical significance. The academic and community readability scores were compared with a two-sample t test; p values were > 0.05 for all four tests, indicating no statistically significant differences. CONCLUSION Neither academic nor community hernia centers met the appropriate reading level of sixth grade or less. Steps moving forward to improve patient comprehension and involvement in care should include appropriately readable material, identification of patients with low literacy levels for intervention or additional counseling when appropriate, and the addition of adjunct learning materials such as videos.
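The one-sample t test the authors describe, comparing each readability score against the 6.9 cutoff, amounts to the following calculation; the FKGL values below are illustrative stand-ins, not the study's data:

```python
import math
import statistics

fkgl_scores = [11.2, 9.8, 12.5, 10.1, 13.0, 11.7, 8.9, 12.2]  # illustrative only
threshold = 6.9  # 6.9 or less treated as an acceptable reading level

n = len(fkgl_scores)
mean = statistics.mean(fkgl_scores)
sd = statistics.stdev(fkgl_scores)  # sample standard deviation
t_stat = (mean - threshold) / (sd / math.sqrt(n))
print(f"t({n - 1}) = {t_stat:.2f}")  # prints: t(7) = 8.36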
Affiliation(s)
- S Docimo
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School of Medicine at Stony Brook University, Stony Brook, NY, USA
- K Seeras
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School of Medicine at Stony Brook University, Stony Brook, NY, USA
- R Acho
- Henry Ford Macomb, Detroit, MI, USA
- A Pryor
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School of Medicine at Stony Brook University, Stony Brook, NY, USA
- K Spaniolas
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School of Medicine at Stony Brook University, Stony Brook, NY, USA
5. Taylor DJ, Jones L, Edwards L, Crabb DP. Patient-reported outcome measures in ophthalmology: too difficult to read? BMJ Open Ophthalmol 2021;6:e000693. [PMID: 34212114] [PMCID: PMC8208024] [DOI: 10.1136/bmjophth-2020-000693]
Abstract
Objective Patient-reported outcome measures (PROMs) are commonly used in clinical trials and research. Yet, in order to be effective, a PROM needs to be understandable to respondents. The aim of this cross-sectional analysis was to assess the reading level of PROMs validated for use in common eye conditions. Methods and analysis Readability measures determine the level of education a person is expected to have attained to be able to read a passage of text; this was calculated using the Flesch-Kincaid Grade Level, FORCAST and Gunning-Fog tests within the readability software package Oleander Readability Studio 2012.1. Forty PROMs, previously validated for use in at least one of age-related macular degeneration, glaucoma and/or diabetic retinopathy, were identified for inclusion via a systematic literature search. The American Medical Association (AMA) and National Institutes of Health (NIH) recommend that patient materials should not exceed a sixth-grade reading level. The number of PROMs exceeding this level was calculated. Results Median (IQR) readability scores were 7.9 (5.4-10.5), 9.9 (8.9-10.7) and 8.4 (6.9-11.1) for the Flesch-Kincaid Grade Level, FORCAST and Gunning-Fog tests, respectively. Depending on the metric used, 61% (95% CI 45% to 76%), 100% (95% CI 91% to 100%) and 80% (95% CI 65% to 91%) of PROMs exceeded the recommended threshold. Conclusion Most PROMs commonly used in ophthalmology require a higher reading level than that recommended by the AMA and NIH and likely contain questions that are too difficult for many patients to read. Greater care is needed in designing PROMs appropriate for the literacy level of a population.
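The 95% CIs quoted for these proportions are consistent with a Wilson score interval. As a check (assuming n = 40 and the Wilson method, neither of which the abstract states), 40 of 40 PROMs exceeding the FORCAST threshold yields roughly the reported 91% to 100%:

```python
import math

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(40, 40)
print(f"{lo:.0%} to {hi:.0%}")  # prints: 91% to 100%
```

Unlike the naive Wald interval, the Wilson interval stays sensible at proportions of 0% or 100%, which is why it is the usual choice for results like these.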
Affiliation(s)
- Deanna J Taylor
- Optometry and Visual Sciences, City University of London, London, UK
- Lee Jones
- Optometry and Visual Sciences, City University of London, London, UK
- Institute of Ophthalmology, University College London, London, UK
- Laura Edwards
- Moorfields Eye Hospital NHS Foundation Trust, London, UK
- David P Crabb
- Optometry and Visual Sciences, City University of London, London, UK
6. Stefu J, Slavych BK, Zraick RI. Patient-Reported Outcome Measures in Voice: An Updated Readability Analysis. J Voice 2021;37:465.e27-465.e34. [PMID: 33736929] [DOI: 10.1016/j.jvoice.2021.01.028]
Abstract
PURPOSE The purpose of this study was to investigate whether voice-related patient-reported outcome measures (PROMs) developed and validated since 2011 meet the recommendation by health literacy experts that such materials be written at a fifth-to-sixth grade reading level. METHOD A readability analysis of eight voice-related PROMs was conducted. Readability formulas utilized were the Coleman-Liau index, Flesch-Kincaid reading ease, FORCAST, simple measure of Gobbledygook index, and Gunning-Fog score. RESULT Three-fourths of the PROMs exceeded the recommended fifth- to sixth-grade reading level. CONCLUSION Although awareness of health literacy has grown, voice-related PROMs continue to be developed without full consideration of their reading grade level. Researchers should consider revising or developing PROMs with consideration to reading grade level as well as other features to enhance readability.
Affiliation(s)
- Julia Stefu
- University of Central Florida, Orlando, Florida
7. Lee SE, Farzal Z, Kimple AJ, Senior BA, Thorp BD, Zanation AM, Ebert CS. Readability of patient-reported outcome measures for chronic rhinosinusitis and skull base diseases. Laryngoscope 2020;130:2305-2310. [PMID: 31603564] [DOI: 10.1002/lary.28330]
Abstract
OBJECTIVE Outcome measures in healthcare that presume a higher level of patient health and overall literacy may inadequately estimate the disease experiences of less-educated patients and further disadvantage them. Patient-Reported Outcome Measures (PROMs) are widely used communication tools for clinical practice and are often used to evaluate and guide management for chronic rhinosinusitis (CRS) and skull base diseases. However, their readability and subsequent incomprehensibility for patients have not been assessed. The aim of this study is to evaluate the readability of commonly used PROMs for these conditions and whether they meet recommended readability levels. METHODS Three readability measures, Gunning Fog, Simple Measure of Gobbledygook (SMOG), and FORCAST were used in the evaluation of commonly used PROMs for CRS and skull base disease. PROMs with sixth-grade readability level or lower were considered to meet health literacy experts' recommendations. RESULTS A total of 11 PROMs were reviewed (8 CRS, 3 skull base). Gunning Fog consistently estimated the easiest readability, whereas FORCAST the most difficult. One hundred percent of CRS and 67% of skull base PROMs were above National Institutes of Health and health literacy experts' recommended reading levels. PROMs developed more recently had easier readability. CONCLUSION PROMs are important clinical tools in otolaryngology that help guide management of disease for improved patient-centered care. Like many other fields of medicine, those used in otolaryngology are beyond recommended reading levels. Development of PROMs in the future should meet recommended readability levels to fully assess the disease experience of our patients. LEVEL OF EVIDENCE 4 Laryngoscope, 130:2305-2310, 2020.
Affiliation(s)
- Saangyoung E Lee
- University of North Carolina School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Zainab Farzal
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Adam J Kimple
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Brent A Senior
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Brian D Thorp
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Adam M Zanation
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Charles S Ebert
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
8. Manchaiah V, Kelly-Campbell RJ, Bellon-Harn ML, Beukes EW. Quality, Readability, and Suitability of Hearing Health-Related Materials: A Descriptive Review. Am J Audiol 2020;29:513-527. [PMID: 32551926] [DOI: 10.1044/2020_aja-19-00040]
Abstract
Objectives The objective of this descriptive review was to determine the quality, readability, and suitability of ear and hearing health information and materials for patients and their significant others. Method A literature search was conducted between August 2018 and April 2019 in the databases CINAHL Complete, MEDLINE, and PsycINFO. Inclusion and exclusion criteria were used to shortlist studies. Data regarding quality, suitability, and readability were extracted from the included studies. Data were assessed qualitatively. Results There were 34 studies included in this review. Of those, eight examined quality, 33 assessed readability, and four investigated the suitability of materials. The range of materials assessed included diagnostic reports, patient education materials (PEMs), patient-reported outcome measures, and websites. Quality elements were examined in studies focusing on website information. Findings indicated that most websites were of poor quality. Suitability was examined in studies focusing on PEMs such as hearing aid user guides. Findings indicated that most of the existing materials were not suitable for the intended populations. The reading grade level of information across all four categories was found to be higher than the recommended fifth or sixth reading grade level for health-related materials. Revisions of some diagnostic reports and PEMs showed that improvements are possible. Conclusions This review suggests that ear- and hearing-related materials generally have lower quality and suitability with higher readability (more difficult to read). Development of materials that are suitable, of high quality, and at the appropriate readability levels is required to improve accessibility of ear- and hearing-related materials.
Affiliation(s)
- Vinaya Manchaiah
- Department of Speech and Hearing Sciences, Lamar University, Beaumont, TX
- Department of Speech and Hearing, School of Allied Health Sciences, Manipal University, Karnataka, India
- Eldré W. Beukes
- Department of Speech and Hearing Sciences, Lamar University, Beaumont, TX
- Department of Vision and Hearing Sciences, Anglia Ruskin University, Cambridge, United Kingdom
9.
Abstract
This article introduces the Consumer Ear Disease Risk Assessment (CEDRA) tool. CEDRA is a brief questionnaire designed to screen for targeted ear diseases. It offers an opportunity for consumers to self-screen for disease before seeking a hearing device and may be used by clinicians to help their patients decide the appropriate path to follow in hearing healthcare. Here we provide highlights of previously published validation in the context of a more thorough description of CEDRA's development and implementation. CEDRA's sensitivity and specificity, using a cut-off score of 4 or higher, was 90% and 72%, respectively, relative to neurotologist diagnoses in the initial training sample used to create the scoring algorithm (n = 246). On a smaller independent test sample (n = 61), CEDRA's sensitivity and specificity were 76% and 80%, respectively. CEDRA has readability levels similar to many other patient-oriented questionnaires in hearing healthcare, and informal reports from pilot CEDRA-providers indicate that the majority of patients can complete it in less than 10 min. As the hearing healthcare landscape changes and provider intercession is no longer mandated, CEDRA provides a measure of safety without creating a barrier to access.
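Sensitivity and specificity, as reported for CEDRA, follow directly from the 2×2 screening table against the neurotologist diagnoses. A sketch with hypothetical cell counts, chosen only to reproduce the reported training-sample figures of 90% and 72% (the paper's actual counts are not given here):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts (not from the paper) that reproduce
# the reported 90% sensitivity and 72% specificity.
sens, spec = sens_spec(tp=90, fn=10, tn=72, fp=28)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # prints: sensitivity 90%, specificity 72%
```

Lowering the cut-off score trades specificity for sensitivity, which is why the choice of 4 or higher matters for a self-screening tool.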
10. Margol-Gromada M, Sereda M, Baguley DM. Readability assessment of self-report hyperacusis questionnaires. Int J Audiol 2020;59:506-512. [DOI: 10.1080/14992027.2020.1723033]
Affiliation(s)
- Magdalena Margol-Gromada
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Magdalena Sereda
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- NIHR Nottingham Biomedical Research Centre, Nottingham, UK
- David M. Baguley
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- NIHR Nottingham Biomedical Research Centre, Nottingham, UK
- Nottingham Audiology Services, Nottingham University NHS Trust, Nottingham, UK
11. Lee SE, Farzal Z, Ebert CS, Zanation AM. Readability of Patient-Reported Outcome Measures for Head and Neck Oncology. Laryngoscope 2020;130:2839-2842. [PMID: 32078176] [DOI: 10.1002/lary.28555]
Abstract
OBJECTIVES/HYPOTHESIS Patient-reported outcome measures (PROMs) are communication tools to help patients convey their disease experience to medical providers and guide management decisions. However, the utility of healthcare outcome measures is dependent on patient literacy and readability of PROMs. If written for a more advanced literacy level, they can misestimate symptoms and add significant barriers to care, especially in the underserved. However, readability of head and neck (H&N) oncology PROMs has not been assessed. The aim of this study was to evaluate the readability of H&N oncology PROMs to assess whether they meet recommended readability levels. STUDY DESIGN Bibliometric review. METHODS Three readability measures: Gunning Fog, Simple Measure of Gobbledygook, and FORCAST were used to evaluate the readability level of commonly used H&N PROMs. PROMs with sixth grade readability level or lower were considered to meet the recommendations of health literacy experts. RESULTS Eight H&N oncology PROMs were reviewed. None of the H&N PROMs met health literacy experts' and National Institutes of Health recommended reading levels. Gunning Fog consistently estimated the easiest readability and FORCAST the most difficult. CONCLUSIONS PROMs are important clinical tools that drive patient-centric care in H&N oncology. All H&N PROMs are written above recommended reading levels and do not meet suggested standards. Future PROMs should be written with easier readability to accurately convey patients' H&N oncology disease experiences. LEVEL OF EVIDENCE 4 Laryngoscope, 2020.
Affiliation(s)
- Saangyoung E Lee
- University of North Carolina School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Zainab Farzal
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Charles S Ebert
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
- Adam M Zanation
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill School of Medicine, Chapel Hill, North Carolina, U.S.A
12. Cohen ML, Hula WD. Patient-Reported Outcomes and Evidence-Based Practice in Speech-Language Pathology. Am J Speech Lang Pathol 2020;29:357-370. [PMID: 32011905] [DOI: 10.1044/2019_ajslp-19-00076]
Abstract
Purpose The patient's perspective of their health is a core component of evidence-based practice (EBP) and person-centered care. Patient-reported outcomes (PROs), captured with PRO measures (PROMs), are the main way of formally soliciting and measuring the patient's perspective. Currently, however, PROs play a relatively small role in mainstream speech-language pathology practice. The purpose of this article is to raise important questions about how PROs could be applied to EBP in speech-language pathology for individuals with communication disorders and to propose preliminary approaches to address some of these questions. Method Based on a narrative review of the literature, this article introduces relevant terminology and broadly describes PRO applications in other health care fields. The article also raises questions related to PRO-informed clinical practice in speech-language pathology. To address some of these questions, the article explores previous research to provide suggestions for clinical administration, interpretation, and future research. Conclusion More routine measurement of subjective health constructs via PROMs (for example, constructs such as effort, participation, self-efficacy, and psychosocial functioning) may improve EBP. More routine use of PROMs could significantly expand the information that is available to clinicians about individual clients and add to the evidence base for the profession of speech-language pathology. However, careful consideration and more research are needed on how to capture and interpret PROs from individuals with cognitive and language disorders.
Affiliation(s)
- Matthew L Cohen
- Department of Communication Sciences and Disorders and Center for Health Assessment Research and Translation, University of Delaware, Newark
- William D Hula
- Geriatric Research, Education, and Clinical Center, VA Health Care System, and Department of Communication Sciences and Disorders, University of Pittsburgh, PA
13. Pearson SE, Taylor J, Patel P, Baguley DM. Cancer survivors treated with platinum-based chemotherapy affected by ototoxicity and the impact on quality of life: a narrative synthesis systematic review. Int J Audiol 2019;58:685-695. [PMID: 31545660] [DOI: 10.1080/14992027.2019.1660918]
Abstract
Objective: To identify any change in quality of life (QoL) caused by chemotherapy-induced toxicities, such as hearing loss and tinnitus, to provide information in order to improve services and aid clinicians in their decision-making. Design: This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. The search terms were cancer, platinum-based chemotherapy, ototoxicity and "quality of life". Titles and abstracts, followed by full texts, were screened by two independent researchers. The relevant data were extracted and quality analysis was performed using the NIH Quality Assessment Tool. Study sample: A total of 308 titles and abstracts were screened, followed by 27 full-text articles. Ten articles representing 11 studies were included in the review. Study designs included cross-sectional studies, randomised controlled trials and longitudinal studies. Results: Diagnostic criteria consisted of audiograms, questionnaires and patient complaints. The study quality ranged from 21.43% to 85.71%. Overall results found that those treated with cisplatin had more hearing loss and tinnitus than those treated with other therapies. Furthermore, those with hearing loss and tinnitus were more likely to have a lower QoL. Conclusions: There is an urgent need to standardise diagnostics when investigating ototoxicity and its effect on QoL, particularly for research into risk factors, prevention and management.
Affiliation(s)
- Stephanie E Pearson
- NIHR Nottingham Biomedical Research Centre, Nottingham, UK
- Department of Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- John Taylor
- NIHR Nottingham Biomedical Research Centre, Nottingham, UK
- Department of Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Poulam Patel
- Nottingham University Hospitals NHS Trust, Nottingham, UK
- Division of Cancer and Stem Cells, Academic Unit of Oncology, School of Medicine, University of Nottingham, Nottingham, UK
- David M Baguley
- NIHR Nottingham Biomedical Research Centre, Nottingham, UK
- Department of Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Nottingham University Hospitals NHS Trust, Nottingham, UK
| |
14
Manchaiah V, Granberg S, Grover V, Saunders GH, Hall DA. Content validity and readability of patient-reported questionnaire instruments of hearing disability. Int J Audiol 2019; 58:565-575. [DOI: 10.1080/14992027.2019.1602738] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Affiliation(s)
- Vinaya Manchaiah
- Department of Speech and Hearing Sciences, Lamar University, Beaumont, TX, USA
- Department of Speech and Hearing, School of Allied Health Sciences, Manipal University, Manipal, India
- Audiology India, Mysore, India
- Sarah Granberg
- The Swedish Institute for Disability Research (SIDR), School of Health Sciences, Örebro University, Örebro, Sweden
- Audiological Research Center, Örebro University Hospital, Örebro, Sweden
- Vibhu Grover
- Department of Speech and Hearing Sciences, Lamar University, Beaumont, TX, USA
- Deborah Ann Hall
- NIHR Biomedical Research Centre, University of Nottingham, Nottingham, UK
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, UK
- Queens Medical Centre, Nottingham University Hospitals NHS Trust, Nottingham, UK
- University of Nottingham Malaysia, Semenyih, Malaysia
15
Oliffe M, Thompson E, Johnston J, Freeman D, Bagga H, Wong PKK. Assessing the readability and patient comprehension of rheumatology medicine information sheets: a cross-sectional Health Literacy Study. BMJ Open 2019; 9:e024582. [PMID: 30813117 PMCID: PMC6377552 DOI: 10.1136/bmjopen-2018-024582] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/03/2018] [Revised: 12/06/2018] [Accepted: 12/27/2018] [Indexed: 12/12/2022] Open
Abstract
OBJECTIVES Patients are often provided with medicine information sheets (MIS). However, up to 60% of patients have low health literacy. The recommended readability level for health-related information is ≤grade 8. We sought to assess the readability of MIS given to patients by rheumatologists in Australia, the UK and Canada and to examine Australian patient comprehension of these documents. DESIGN Cross-sectional study. SETTING Community-based regional rheumatology practice. PARTICIPANTS Random sample of patients attending the rheumatology practice. OUTCOME MEASURES Readability of MIS was assessed using readability formulae (Flesch Reading Ease formula, Simple Measure of Gobbledygook scale, FORCAST (named after the authors FORd, CAylor, STicht) and the Gunning Fog scale). Literal comprehension was assessed by asking patients to read various Australian MIS and immediately answer five simple multiple choice questions about the MIS. RESULTS The mean (±SD) grade levels for the MIS from Australia, the UK and Canada were 11.6±0.1, 11.8±0.1 and 9.7±0.1, respectively. The Flesch Reading Ease scores for the Australian (50.8±0.6) and UK (48.5±1.5) MIS classified the documents as 'fairly difficult' to 'difficult'. The Canadian MIS (66.1±1.0) were classified as 'standard'. The five questions assessing comprehension were correctly answered by 9/21 patients for the adalimumab MIS, 7/11 for the methotrexate MIS, 6/28 for the non-steroidal anti-inflammatory MIS, 10/11 for the prednisone MIS and 13/24 for the abatacept MIS. CONCLUSIONS The readability of MIS used by rheumatologists in Australia, the UK and Canada exceeds the grade 8 level, which may explain why patient literal comprehension of these documents was poor. Simpler, shorter MIS with pictures and infographics may improve patient comprehension, and this may lead to improved medication adherence and better health outcomes.
Affiliation(s)
- Michael Oliffe
- Mid-North Coast Arthritis Clinic, Coffs Harbour, New South Wales, Australia
- Emma Thompson
- University of New South Wales Rural Clinical School, Coffs Harbour, New South Wales, Australia
- Jenny Johnston
- School of Education, Southern Cross University, Coffs Harbour, New South Wales, Australia
- Dianne Freeman
- Mid-North Coast Arthritis Clinic, Coffs Harbour, New South Wales, Australia
- Hanish Bagga
- Mid-North Coast Arthritis Clinic, Coffs Harbour, New South Wales, Australia
- Peter K K Wong
- Mid-North Coast Arthritis Clinic, Coffs Harbour, New South Wales, Australia
- University of New South Wales Rural Clinical School, Coffs Harbour, New South Wales, Australia