1.
Lim B, Sen S. A cross-sectional quantitative analysis of the readability and quality of online resources regarding thumb carpometacarpal joint replacement surgery. J Hand Microsurg 2024;16:100119. PMID: 39234384; PMCID: PMC11369735; DOI: 10.1016/j.jham.2024.100119.
Abstract
Background Thumb carpometacarpal (CMC) joint osteoarthritis is a common degenerative condition that affects up to 15% of the population older than 30 years. Poor readability of online health resources has been associated with misinformation, inappropriate care, incorrect self-treatment, worse health outcomes, and increased waste of healthcare resources. This study aimed to assess the readability and quality of online information regarding thumb CMC joint replacement surgery. Methods The terms "thumb joint replacement surgery", "thumb carpometacarpal joint replacement surgery", "thumb cmc joint replacement surgery", "thumb arthroplasty", "thumb carpometacarpal arthroplasty", and "thumb cmc arthroplasty" were searched in Google and Bing. Readability was determined using the Flesch Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL). A FRES >65 or a grade level of sixth grade or below was considered acceptable. Quality was assessed using the Patient Education Materials Assessment Tool (PEMAT) and a modified DISCERN tool. PEMAT scores below 70 were considered poorly understandable and poorly actionable. Results A total of 34 websites underwent quantitative analysis. The average FRES was 54.60 ± 7.91 (range 30.30-67.80). Only 3 (8.82%) websites had a FRES >65. The average FKGL was 8.19 ± 1.80 (range 5.60-12.90). Only 3 (8.82%) websites were written at or below a sixth-grade level. The average PEMAT percentage scores for understandability and actionability were 76.82 ± 9.43 (range 61.54-93.75) and 36.18 ± 24.12 (range 0.00-60.00), respectively. Although 22 (64.71%) websites met the acceptable standard of 70% for understandability, none met the acceptable standard of 70% for actionability. The average total DISCERN score was 32.00 ± 4.29 (range 24.00-42.00). Conclusions Most websites reviewed were written above recommended reading levels. Most showed acceptable understandability, but none showed acceptable actionability. To avoid the negative consequences of poor patient understanding of online resources, providers of these resources should optimise accessibility for the average reader by using simple words, avoiding jargon, and analysing texts with readability software before publishing the materials online. Websites should also use visual aids and provide clearer pre-operative and post-operative instructions.
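Both FRES and FKGL reduce to simple functions of sentence, word, and syllable counts. As a rough illustration only (not the validated readability software the study used; the syllable counter here is a crude vowel-group heuristic), the two formulas can be computed as follows:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups, then trim a common silent final "e".
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and count > 1:
        count -= 1
    return max(count, 1)

def flesch_scores(text: str):
    # FRES = 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    # FKGL = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    fres = 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
    fkgl = 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
    return fres, fkgl
```

Short, monosyllabic sentences push FRES up and FKGL down; long sentences with polysyllabic words do the opposite, which is why dense medical prose scores above the sixth-grade target.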
Affiliation(s)
- Brandon Lim, Trinity College Dublin, School of Medicine, Dublin, Ireland
- Suddhajit Sen, Department of Trauma and Orthopaedic Surgery, Raigmore Hospital, Inverness, UK
2.
Chang M, Weiss B, Worrell S, Hsu CH, Ghaderi I. Readability of online patient education material for foregut surgery. Surg Endosc 2024;38:5259-5265. PMID: 39009725; DOI: 10.1007/s00464-024-11042-z.
Abstract
INTRODUCTION Health literacy is the ability of individuals to use basic health information and services to make well-informed decisions. Low health literacy among surgical patients has been associated with nonadherence to preoperative and/or discharge instructions as well as poor comprehension of surgery. It likely poses a barrier to patients considering foregut surgery, which requires an understanding of different treatment options and specific diet instructions. The objective of this study was to assess and compare the readability of online patient education materials (PEM) for foregut surgery. METHODS Using Google, the terms "anti-reflux surgery," "GERD surgery," and "foregut surgery" were searched, and a total of 30 webpages from universities and national organizations were selected. The readability of the text was assessed with seven instruments: Flesch Reading Ease formula (FRE), Gunning Fog (GF), Flesch-Kincaid Grade Level (FKGL), Coleman-Liau Index (CL), Simple Measure of Gobbledygook (SMOG), Automated Readability Index (ARI), and Linsear Write Formula (LWF). Mean readability scores were calculated with standard deviations. We performed a qualitative analysis gathering characteristics such as the type of information (preoperative or postoperative), organization, use of multimedia, and inclusion of a version in another language. RESULTS The overall average readability of the top PEM for foregut surgery was 12th grade. Only one resource was at the recommended sixth-grade reading level. Nearly half of the PEM included some form of multimedia. CONCLUSIONS The American Medical Association and National Institutes of Health have recommended that PEM be written at the 5th-6th grade level. The majority of online PEM for foregut surgery is above the recommended reading level. This may be a barrier for patients seeking foregut surgery. Surgeons should be aware of potential gaps in their patients' understanding to help them make informed decisions and improve overall health outcomes.
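Of the seven instruments listed, the Coleman-Liau Index is the simplest to reproduce because it needs no syllable counting, only letters, words, and sentences. A minimal sketch, for illustration rather than as the scoring software used in the study:

```python
import re

def coleman_liau(text: str) -> float:
    # CLI = 0.0588*L - 0.296*S - 15.8, where
    # L = mean letters per 100 words, S = mean sentences per 100 words.
    words = re.findall(r"[A-Za-z]+", text)
    letters = sum(len(w) for w in words)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    n = len(words)
    L = letters / n * 100
    S = sentences / n * 100
    return 0.0588 * L - 0.296 * S - 15.8
```

Because it relies on letter counts rather than syllables, the index is easy to compute mechanically, which is one reason it appears alongside syllable-based formulas in multi-instrument studies like this one.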
Affiliation(s)
- Michelle Chang, College of Medicine, Department of Surgery, University of Arizona, Tucson, USA
- Barry Weiss, College of Medicine, Department of Family and Community Medicine, University of Arizona, Tucson, USA
- Stephanie Worrell, College of Medicine, Department of Surgery, University of Arizona, Tucson, USA
- Chiu-Hsieh Hsu, College of Public Health, Department of Epidemiology and Biostatistics, University of Arizona, Tucson, USA
- Iman Ghaderi, College of Medicine, Department of Surgery, University of Arizona, Tucson, USA
3.
Coluk Y, Senocak MI. Patient education in the digital age: An analysis of quality and readability of online information on rhinoplasty. Medicine (Baltimore) 2024;103:e39229. PMID: 39121316; PMCID: PMC11315473; DOI: 10.1097/md.0000000000039229.
Abstract
This study aimed to investigate the quality and readability of online rhinoplasty information provided on Turkish websites. We searched for the terms "rhinoplasty" (rinoplasti) and "nose job" (burun estetiği) in Turkish using the Google search engine in May 2023. The first 30 sites for each term were included in the evaluation. We used the DISCERN tool to evaluate quality and the Atesman and Cetinkaya-Uzun formulas to assess readability. According to the Atesman formula, all the websites were moderately difficult to read. According to the Cetinkaya-Uzun formula, the websites were at the instructional reading level. The mean total DISCERN score was 2.33 ± 0.60, indicating poor quality. No statistically significant correlations were found between the Atesman or Cetinkaya-Uzun readability scores and the DISCERN scores across all websites (P > .05). Our analysis revealed key areas in which Turkish websites can improve the quality and readability of rhinoplasty information to support decision-making.
Affiliation(s)
- Yonca Coluk, Department of Otorhinolaryngology, Faculty of Medicine, Giresun University, Giresun, Turkey
- Muhammed Irfan Senocak, Department of Otorhinolaryngology, Faculty of Medicine, Giresun University, Giresun, Turkey
4.
Musheyev D, Pan A, Kabarriti AE, Loeb S, Borin JF. Quality of Information About Kidney Stones from Artificial Intelligence Chatbots. J Endourol 2024. PMID: 39001821; DOI: 10.1089/end.2023.0484.
Abstract
Introduction: Kidney stones are a common and morbid condition in the general population, with a rising incidence globally. Previous studies show substantial limitations of online sources of information regarding prevention and treatment. The objective of this study was to examine the quality of information on kidney stones from artificial intelligence (AI) chatbots. Methods: The most common online searches about kidney stones from Google Trends and headers from the National Institute of Diabetes and Digestive and Kidney Diseases website were used as inputs to four AI chatbots (ChatGPT version 3.5, Perplexity, Chat Sonic, and Bing AI). Validated instruments were used to assess the quality (DISCERN instrument, from 1 [low] to 5 [high]), understandability, and actionability (PEMAT, from 0% to 100%) of the chatbot outputs. In addition, we examined the reading level of the information and whether it contained misinformation compared with guidelines (5-point Likert scale). Results: AI chatbots generally provided high-quality consumer health information (median DISCERN 4 out of 5) and did not include misinformation (median 1 out of 5). Understandability was moderate (median 69.6%), and actionability was moderate to poor (median 40%). Responses were presented at an advanced reading level (11th grade; median Flesch-Kincaid score 11.3). Conclusions: AI chatbots provide generally accurate information on kidney stones and lack misinformation; however, the information is not easily actionable and is presented above the recommended reading level for consumer health information.
Affiliation(s)
- David Musheyev, Department of Urology, State University of New York Downstate Health Sciences University, Brooklyn, New York, USA
- Alexander Pan, Department of Urology, State University of New York Downstate Health Sciences University, Brooklyn, New York, USA
- Abdo E Kabarriti, Department of Urology, State University of New York Downstate Health Sciences University, Brooklyn, New York, USA
- Stacy Loeb, Department of Urology, New York University Grossman School of Medicine, New York, New York, USA; Department of Surgery, Manhattan Veterans Affairs Medical Center, New York, New York, USA; Department of Population Health, New York University Grossman School of Medicine, New York, New York, USA
- James F Borin, Department of Urology, New York University Grossman School of Medicine, New York, New York, USA; Department of Surgery, Manhattan Veterans Affairs Medical Center, New York, New York, USA
5.
Flaifl Y, Hassona Y, Altoum D, Flaifl N, Taimeh D. Online information about oral health in autism spectrum disorder: Is it good enough? Spec Care Dentist 2024. PMID: 39044329; DOI: 10.1111/scd.13045.
Abstract
INTRODUCTION The use of the internet has surged significantly over the years. Patients with autism spectrum disorder (ASD) and their caregivers might consult the internet for oral health-related information. Hence, this study aimed to assess the quality and readability of online information available in the English language regarding oral health in ASD. METHODS An online search using Google.com was conducted using the terms "Autism and dental care," "Autism and oral health," and "Autism and dentistry". The first 100 websites for each term were screened. Quality of information was assessed using the Patient Education Materials Assessment Tool for printed material (PEMAT-P) and the Journal of the American Medical Association (JAMA) benchmarks. A PEMAT score higher than 70% is considered acceptable for understandability and actionability. The JAMA benchmarks are authorship, attribution, disclosure, and currency. Readability was evaluated using the Flesch Reading Ease score and the Simple Measure of Gobbledygook (SMOG) readability formula. RESULTS Of the 300 screened websites, 66 were eventually included. The mean PEMAT understandability and actionability scores were 77.13% and 42.12%, respectively. Only 12.1% of the websites displayed all four JAMA benchmarks. The mean Flesch score corresponded to a 10th-12th grade reading level, and the mean SMOG score to a 10th grade level. CONCLUSION While the understandability of the information was acceptable, the readability and actionability were too challenging for lay people. Healthcare professionals and organizations involved in patient education should invest more effort in improving the quality of online information targeting patients with ASD.
Affiliation(s)
- Yara Flaifl, Department of Oral and Maxillofacial Surgery, Oral Medicine and Periodontology, School of Dentistry, The University of Jordan, Amman, Jordan
- Yazan Hassona, Faculty of Dentistry, Al-Ahliyya Amman University, Amman, Jordan; School of Dentistry, The University of Jordan, Amman, Jordan
- Dana Altoum, Department of Oral and Maxillofacial Surgery, Oral Medicine and Periodontology, School of Dentistry, The University of Jordan, Amman, Jordan
- Nada Flaifl, School of Dentistry, The University of Jordan, Amman, Jordan
- Dina Taimeh, Department of Oral and Maxillofacial Surgery, Oral Medicine and Periodontology, School of Dentistry, The University of Jordan, Amman, Jordan
6.
Musheyev D, Pan A, Gross P, Kamyab D, Kaplinsky P, Spivak M, Bragg MA, Loeb S, Kabarriti AE. Readability and Information Quality in Cancer Information From a Free vs Paid Chatbot. JAMA Netw Open 2024;7:e2422275. PMID: 39058491; PMCID: PMC11282443; DOI: 10.1001/jamanetworkopen.2024.22275.
Abstract
Importance The mainstream use of chatbots requires a thorough investigation of their readability and quality of information. Objective To identify differences in readability and information quality between a free and a paywalled chatbot's cancer-related responses, and to explore whether more precise prompting can mitigate any observed differences. Design, Setting, and Participants This cross-sectional study compared the readability and information quality of a chatbot's free vs paywalled responses to Google Trends' top 5 search queries associated with breast, lung, prostate, colorectal, and skin cancers from January 1, 2021, to January 1, 2023. Data were extracted from the search tracker, and responses were produced by free and paywalled ChatGPT. Data were analyzed from December 20, 2023, to January 15, 2024. Exposures Free vs paywalled chatbot outputs with and without the prompt: "Explain the following at a sixth grade reading level: [nonprompted input]." Main Outcomes and Measures The primary outcome measured the readability of a chatbot's responses using Flesch Reading Ease scores (0 [graduate reading level] to 100 [easy fifth grade reading level]). Secondary outcomes included assessing consumer health information quality with the validated DISCERN instrument (overall score from 1 [low quality] to 5 [high quality]) for each response. Scores were compared between the 2 chatbot models with and without prompting. Results This study evaluated 100 chatbot responses. Nonprompted free chatbot responses had lower readability (median [IQR] Flesch Reading Ease score, 52.60 [44.54-61.46]) than nonprompted paywalled chatbot responses (62.48 [54.83-68.40]) (P < .05). However, prompting the free chatbot to reword responses at a sixth grade reading level yielded higher reading ease scores than the paywalled chatbot's nonprompted responses (median [IQR], 71.55 [68.20-78.99]) (P < .001). Prompting was associated with increases in reading ease in both the free (median [IQR], 71.55 [68.20-78.99]; P < .001) and paywalled versions (median [IQR], 75.64 [70.53-81.12]; P < .001). There was no significant difference in overall DISCERN scores between the chatbot models, with or without prompting. Conclusions and Relevance In this cross-sectional study, paying for the chatbot was found to provide easier-to-read responses, but prompting the free version of the chatbot was associated with increased response readability without changing information quality. Educating the public on how to prompt chatbots may help promote equitable access to health information.
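The study's intervention is a fixed prompt template wrapped around each search query. A trivial sketch of that wrapping (the function name and example query are illustrative; only the template text comes from the study):

```python
def sixth_grade_prompt(query: str) -> str:
    # Wraps a consumer health query in the prompt template reported
    # in the study; the function name itself is hypothetical.
    return f"Explain the following at a sixth grade reading level: {query}"

# Example: one of the kinds of cancer-related queries the study drew
# from Google Trends (the specific query here is illustrative).
prompt = sixth_grade_prompt("What are the warning signs of skin cancer?")
```

The design choice worth noting is that the template is applied uniformly to every query, so readability gains can be attributed to the prompt rather than to per-query tuning.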
Affiliation(s)
- David Musheyev, Department of Urology, State University of New York Downstate Health Sciences University, New York
- Alexander Pan, Department of Urology, State University of New York Downstate Health Sciences University, New York
- Preston Gross, Department of Urology, State University of New York Downstate Health Sciences University, New York
- Daniel Kamyab, Department of Urology, State University of New York Downstate Health Sciences University, New York
- Peter Kaplinsky, Department of Urology, State University of New York Downstate Health Sciences University, New York
- Mark Spivak, Department of Urology, State University of New York Downstate Health Sciences University, New York
- Marie A. Bragg, Department of Urology, New York University and Manhattan Veterans Affairs, New York; Marketing Department, Stern School of Business, New York University, New York; Department of Population Health, New York University, New York
- Stacy Loeb, Department of Urology, New York University and Manhattan Veterans Affairs, New York; Department of Population Health, New York University, New York
- Abdo E. Kabarriti, Department of Urology, State University of New York Downstate Health Sciences University, New York
7.
Ghanem D, Covarrubias O, Maxson R, Sabharwal S, Shafiq B. Readability of Trauma-related Patient Education Materials From the American Academy of Orthopaedic Surgeons and Orthopaedic Trauma Association Websites. J Am Acad Orthop Surg 2024;32:e642-e650. PMID: 38684136; DOI: 10.5435/jaaos-d-23-00449.
Abstract
INTRODUCTION Web-based resources serve as a fundamental educational platform for orthopaedic trauma patients; however, they are frequently written above the recommended sixth-grade reading level, and previous studies have demonstrated this for the American Academy of Orthopaedic Surgeons (AAOS) web-based articles. In this study, we perform an updated assessment of the readability of AAOS trauma-related educational articles as compared with injury-matched education materials developed by the Orthopaedic Trauma Association (OTA). METHODS All 46 AAOS trauma-related web-based (https://www.orthoinfo.org/) patient education articles were analyzed for readability. Two independent reviewers used (1) the Flesch-Kincaid Grade Level (FKGL) and (2) the Flesch Reading Ease (FRE) algorithms to calculate the readability level. Mean readability scores were compared across body part categories. A one-sample t-test was used to compare the mean FKGL with the recommended sixth-grade readability level and the average American adult reading level. A two-sample t-test was used to compare the readability scores of the AAOS trauma-related articles with those of the OTA. RESULTS The average (SD) FKGL and FRE for the AAOS articles were 8.9 (0.74) and 57.2 (5.8), respectively. All articles were written above the sixth-grade reading level. The average readability of the AAOS articles was significantly greater than the recommended sixth-grade reading level (P < 0.001). The AAOS articles had a significantly higher FKGL (8.9 ± 0.74 versus 8.1 ± 1.14, P < 0.001) and a significantly lower FRE (57.2 ± 5.8 versus 65.6 ± 6.6, P < 0.001) than the OTA articles. Excellent agreement was observed between raters for the FKGL (0.956; 95% confidence interval, 0.922 to 0.975) and FRE (0.993; 95% confidence interval, 0.987 to 0.996). DISCUSSION Our findings suggest that after almost a decade, the readability of the AAOS trauma-related articles remains unchanged. The AAOS and OTA trauma patient education materials are written at high reading grade levels and may be too difficult for patient comprehension. A need remains to improve the readability of these commonly used trauma education materials.
Affiliation(s)
- Diane Ghanem, Department of Orthopaedic Surgery, The Johns Hopkins Hospital (Ghanem, Sabharwal, and Shafiq), and the School of Medicine, The Johns Hopkins University, Baltimore, MD (Covarrubias and Maxson)
8.
Meade MJ, Jensen S, Ju X, Hunter D, Jamieson L. Clear aligner therapy informed consent forms: A quality and readability evaluation. Int Orthod 2024;22:100873. PMID: 38713930; DOI: 10.1016/j.ortho.2024.100873.
Abstract
OBJECTIVE The aim of the present study was to evaluate the quality and readability of content contained within clear aligner therapy (CAT) informed consent forms. METHODS CAT informed consent forms were identified via an online search. The presence of details related to CAT-related processes, risks, benefits, and alternatives in each form was recorded. A 4-point Likert-type scale was used to determine the quality of content (QOC). The readability of content was evaluated with the Simple Measure of Gobbledegook (SMOG) and the Flesch Reading Ease Score (FRES). RESULTS A total of 42 forms satisfied the selection criteria. Nineteen (45.2%) were authored by companies that provided aligners to patients via clinicians. The QOC regarding CAT-related treatment processes [median 2.0; IQR 0, 2] and benefits [median 2.0; IQR 1, 2] was adequate. The QOC scores regarding treatment alternatives, consequences of no treatment, and relapse were poor. There was no difference (P=0.59) in the median (IQR) QOC of the informed consent forms provided by direct-to-consumer (DTC) aligner providers [10 (8.25, 16.25)] and non-DTC aligner providers [12 (10, 14)]. The median (IQR) SMOG score was 12.1 (10.9, 12.7) and the FRES was 39.0 (36.0, 44.25). CONCLUSIONS The QOC of the evaluated forms was incomplete and poor. The content was difficult to read and failed to reach recommended readability standards. Consent is unlikely to be valid if it is based solely on the content of these forms. Clinicians need to be aware of the limitations of informed consent forms for CAT, particularly in relation to alternatives, prognosis, risks, and the need for long-term maintenance of results.
Affiliation(s)
- Maurice J Meade, Orthodontic Unit, Adelaide Dental School, University of Adelaide, South Australia, Australia
- Sven Jensen, Orthodontic Unit, Adelaide Dental School, University of Adelaide, South Australia, Australia
- Xiangqun Ju, Australian Research Centre for Population Oral Health, Adelaide Dental School, University of Adelaide, South Australia, Australia
- David Hunter, Adelaide Faculty of Health and Medical Sciences, University of Adelaide, South Australia, Australia
- Lisa Jamieson, Australian Research Centre for Population Oral Health, Adelaide Dental School, University of Adelaide, South Australia, Australia
9.
Baldwin AJ. An artificial intelligence language model improves readability of burns first aid information. Burns 2024;50:1122-1127. PMID: 38492982; DOI: 10.1016/j.burns.2024.03.005.
Abstract
AIMS This study aimed to assess the potential of using an artificial intelligence (AI) large language model to improve the readability of burns first aid information. METHODS An AI language model (ChatGPT-3) was used to rewrite content from the top 50 English-language webpages containing burns first aid information so that it would be understandable by an individual with the literacy level of an 11-year-old, as recommended by the American Medical Association and Health Education England. Readability was assessed using five validated tools. RESULTS In their original form, only 4% of the patient education materials (PEMs) met the target readability level across all tools. The median grade was 6.9 (SD=1.1). A one-sample one-tailed t-test revealed that this was not significantly below the target (p = .31). After AI modification, 18% of PEMs reached the target level using all tools, with a median grade of 6 (SD=0.9), which was significantly below the target level (p < .001). Once the PEMs were rewritten using AI, a paired t-test demonstrated that all readability scores improved significantly (p < .001). CONCLUSION Utilising an AI language model proved an effective and viable method for enhancing the readability of burns first aid information.
Affiliation(s)
- Alexander J Baldwin, Department of Burns and Plastic Surgery, Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK
10.
Zhou Z, Besson AJ, Hayes D, Yeung JMC. Ostomy Information on the Internet-Is It Good Enough? J Wound Ostomy Continence Nurs 2024;51:199-205. PMID: 38820217; DOI: 10.1097/won.0000000000001077.
Abstract
PURPOSE The aim of this study was to determine which internet search engines and keywords patients with ostomies utilize, to identify the common websites retrieved using these terms, to determine what aspects of information patients wanted, and to perform a quality and readability assessment of these websites. DESIGN A cross-sectional survey of persons with ostomies to identify search engines and terms, followed by a structured assessment of the quality and readability of the identified web pages. SUBJECTS AND SETTINGS The sample comprised 20 hospitalized patients with ostomies cared for on a colorectal surgical ward of a tertiary care hospital located in Melbourne, Australia. There were 15 (75%) adult males and 5 (25%) adult females; their mean age was 52.2 years. Participants were surveyed between August and December 2020. METHODS Patients with newly formed ostomies were surveyed about which search engines and keywords they would use to look for information and about which questions regarding ostomies they wanted answered. Two researchers then performed independent searches using the search terms identified by patient participants. These searches were conducted in August 2021, with the geographical location set to Australia. The quality of the websites was graded using the DISCERN, Ensuring Quality Information for Patients, and Quality Evaluation Scoring Tool assessments, and their readability was graded using the Flesch Reading Ease Score tool. RESULTS Participants used Google as their primary search engine. Four keywords/phrases were identified: stoma for bowel surgery, ileostomy, colostomy, and caring for stoma. Of the web pages identified, 8 (21%) originated from Australia, 7 (18%) from the United Kingdom, and 23 (61%) from the United States. Most web pages lacked recent updates; only 18% had been updated within the last 12 months. The overall quality of the online information on ostomies was moderate, with an average level of readability deemed suitable for patient educational purposes. CONCLUSIONS Information for persons living with an ostomy can be obtained from multiple web pages, and many sites have reasonable quality and are written at a suitable level. Unfortunately, these websites are rarely up-to-date and may contain advice that is not applicable to individual patients.
Affiliation(s)
- Zheyi Zhou, MD, Department of Surgery, Western Health, Footscray, Victoria, Australia
- Alex J. Besson, MD, Department of Surgery, Western Health, Footscray, Victoria, Australia
- Diana Hayes, Department of Colorectal Surgery, Western Health, Footscray, Victoria, Australia
- Justin M.C. Yeung, DM, FRCSEd (Gen Surg), FRACS, Department of Surgery, Western Health, Footscray, Victoria, Australia; Department of Colorectal Surgery, Western Health, Footscray, Victoria, Australia; Department of Surgery, Western Precinct, University of Melbourne, Parkville, Victoria, Australia; and Western Health Chronic Disease Alliance, Western Health, Footscray, Victoria, Australia
11.
Garcia Valencia OA, Thongprayoon C, Miao J, Suppadungsuk S, Krisanapan P, Craici IM, Jadlowiec CC, Mao SA, Mao MA, Leeaphorn N, Budhiraja P, Cheungpasitporn W. Empowering inclusivity: improving readability of living kidney donation information with ChatGPT. Front Digit Health 2024;6:1366967. PMID: 38659656; PMCID: PMC11039889; DOI: 10.3389/fdgth.2024.1366967.
Abstract
Background Addressing disparities in living kidney donation requires making information accessible across literacy levels, especially important given that the average American adult reads at an 8th-grade level. This study evaluated the effectiveness of ChatGPT, an advanced AI language model, in simplifying living kidney donation information to an 8th-grade reading level or below. Methods We used ChatGPT versions 3.5 and 4.0 to modify 27 questions and answers from Donate Life America, a key resource on living kidney donation. We measured the readability of both original and modified texts using the Flesch-Kincaid formula. A paired t-test was conducted to assess changes in readability levels, and a statistical comparison between the two ChatGPT versions was performed. Results Originally, the FAQs had an average reading level of 9.6 ± 1.9. Post-modification, ChatGPT 3.5 achieved an average readability level of 7.72 ± 1.85, while ChatGPT 4.0 reached 4.30 ± 1.71, both with a p-value <0.001 indicating significant reduction. ChatGPT 3.5 made 59.26% of answers readable below 8th-grade level, whereas ChatGPT 4.0 did so for 96.30% of the texts. The grade level range for modified answers was 3.4-11.3 for ChatGPT 3.5 and 1-8.1 for ChatGPT 4.0. Conclusion Both ChatGPT 3.5 and 4.0 effectively lowered the readability grade levels of complex medical information, with ChatGPT 4.0 being more effective. This suggests ChatGPT's potential role in promoting diversity and equity in living kidney donation, indicating scope for further refinement in making medical information more accessible.
Affiliation(s)
- Oscar A. Garcia Valencia, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States
- Charat Thongprayoon, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States
- Jing Miao, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States
- Supawadee Suppadungsuk, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States; Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan, Thailand
- Pajaree Krisanapan, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States; Division of Nephrology, Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani, Thailand; Division of Nephrology, Department of Internal Medicine, Thammasat University Hospital, Pathum Thani, Thailand
- Iasmina M. Craici, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States
- Caroline C. Jadlowiec, Division of Transplant Surgery, Department of Surgery, Mayo Clinic, Phoenix, AZ, United States
- Shennen A. Mao, Division of Transplant Surgery, Department of Transplant, Mayo Clinic, Jacksonville, FL, United States
- Michael A. Mao, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Jacksonville, FL, United States
- Napat Leeaphorn, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Jacksonville, FL, United States
- Pooja Budhiraja, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Phoenix, AZ, United States
- Wisit Cheungpasitporn, Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, United States
12
McCoy MS, Wu A, Burdyl S, Kim Y, Smith NK, Gonzales R, Friedman AB. User Information Sharing and Hospital Website Privacy Policies. JAMA Netw Open 2024; 7:e245861. PMID: 38602678; PMCID: PMC11009820; DOI: 10.1001/jamanetworkopen.2024.5861.
Abstract
Importance Hospital websites frequently use tracking technologies that transfer user information to third parties. It is not known whether hospital websites include privacy policies that disclose relevant details regarding tracking. Objective To determine whether hospital websites have accessible privacy policies and whether those policies contain key information related to third-party tracking. Design, Setting, and Participants In this cross-sectional content analysis of website privacy policies of a nationally representative sample of nonfederal acute care hospitals, hospital websites were first measured to determine whether they included tracking technologies that transferred user information to third parties. Hospital website privacy policies were then identified using standardized searches. Policies were assessed for length and readability. Policy content was analyzed using a data abstraction form. Tracking measurement and privacy policy retrieval and analysis took place from November 2023 to January 2024. The prevalence of privacy policy characteristics was analyzed using standard descriptive statistics. Main Outcomes and Measures The primary study outcome was the availability of a website privacy policy. Secondary outcomes were the length and readability of privacy policies and the inclusion of privacy policy content addressing user information collected by the website, potential uses of user information, third-party recipients of user information, and user rights regarding tracking and information collection. Results Of 100 hospital websites, 96 (96.0%; 95% CI, 90.1%-98.9%) transferred user information to third parties. Privacy policies were found on 71 websites (71.0%; 95% CI, 61.6%-79.4%). Policies were a mean length of 2527 words (95% CI, 2058-2997 words) and were written at a mean grade level of 13.7 (95% CI, 13.4-14.1). 
Among 71 privacy policies, 69 (97.2%; 95% CI, 91.4%-99.5%) addressed types of user information automatically collected by the website, 70 (98.6%; 95% CI, 93.8%-99.9%) addressed how collected information would be used, 66 (93.0%; 95% CI, 85.3%-97.5%) addressed categories of third-party recipients of user information, and 40 (56.3%; 95% CI, 44.5%-67.7%) named specific third-party companies or services receiving user information. Conclusions and Relevance In this cross-sectional study of hospital website privacy policies, a substantial number of hospital websites did not present users with adequate information about the privacy implications of website use, either because they lacked a privacy policy or had a privacy policy that contained limited content about third-party recipients of user information.
Affiliation(s)
- Matthew S. McCoy, Department of Medical Ethics and Health Policy, University of Pennsylvania, Philadelphia; Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Angela Wu, Carey Law School, University of Pennsylvania, Philadelphia
- Sam Burdyl, Carey Law School, University of Pennsylvania, Philadelphia
- Yungjee Kim, Carey Law School, University of Pennsylvania, Philadelphia
- Noell Kristen Smith, Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia
- Rachel Gonzales, Department of Emergency Medicine, University of Pennsylvania, Philadelphia
- Ari B. Friedman, Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia; Department of Emergency Medicine, University of Pennsylvania, Philadelphia
13
Morse E, Odigie E, Gillespie H, Rameau A. The Readability of Patient-Facing Social Media Posts on Common Otolaryngologic Diagnoses. Otolaryngol Head Neck Surg 2024; 170:1051-1058. PMID: 38018504; DOI: 10.1002/ohn.584.
Abstract
OBJECTIVE To assess the readability of patient-facing educational information about the most common otolaryngology diagnoses on popular social media platforms. STUDY DESIGN Cross-sectional study. SETTING Social media platforms. METHODS The top 5 otolaryngologic diagnoses were identified from the National Ambulatory Medical Care Survey Database. Facebook, Twitter, TikTok, and Instagram were searched using these terms, and the top 25 patient-facing posts from unique accounts for each search term and poster type (otolaryngologist, other medical professional, layperson) were identified. Captions, text and audio from images and video, and linked articles were extracted. The readability of each post element was calculated with multiple readability formulae. Readability was summarized and compared across poster types, platforms, and search terms via Kruskal-Wallis testing. RESULTS Median readability, by grade level, was greater than 10 for captions, 5 for image-associated text, and 9 for linked articles. Captions and images in posts by laypeople were significantly more readable than captions by otolaryngologists or other medical professionals, but there was no difference for linked articles. All post components were more readable in posts about cerumen than those about other search terms. CONCLUSIONS When examining the readability of posts on social media regarding the most common otolaryngology diagnoses, we found that many posts are less readable than recommended for patients, and that posts by laypeople were significantly more readable than those by medical professionals. Medical professionals should work to make educational social media posts more readable to facilitate patient comprehension.
Affiliation(s)
- Elliot Morse, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Eseosa Odigie, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA; Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
- Helen Gillespie, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA; Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
- Anaïs Rameau, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA; Sean Parker Institute for the Voice, Weill Cornell Medicine, New York, New York, USA
14
Marshall S, Hanish SJ, Baumann J, Groneck A, DeFroda S. A standardised method for improving patient education material readability for orthopaedic trauma patients. Musculoskeletal Care 2024; 22:e1869. PMID: 38367003; DOI: 10.1002/msc.1869.
Abstract
PURPOSE While the National Institutes of Health and American Medical Association recommend patient education materials (PEMs) should be written at the sixth-grade reading level or below, many patient education materials related to traumatic orthopaedic injuries do not meet these recommendations. The purpose of this study is to create a standardised method for enhancing the readability of trauma-related orthopaedic PEMs by reducing the use of words of three or more syllables and of sentences longer than 15 words. We hypothesise that applying this standardised method will significantly improve the objective readability of orthopaedic trauma PEMs. METHODS A patient education website was queried for PEMs relevant to traumatic orthopaedic injuries. The included orthopaedic trauma PEMs (N = 40) were unique, written in a prose format, and under 3500 words. PEM statistics, including scores for seven independent readability formulae, were determined for each PEM before and after applying this standardised method. RESULTS All PEMs had significantly different readability scores when comparing original and edited PEMs (p < 0.01). The mean Flesch Kincaid Grade Level of the original PEMs (10.0 ± 1.0) was significantly higher than that of edited PEMs (5.8 ± 1.1) (p < 0.01). None of the original PEMs met recommendations of a sixth-grade reading level compared with 31 (77.5%) of edited PEMs. CONCLUSIONS This standardised method, which reduces the use of words of three or more syllables and of sentences longer than 15 words, has been shown to significantly reduce the reading-grade level of PEMs for traumatic orthopaedic injuries. Improving the readability of PEMs may lead to enhanced health literacy and improved health outcomes.
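The paper's editing method is not published as code; a hypothetical sketch of the screening step it implies — flagging sentences longer than 15 words and words of three or more syllables, with syllables approximated by a naive vowel-group heuristic — might look like:

```python
import re

def flag_hard_spots(text: str, max_words: int = 15):
    """Return (sentences longer than max_words, words of >= 3 syllables)."""
    def syllables(word: str) -> int:
        # naive estimate: one syllable per vowel group
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    long_sentences, hard_words = [], set()
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        words = re.findall(r"[A-Za-z']+", sentence)
        if len(words) > max_words:
            long_sentences.append(sentence)
        hard_words.update(w for w in words if syllables(w) >= 3)
    return long_sentences, sorted(hard_words)
```

An editor would then shorten the flagged sentences and replace the flagged words with plainer synonyms, rechecking the grade level after each pass.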
Affiliation(s)
- Samuel Marshall, Department of Orthopaedic Surgery, University of Missouri School of Medicine, Columbia, Missouri, USA
- Stefan J Hanish, Department of Orthopaedic Surgery, University of Missouri School of Medicine, Columbia, Missouri, USA
- John Baumann, Department of Orthopaedic Surgery, University of Missouri School of Medicine, Columbia, Missouri, USA
- Andrew Groneck, Department of Orthopaedic Surgery, University of Missouri School of Medicine, Columbia, Missouri, USA
- Steven DeFroda, Department of Orthopaedic Surgery, University of Missouri School of Medicine, Columbia, Missouri, USA
15
Kayastha A, Lakshmanan K, Valentine MJ, Kramer HD, Kim J, Pettinelli N, Kramer RC. A Readability Study of Carpal Tunnel Release in 2023. Hand (N Y) 2024:15589447241232095. PMID: 38414220; DOI: 10.1177/15589447241232095.
Abstract
BACKGROUND The National Institutes of Health (NIH) and the American Medical Association (AMA) recommend a sixth-grade reading level for patient-directed content. This study aims to quantitatively evaluate the readability of online information sources related to carpal tunnel surgery using established readability indices. METHODS Web searches for "carpal tunnel release" and "carpal tunnel decompression surgery" queries were performed using Google, and the first 20 websites were identified per query. WebFX online software tools were utilized to determine readability. Indices included Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, Coleman Liau Index, Automated Readability Index, Gunning Fog Score, and the Simple Measure of Gobbledygook Index. Health-specific clickthrough rate (CTR) data were used to select the first 20 search engine results pages for each query. RESULTS "Carpal tunnel release" had a mean readability of 8.46, and "carpal tunnel decompression surgery" had a mean readability of 8.70. The range of mean readability scores among the indices used for both search queries was 6.17 to 14.0. The total mean readability for carpal tunnel surgery information was found to be 8.58. This corresponds to approximately a ninth-grade reading level in the United States. CONCLUSION The average readability of carpal tunnel surgery online content is three grade levels above the recommended sixth-grade level for patient-directed materials. This discrepancy indicates that existing online materials related to carpal tunnel surgery are more difficult to understand than the standards set by NIH and AMA.
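The indices named in this abstract are linear formulas over sentence, word, and syllable counts. As a rough illustration only (using a naive vowel-group syllable heuristic, not the WebFX tooling the study used), the two Flesch measures can be sketched as:

```python
import re

def count_syllables(word: str) -> int:
    """Crude syllable estimate: count vowel groups, with a silent-e adjustment."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith("le") and n > 1:
        n -= 1
    return max(1, n)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for a prose sample."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return round(fres, 2), round(fkgl, 2)
```

Longer sentences and more syllables per word push the Reading Ease score down and the grade level up; the recommended sixth-grade target corresponds to an FKGL of roughly 6.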
16
Haffar A, Hirsch A, Morrill C, Garcia A, Werner Z, Gearhart JP, Crigger C. Clear as Mud: Readability Scores in Cloacal Exstrophy Literature and Its Treatment. Res Rep Urol 2024; 16:39-44. PMID: 38370509; PMCID: PMC10871133; DOI: 10.2147/rru.s430744.
Abstract
Purpose This study examines the readability of online medical information regarding cloacal exstrophy (CE). We hypothesize that inappropriate levels of comprehension are required in these resources, leading to poor understanding and confusion amongst caregivers. Methods The Google and Bing search engines were used to search the terms "cloacal exstrophy" and "cloacal exstrophy treatment". The first 100 results for each were collected. Each webpage was analyzed for readability using four independent validated scoring systems: the Gunning-Fog index (GFI), SMOG grade (Simple Measure of Gobbledygook), Dale-Chall index (DCI), and the Flesch-Kincaid grade (FKG). Results Forty-seven unique webpages fit the inclusion criteria. Mean readability scores across all websites were GFI, 14.6; SMOG score, 10.8; DCI, 9.3; and FKG, 11.8, correlating to adjusted grade levels of college sophomore, 11th grade, college, and 11th grade, respectively. There were significant differences across all readability formulas. Non-profit websites were significantly less readable than institutional and commercial webpages (GFI p = 0.012, SMOG p = 0.018, DCI p = 0.021, FKG p = 0.0093). Conclusion Caregiver-directed health information regarding CE and its treatment available online is written at the 11th grade reading level or above. Online resources pertaining to CE must be simplified to be effective.
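For context, the Gunning-Fog index and SMOG grade reported above are both driven by counts of "complex" (three-or-more-syllable) words. A minimal sketch, assuming a naive vowel-group syllable heuristic rather than the validated scoring tools the authors used:

```python
import math
import re

def _syllables(word: str) -> int:
    # naive vowel-group count; production tools use exception dictionaries
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_and_fog(text: str) -> tuple[float, float]:
    """Return (SMOG grade, Gunning-Fog index) for a prose sample."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    polysyllabic = sum(1 for w in words if _syllables(w) >= 3)
    smog = 1.0430 * math.sqrt(polysyllabic * (30 / sentences)) + 3.1291
    fog = 0.4 * (len(words) / sentences + 100 * polysyllabic / len(words))
    return round(smog, 1), round(fog, 1)
```

Both grades climb quickly with polysyllabic vocabulary, which helps explain why webpages dense with anatomical and surgical terms score at the 11th-grade level or above.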
Affiliation(s)
- Ahmad Haffar, Robert D. Jeffs Division of Pediatric Urology, James Buchanan Brady Urological Institute, Johns Hopkins Hospital, Johns Hopkins Medical Institutions, Charlotte Bloomberg Children’s Hospital, Baltimore, MD, USA
- Alexander Hirsch, Robert D. Jeffs Division of Pediatric Urology, James Buchanan Brady Urological Institute, Johns Hopkins Hospital, Johns Hopkins Medical Institutions, Charlotte Bloomberg Children’s Hospital, Baltimore, MD, USA
- Christian Morrill, Robert D. Jeffs Division of Pediatric Urology, James Buchanan Brady Urological Institute, Johns Hopkins Hospital, Johns Hopkins Medical Institutions, Charlotte Bloomberg Children’s Hospital, Baltimore, MD, USA
- Adelaide Garcia, Robert D. Jeffs Division of Pediatric Urology, James Buchanan Brady Urological Institute, Johns Hopkins Hospital, Johns Hopkins Medical Institutions, Charlotte Bloomberg Children’s Hospital, Baltimore, MD, USA
- Zachary Werner, West Virginia University School of Medicine, Department of Urology, Division of Pediatric Urology, Morgantown, WV, USA
- John P Gearhart, Robert D. Jeffs Division of Pediatric Urology, James Buchanan Brady Urological Institute, Johns Hopkins Hospital, Johns Hopkins Medical Institutions, Charlotte Bloomberg Children’s Hospital, Baltimore, MD, USA
- Chad Crigger, Robert D. Jeffs Division of Pediatric Urology, James Buchanan Brady Urological Institute, Johns Hopkins Hospital, Johns Hopkins Medical Institutions, Charlotte Bloomberg Children’s Hospital, Baltimore, MD, USA
17
Pan A, Musheyev D, Loeb S, Kabarriti AE. Quality of erectile dysfunction information from ChatGPT and other artificial intelligence chatbots. BJU Int 2024; 133:152-154. PMID: 37997563; DOI: 10.1111/bju.16209.
Affiliation(s)
- Alexander Pan, Department of Urology, State University of New York Downstate Health Sciences University, New York City, NY, USA
- David Musheyev, Department of Urology, State University of New York Downstate Health Sciences University, New York City, NY, USA
- Stacy Loeb, Department of Urology, New York University and Manhattan Veterans Affairs, New York City, NY, USA; Department of Population Health, New York University, New York City, NY, USA
- Abdo E Kabarriti, Department of Urology, State University of New York Downstate Health Sciences University, New York City, NY, USA
18
Demirci AN, İncebay Ö, Köse A. Evaluation of quality and readability of internet information on voice disorders. Public Health 2024; 226:1-7. PMID: 37979233; DOI: 10.1016/j.puhe.2023.10.020.
Abstract
OBJECTIVES The purpose of this study is to evaluate the readability and quality of Internet information related to vocal health, voice disorders and voice therapy. STUDY DESIGN This is a cross-sectional study. METHODS Eighty-two websites were included. Websites were then analyzed for origin (clinic/hospital, non-profit, government), quality (Health On the Net [HON] certification and DISCERN scores) and readability (Ateşman readability formula and Bezirci-Yılmaz new readability formula). Statistical analysis examined differences in quality and readability scores by website origin, as well as correlations between the readability instruments. RESULTS Of the 82 websites, 93% originated from private clinics/hospitals, 6% from non-profit organisations and 1% from government. None of the 82 websites held HON certification, and the mean score of the DISCERN item assessing overall quality was 1.83 on a five-point scale. The mean Ateşman readability score was 50.46 (±8.16), defined as 'moderately hard' on the readability scale. The mean Bezirci-Yılmaz new readability score was 13.85 (±3.48), corresponding to a 13th- to 14th-grade level. CONCLUSIONS The quality of Internet-based health information about the voice is generally inadequate, and the sites examined in this study may be of limited value because of the high reading levels they demand. This may be a problem for people with poor literacy skills. For this reason, it is very important for speech and language therapists and other health professionals to evaluate and monitor the quality and readability of Internet-based information.
Affiliation(s)
- A N Demirci, Department of Speech and Language Therapy, Hacettepe University Faculty of Health Sciences, Hacettepe, Ankara, Turkey
- Ö İncebay, Department of Speech and Language Therapy, Hacettepe University Faculty of Health Sciences, Hacettepe, Ankara, Turkey
- A Köse, Department of Speech and Language Therapy, Hacettepe University Faculty of Health Sciences, Hacettepe, Ankara, Turkey
19
Musheyev D, Pan A, Loeb S, Kabarriti AE. How Well Do Artificial Intelligence Chatbots Respond to the Top Search Queries About Urological Malignancies? Eur Urol 2024; 85:13-16. PMID: 37567827; DOI: 10.1016/j.eururo.2023.07.004.
Abstract
Artificial intelligence (AI) chatbots are becoming a popular source of information but there are limited data on the quality of information on urological malignancies that they provide. Our objective was to characterize the quality of information and detect misinformation about prostate, bladder, kidney, and testicular cancers from four AI chatbots: ChatGPT, Perplexity, Chat Sonic, and Microsoft Bing AI. We used the top five search queries related to prostate, bladder, kidney, and testicular cancers according to Google Trends from January 2021 to January 2023 and input them into the AI chatbots. Responses were evaluated for quality, understandability, actionability, misinformation, and readability using published instruments. AI chatbot responses had moderate to high information quality (median DISCERN score 4 out of 5, range 2-5) and lacked misinformation. Understandability was moderate (median Patient Education Material Assessment Tool for Printable Materials [PEMAT-P] understandability 66.7%, range 44.4-90.9%) and actionability was moderate to poor (median PEMAT-P actionability 40%, range 0-40%). The responses were written at a fairly difficult reading level. AI chatbots produce information that is generally accurate and of moderate to high quality in response to the top urological malignancy-related search queries, but the responses lack clear, actionable instructions and exceed the reading level recommended for consumer health information. PATIENT SUMMARY: Artificial intelligence chatbots produce information that is generally accurate and of moderately high quality in response to popular Google searches about urological cancers. However, their responses are fairly difficult to read, are moderately hard to understand, and lack clear instructions for users to act on.
Affiliation(s)
- David Musheyev, Department of Urology, State University of New York Downstate Health Sciences University, New York, NY, USA
- Alexander Pan, Department of Urology, State University of New York Downstate Health Sciences University, New York, NY, USA
- Stacy Loeb, Department of Urology, New York University and Manhattan Veterans Affairs, New York, NY, USA; Department of Population Health, New York University, New York, NY, USA
- Abdo E Kabarriti, Department of Urology, State University of New York Downstate Health Sciences University, New York, NY, USA
20
Kurtz-Rossi S, Okonkwo IA, Chen Y, Dueñas N, Bilodeau T, Rushforth A, Klein A. Development of a New Tool for Writing Research Key Information in Plain Language. Health Lit Res Pract 2024; 8:e30-e37. PMID: 38466225; PMCID: PMC10923613; DOI: 10.3928/24748307-20240218-01.
Abstract
BACKGROUND The complexity of research informed consent forms makes it hard for potential study participants to make informed consent decisions. In response, new rules for human research protection require informed consent forms to begin with a key information section that potential study participants can read and understand. This research study builds on existing guidance on how to write research key information using plain language. OBJECTIVE The aim of this study was to develop a valid and reliable tool to evaluate and improve the readability, understandability, and actionability of the key information section on research informed consent forms. METHODS We developed an initial list of measures to include on the tool through literature review; established face and content validity of measures with expert input; conducted four rounds of reliability testing with four groups of reviewers; and established construct validity with potential research participants. KEY RESULTS We identified 87 candidate measures via literature review. After expert review, we included 23 items on the initial tool. Twenty-four raters conducted four rounds of reliability testing on 10 informed consent forms. After each round, we revised or eliminated items to improve agreement. In the final round of testing, 18 items demonstrated substantial inter-rater agreement per Fleiss' Kappa (average = .73) and Gwet's AC1 (average = .77). Intra-rater agreement was substantial per Cohen's Kappa (average = .74) and almost perfect per Gwet's AC1 (average = .84). Focus group feedback (N = 16) provided evidence suggesting key information was easy to read when rated as such by the Readability, Understandability and Actionability of Key Information (RUAKI) Indicator.
CONCLUSION The RUAKI Indicator is an 18-item tool with evidence of validity and reliability investigators can use to write the key information section on their informed consent forms that potential study participants can read, understand, and act on to make informed decisions. [HLRP: Health Literacy Research and Practice. 2024;8(1):e29-e37.].
Affiliation(s)
- Sabrina Kurtz-Rossi, MEd (corresponding author), Department of Public Health & Community Medicine, Tufts University School of Medicine, 136 Harrison Avenue, Boston, MA 02111
21
Baldwin AJ. Readability, accountability, and quality of burns first aid information available online. Burns 2023; 49:1823-1832. PMID: 37821277; DOI: 10.1016/j.burns.2023.03.002.
Abstract
AIM To assess the readability, accountability, and quality of burns first aid information available online. METHODS The top 50 English language webpages containing burns first aid information were compiled and categorised. Readability was measured using five validated tools. Accountability was assessed using the Journal of the American Medical Association (JAMA) benchmarks. Quality was evaluated using a scale based on previous literature. RESULTS Two (4%) webpages were judged to be at the target reading level using all tools. Median grade ranged from 4.6 to 9.6 (M = 6.9, SD = 1.1). A one-sample, one-tailed t-test determined that median grade was not significantly below the target grade of ≤ 6.9 (p = 0.314). Only seven (14%) webpages satisfied all the JAMA accountability benchmarks. No webpages fulfilled all 15 quality criteria. Mean quality score was 9.8 (SD = 2.4). Only 27 (54%) advised 20 min of cooling. A one-way analysis of variance demonstrated that accountability was influenced by source (p = 0.01). Pearson's correlation coefficient revealed that accountability and quality had a positive correlation (r = 0.32, p = 0.02). CONCLUSION Much of the burns first aid information available online is written above the recommended reading level and fails to meet standards of accountability or quality.
Affiliation(s)
- Alexander J Baldwin, Department of Burns and Plastic Surgery, Buckinghamshire Healthcare NHS Trust, Buckinghamshire, UK
22
Powell LE, Bien EM, Cohen JM, Barta RJ. Availability and Readability Level of Online Patient Education Materials Provided by Cleft Lip and Palate Teams. Cleft Palate Craniofac J 2023:10556656231213170. PMID: 37926980; DOI: 10.1177/10556656231213170.
Abstract
OBJECTIVES Evaluate the readability of online English and Spanish cleft lip and palate patient education materials. DESIGN Review of free online materials. SETTING English and Spanish language online patient education materials on cleft lip and palate were collected from American Cleft Palate-Craniofacial Association (ACPA) approved teams. PARTICIPANTS ACPA-approved teams. INTERVENTIONS English materials were analyzed using the Flesch-Kincaid, SMOG, and Coleman-Liau readability calculators. Spanish materials were analyzed using the Fry Graph, Fernandez Huerta, and INFLESZ calculators. A one-way analysis of variance (ANOVA) was used to test for variability between the readability tools. OUTCOMES Readability levels were examined for both sets of materials. RESULTS A total of 171 (90.5%) teams provided English language materials online, with an average readability score of 10.5 ± 2.9 (10th-11th grade). A total of 44 (23.2%) teams listed Spanish language materials online, with an average readability score of 7.9 ± 1.2 (8th grade). ANOVA demonstrated statistically significant variability between the readability assessment tools (P < .01). CONCLUSION Online cleft lip and palate patient education material provided by ACPA craniofacial teams was more available in English than in Spanish. Both sets of materials demonstrated readability levels above the recommended 6th-7th grade. Refining readability is associated with lowered healthcare costs and increased patient satisfaction.
Affiliation(s)
- Lauren E Powell
- Division of Plastic and Reconstructive Surgery, University of Minnesota, Minneapolis, MN, USA
- Erica M Bien
- University of Minnesota School of Medicine, Minneapolis, MN, USA
- Jade M Cohen
- University of Minnesota School of Medicine, Minneapolis, MN, USA
- Ruth J Barta
- Department of Plastic and Reconstructive Surgery, HealthPartners, Saint Paul, MN, USA
23
Gulbrandsen MT, O’Reilly OC, Gao B, Cannon D, Jesurajan J, Gulbrandsen TR, Phipatanakul WP. Health literacy in rotator cuff repair: a quantitative assessment of the understandability of online patient education material. JSES Int 2023; 7:2344-2348. [PMID: 37969518 PMCID: PMC10638567 DOI: 10.1016/j.jseint.2023.06.016]
Abstract
Background The American Medical Association and National Institutes of Health recommend online health information be written at a 6th grade or lower reading level for clear understanding. While syntax reading grade level has previously been utilized, those analyses do not determine whether readers are processing key information (understandability) or identifying available actions to take (actionability). The Patient Education Materials Assessment Tool (PEMAT-P) is a method to measure the understandability and actionability of online patient education materials. The purpose of this study was to evaluate online resources regarding rotator cuff repair utilizing measures of readability, understandability, and actionability. Methods The search term "rotator cuff surgery" was used in two independent online searches to obtain the top 50 search results. The readability of included resources was quantified using valid objective algorithms: the Flesch-Kincaid Grade Level, the Simple Measure of Gobbledygook (SMOG) grade, the Coleman-Liau Index, and the Gunning Fog Index. The PEMAT-P form was used to assess actionability and understandability. Results A total of 49 unique websites met our inclusion criteria and were included in our analysis. The mean Flesch-Kincaid Grade Level graded materials at 10.6 (approximately a 10th grade reading level), with only two websites offering materials at a 6th grade reading level or below. The remaining readability indices graded the mean reading level at high school or greater, with the Gunning Fog Index scoring at a collegiate reading level. Mean understandability and actionability scores were 64.6% and 29.5%, respectively, falling below the 70% PEMAT score threshold for both scales. Fourteen (28.6%) websites were above the threshold for understandability, while no website (0%) scored above the 70% threshold for actionability.
When comparing source categories, commercial health publishers provided websites that scored higher in understandability (P < .05), while private practice materials scored higher in actionability (P < .05). Resources published by academic institutions or organizations scored lower in both understandability and actionability than private practice and commercial health publishers (P < .05). No readability, understandability, or actionability score was significantly associated with search result rank. Conclusion Overall, online patient education materials related to rotator cuff surgery scored poorly with respect to readability, understandability, and actionability. Only two (4.1%) of the patient education websites scored at the American Medical Association and National Institutes of Health recommended reading level. Fourteen (28.6%) scored above the 70% PEMAT score for understandability; however, no website met the threshold for actionability.
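For reference, the grade-level indices these studies rely on are simple surface formulas over word, sentence, and syllable counts. A minimal sketch using the standard published coefficients (the function names are illustrative; a production tool would also need a tokenizer and syllable counter, omitted here):

```python
import math

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level from raw counts (US school grade)."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def fres(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher means easier (>65 'acceptable')."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def smog(polysyllables: int, sentences: int) -> float:
    """SMOG grade: uses the count of words with 3+ syllables."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

def gunning_fog(words: int, sentences: int, complex_words: int) -> float:
    """Gunning Fog Index; 'complex' means 3+ syllables."""
    return 0.4 * ((words / sentences) + 100 * (complex_words / words))
```

For example, a passage of 100 words, 5 sentences, and 160 syllables gives fkgl(100, 5, 160) ≈ 11.1, i.e., roughly the 10th-11th-grade levels reported in these abstracts.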
Affiliation(s)
- Matthew T. Gulbrandsen
- Department of Orthopedic Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
- Olivia C. O’Reilly
- Department of Orthopedic Surgery, University of Iowa Hospital, Iowa City, IA, USA
- Burke Gao
- Department of Orthopedic Surgery, University of Iowa Hospital, Iowa City, IA, USA
- Damion Cannon
- Department of Orthopedic Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
- Jose Jesurajan
- Department of Orthopedic Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
- Wesley P. Phipatanakul
- Department of Orthopedic Surgery, Loma Linda University Medical Center, Loma Linda, CA, USA
24
Decker H, Trang K, Ramirez J, Colley A, Pierce L, Coleman M, Bongiovanni T, Melton GB, Wick E. Large Language Model-Based Chatbot vs Surgeon-Generated Informed Consent Documentation for Common Procedures. JAMA Netw Open 2023; 6:e2336997. [PMID: 37812419 PMCID: PMC10562939 DOI: 10.1001/jamanetworkopen.2023.36997]
Abstract
Importance Informed consent is a critical component of patient care before invasive procedures, yet it is frequently inadequate. Electronic consent forms have the potential to facilitate patient comprehension if they provide information that is readable, accurate, and complete; it is not known if large language model (LLM)-based chatbots may improve informed consent documentation by generating accurate and complete information that is easily understood by patients. Objective To compare the readability, accuracy, and completeness of LLM-based chatbot- vs surgeon-generated information on the risks, benefits, and alternatives (RBAs) of common surgical procedures. Design, Setting, and Participants This cross-sectional study compared randomly selected surgeon-generated RBAs used in signed electronic consent forms at an academic referral center in San Francisco with LLM-based chatbot-generated (ChatGPT-3.5, OpenAI) RBAs for 6 surgical procedures (colectomy, coronary artery bypass graft, laparoscopic cholecystectomy, inguinal hernia repair, knee arthroplasty, and spinal fusion). Main Outcomes and Measures Readability was measured using previously validated scales (Flesch-Kincaid grade level, Gunning Fog index, the Simple Measure of Gobbledygook, and the Coleman-Liau index). Scores range from 0 to greater than 20, indicating the years of education required to understand a text. Accuracy and completeness were assessed using a rubric developed with recommendations from LeapFrog, the Joint Commission, and the American College of Surgeons. Both composite and RBA subgroup scores were compared. Results The total sample consisted of 36 RBAs, with 1 RBA generated by the LLM-based chatbot and 5 RBAs generated by a surgeon for each of the 6 surgical procedures. The mean (SD) readability score for the LLM-based chatbot RBAs was 12.9 (2.0) vs 15.7 (4.0) for surgeon-generated RBAs (P = .10).
The mean (SD) composite completeness and accuracy score was lower for surgeons' RBAs at 1.6 (0.5) than for LLM-based chatbot RBAs at 2.2 (0.4) (P < .001). The LLM-based chatbot scores were higher than the surgeon-generated scores for descriptions of the benefits of surgery (2.3 [0.7] vs 1.4 [0.7]; P < .001) and alternatives to surgery (2.7 [0.5] vs 1.4 [0.7]; P < .001). There was no significant difference in chatbot vs surgeon RBA scores for risks of surgery (1.7 [0.5] vs 1.7 [0.4]; P = .38). Conclusions and Relevance The findings of this cross-sectional study suggest that despite not being perfect, LLM-based chatbots have the potential to enhance informed consent documentation. If an LLM were embedded in electronic health records in a manner compliant with the Health Insurance Portability and Accountability Act, it could be used to provide personalized risk information while easing documentation burden for physicians.
Affiliation(s)
- Hannah Decker
- Department of Surgery, University of California, San Francisco
- Karen Trang
- Department of Surgery, University of California, San Francisco
- Joel Ramirez
- Department of Surgery, University of California, San Francisco
- Alexis Colley
- Department of Surgery, University of California, San Francisco
- Logan Pierce
- Department of Medicine, University of California, San Francisco
- Melissa Coleman
- Department of Surgery, University of California, San Francisco
- Genevieve B. Melton
- Department of Surgery, Institute for Health Informatics, and Center for Learning Health System Sciences, University of Minnesota, Minneapolis
- Elizabeth Wick
- Department of Surgery, University of California, San Francisco
25
Pan A, Musheyev D, Bockelman D, Loeb S, Kabarriti AE. Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer. JAMA Oncol 2023; 9:1437-1440. [PMID: 37615960 PMCID: PMC10450581 DOI: 10.1001/jamaoncol.2023.2947]
Abstract
Importance Consumers are increasingly using artificial intelligence (AI) chatbots as a source of information. However, the quality of the cancer information generated by these chatbots has not yet been evaluated using validated instruments. Objective To characterize the quality of information and presence of misinformation about skin, lung, breast, colorectal, and prostate cancers generated by 4 AI chatbots. Design, Setting, and Participants This cross-sectional study assessed AI chatbots' text responses to the 5 most commonly searched queries related to the 5 most common cancers using validated instruments. Search data were extracted from the publicly available Google Trends platform and identical prompts were used to generate responses from 4 AI chatbots: ChatGPT version 3.5 (OpenAI), Perplexity (Perplexity.AI), Chatsonic (Writesonic), and Bing AI (Microsoft). Exposures Google Trends' top 5 search queries related to skin, lung, breast, colorectal, and prostate cancer from January 1, 2021, to January 1, 2023, were input into 4 AI chatbots. Main Outcomes and Measures The primary outcomes were the quality of consumer health information based on the validated DISCERN instrument (scores from 1 [low] to 5 [high] for quality of information) and the understandability and actionability of this information based on the understandability and actionability domains of the Patient Education Materials Assessment Tool (PEMAT) (scores of 0%-100%, with higher scores indicating a higher level of understandability and actionability). Secondary outcomes included misinformation scored using a 5-item Likert scale (scores from 1 [no misinformation] to 5 [high misinformation]) and readability assessed using the Flesch-Kincaid Grade Level readability score. Results The analysis included 100 responses from 4 chatbots about the 5 most common search queries for skin, lung, breast, colorectal, and prostate cancer. 
The quality of text responses generated by the 4 AI chatbots was good (median [range] DISCERN score, 5 [2-5]) and no misinformation was identified. Understandability was moderate (median [range] PEMAT Understandability score, 66.7% [33.3%-90.1%]), and actionability was poor (median [range] PEMAT Actionability score, 20.0% [0%-40.0%]). The responses were written at the college level based on the Flesch-Kincaid Grade Level score. Conclusions and Relevance Findings of this cross-sectional study suggest that AI chatbots generally produce accurate information for the top cancer-related search queries, but the responses are not readily actionable and are written at a college reading level. These limitations suggest that AI chatbots should be used supplementarily and not as a primary source for medical information.
Affiliation(s)
- Alexander Pan
- Department of Urology, State University of New York Downstate Health Sciences University, New York
- David Musheyev
- Department of Urology, State University of New York Downstate Health Sciences University, New York
- Daniel Bockelman
- Department of Urology, State University of New York Downstate Health Sciences University, New York
- Stacy Loeb
- Department of Urology, New York University School of Medicine, New York
- Department of Population Health, New York University School of Medicine, New York
- Department of Surgery, VA New York Harbor Health Care, New York
- Abdo E. Kabarriti
- Department of Urology, State University of New York Downstate Health Sciences University, New York
26
Valentine MJ, Cottone G, Kramer HD, Kayastha A, Kim J, Pettinelli NJ, Kramer RC. Lower Back Pain Imaging: A Readability Analysis. Cureus 2023; 15:e45174. [PMID: 37842495 PMCID: PMC10575676 DOI: 10.7759/cureus.45174]
Abstract
PURPOSE The internet provides access to a myriad of educational health-related resources, which are an invaluable source of information for patients. Lower back pain (LBP) is a common complaint that is discussed extensively online. In this article, we aim to determine whether the most commonly accessed articles about lower back pain imaging use language that can be understood by most patients. According to the American Medical Association (AMA) and the National Institutes of Health (NIH), this corresponds to a sixth-grade reading level. METHODS Online searches were conducted from the most commonly used search engine, Google, to assess the present state of readability of material on radiographic imaging for LBP. The top 20 URL links populated by each search were selected based on "health & fitness" search trends and click-through rates (CTRs). The readability of each website was evaluated with WebFX online software, which analyzed the website's text as rendered in reader view in Firefox web browser version 116.0.3 (64-bit). Evaluation used five common readability indices: the Automated Readability Index (ARI), the Coleman Liau Index (CLI), the SMOG index, the Gunning Fog Score Index (GFSI), and the Flesch Kincaid Grade Level Index (FKGLI). The Flesch Kincaid Reading Ease Index (FKREI) was also computed but was excluded from the overall calculation because its scale does not correspond to US grade levels. The sample was drawn using health and fitness-specific CTR data from an open-access database covering July 2022 to July 2023, which was used to estimate the number of users clicking each positional URL (first through 20th) for each keyword search and provided the rationale for selecting the first 20 websites per query. RESULTS Across the 23 unique websites obtained, online material that included LBP imaging information had an overall mean readability score of 10.745, with per-site mean scores ranging from 8 to 14.
Notably, 17 of the 40 retrieved websites were excluded because of duplication (URLs returned by both searches) or paywalled access (specifically, an UpToDate link). A readability score of 10.745 corresponds to an 11th-grade reading level; that is, the most commonly visited sites on Google containing information about lower back pain imaging are, on average, five grade levels above the sixth-grade reading level recommended by the AMA and the NIH. CONCLUSIONS Most internet content regarding lower back pain imaging is written at a reading level above the limit recommended by the AMA and NIH. To improve education about lower back pain imaging and the patient-physician relationship, we recommend guiding patients to online material written at the sixth-grade level suggested by the AMA and NIH.
Affiliation(s)
- Gannon Cottone
- College of Osteopathic Medicine, Kansas City University, Kansas City, USA
- Hunter D Kramer
- College of Osteopathic Medicine, Kansas City University, Kansas City, USA
- Ankur Kayastha
- College of Osteopathic Medicine, Kansas City University, Kansas City, USA
- James Kim
- College of Osteopathic Medicine, Kansas City University, Kansas City, USA
- Robert C Kramer
- Hand Surgery, Beaumont Bone and Joint Institute, Beaumont, USA
27
Stevens L, Guo M, Brown ZJ, Ejaz A, Pawlik TM, Cloyd JM. Evaluating the Quality of Online Information Regarding Neoadjuvant Therapy for Pancreatic Cancer. J Gastrointest Cancer 2023; 54:890-896. [PMID: 36327090 DOI: 10.1007/s12029-022-00879-z]
Abstract
PURPOSE Neoadjuvant therapy (NT) is increasingly utilized for patients with localized pancreatic ductal adenocarcinoma (PDAC). Patients with cancer have high information needs and the Internet has materialized as a leading source of information for many patients. Nevertheless, little is known about the availability, accessibility, quality, and readability of online information regarding NT for PDAC. METHODS A search of online patient informational materials (PIMs) pertaining to NT for PDAC was conducted using a combination of common search engines and browsers. Two independent researchers evaluated the readability, quality, and availability of unique PIMs from the top 25 websites from each search using validated measures. RESULTS Among the 130 websites retrieved, 46 (35.4%) unique PIMs focused on treatment of PDAC. Only 30 (23%) mentioned NT as a possible treatment option. Downstaging was the rationale for NT mentioned in the majority (90%) of websites. The mean quality and reliability of the 30 PIMs, assessed using the DISCERN instrument, was 3.3 ± 0.7, suggesting moderate quality/reliability. The mean readability score, assessed using the SMOG Grade tool, was 10.96 ± 1.49, which is equivalent to an 11th grade reading level. CONCLUSION The low availability, poor readability, and moderate quality of online informational materials regarding NT for PDAC highlight the need for new patient-centered resources to educate patients and caregivers on an increasingly utilized treatment strategy for localized PDAC.
Affiliation(s)
- Lena Stevens
- Department of Surgery, The Ohio State University Wexner Medical Center, 410 W 10th Ave, N-907 Doan Hall, Columbus, OH, 43210, USA
- Marissa Guo
- Department of Surgery, The Ohio State University Wexner Medical Center, 410 W 10th Ave, N-907 Doan Hall, Columbus, OH, 43210, USA
- Zachary J Brown
- Department of Surgery, The Ohio State University Wexner Medical Center, 410 W 10th Ave, N-907 Doan Hall, Columbus, OH, 43210, USA
- Aslam Ejaz
- Department of Surgery, The Ohio State University Wexner Medical Center, 410 W 10th Ave, N-907 Doan Hall, Columbus, OH, 43210, USA
- Timothy M Pawlik
- Department of Surgery, The Ohio State University Wexner Medical Center, 410 W 10th Ave, N-907 Doan Hall, Columbus, OH, 43210, USA
- Jordan M Cloyd
- Department of Surgery, The Ohio State University Wexner Medical Center, 410 W 10th Ave, N-907 Doan Hall, Columbus, OH, 43210, USA
28
Sahin A, Kara-Aksay A, Demir G, Ekemen-Keles Y, Ustundag G, Berksoy E, Karadag-Oncel E, Yilmaz D. Parental Attitudes About Lumbar Puncture in Children With Suspected Central Nervous System Infection. Pediatr Emerg Care 2023; 39:661-665. [PMID: 37463198 DOI: 10.1097/pec.0000000000003015]
Abstract
OBJECTIVES This study aimed to evaluate parents' attitudes toward lumbar puncture (LP) for their children with suspected central nervous system infection and to determine the reasons for refusal and related factors. METHODS The survey was provided to parents of children (1 month to 18 years old) for whom LP was recommended because of a concern for central nervous system infection. Sociodemographic characteristics and other related factors of parents who did and did not approve of LP were compared statistically. The reasons given by parents who refused LP were recorded. RESULTS A total of 100 parents were included in the study. Eighty-two percent of the participating parents were mothers; the median age of the mothers was 31 years (min: 17 years; max: 70 years) and the median age of the fathers was 37 years (min: 22 years; max: 60 years). Among the parents, 34% did not give consent for LP. The most common reason for refusing LP was fear that the procedure would paralyze their children (82.3%). LP approval differed significantly according to who informed the parents about the procedure and who read the informed consent form (P = 0.004 and P = 0.038, respectively). Binary logistic regression analysis showed that parents informed by specialist doctors were 7.1-fold (P = 0.02; 95% confidence interval, 1.3-37.6) more likely to accept the LP procedure than parents informed by resident physicians. CONCLUSION The informed consent process mainly influenced parents' attitudes toward LP. To increase the acceptance rates of LP, the informed consent process should be standardized so that it is not affected by factors such as the seniority of the physician.
Affiliation(s)
- Aslıhan Sahin
- From the Departments of Pediatric Infectious Diseases
- Gulsah Demir
- Pediatric Emergency, Health Sciences University Tepecik Training and Research Hospital
- Emel Berksoy
- Pediatric Emergency, Health Sciences University Tepecik Training and Research Hospital
29
Baumann J, Marshall S, Groneck A, Hanish SJ, Choma T, DeFroda S. Readability of spine-related patient education materials: a standard method for improvement. Eur Spine J 2023; 32:3039-3046. [PMID: 37466719 DOI: 10.1007/s00586-023-07856-5]
Abstract
PURPOSE Orthopaedic patient education materials (PEMs) have repeatedly been shown to be well above the reading level recommended by the National Institutes of Health and the American Medical Association. The purpose of this study is to create a standardized method to improve the readability of PEMs describing spine-related conditions and injuries. It is hypothesized that reducing the usage of complex words (≥ 3 syllables) and reducing sentence length to < 15 words per sentence improves readability of PEMs as measured by all seven readability formulas used. METHODS OrthoInfo.org was queried for spine-related PEMs. The objective readability of PEMs was evaluated using seven unique readability formulas before and after applying a standardized method to improve readability while preserving critical content. This method involved reducing the use of ≥ 3-syllable words and ensuring sentence length is < 15 words. Paired samples t-tests were conducted to assess relationships, with the cut-off for statistical significance set at p < 0.05. RESULTS A total of 20 spine-related PEM articles were used in this study. When comparing original PEMs to edited PEMs, significant differences were seen among all seven readability scores and all six numerical descriptive statistics used. Per the Flesch-Kincaid Grade Level readability formula, one original PEM (5%) versus 15 edited PEMs (75%) met the recommendation of a sixth-grade reading level. CONCLUSION The current study shows that this standardized method significantly improves the readability of spine-related PEMs and significantly increases the likelihood that PEMs will meet recommendations for being at or below the sixth-grade reading level.
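The editing rules this method applies (sentences under 15 words, avoidance of words with three or more syllables) can be screened for automatically. A minimal sketch, assuming a crude vowel-group syllable heuristic rather than the authors' actual tooling (function names are illustrative):

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count vowel groups (illustrative only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flag_revision_targets(text: str, max_words: int = 15, max_syllables: int = 2):
    """Return sentences at or above max_words, and words exceeding
    max_syllables, per the editing rules described above."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    long_sentences = [s for s in sentences if len(s.split()) >= max_words]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = sorted({w for w in words
                            if count_syllables(w) >= max_syllables + 1})
    return long_sentences, complex_words
```

On a sentence such as "The carpometacarpal operation is done now.", this sketch would flag "carpometacarpal" and "operation" as revision targets, mirroring the manual pass the authors applied to OrthoInfo.org articles.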
Affiliation(s)
- John Baumann
- Department of Orthopaedic Surgery, University of Missouri Health Care, Columbia, MO, USA
- University of Missouri School of Medicine, Columbia, MO, USA
- Samuel Marshall
- Department of Orthopaedic Surgery, University of Missouri Health Care, Columbia, MO, USA
- University of Missouri School of Medicine, Columbia, MO, USA
- Andrew Groneck
- Department of Orthopaedic Surgery, University of Missouri Health Care, Columbia, MO, USA
- University of Missouri School of Medicine, Columbia, MO, USA
- Stefan J Hanish
- Department of Orthopaedic Surgery, University of Missouri Health Care, Columbia, MO, USA
- University of Missouri School of Medicine, Columbia, MO, USA
- Theodore Choma
- Department of Orthopaedic Surgery, University of Missouri Health Care, Columbia, MO, USA
- Steven DeFroda
- Department of Orthopaedic Surgery, University of Missouri Health Care, Columbia, MO, USA
30
Frisby C, Eikelboom RH, Mahomed-Asmail F, Kuper H, Moore DR, de Kock T, Manchaiah V, Swanepoel DW. Mobile Health Hearing Aid Acclimatization and Support Program in Low-Income Communities: Feasibility Study. JMIR Form Res 2023; 7:e46043. [PMID: 37610802 PMCID: PMC10483300 DOI: 10.2196/46043]
Abstract
BACKGROUND The most common management option for hearing loss is hearing aids. In addition to devices, patients require information and support, including maintenance and troubleshooting. Mobile health (mHealth) technologies can support hearing aid management, acclimatization, and use. This study developed an mHealth acclimatization and support program for first-time hearing aid users and subsequently implemented and pilot-tested the feasibility of the program. The program was facilitated by community health workers (CHWs) in low-income communities in South Africa. OBJECTIVE This study aimed to evaluate the feasibility of an mHealth acclimatization and support program supported by CHWs in low-income communities. METHODS An application-based acclimatization and support program was adapted and translated for use in low- and middle-income countries. This program was delivered in the form of 20 different voice notes accompanied by graphical illustrations via WhatsApp or 20 different SMS text messages. The program was provided to first-time hearing aid users immediately after a community-based hearing aid fitting in March 2021 in 2 low-income communities in the Western Cape, South Africa. The 20 messages were sent over a period of 45 days. Participants were contacted telephonically on days 8, 20, and 43 of the program, and via open-ended paper-based questionnaires translated to isiXhosa 45 days and 6 months after the program started, to obtain information on their experiences, perceptions, and the accessibility of the program. Their responses were analyzed using inductive thematic analysis. RESULTS A total of 19 participants fitted with hearing aids received the mHealth acclimatization and support program. Most participants (15/19, 79%) received the program via WhatsApp, with 21% (4/19) of them receiving it via SMS text message. Participants described the program as helpful, supportive, informative, sufficient, and clear at both follow-ups.
A total of 14 participants reported that they were still using their hearing aids at the 6-month follow-up. Three participants indicated that not all their questions about hearing aids were answered, and 5 others had minor hearing aid issues. This included feedback (n=1), battery performance (n=1), physical fit (n=2), and issues with hearing aid accessories (n=1). However, CHWs successfully addressed all these issues. There were no notable differences in responses between the participants who received the program via WhatsApp compared with those who received it through SMS text message. Most participants receiving WhatsApp messages reported that the voice notes were easier to understand, but the graphical illustrations supplemented the voice notes well. CONCLUSIONS An mHealth acclimatization and support program is feasible and potentially assists hearing aid acclimatization and use for first-time users in low-income communities. Scalable mHealth support options can facilitate increased access and improve outcomes of hearing care.
Affiliation(s)
- Caitlin Frisby
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Virtual Hearing Lab, Collaborative initiative between the University of Colorado and the University of Pretoria, Aurora, CO, United States
- Robert H Eikelboom
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Ear Science Institute Australia, Subiaco, Australia
- Ear Sciences Centre, Medical School, The University of Western Australia, Nedlands, Australia
- Faculty of Health Sciences, Curtin University, Bentley, Australia
- Faheema Mahomed-Asmail
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Virtual Hearing Lab, Collaborative initiative between the University of Colorado and the University of Pretoria, Aurora, CO, United States
- Hannah Kuper
- International Centre for Evidence in Disability, London School of Hygiene & Tropical Medicine, London, United Kingdom
- David R Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center and University of Cincinnati, Cincinnati, OH, United States
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- Vinaya Manchaiah
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Virtual Hearing Lab, Collaborative initiative between the University of Colorado and the University of Pretoria, Aurora, CO, United States
- Department of Otolaryngology-Head and Neck Surgery, University of Colorado School of Medicine, Aurora, CO, United States
- UCHealth Hearing and Balance, University of Colorado Hospital, Aurora, CO, United States
- Department of Speech and Hearing, School of Allied Health Sciences, Manipal University, Manipal, India
- De Wet Swanepoel
- Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Virtual Hearing Lab, Collaborative initiative between the University of Colorado and the University of Pretoria, Aurora, CO, United States
- Ear Science Institute Australia, Subiaco, Australia
- Department of Otolaryngology-Head and Neck Surgery, University of Colorado School of Medicine, Aurora, CO, United States
31
Odigie E, Andreadis K, Chandra I, Mocchetti V, Rives H, Cox S, Rameau A. Are Mobile Applications in Laryngology Designed for All Patients? Laryngoscope 2023; 133:1540-1549. [PMID: 36317789 PMCID: PMC10149562 DOI: 10.1002/lary.30465]
Abstract
OBJECTIVES Mobile applications (apps) are multiplying in laryngology, with little standardization of content, functionality, or accessibility. The purpose of this study was to evaluate the quality, functionality, health literacy, readability, accessibility, and inclusivity of laryngology mobile applications. METHODS Of the 3230 apps identified in the Apple and Google Play stores, 28 patient-facing apps met inclusion criteria. Apps were evaluated using validated scales of quality and functionality: the Mobile App Rating Scale (MARS) and the Institute for Healthcare Informatics App Functionality Scale. The CDC Clear Communication Index, the Institute of Medicine's Strategies for Creating Health Literate Mobile Applications, and the Patient Education Materials Assessment Tool (PEMAT) were used to evaluate the apps' health literacy level. Readability was assessed using established readability formulas. Apps were also evaluated for language options, accessibility features, and representation of a diverse population. RESULTS Twenty-six apps (92%) had adequate quality (MARS score > 3). The mean PEMAT score was 89% for actionability and 86% for understandability. On average, apps used 25 of 33 health literate strategies. Twenty-two apps (79%) did not pass the CDC index threshold of 90% for health literacy. Twenty-four app descriptions (86%) were above an 8th grade reading level. Only 4 apps (14%) showed diverse representation, 3 (11%) had non-English language functions, and 2 (7%) offered subtitles. Inter-rater reliability for the MARS was adequate (CA-ICC = 0.715). CONCLUSION While most apps scored well in quality and functionality, many laryngology apps did not meet standards for health literacy. Most apps were written at a reading level above the national average, lacked accessibility features, and did not represent diverse populations.
Affiliation(s)
Eseosa Odigie
- Sean Parker Institute for the Voice, Department of Otolaryngology, Weill Cornell Medical College, New York, USA
Katerina Andreadis
- Sean Parker Institute for the Voice, Department of Otolaryngology, Weill Cornell Medical College, New York, USA
Iyra Chandra
- Sean Parker Institute for the Voice, Department of Otolaryngology, Weill Cornell Medical College, New York, USA
Valentina Mocchetti
- Sean Parker Institute for the Voice, Department of Otolaryngology, Weill Cornell Medical College, New York, USA
Hal Rives
- Sean Parker Institute for the Voice, Department of Otolaryngology, Weill Cornell Medical College, New York, USA
Steven Cox
- Department of Communication Sciences and Disorders, Adelphi University, Garden City, USA
Anaïs Rameau
- Sean Parker Institute for the Voice, Department of Otolaryngology, Weill Cornell Medical College, New York, USA

32
Michel C, Dijanic C, Abdelmalek G, Sudah S, Kerrigan D, Gorgy G, Yalamanchili P. Readability assessment of patient educational materials for pediatric spinal conditions from top academic orthopedic institutions. J Child Orthop 2023; 17:284-290. [PMID: 37288046 PMCID: PMC10242376 DOI: 10.1177/18632521231156435]
Abstract
Background The Internet has become a popular source of health information for patients and their families. Healthcare experts recommend that the readability of online education materials be at or below a sixth grade reading level. This translates to a standardized Flesch Reading Ease Score between 81 and 90, which is equivalent to conversational English. However, previous studies have demonstrated that the readability of online education materials on various orthopedic topics is too advanced for the average patient. To date, the readability of online education materials for pediatric spinal conditions has not been analyzed. The objective of this study was to assess the readability of online educational materials from top pediatric orthopedic hospital websites for pediatric spinal conditions. Methods Online patient education materials from the top 25 pediatric orthopedic institutions, as ranked by the U.S. News & World Report for pediatric orthopedics, were assessed using multiple readability metrics, including Flesch-Kincaid, Flesch Reading Ease, Gunning Fog Index, and others. Correlations between institutional ranking, geographic location, and the use of concomitant multimedia modalities and Flesch-Kincaid scores were evaluated using a Spearman regression. Results Only 32% (8 of 25) of top pediatric orthopedic hospitals provided online health information at or below a sixth grade reading level. The mean Flesch-Kincaid score was 9.3 ± 2.5, Flesch Reading Ease 48.3 ± 16.2, Gunning Fog Score 10.7 ± 3.0, Coleman-Liau Index 12.1 ± 2.8, Simple Measure of Gobbledygook (SMOG) Index 11.7 ± 2.1, Automated Readability Index 9.0 ± 2.7, FORCAST 11.3 ± 1.2, and Dale-Chall Readability Index 6.7 ± 1.4. There was no significant correlation between institutional ranking, geographic location, or use of video material and Flesch-Kincaid scores (p = 0.1042, p = 0.7776, p = 0.3275, respectively).
Conclusion Online educational material for pediatric spinal conditions from top pediatric orthopedic institutional websites uses excessively complex language, which may limit comprehension for the majority of the US population. Type of study/Level of evidence: Economic and Decision Analysis, level III.
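Several of the indices reported in these studies are simple linear functions of word, sentence, and syllable counts. As a minimal sketch (not any study's own code), the two Flesch formulas can be computed from precomputed counts; the constants are the standard published ones, and the tokenization and syllable counting that real tools perform are assumed to be done elsewhere:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores mean easier text (90-100 ~ 5th grade)."""
    asl = words / sentences      # average sentence length (words per sentence)
    asw = syllables / words      # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate US school grade required."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Hypothetical passage: 100 words in 10 sentences with 150 syllables
print(round(flesch_reading_ease(100, 10, 150), 1))   # → 69.8 (fairly easy)
print(round(flesch_kincaid_grade(100, 10, 150), 1))  # → 6.0 (about 6th grade)
```

Both scores move in the same direction with sentence length and word length, which is why studies often report several indices that broadly agree.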
Affiliation(s)
Christopher Michel
- Department of Orthopedic Surgery, Monmouth Medical Center – RWJBarnabas Health, Long Branch, NJ, USA
Christopher Dijanic
- Department of Orthopedic Surgery, Monmouth Medical Center – RWJBarnabas Health, Long Branch, NJ, USA
Suleiman Sudah
- Department of Orthopedic Surgery, Monmouth Medical Center – RWJBarnabas Health, Long Branch, NJ, USA
Daniel Kerrigan
- Department of Orthopedic Surgery, Monmouth Medical Center – RWJBarnabas Health, Long Branch, NJ, USA
George Gorgy
- Department of Orthopedic Surgery, Monmouth Medical Center – RWJBarnabas Health, Long Branch, NJ, USA
Praveen Yalamanchili
- Department of Orthopedic Surgery, Monmouth Medical Center – RWJBarnabas Health, Long Branch, NJ, USA

33
Amanda R, Rana K, Saunders P, Tracy M, Bridges N, Poudel P, Arora A. Evaluation of the usability, content, readability and cultural appropriateness of online alcohol and other drugs resources for Aboriginal and Torres Strait Islander Peoples in New South Wales, Australia. BMJ Open 2023; 13:e069756. [PMID: 37164458 PMCID: PMC10174040 DOI: 10.1136/bmjopen-2022-069756]
Abstract
OBJECTIVES This study aimed to analyse the usability, content, readability and cultural appropriateness of alcohol and other drugs (AOD) resources for Aboriginal and Torres Strait Islander Peoples in New South Wales (NSW), Australia. OUTCOME MEASURES The content of 30 AOD resources for Aboriginal and Torres Strait Islander Peoples was analysed according to the following criteria: general characteristics; elements of graphical design and written communication; thoroughness and content; readability (Flesch-Kincaid grade level (FKGL), Gunning Fog index (Fog), Simple Measure of Gobbledygook and Flesch Reading Ease); and cultural appropriateness. RESULTS Most resources displayed good usability, reflected in the use of headings and subheadings (n=27), superior writing style (n=19), relevant visuals (n=19) and colour support (n=30). However, some resources contained at least one instance of professional jargon (n=13), and many did not provide any peer-reviewed references (n=22). During content analysis, 12 resources were categorised into the alcohol group and 18 into the other drugs group. The impact of alcohol during pregnancy and breast feeding (n=12) was the most commonly included topic among the alcohol-related resources, while the physical impact of drugs (n=15) was the most discussed topic in the other drugs group. Based on the FKGL readability score, 83% of resources met the reading grade level of 6-8 recommended by NSW Health. Many resources (n=21) met at least half of the cultural appropriateness elements of interest. However, fewer than one-third were developed in collaboration with the local community (n=9), used local terms (n=5), targeted the local community (n=3), included an Aboriginal voice (n=2) or addressed the underlying cause (n=1).
CONCLUSIONS Many AOD resources are developed specifically for Aboriginal and Torres Strait Islander Peoples, but their usability, content and readability differed, and they were not culturally appropriate for all communities. Development of a standardised protocol for resource development is suggested.
Affiliation(s)
Rebecca Amanda
- School of Health Sciences, Western Sydney University, Penrith, NSW, Australia
- Health Equity Laboratory, Campbelltown, NSW, Australia
Kritika Rana
- School of Health Sciences, Western Sydney University, Penrith, NSW, Australia
- Health Equity Laboratory, Campbelltown, NSW, Australia
- Translational Health Research Institute, Western Sydney University, Penrith, NSW, Australia
Paul Saunders
- Translational Health Research Institute, Western Sydney University, Penrith, NSW, Australia
- School of Medicine, Western Sydney University, Penrith, NSW, Australia
Marguerite Tracy
- General Practice Clinical School, Sydney Medical School, The University of Sydney, Sydney, NSW, Australia
- Drug Health Services, Cumberland Hospital, Western Sydney Local Health District, North Parramatta, NSW, Australia
Nicole Bridges
- School of Humanities and Communication Arts, Western Sydney University, Kingswood, NSW, Australia
Prakash Poudel
- Office of Research and Education, Canberra Hospital, Canberra Health Services, ACT Government, Canberra, ACT, Australia
Amit Arora
- School of Health Sciences, Western Sydney University, Penrith, NSW, Australia
- Health Equity Laboratory, Campbelltown, NSW, Australia
- Translational Health Research Institute, Western Sydney University, Penrith, NSW, Australia
- Discipline of Child and Adolescent Health, Sydney Medical School, The University of Sydney, Westmead, NSW, Australia
- Oral Health Services, Sydney Local Health District and Sydney Dental Hospital, Surry Hills, NSW, Australia

34
Padilla G, Awshah S, Mhaskar RS, Diab ARF, Sujka JA, DuCoin C, Docimo S. Spanish-language bariatric surgery patient education materials fail to meet healthcare literacy standards of readability. Surg Endosc 2023. [PMID: 37129638 DOI: 10.1007/s00464-023-10088-9]
Abstract
BACKGROUND The Hispanic population is the fastest growing ethnic minority in the United States, contributing to nearly half of the population growth over the last decade. Unfortunately, this population suffers from lower-than-average health literacy rates, leading to poorer health outcomes. Per the American Medical Association and National Institutes of Health, patient education materials (PEMs) should be written at no higher than a 6th grade reading level. Given that US Hispanic adults have the second-highest obesity prevalence, this study aims to analyze the readability of Spanish-language PEMs regarding bariatric surgery available from US-based academic and medical centers. METHODS A total of 50 PEMs were found via the Google search query "cirugía de pérdida de peso" site:(edu OR .org). Thirty-nine sources met the inclusion criteria of belonging to a US-based academic or medical center and containing information regarding the indications for bariatric surgery, descriptions of the types of bariatric surgery, what to expect before and after surgery, or the risks and benefits of bariatric surgery. The excerpts were analyzed according to three readability formulas designed specifically for the Spanish language and evaluated for their reading grade level. RESULTS All 39 sources were at the college reading level per the Fry graph corrected for Spanish. Per the Spaulding formula, 37 sources were "Grade 12+" and two sources were "Grade 8-10." Per the Fernández-Huerta formula, 16 sources were at the 8th/9th grade reading level, 22 sources were at the 7th grade reading level, and one was at the 6th grade reading level. CONCLUSION The Spanish-language bariatric surgery PEMs available online from US-based academic and medical centers are generally above the recommended 6th grade reading level. Failure to meet the recommended sixth-grade reading level reduces healthcare literacy for Spanish-speaking patients in the United States seeking bariatric surgery.
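Of the Spanish-language formulas mentioned above, the Fernández-Huerta index is an adaptation of the Flesch Reading Ease score. A minimal sketch, assuming precomputed counts and using one commonly cited form of the formula (published sources vary slightly in how the sentence term is defined, so treat the constants and term definitions as an assumption):

```python
def fernandez_huerta(words: int, sentences: int, syllables: int) -> float:
    """Fernández-Huerta index (Spanish adaptation of Flesch Reading Ease).

    One commonly cited form: L = 206.84 - 0.60*P - 1.02*F,
    where P = syllables per 100 words and F = sentences per 100 words.
    Higher scores mean easier text."""
    p = 100.0 * syllables / words
    f = 100.0 * sentences / words
    return 206.84 - 0.60 * p - 1.02 * f

# Hypothetical Spanish passage: 100 words, 8 sentences, 190 syllables
print(round(fernandez_huerta(100, 8, 190), 1))  # → 84.7
```

Because Spanish words average more syllables than English words, the formula's syllable coefficient is much smaller than the 84.6 used in the English Flesch formula.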
Affiliation(s)
Joseph A Sujka
- University of South Florida, Tampa, USA
- Tampa General Hospital, Tampa, USA
Christopher DuCoin
- University of South Florida, Tampa, USA
- Tampa General Hospital, Tampa, USA
Salvatore Docimo
- University of South Florida, Tampa, USA
- Tampa General Hospital, Tampa, USA

35
Beltran S, Yalim A, Morris A, Taylor L. Emerging social workers during COVID-19: Exploring perceived readiness and training needs. Journal of Social Work 2023; 23:428-442. [PMID: 38602920 PMCID: PMC10020856 DOI: 10.1177/14680173231162490]
Abstract
Summary Social workers support clients' psychosocial and resource needs across care settings. Social workers are typically not, however, trained in emergency response practices such as those that may be necessary to support needs brought on by the COVID-19 pandemic. This article reports findings from a cross-sectional survey of social work students and recent graduates entering the field during COVID-19, exploring their preparation, perceived readiness, and training needs. Findings The study sample (N = 94) included 70 students and 24 recent graduates. The sample was 52% White, 22% Hispanic, and 21% Black/African American. Respondents reported training needs in the areas of trauma-informed care (70%), behavioral health (57%), culturally competent practice (49%), telehealth (48%), loss and grief (44%), and emergency management (43%). No significant differences emerged in the self-efficacy ratings of students and recent graduates; both groups reported low self-efficacy in their ability to apply advanced practice skills. After controlling for demographics, receiving training specific to the COVID-19 pandemic (β = .271, p < .05), perceived readiness (β = .779, p < .001), and satisfaction with training/preparation (β = .4450, p < .001) significantly contributed to levels of perceived self-efficacy among social work students and recent graduates. Applications Social work curricular developments, and continuing education, are needed to prepare and support emerging social workers for practice in the context of COVID-19 and its long-term implications. This includes enhancing social workers' readiness to engage in telehealth, trauma-informed practice, emergency management, policy interpretation, self-care, and grief support.
Affiliation(s)
Asli Yalim
- University of Central Florida, Orlando, USA

36
Maeda Y, Kawahira H, Asada Y, Yamamoto S, Shimpo M. The effect of refresher training on fact description in medical incident report writing in the Japanese language. Applied Ergonomics 2023; 109:103987. [PMID: 36716527 DOI: 10.1016/j.apergo.2023.103987]
Abstract
To maintain the effectiveness of training (first training session: 1st-TS) in accurately describing facts in medical incident reports (IRs) written in Japanese, a refresher TS was designed and its effectiveness examined. First, textual analysis showed that the accuracy of IRs significantly decreased six months after the 1st-TS. Based on this result, the refresher TS was designed and conducted with 64 residents. To verify the refresher TS's effectiveness, IRs after the 1st-TS, six months later, and after the refresher TS were compared via text analysis. The results showed that the refresher TS restored the description rate of the patient's background, safety check procedures, original work procedures, information on equipment used, the reporter's actions, and the post-incident response. A questionnaire was also administered and showed that the refresher TS contributed to residents' motivation to learn about IRs. In conclusion, the refresher TS helped sustain the effect of the 1st-TS on accurately describing IRs.
Affiliation(s)
Yoshitaka Maeda
- Medical Simulation Centre, Jichi Medical University, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
Hiroshi Kawahira
- Medical Simulation Centre, Jichi Medical University, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
Yoshikazu Asada
- Medical Education Centre, Jichi Medical University, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
Shinichi Yamamoto
- Centre for Graduate Medical Education, Jichi Medical University Hospital, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan
Masahisa Shimpo
- Centre for Quality Improvement and Patient Safety, Jichi Medical University Hospital, 3311-1, Yakushiji, Shimotsuke-shi, Tochigi, 329-0498, Japan

37
Bourdache LR, Ould Brahim L, Wasserman S, Nicolas-Joseph M, Frati FYE, Belzile E, Lambert SD. Evaluation of quality, readability, suitability, and usefulness of online resources available to cancer survivors. J Cancer Surviv 2023; 17:544-555. [PMID: 36626094 DOI: 10.1007/s11764-022-01318-5]
Abstract
PURPOSE The aim of this study was to evaluate the quality, readability, suitability, and usefulness of resources publicly available to adult cancer survivors (aged 18+) who have completed primary treatment. METHODS Resources were identified in July 2021 through Google. Search completeness was verified using Yahoo, Bing, and MedlinePlus. Retrieved resources were assessed for quality using the DISCERN instrument, for readability, for suitability using the Suitability Assessment Measure (SAM), and for usefulness based on a list of unmet needs and self-management skills derived from the literature. Descriptive analyses were conducted, and a cluster analysis identified the highest-scoring resources. RESULTS Forty-five resources were included. The mean DISCERN score was fair at 63.3% (SD 13.7%), with the lowest-rated items being sources, publication date, and risks and mechanisms of treatment. The mean reading grade level was 11.19 (SD 1.61, range 8-16), with only one resource scoring an 8. The mean SAM score was in the adequate range at 48.2% (SD 10.6%), with graphics being the lowest-rated section. On average, included resources addressed 57.7% (SD 27.3%) of the unmet needs and 48.4% (SD 20.9%) of the self-management skills, the least addressed being problem-solving. CONCLUSION Quality and suitability were fair, whereas readability exceeded recommended levels. Only one resource had a superior score in both quality and suitability. IMPLICATIONS FOR CANCER SURVIVORS The most pressing need is to develop resources for cancer survivors that address their unmet needs and are accessible in terms of literacy. Study findings identify the highest-scoring resources currently available to survivors, families, and clinicians.
Affiliation(s)
Lydia Rosa Bourdache
- Faculty of Medicine and Health Sciences, McGill University, 3605 Rue de La Montagne, Montreal, QC, H3G 2M1, Canada
Lydia Ould Brahim
- Ingram School of Nursing, McGill University, 680 Sherbrooke West, Montreal, QC, H3A 2M7, Canada
- St. Mary's Research Centre, 3830 Lacombe Ave, Montreal, QC, H3T 1M5, Canada
Sydney Wasserman
- Ingram School of Nursing, McGill University, 680 Sherbrooke West, Montreal, QC, H3A 2M7, Canada
- St. Mary's Research Centre, 3830 Lacombe Ave, Montreal, QC, H3T 1M5, Canada
Marrah Nicolas-Joseph
- Ingram School of Nursing, McGill University, 680 Sherbrooke West, Montreal, QC, H3A 2M7, Canada
Francesca Y E Frati
- Schulich Library of Physical Sciences, Life Sciences, and Engineering, McGill University, 809 Sherbrooke West, Montreal, QC, H3A 0C1, Canada
Eric Belzile
- St. Mary's Research Centre, 3830 Lacombe Ave, Montreal, QC, H3T 1M5, Canada
Sylvie D Lambert
- Ingram School of Nursing, McGill University, 680 Sherbrooke West, Montreal, QC, H3A 2M7, Canada
- St. Mary's Research Centre, 3830 Lacombe Ave, Montreal, QC, H3T 1M5, Canada

38
Dağdelen C, Erdemoğlu E. Determination of the Readability Level of Consent Forms Used in the Gynecology and Obstetrics Clinic at Suleyman Demirel University. Cureus 2023; 15:e37147. [PMID: 37026110 PMCID: PMC10074016 DOI: 10.7759/cureus.37147]
Abstract
Background This study aimed to evaluate the readability level of consent forms used for interventional procedures in the obstetrics and gynecology clinic and to determine the readability of the texts relative to patients' education levels. Methodology This study determined the readability of patient consent forms used before interventional procedures in the gynecology and obstetrics clinic at the Suleyman Demirel University Hospital, Isparta. The consent forms were divided into two main groups according to their use in obstetrics and gynecology procedures. The readability level of the consent forms was assessed using two readability formulas developed by Ateşman and Bezirci-Yılmaz, which determine the readability of Turkish texts. Results According to the Ateşman readability formula, the consent forms required more than 15 years of education (undergraduate level) to be readable, while according to the Bezirci-Yılmaz formula they required 17 years of education (postgraduate level). Conclusions Easy-to-read consent forms will ensure that patients are better informed about interventional procedures and participate more effectively in the treatment process. There is a need to develop readable consent forms suited to the general education level.
Affiliation(s)
Cem Dağdelen
- Clinic of Obstetrics and Gynecology, Isparta Private Meddem Hospital, Isparta, TUR
Evrim Erdemoğlu
- Department of Gynecologic Oncology, Suleyman Demirel University, Isparta, TUR

39
Diaz A, McErlane J, Jeon MH, Cunningham J, Sullivan V, Garvey G. Patient Information Resources on Cardiovascular Health After Cancer Treatment: An Audit of Australian Resources. JCO Glob Oncol 2023; 9:e2200361. [PMID: 37018632 DOI: 10.1200/go.22.00361]
Abstract
PURPOSE Up to one third of patients with cancer are thought to experience adverse cardiovascular events after their cancer diagnosis and treatment. High-quality information about cancer treatment-related cardiovascular disease can prepare patients and reduce anxiety. The aim of this project was to systematically identify Australian online information resources about cardiovascular health after cancer and assess the readability, understandability, actionability, and cultural relevance for Aboriginal and Torres Strait Islander patients. METHODS We conducted systematic Google and website searches to identify potentially relevant resources. Eligibility was assessed using predefined criteria. For each eligible resource, we summarized the content and assessed readability, understandability, actionability, and cultural relevance for Aboriginal and Torres Strait Islander people. RESULTS Seventeen online resources addressing cardiovascular health after cancer were identified: three focused solely on cardiovascular health and the remaining 14 dedicated between <1% and 48% of the word count to this topic. On average, three of 12 predefined content areas were covered by the resources. Only one resource was considered comprehensive, covering eight of 12 content areas. Overall, 18% of the resources were deemed readable for the average Australian adult, 41% deemed understandable, and only 24% had moderate actionability. None of the resources were considered culturally relevant for Aboriginal and Torres Strait Islander people, with 41% addressing only one of the seven possible criteria and the remainder addressing none of the criteria. CONCLUSION This audit confirms a gap in online information resources about cardiovascular health after cancer. New resources, especially for Aboriginal and Torres Strait Islander people, are needed. 
The development of such resources must be done through involvement and collaboration with Aboriginal and Torres Strait Islander patients, families, and carers, through a codesign process.
Affiliation(s)
Abbey Diaz
- First Nations Cancer and Wellbeing Research Team, School of Public Health, University of Queensland, Herston, QLD, Australia
- Menzies School of Health Research, Charles Darwin University, Casuarina, NT, Australia
Jorja McErlane
- First Nations Cancer and Wellbeing Research Team, School of Public Health, University of Queensland, Herston, QLD, Australia
- Menzies School of Health Research, Charles Darwin University, Casuarina, NT, Australia
Mi Hye Jeon
- First Nations Cancer and Wellbeing Research Team, School of Public Health, University of Queensland, Herston, QLD, Australia
- Menzies School of Health Research, Charles Darwin University, Casuarina, NT, Australia
Joan Cunningham
- Menzies School of Health Research, Charles Darwin University, Casuarina, NT, Australia
Victoria Sullivan
- First Nations Cancer and Wellbeing Research Team, School of Public Health, University of Queensland, Herston, QLD, Australia
Gail Garvey
- First Nations Cancer and Wellbeing Research Team, School of Public Health, University of Queensland, Herston, QLD, Australia
- Menzies School of Health Research, Charles Darwin University, Casuarina, NT, Australia

40
Gutterman SA, Schroeder JN, Jacobson CE, Obeid NR, Suwanabol PA. Examining the Accessibility of Online Patient Materials for Bariatric Surgery. Obes Surg 2023; 33:975-977. [PMID: 36602722 DOI: 10.1007/s11695-022-06440-y]
Affiliation(s)
Sophia A Gutterman
- Medical School, University of Michigan, 7300 Medical Science Building I - A Wing, 1301 Catherine St., Ann Arbor, MI 48109-5624, USA
Julia N Schroeder
- Medical School, University of Michigan, 7300 Medical Science Building I - A Wing, 1301 Catherine St., Ann Arbor, MI 48109-5624, USA
Clare E Jacobson
- Department of Surgery, University of Michigan, 2122 Taubman Center, 1500 E Medical Center Dr, Ann Arbor, MI 48109, USA
Nabeel R Obeid
- Department of Surgery, University of Michigan, 2122 Taubman Center, 1500 E Medical Center Dr, Ann Arbor, MI 48109, USA
- Michigan Bariatric Surgery Collaborative, Ann Arbor, MI, USA
Pasithorn A Suwanabol
- Department of Surgery, University of Michigan, 2122 Taubman Center, 1500 E Medical Center Dr, Ann Arbor, MI 48109, USA

41
Winterbottom A, Stoves J, Ahmed S, Ahmed A, Daga S. Patient information about living donor kidney transplantation across UK renal units: A critical review. J Ren Care 2023; 49:45-55. [PMID: 34791808 DOI: 10.1111/jorc.12404]
Abstract
BACKGROUND Patient information about living donor kidney transplantation is used to supplement conversations between health professionals, people with advanced kidney disease, and potential kidney donors. It is not known whether this information is designed to support decision-making about renal replacement options or whether it helps people discuss living kidney donation with family and friends. OBJECTIVE To critically review resources used in outpatient kidney consultations to support patients' decision-making about living donor kidney transplantation. DESIGN Mixed methods, including an audit questionnaire and critical analysis of patient information leaflets. PARTICIPANTS AND MEASUREMENTS All kidney transplant centres and renal units in the United Kingdom received a questionnaire to elicit by whom, how, and when information about living kidney donation is delivered. Copies of leaflets were requested. A coding frame was used to produce a quality score for each leaflet. RESULTS Thirty-nine (54%) units participated. Patients discussed living donor kidney transplantation with nephrologists (100%), a living donor nurse (94%), a transplant co-ordinator (94%), and a predialysis nurse (86%). Twenty-three leaflets were provided and reviewed; the mean quality score for inclusion of information known to support shared decision-making was 2.82 out of 10 (range = 0-6, SD = 1.53). Readability scores indicated the leaflets were 'fairly difficult to read' (M = 56.3, range = 0-100, SD = 9.4). Few included cultural and faith-related information. Two leaflets were designed to facilitate conversations with others about donation. CONCLUSIONS The leaflets are unlikely to adequately support decision-making between options and discussions about donation. Services writing and updating patient leaflets may benefit from our six principles to guide their development.
Affiliation(s)
Anna Winterbottom
- Adult Renal Services, Lincoln Wing, St James University Hospital, Leeds, UK
John Stoves
- Bradford Renal Unit, Horton Wing, St Luke's Hospital, Bradford, UK
Shenaz Ahmed
- Division of Psychological and Social Medicine, Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
Ahmed Ahmed
- Adult Renal Services, Lincoln Wing, St James University Hospital, Leeds, UK
Sunil Daga
- Adult Renal Services, Lincoln Wing, St James University Hospital, Leeds, UK

42
Hanish SJ, Cherian N, Baumann J, Gieg SD, DeFroda S. Reducing the Use of Complex Words and Reducing Sentence Length to <15 Words Improves Readability of Patient Education Materials Regarding Sports Medicine Knee Injuries. Arthrosc Sports Med Rehabil 2022; 5:e1-e9. [PMID: 36866291 PMCID: PMC9971903 DOI: 10.1016/j.asmr.2022.10.004]
Abstract
Purpose To develop a standardized method to improve the readability of orthopaedic patient education materials (PEMs) without diluting their critical content, by reducing the use of complex words (≥3 syllables) and shortening sentence length to ≤15 words. Methods OrthoInfo, a patient education website developed by the American Academy of Orthopaedic Surgeons, was queried for PEMs relevant to the care of athletic injuries of the knee. Inclusion criteria were PEMs that were unique, pertained to topics of knee pathology in sports medicine, and were written in a prose format. Exclusion criteria were information presented in video or slideshow format and topics not pertaining to knee pathology in sports medicine. Readability of PEMs was evaluated using 7 unique readability formulas before and after applying a standardized method to improve readability while preserving critical content (reducing the use of ≥3-syllable words and ensuring sentence length is ≤15 words). Paired-samples t-tests were conducted to assess the relationship between the reading levels of the original PEMs and those of the edited PEMs. Results Reading levels differed significantly between the 22 original PEMs and the edited PEMs across all 7 readability formulas (P < .01). The mean Flesch-Kincaid Grade Level of the original PEMs (9.8 ± 1.4) was significantly higher than that of the edited PEMs (6.4 ± 1.1) (P = 1.9 × 10-13). 4.0% of original PEMs met the National Institutes of Health recommendation of a sixth-grade reading level, compared with 48.0% of modified PEMs. Conclusions A standardized method that reduces the use of ≥3-syllable words and ensures sentence length is ≤15 words significantly reduces the reading-grade level of PEMs for sports-related knee injuries. Orthopaedic organizations and institutions should apply this simple standardized method when creating PEMs to enhance health literacy. Clinical Relevance The readability of PEMs is important when communicating technical material to patients. 
While many studies have suggested strategies to improve the readability of PEMs, literature describing the benefit of these proposed changes is scarce. The information from this study details a simple standardized method to use when creating PEMs that may enhance health literacy and improve patient outcomes.
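The editing rule described in this study (keep sentences to 15 words or fewer, avoid words of 3 or more syllables) can be sketched as a screening script that flags revision targets in a draft PEM. This is an illustrative sketch, not the authors' actual tool; the regex tokenization and the vowel-group syllable counter are simplifying assumptions, so counts will only approximate dictionary-based syllable counts.

```python
import re

def count_syllables(word):
    # Naive heuristic: count runs of consecutive vowels (an assumption;
    # the study does not specify a syllable-counting algorithm).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flag_revision_targets(text, max_words=15, complex_syllables=3):
    """Return (sentences longer than max_words, words with >= complex_syllables)."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    long_sentences = [s for s in sentences
                      if len(re.findall(r"[A-Za-z']+", s)) > max_words]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = sorted({w for w in words
                            if count_syllables(w) >= complex_syllables})
    return long_sentences, complex_words
```

An editor would shorten each flagged sentence and substitute simpler words for the flagged ones, then re-run a readability formula to confirm the grade level dropped.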
Affiliation(s)
- Steven DeFroda
- Address correspondence to Steven DeFroda, M.D., M.Eng., Missouri Orthopaedic Institute, 1100 Virginia Ave., DC953.00, Columbia, MO 65201.
|
43
|
Readability of online monkeypox patient education materials: Improved recognition of health literacy is needed for dissemination of infectious disease information. Infect Dis Health 2022; 28:88-94. [PMID: 36564245 PMCID: PMC9770025 DOI: 10.1016/j.idh.2022.11.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Revised: 11/03/2022] [Accepted: 11/03/2022] [Indexed: 12/24/2022]
Abstract
BACKGROUND Health literacy is key to navigating the current global epidemic of misinformation and inaccuracy relating to healthcare. The American Medical Association (AMA) suggests health information should be written at the American sixth-grade level. With the monkeypox outbreak being declared a Public Health Emergency of International Concern (PHEIC) in July 2022, we sought to assess the readability of online patient education materials (PEMs) relating to monkeypox to see if they are at the target level of readability. METHODS A search was conducted on Google.com using the search term 'Monkeypox'. The top 50 English-language webpages with patient education materials (PEMs) relating to monkeypox were compiled and categorised by country of publication and URL domain. Readability was assessed using five readability tools: Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and Simple Measure of Gobbledygook Index (SMOG). An unpaired t-test for URL domain and a one-way ANOVA for country were performed to determine influence on readability. RESULTS Three of the five tools (FRES, GFI, CLI) identified no webpages that met the target readability score. The FKGL and SMOG tools identified one (2%) and two (4%) webpages respectively that met the target level. Country and URL domain demonstrated no influence on readability. CONCLUSION Online PEMs relating to monkeypox are written above the recommended reading level. Based on the previously established effect of health literacy, this is likely exacerbating health inequalities. This study highlights the need for readability to be considered when publishing online PEMs.
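The two Flesch measures used throughout these studies are fixed linear functions of words per sentence and syllables per word. A minimal sketch follows; the vowel-group syllable heuristic is an assumption, so scores will only approximate published calculators, which use dictionary-based syllable counts.

```python
import re

def _syllables(word):
    # Vowel-run heuristic; approximate only.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text):
    """Return (FRES, FKGL) using the standard Flesch formulas."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl
```

Higher FRES means easier text (the target cited in these studies is roughly 65 or above), while FKGL maps directly to a U.S. school grade (target: sixth grade or below).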
|
44
|
How readable are orthognathic surgery consent forms? Int Orthod 2022; 20:100689. [PMID: 36117084 DOI: 10.1016/j.ortho.2022.100689] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 08/19/2022] [Accepted: 08/20/2022] [Indexed: 11/20/2022]
Abstract
BACKGROUND/OBJECTIVE The valid consent process for medical intervention requires the disclosure of information in a format that is easily understandable by the patient. The aim of this investigation was to assess the readability of orthognathic surgery informed consent forms (OSICFs). METHODS An online search was conducted to identify OSICFs for analysis. Forms that satisfied the inclusion/exclusion criteria were evaluated according to a standardised protocol. The readability of the content was assessed using three validated tools: the Simple Measure of Gobbledygook (SMOG) score, the Flesch-Kincaid Grade Level (FKGL) score, and the Flesch Reading Ease (FRE) score. RESULTS Most of the 26 evaluated OSICFs were sourced from websites within the United States (69.2%) and from oral and maxillo-facial surgery practices (76.9%). Two of the assessed forms were template OSICFs available from oral and maxillo-facial professional societies to their members. The scores from the three tools found that the content of 84.6% to 92.3% of the forms was "difficult" to read. The mean (SD) SMOG score for all evaluated OSICFs was 12.31 (2.22) [95% CI: 11.42 to 13.21]. The SMOG and FKGL scores were closely correlated (r = 0.99, P < 0.0001; 95% CI: 0.9864 to 0.9973). There was no association between SMOG scores and the number of words contained within each consent form (r = -0.047; 95% CI: -0.44 to 0.36). CONCLUSIONS The OSICFs surveyed in this investigation failed to meet recommended readability levels. A significant number of patients are not likely to understand the information contained within the forms. Orthodontists are advised that poor literacy skills may preclude their patients from validly consenting to orthognathic surgery treatment procedures.
|
45
|
Michel C, Dijanic C, Abdelmalek G, Sudah S, Kerrigan D, Gorgy G, Yalamanchili P. Readability assessment of patient educational materials for pediatric spinal deformity from top academic orthopedic institutions. Spine Deform 2022; 10:1315-1321. [PMID: 35819724 PMCID: PMC9579064 DOI: 10.1007/s43390-022-00545-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 06/11/2022] [Indexed: 02/10/2023]
Abstract
STUDY DESIGN Cross-sectional analysis of patient educational materials from top pediatric orthopedic hospital websites. OBJECTIVE To assess the readability of online educational materials on pediatric spinal deformity from top pediatric orthopedic hospital websites. The internet has become an increasingly popular source of health information for patients and their families. Healthcare experts recommend that the readability of online education materials be at or below a 6th-grade reading level. However, previous studies have demonstrated that the readability of online education materials on various orthopedic topics is too advanced for the average patient. To date, the readability of online education materials for pediatric spinal deformity has not been analyzed. METHODS Online patient education materials from the top 25 pediatric orthopedic institutions, as ranked by U.S. News and World Report for pediatric orthopedics, were assessed using the following readability measures: Flesch-Kincaid (FK), Flesch Reading Ease, Gunning Fog Index, Coleman-Liau Index, Simple Measure of Gobbledygook Index (SMOG), Automated Readability Index, FORCAST, and the New Dale-Chall Readability. Correlations of academic institutional ranking, geographic location, and the use of concomitant multimedia modalities with FK scores were evaluated using Spearman correlation. RESULTS Only 48% (12 of 25) of top pediatric orthopedic hospitals provided online information regarding pediatric spinal deformity at or below a 6th-grade reading level. The mean FK score was 9.0 ± 2.7, Flesch Reading Ease 50.8 ± 15.6, Gunning Fog Score 10.6 ± 3.1, Coleman-Liau Index 11.6 ± 2.6, SMOG Index 11.7 ± 2.0, Automated Readability Index 8.6 ± 2.8, and Dale-Chall Readability Score 6.4 ± 1.4. There was no significant correlation between institutional ranking, geographic location, or use of multimedia and FK scores. 
CONCLUSION Online educational materials for pediatric spinal deformity from top pediatric orthopedic institutional websites demonstrate poor readability.
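The rank correlation analysis used in this study pairs each institution's ranking with its readability score and correlates the ranks alone. A minimal stdlib-only Spearman sketch, assuming no tied values (real implementations such as scipy.stats.spearmanr average tied ranks):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the difference-of-ranks formula.
    Assumes all values within each list are distinct (no tie correction)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

A rho near zero, as the study reports, means an institution's ranking tells you essentially nothing about how readable its patient materials are.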
Affiliation(s)
- Christopher Michel
- Department of Orthopedic Surgery, Monmouth Medical Center-RWJBarnabas Health, Long Branch, NJ, 07740, USA
- Christopher Dijanic
- Department of Orthopedic Surgery, Monmouth Medical Center-RWJBarnabas Health, Long Branch, NJ, 07740, USA
- Suleiman Sudah
- Department of Orthopedic Surgery, Monmouth Medical Center-RWJBarnabas Health, Long Branch, NJ, 07740, USA
- Daniel Kerrigan
- Department of Orthopedic Surgery, Monmouth Medical Center-RWJBarnabas Health, Long Branch, NJ, 07740, USA
- George Gorgy
- Department of Orthopedic Surgery, Monmouth Medical Center-RWJBarnabas Health, Long Branch, NJ, 07740, USA
- Praveen Yalamanchili
- Department of Orthopedic Surgery, Monmouth Medical Center-RWJBarnabas Health, Long Branch, NJ, 07740, USA
|
46
|
Meade MJ, Dreyer CW. A Content Analysis of Orthodontic Treatment Information Contained within the Websites of General Dental Practices. JOURNAL OF CONSUMER HEALTH ON THE INTERNET 2022. [DOI: 10.1080/15398285.2022.2124494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Affiliation(s)
- Maurice J. Meade
- Orthodontic Unit, Adelaide Dental School, The University of Adelaide, Adelaide, Australia
- Craig W. Dreyer
- Orthodontic Unit, Adelaide Dental School, The University of Adelaide, Adelaide, Australia
|
47
|
Downey T, Millar BC, Moore JE. Improving health literacy with mumps, measles and rubella (MMR) vaccination: comparison of the readability of MMR patient-facing literature and MMR scientific abstracts. Ther Adv Vaccines Immunother 2022; 10:25151355221118812. [PMID: 36035444 PMCID: PMC9400405 DOI: 10.1177/25151355221118812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 07/13/2022] [Indexed: 11/16/2022] Open
Abstract
Background: Historically, many factors have influenced mumps, measles and rubella (MMR) vaccine uptake, including media bias, social/economic determinants, parental education level, deprivation and concerns over vaccine safety. Readability metrics delivered through online tools are now emerging as a means for healthcare professionals to determine the readability of patient-facing vaccine information. The aim of this study was to examine the readability of patient-facing materials describing MMR vaccination, through employment of nine readability and text parameter metrics, and to compare these with MMR vaccination literature for healthcare professionals and scientific abstracts relating to MMR vaccination. Materials and methods: The subscription-based online Readable program (readable.com) was used to determine nine readability indices: established readability metrics (n = 5) (Flesch-Kincaid Grade Level, Gunning Fog Index, SMOG Index, Flesch Reading Ease and New Dale-Chall Score) and text parameters (n = 4) (sentence count, word count, number of words per sentence, number of syllables per word) for 47 MMR vaccination texts [patient-facing literature (n = 22); healthcare professional-focused literature (n = 8); scientific abstracts (n = 17)]. Results: Patient-facing vaccination literature had a Flesch Reading Ease score of 58.4 and a Flesch-Kincaid Grade Level of 8.1, in comparison with poorer readability scores for healthcare professional literature of 30.7 and 12.6, respectively. MMR scientific abstracts had the poorest readability (24.0 and 14.8, respectively). Sentence structure was also considered: better readability metrics were correlated with significantly fewer words per sentence and fewer syllables per word. Conclusion: Use of these readability tools enables authors to ensure their research is more readable to a lay audience. 
Patient co-production initiatives would help to ensure not only that the target audience can read the literature, but that they understand the content. Increased use of patient-centric focus groups would give better insights into the reasons for MMR-associated vaccine hesitancy and vaccine refusal.
Affiliation(s)
- Tina Downey
- School of Biomedical Sciences, Ulster University, Coleraine, UK
- John E Moore
- Laboratory for Disinfection and Pathogen Elimination Studies, Northern Ireland Public Health Laboratory, Nightingale (Belfast City) Hospital, Corry Building, Lisburn Road, Belfast BT9 7AD, UK
|
48
|
Readability is decreasing in language and linguistics. Scientometrics 2022. [DOI: 10.1007/s11192-022-04427-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
|
49
|
Docimo S, Seeras K, Acho R, Pryor A, Spaniolas K. Academic and community hernia center websites in the United States fail to meet healthcare literacy standards of readability. Hernia 2022; 26:779-786. [PMID: 35344107 DOI: 10.1007/s10029-022-02584-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Accepted: 02/09/2022] [Indexed: 11/30/2022]
Abstract
BACKGROUND Health literacy is considered the single best predictor of health status. Organizations including the American Medical Association (AMA) and the National Institutes of Health (NIH) have recommended that the readability of patient education materials not exceed the sixth-grade level. Our study focuses on the readability of self-designated hernia center websites at both academic and community organizations across the United States to determine their ability to dispense patient information at an appropriate reading level. METHODS A search was conducted utilizing the Google search engine. The key words "Hernia Center" and "University Hernia Center" were used to identify links to surgical programs within the United States. The following readability tests were conducted: Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), Simple Measure of Gobbledygook (SMOG), and Flesch Reading Ease (FRE) score. RESULTS Of 96 websites, zero (0%) fulfilled the recommended reading level in all four tests. The mean test scores for all non-academic centers (n = 50) were as follows: FKGL (11.14 ± 2.68), GFI (14.39 ± 3.07), CLI (9.29 ± 2.48) and SMOG (13.38 ± 2.03). The mean test scores for all academic programs (n = 46) were as follows: FKGL (11.7 ± 2.66), GFI (15.01 ± 2.99), CLI (9.34 ± 1.91) and SMOG (13.71 ± 2.02). A one-sample t test was performed to compare the FKGL, GFI, CLI, and SMOG scores for each hernia center to a value of 6.9 (6.9 or less is considered an acceptable reading level); a p value of 0.001 was noted for all four tests, demonstrating statistical significance. The academic and community readability scores were compared with a two-sample t test; with a p value of > 0.05 for all four tests, there were no statistically significant differences. CONCLUSION Neither academic nor community hernia centers met the appropriate reading level of sixth grade or less. 
Steps to improve patient comprehension and/or involvement in their care should include appropriate reading-level material, identification of patients with low literacy levels (with intervention or additional counseling when appropriate), and the addition of adjunct learning materials such as videos.
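The one-sample comparison used in this study reduces to a standard t statistic against the 6.9 benchmark. A stdlib-only sketch with made-up example scores (the study's raw per-center data are not reproduced here):

```python
import math
import statistics

def one_sample_t(scores, mu):
    """t statistic for H0: population mean equals mu.
    Degrees of freedom = len(scores) - 1; a t table or scipy.stats
    would convert this statistic to the p value reported in the study."""
    n = len(scores)
    mean = statistics.fmean(scores)
    sd = statistics.stdev(scores)  # sample (n-1) standard deviation
    return (mean - mu) / (sd / math.sqrt(n))
```

With grade-level scores clustered around 11, as the hernia-center means were, the statistic against mu = 6.9 is very large, which is consistent with the highly significant result reported.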
Affiliation(s)
- S Docimo
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School, Medicine at Stony Brook University, Stony Brook, NY, USA.
- K Seeras
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School, Medicine at Stony Brook University, Stony Brook, NY, USA
- R Acho
- Henry Ford Macomb, Detroit, MI, USA
- A Pryor
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School, Medicine at Stony Brook University, Stony Brook, NY, USA
- K Spaniolas
- Division of Bariatric, Foregut, and Advanced Gastrointestinal Surgery, Renaissance School, Medicine at Stony Brook University, Stony Brook, NY, USA
|
50
|
The popularity of contradictory information about COVID-19 vaccine on social media in China. COMPUTERS IN HUMAN BEHAVIOR 2022; 134:107320. [PMID: 35527790 PMCID: PMC9068608 DOI: 10.1016/j.chb.2022.107320] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2022] [Revised: 03/01/2022] [Accepted: 05/01/2022] [Indexed: 01/25/2023]
Abstract
To eliminate the impact of contradictory information on vaccine hesitancy on social media, this research developed a framework to compare the popularity of information expressing contradictory attitudes towards the COVID-19 vaccine or vaccination, mine the similarities and differences among the characteristics of contradictory information, and determine which factors most influenced popularity. We called the Sina Weibo API to collect data. First, to extract multi-dimensional features from original tweets and quantify their popularity, content analysis, sentiment computing and k-medoids clustering were used. Statistical analysis showed that anti-vaccine tweets were more popular than pro-vaccine tweets, although the difference was not significant. Then, by visualizing feature centrality and clustering in information-feature networks, we found differences in the text characteristics, information display dimension, topic, sentiment, readability, and poster characteristics of original tweets expressing different attitudes. Finally, we employed regression models and SHapley Additive exPlanations to explore and explain the relationship between tweets' popularity and their content and contextual features. Suggestions for adjusting the organizational strategy of contradictory information to control its popularity along different dimensions, such as the poster's influence, activity and identity, and the tweet's topic, sentiment and readability, were proposed to reduce vaccine hesitancy.
|