1
Patel PN, Patel PA, Ahmed H, Lai KE, Mackay DD, Mollan SP, Truong-Le M. Assessment of the Quality, Accountability, and Readability of Online Patient Education Materials for Optic Neuritis. Neuroophthalmology 2024; 48:257-266. PMID: 38933748; PMCID: PMC11197904; DOI: 10.1080/01658107.2024.2301728.
Abstract
Most cases of optic neuritis (ON) occur in women and in patients between the ages of 15 and 45 years, a key demographic of individuals who seek health information on the internet. As clinical providers strive to ensure patients have accessible information to understand their condition, assessing the standard of online resources is essential. This study aimed to assess the quality, content, accountability, and readability of online information for optic neuritis. This cross-sectional study analyzed 11 freely available medical sites with information on optic neuritis, using PubMed as a gold standard for comparison. Twelve questions were composed to cover the information most relevant to patients, and each website was independently examined by four neuro-ophthalmologists. Readability was analyzed using an online readability tool. Journal of the American Medical Association (JAMA) benchmarks, four criteria designed to assess the quality of health information, were used to evaluate the accountability of each website. On average, websites scored 27.98 (SD ± 9.93, 95% CI 24.96-31.00) of 48 potential points (58.3%) on the twelve questions. There were significant differences in the comprehensiveness and accuracy of content across websites (p < .001). The mean reading grade level of websites was 11.90 (SD ± 2.52, 95% CI 8.83-15.25). No website achieved all four JAMA benchmarks. Interobserver reliability was robust between three of the four neuro-ophthalmologist (NO) reviewers (ρ = 0.77 between NO3 and NO2, ρ = 0.91 between NO3 and NO1, ρ = 0.74 between NO2 and NO1; all p < .05). The quality of freely available online information detailing optic neuritis varies by source, with significant room for improvement. The material presented is difficult to interpret and exceeds the recommended reading level for health information. Most websites reviewed did not provide comprehensive information regarding non-therapeutic aspects of the disease. Ophthalmology organizations should be encouraged to create content that is more accessible to the general public.
Affiliation(s)
- Prem N. Patel
- Department of Ophthalmology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Parth A. Patel
- Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, Georgia, USA
- Harris Ahmed
- Department of Ophthalmology, Loma Linda University Medical Center, Loma Linda, California, USA
- Kevin E. Lai
- Departments of Neurology, Ophthalmology, and Neurosurgery, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Ophthalmology Service, Richard L. Roudebush Veterans Affairs Medical Center, Indianapolis, Indiana, USA
- Neuro-Ophthalmology Section, Midwest Eye Institute, Carmel, Indiana, USA
- Circle City Neuro-Ophthalmology, Carmel, Indiana, USA
- Department of Ophthalmology and Visual Sciences, University of Louisville, Louisville, Kentucky, USA
- Devin D. Mackay
- Departments of Neurology, Ophthalmology, and Neurosurgery, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Susan P. Mollan
- Queen Elizabeth Hospital, Department of Ophthalmology, Birmingham, UK
- Melanie Truong-Le
- Department of Ophthalmology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
2
Eid K, Eid A, Wang D, Raiker RS, Chen S, Nguyen J. Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures. Ophthalmic Plast Reconstr Surg 2024; 40:212-216. PMID: 37972974; DOI: 10.1097/iop.0000000000002549.
Abstract
PURPOSE This study aims to compare the readability of patient education materials (PEMs) from the American Society of Ophthalmic Plastic and Reconstructive Surgery with that of PEMs generated by the AI chatbots ChatGPT and Google Bard. METHODS PEMs on 16 common American Society of Ophthalmic Plastic and Reconstructive Surgery topics were generated by 2 AI models, ChatGPT 4.0 and Google Bard, with and without a 6th-grade reading level prompt modifier. The PEMs were analyzed using 7 readability metrics: Flesch Reading Ease Score, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, Simple Measure of Gobbledygook Index Score, Automated Readability Index, and Linsear Write Readability Score. Each AI-generated PEM was compared with the equivalent American Society of Ophthalmic Plastic and Reconstructive Surgery PEM. RESULTS Across all readability indices, PEMs generated by unprompted ChatGPT 4.0 consistently scored as the most difficult to read (Flesch Reading Ease Score: 36.5; Simple Measure of Gobbledygook: 14.7). Google Bard generated content that was easier to read than both the American Society of Ophthalmic Plastic and Reconstructive Surgery and ChatGPT 4.0 (Flesch Reading Ease Score: 52.3; Simple Measure of Gobbledygook: 12.7). When prompted to produce PEMs at a 6th-grade reading level, both ChatGPT 4.0 and Bard significantly improved their readability scores, with prompted ChatGPT 4.0 consistently generating the content that was easiest to read (Flesch Reading Ease Score: 67.9; Simple Measure of Gobbledygook: 10.2). CONCLUSION This study suggests that AI tools, when guided by appropriate prompts, can generate accessible and comprehensible PEMs in the field of ophthalmic plastic and reconstructive surgery, balancing readability with the complexity of the necessary information.
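Several of the indices above reduce to closed-form functions of word, sentence, and syllable counts. As a rough illustration of two of them, a minimal Python sketch follows; this is not the tooling used in the study, and the vowel-group syllable counter is a naive assumption, so scores will deviate slightly from dedicated readability software:

```python
import re

def count_syllables(word: str) -> int:
    """Naive heuristic: count runs of vowels (minimum 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    """Compute Flesch Reading Ease (higher = easier, target >= 60)
    and Flesch-Kincaid Grade Level (US school grade, target <= 8)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    return {
        "FRE": 206.835 - 1.015 * wps - 84.6 * spw,
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
    }

scores = readability("The cat sat on the mat. The dog ran to the park.")
```

On simple one-syllable sentences like the example, FRE lands above 100 and FKGL below first grade, matching the intuition that both scales reward short words and short sentences.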
Affiliation(s)
- Kevin Eid
- Department of Ophthalmology, Moran Eye Center, University of Utah, Salt Lake City, Utah, U.S.A
- Alen Eid
- Department of Ophthalmology and Visual Sciences, West Virginia University, Morgantown, West Virginia, U.S.A
- Diane Wang
- Department of Ophthalmology and Visual Sciences, West Virginia University, Morgantown, West Virginia, U.S.A
- Rahul S Raiker
- Department of Medical Education, West Virginia University, Morgantown, West Virginia, U.S.A
- Stephen Chen
- Department of Medical Education, West Virginia University, Morgantown, West Virginia, U.S.A
- John Nguyen
- Department of Ophthalmology and Visual Sciences, West Virginia University, Morgantown, West Virginia, U.S.A
- Department of Otolaryngology and Head and Neck Surgery, West Virginia University, Morgantown, West Virginia, U.S.A
3
Skrzypczak T, Skrzypczak A, Szepietowski JC. Readability of Patient Electronic Materials for Atopic Dermatitis in 23 Languages: Analysis and Implications for Dermatologists. Dermatol Ther (Heidelb) 2024; 14:671-684. PMID: 38402338; PMCID: PMC10965833; DOI: 10.1007/s13555-024-01115-1.
Abstract
INTRODUCTION Patients search the Internet for information about various medical procedures and conditions. The main aim of this study was to evaluate the readability of online health information related to atopic dermatitis (AD). Online resources are becoming a standard in facilitating shared decision-making, and with a pipeline of new therapeutic options such as immunomodulators, patients' understanding of the complexity of AD is crucial. METHODS The term "atopic dermatitis," translated into 23 official European Union languages, was searched using the Google search engine. The first 50 records in each language were evaluated for suitability. Included materials were barrier-free, focused on patient education, and not categorized as advertisements. Article sources were classified into four categories: non-profit organizations, online shops, pharmaceutical companies, and dermatology clinics. Readability was assessed with the Lix score. RESULTS A total of 615 articles in Swedish, Spanish, Slovenian, Slovak, Romanian, Portuguese, Polish, Lithuanian, Latvian, Irish, Italian, Hungarian, Greek, German, French, Finnish, Estonian, English, Dutch, Danish, Czech, Croatian, and Bulgarian were evaluated. The overall mean Lix score was 56 ± 8, which classified the articles as very hard to comprehend. Significant differences in mean Lix scores were observed across the included languages (all P < 0.001). Articles released by non-profit organizations and pharmaceutical companies had the highest readability (P < 0.001). Low readability was correlated with high article prevalence (R2 = 0.189, P = 0.031). CONCLUSIONS Although there was an abundance of online articles related to AD, the readability of the available information was low. As online health information has become essential to shared decision-making between patients and physicians, an improvement in AD-related materials is needed.
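The Lix score used in this study is well suited to cross-language comparison because it needs no syllable counting, only word and sentence lengths. A minimal sketch of the standard formula (an illustration, not the study's own code):

```python
import re

def lix(text: str) -> float:
    """Lix readability index: average sentence length plus the
    percentage of long words (more than 6 letters)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"\w+", text)
    long_words = sum(1 for w in words if len(w) > 6)
    return len(words) / sentences + 100 * long_words / len(words)

score = lix("The cat sat on the mat.")
```

Long words and long sentences both push the score up; values above roughly 55 are conventionally read as very hard, which is how the mean of 56 in this study was classified.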
Affiliation(s)
- Tomasz Skrzypczak
- University Hospital in Wroclaw, Borowska 213, 50-556, Wroclaw, Poland
- Anna Skrzypczak
- Faculty of Dentistry, Wroclaw Medical University, Krakowska 26, 50-425, Wroclaw, Poland
- Jacek C Szepietowski
- Chair of the Department of Dermatology, Venerology and Allergology, Wroclaw Medical University, Chalubinskiego 1, 50-368, Wroclaw, Poland.
4
Boyd CJ, Hemal K, Sorenson TJ, Patel PA, Bekisz JM, Choi M, Karp NS. Artificial Intelligence as a Triage Tool during the Perioperative Period: Pilot Study of Accuracy and Accessibility for Clinical Application. Plast Reconstr Surg Glob Open 2024; 12:e5580. PMID: 38313585; PMCID: PMC10836902; DOI: 10.1097/gox.0000000000005580.
Abstract
Background Given the conversational properties of ChatGPT, we hypothesized that this artificial intelligence (AI) function can be used as a self-service tool in which clinical questions are directly answered by AI. Our objective was to assess the content, accuracy, and accessibility of AI-generated content regarding common perioperative questions for reduction mammaplasty. Methods ChatGPT (OpenAI, February Version, San Francisco, Calif.) was used to query 20 common patient concerns that arise in the perioperative period of a reduction mammaplasty. Searches were performed in duplicate for both a general term and a specific clinical question. Query outputs were analyzed both objectively and subjectively. Descriptive statistics, t tests, and chi-square tests were performed where appropriate, with a predetermined significance level of P less than 0.05. Results Of 40 total AI-generated outputs, mean word length was 191.8 words. Readability was at the thirteenth-grade level. Regarding content, 97.5% of all query outputs were on the appropriate topic. Medical advice was deemed reasonable in 100% of cases. General queries more frequently returned overarching background information, whereas specific queries more frequently returned prescriptive information (P < 0.0001). AI outputs specifically recommended following surgeon-provided postoperative instructions in 82.5% of instances. Conclusions Currently available AI tools, in their nascent form, can provide recommendations for common perioperative questions and concerns for reduction mammaplasty. With further calibration, AI interfaces may serve as a tool for fielding patient queries in the future; however, patients must always retain the ability to bypass technology and contact their surgeon.
Affiliation(s)
- Carter J Boyd
- From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y
- Kshipra Hemal
- From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y
- Thomas J Sorenson
- From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y
- Jonathan M Bekisz
- From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y
- Mihye Choi
- From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y
- Nolan S Karp
- From the Hansjörg Wyss Department of Plastic Surgery, NYU Langone, New York, N.Y
5
Tang KWK, Millar BC, Moore JE. Improving health literacy of antibiotic use in people with cystic fibrosis (CF): comparison of the readability of patient information leaflets (PILs) from the EU, USA and UK of 23 CF-related antibiotics used in the treatment of CF respiratory infections. JAC Antimicrob Resist 2023; 5:dlad129. PMID: 38046567; PMCID: PMC10691746; DOI: 10.1093/jacamr/dlad129.
Abstract
Background Antibiotic adherence is poor amongst people with cystic fibrosis (CF). Low-quality patient information leaflets (PILs), which accompany prescription antibiotics, with poor readability may contribute to poor antibiotic adherence, with the potential for development of antimicrobial resistance (AMR). The aim of this study was to examine the readability of antibiotic PILs used to treat CF lung infections. Methods CF-related antibiotics (n = 23; seven classes: aminoglycosides, β-lactams, fluoroquinolones, macrolides/lincosamides, oxazolidinones, tetracyclines, trimethoprim/sulfamethoxazole) were investigated. Readability of PILs (n = 141; 23 antibiotics) from the EU (n = 40), USA (n = 42) and UK (n = 59) was calculated. Results Mean [± standard error of mean (SEM)] values for the Flesch Reading Ease (FRE) for the EU, USA and UK were 50.0 ± 1.1, 56.2 ± 1.3 and 51.7 ± 1.1, respectively (FRE target ≥60). Mean (± SEM) values for the Flesch-Kincaid Grade Level (FKGL) for the EU, USA and UK were 9.0 ± 0.2, 7.5 ± 0.2 and 9.6 ± 0.2, respectively (FKGL target ≤8). US PILs were significantly shorter in word count (mean ± SEM = 1365 ± 52 words; P < 0.0001) than either UK or EU PILs, with fewer sentences (P < 0.0001), fewer words per sentence (P < 0.0001) and fewer syllables per word. The mean (± SEM) reading time of UK PILs (n = 59) was 12.7 ± 0.55 min. Conclusions Readability of antibiotic PILs is poor. Improving PIL readability may lead to improved health literacy, which may translate to increased antibiotic adherence and avoidance of AMR. Authors preparing written materials for the lay/patient CF community are encouraged to employ readability calculators, so that final materials fall within recommended readability reference parameters, to support the health (antibiotic) literacy of their readers.
Affiliation(s)
- Ka Wah Kelly Tang
- School of Biomedical Sciences, Ulster University, Cromore Road, Coleraine BT52 1SA Northern Ireland, UK
- Beverley C Millar
- School of Biomedical Sciences, Ulster University, Cromore Road, Coleraine BT52 1SA Northern Ireland, UK
- Laboratory for Disinfection and Pathogen Elimination Studies, Northern Ireland Public Health Laboratory, Belfast City Hospital, Lisburn Road, Belfast BT9 7AD Northern Ireland, UK
- Northern Ireland Regional Adult Cystic Fibrosis Centre, Level 8, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, Northern Ireland, UK
- John E Moore
- School of Biomedical Sciences, Ulster University, Cromore Road, Coleraine BT52 1SA Northern Ireland, UK
- Laboratory for Disinfection and Pathogen Elimination Studies, Northern Ireland Public Health Laboratory, Belfast City Hospital, Lisburn Road, Belfast BT9 7AD Northern Ireland, UK
- Northern Ireland Regional Adult Cystic Fibrosis Centre, Level 8, Belfast City Hospital, Lisburn Road, Belfast BT9 7AB, Northern Ireland, UK
6
Moore JE, Tang KWK, Millar BC. Improving health literacy of antifungal use: comparison of the readability of antifungal medicines information from Australia, EU, UK, and US of 16 antifungal agents across 5 classes (allylamines, azoles, echinocandins, polyenes, and others). Med Mycol 2023; 61:myad084. PMID: 37562942; PMCID: PMC10802897; DOI: 10.1093/mmy/myad084.
Abstract
Adherence to antifungals is poor in highly endemic regions where antifungal resistance is high. Poor readability of prescription/over-the-counter (OTC) antifungal information may contribute to poor adherence, because the patient may not fully understand the purpose, importance, and dosage of their antifungal medicine. As there are no reports on the readability of antifungals, this study examined the readability of patient-facing antifungal information. Antifungals (n = 16; five classes: allylamines, azoles, echinocandins, polyenes, and others [flucytosine and griseofulvin]) were selected. Readability of four sources of information, namely (i) summaries of product characteristics, (ii) patient information leaflets (PILs), (iii) OTC patient information, and (iv) patient web-based information, was calculated using Readable software to obtain readability scores (Flesch Reading Ease [FRE], Flesch-Kincaid Grade Level [FKGL], Gunning Fog Index, and Simple Measure of Gobbledygook [SMOG] Index) and text metrics (word count, sentence count, words/sentence, and syllables/word). PILs, web-based resources, and OTC patient information approached good readability (FRE mean ± sd = 52.8 ± 6.7, 58.6 ± 6.9, and 57.3 ± 7.4, respectively), though falling short of the ≥60 target. For FKGL (target ≤8.0), PILs, web-based resources, and OTC patient information scored mean ± sd = 8.5 ± 1.0, 7.2 ± 0.86, and 7.8 ± 0.1, respectively. The improved readability scores observed correlated with fewer words, fewer words per sentence, and fewer syllables per word. Improving readability may lead to improved patient health literacy. Healthcare professionals, academics, and publishers preparing written materials regarding antifungals for the lay/patient community are encouraged to employ readability calculators to check the readability of their work, so that the final material is within recommended readability reference parameters, to support the health literacy of their patients/readers.
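The Gunning Fog and SMOG indices reported here alongside FRE and FKGL are likewise closed-form functions of simple counts. A hedged sketch using the standard published formulas; the three-or-more-syllable "complex word" test below relies on a naive vowel-group heuristic, so results will differ somewhat from the Readable software used in the study:

```python
import math
import re

def _syllables(word: str) -> int:
    # naive heuristic: runs of vowels, minimum 1 per word
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_and_smog(text: str) -> tuple:
    """Return (Gunning Fog Index, SMOG Index) for a text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # complex/polysyllabic words: three or more syllables
    poly = sum(1 for w in words if _syllables(w) >= 3)
    fog = 0.4 * (len(words) / sentences + 100 * poly / len(words))
    smog = 1.0430 * math.sqrt(poly * (30 / sentences)) + 3.1291
    return fog, smog

fog, smog = fog_and_smog("The cat sat on the mat.")
```

Both indices approximate the years of schooling needed to follow the text, so lower values indicate more accessible material.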
Affiliation(s)
- John E Moore
- School of Biomedical Sciences, Ulster University, Cromore Road, Coleraine BT52 1SA, Northern Ireland, UK
- Laboratory for Disinfection and Pathogen Elimination Studies, Northern Ireland Public Health Laboratory, Belfast City Hospital, Lisburn Road, Belfast BT9 7AD, Northern Ireland, UK
- Ka Wah Kelly Tang
- School of Biomedical Sciences, Ulster University, Cromore Road, Coleraine BT52 1SA, Northern Ireland, UK
- Beverley C Millar
- School of Biomedical Sciences, Ulster University, Cromore Road, Coleraine BT52 1SA, Northern Ireland, UK
- Laboratory for Disinfection and Pathogen Elimination Studies, Northern Ireland Public Health Laboratory, Belfast City Hospital, Lisburn Road, Belfast BT9 7AD, Northern Ireland, UK
7
Patel P, Patel P, Ahmed H, Bal S, Armstrong G, Sridhar J. Content, Readability, and Accountability of Online Health Information for Patients Regarding Blue Light and Impact on Ocular Health. Cureus 2023; 15:e38715. PMID: 37303397; PMCID: PMC10249644; DOI: 10.7759/cureus.38715.
Abstract
Objective To evaluate the quality and readability of online health content regarding the ocular health effects of blue light. Methods Five commercial and five non-commercial websites with content regarding the ocular effects of blue light were examined. Quality evaluations were conducted using a 14-question assessment composed by the authors and the 16-question DISCERN instrument. Website accountability was evaluated via the Journal of the American Medical Association (JAMA) benchmarks. Readability was determined using an online tool (Readable). Correlational and comparative analyses were conducted where appropriate. Results The average questionnaire score was 84 (standard deviation [SD] ± 17.89, 95% confidence interval [CI] 77.32-90.68) out of 136 points (61.8%). Significant differences in quality were identified between websites (p = 0.02), with Healthline achieving the highest score. Compared to commercial websites, non-commercial websites trended toward having significantly higher median questionnaire scores (p = 0.06). Zero websites achieved all four JAMA benchmarks. The average reading grade level of content was 10.43 (SD ± 1.15, 95% CI 9.60-11.25), with differences between websites trending toward significance (p = 0.09). There was no correlation between resource readability and quality (ρ = 0.28; p = 0.43) or accountability (ρ = 0.47; p = 0.17). Conclusions There remain substantial deficiencies in the quality, accountability, and readability of online content concerning the effect of blue light on ocular health. Clinicians and patients must recognize such issues when recommending and consuming these resources.
Affiliation(s)
- Parth Patel
- Ophthalmology, Augusta University Medical College of Georgia, Augusta, USA
- Prem Patel
- Ophthalmology, University of Texas Southwestern Medical School, Dallas, USA
- Harris Ahmed
- Ophthalmology, Loma Linda University Medical Center, Loma Linda, USA
- Sila Bal
- Ophthalmology, Massachusetts Eye and Ear Infirmary, Boston, USA
8
Patel PA, Ali MJ. Social Media in the New Age of #Ophthalmology. Semin Ophthalmol 2023; 38:217-218. PMID: 36788662; DOI: 10.1080/08820538.2023.2178112.
9
Design of a Functional Eye Dressing for Treatment of the Vitreous Floater. J Pers Med 2022; 12:1659. PMID: 36294798; PMCID: PMC9604789; DOI: 10.3390/jpm12101659.
Abstract
With the rapid development of display technology, associated diseases of the human eye are also increasing. Eye floaters are one such condition. Herein, we present a functional eye dressing that allows oxygen and hydrogen to permeate the skin tissue around the eyes to improve the symptoms of floaters. In clinical tests, floater symptoms improved in 28 patients, with recovery rates for mild, moderate, and severe floaters of about 70%, 66.7%, and 83.3%, respectively.