1
Mishra V, Sarraju A, Kalwani NM, Dexter JP. Evaluation of Prompts to Simplify Cardiovascular Disease Information Generated Using a Large Language Model: Cross-Sectional Study. J Med Internet Res 2024;26:e55388. PMID: 38648104. DOI: 10.2196/55388.
Abstract
In this cross-sectional study, we evaluated the completeness, readability, and syntactic complexity of cardiovascular disease prevention information produced by GPT-4 in response to 4 kinds of prompts.
Affiliation(s)
- Vishala Mishra
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States
- Ashish Sarraju
- Department of Cardiovascular Medicine, Cleveland Clinic, Cleveland, OH, United States
- Neil M Kalwani
- Veterans Affairs Palo Alto Health Care System, Palo Alto, CA, United States
- Division of Cardiovascular Medicine and the Cardiovascular Institute, Department of Medicine, Stanford University School of Medicine, Stanford, CA, United States
- Joseph P Dexter
- Data Science Initiative, Harvard University, Allston, MA, United States
- Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, United States
- Institute of Collaborative Innovation, University of Macau, Taipa, Macao
2
Bralić N, Mijatović A, Marušić A, Buljan I. Conclusiveness, readability and textual characteristics of plain language summaries from medical and non-medical organizations: a cross-sectional study. Sci Rep 2024;14:6016. PMID: 38472285. DOI: 10.1038/s41598-024-56727-6.
Abstract
This cross-sectional study compared plain language summaries (PLSs) from medical and non-medical organizations with respect to conclusiveness, readability and textual characteristics. All Cochrane (medical PLSs, n = 8638) and Campbell Collaboration and International Initiative for Impact Evaluation (non-medical PLSs, n = 163) PLSs of the latest versions of systematic reviews published until 10 November 2022 were analysed. PLSs were classified into three conclusiveness categories (conclusive, inconclusive and unclear), using a machine learning tool for medical PLSs and two expert raters for non-medical PLSs. A higher proportion of non-medical PLSs were conclusive (17.79% vs 8.40%, P < 0.0001); non-medical PLSs also had higher readability (median number of years of education needed to read the text with ease 15.23 (interquartile range (IQR) 14.35 to 15.96) vs 15.51 (IQR 14.31 to 16.77), P = 0.010) and used more words (median 603 (IQR 539.50 to 658.50) vs 345 (IQR 202 to 476), P < 0.001). Language analysis showed that medical PLSs scored higher for disgust and fear, whereas non-medical PLSs scored higher for positive emotions. The observed differences between the medical and non-medical fields may be attributable to differences in publication methodologies or disciplinary conventions. This approach to analysing PLSs is crucial for enhancing the overall quality of PLSs and knowledge translation to the general public.
Affiliation(s)
- Nensi Bralić
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2A, 21000, Split, Croatia
- Antonija Mijatović
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2A, 21000, Split, Croatia
- Ana Marušić
- Department of Research in Biomedicine and Health, University of Split School of Medicine, Šoltanska 2A, 21000, Split, Croatia
- Ivan Buljan
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Split, Split, Croatia
3
Doshi R, Amin KS, Khosla P, Bajaj SS, Chheang S, Forman HP. Quantitative Evaluation of Large Language Models to Streamline Radiology Report Impressions: A Multimodal Retrospective Analysis. Radiology 2024;310:e231593. PMID: 38530171. DOI: 10.1148/radiol.231593.
Abstract
Background The complex medical terminology of radiology reports may cause confusion or anxiety for patients, especially given increased access to electronic health records. Large language models (LLMs) can potentially simplify radiology report readability. Purpose To compare the performance of four publicly available LLMs (ChatGPT-3.5 and ChatGPT-4, Bard [now known as Gemini], and Bing) in producing simplified radiology report impressions. Materials and Methods In this retrospective comparative analysis of the four LLMs (accessed July 23 to July 26, 2023), the Medical Information Mart for Intensive Care (MIMIC)-IV database was used to gather 750 anonymized radiology report impressions covering a range of imaging modalities (MRI, CT, US, radiography, mammography) and anatomic regions. Three distinct prompts were employed to assess the LLMs' ability to simplify report impressions. The first prompt (prompt 1) was "Simplify this radiology report." The second prompt (prompt 2) was "I am a patient. Simplify this radiology report." The last prompt (prompt 3) was "Simplify this radiology report at the 7th grade level." Each prompt was followed by the radiology report impression and was queried once. The primary outcome was simplification as assessed by readability score. Readability was assessed using the average of four established readability indexes. The nonparametric Wilcoxon signed-rank test was applied to compare reading grade levels across LLM output. Results All four LLMs simplified radiology report impressions across all prompts tested (P < .001). Within prompts, differences were found between LLMs. Providing the context of being a patient or requesting simplification at the seventh-grade level reduced the reading grade level of output for all models and prompts (except prompt 1 to prompt 2 for ChatGPT-4) (P < .001). 
Conclusion Although the success of each LLM varied depending on the specific prompt wording, all four models simplified radiology report impressions across all modalities and prompts tested. © RSNA, 2024. Supplemental material is available for this article. See also the editorial by Rahsepar in this issue.
Affiliation(s)
- Rushabh Doshi
- Kanhai S Amin
- Pavan Khosla
- Simar S Bajaj
- Sophie Chheang
- Howard P Forman
- From the Yale School of Medicine (R.D., P.K.) and Department of Radiology and Biomedical Imaging (K.S.A., S.S.B., S.C., H.P.F.), Yale School of Medicine, 333 Cedar St, New Haven, CT 06510; Yale School of Management, New Haven, Conn (H.P.F.); and Department of Health Policy and Management, Yale School of Public Health, New Haven, Conn (H.P.F.)
4
Amin KS, Mayes LC, Khosla P, Doshi RH. Assessing the Efficacy of Large Language Models in Health Literacy: A Comprehensive Cross-Sectional Study. Yale J Biol Med 2024;97:17-27. PMID: 38559461. PMCID: PMC10964816. DOI: 10.59249/ztoz1966.
Abstract
Enhanced health literacy in children has been empirically linked to better long-term health outcomes; however, few interventions have been shown to improve health literacy. In this context, we investigated whether large language models (LLMs) can serve as a medium to improve health literacy in children. We tested pediatric conditions using 26 different prompts in ChatGPT-3.5, ChatGPT-4, Microsoft Bing, and Google Bard (now known as Google Gemini). The primary outcome measure was the reading grade level (RGL) of output, as assessed by the Gunning Fog, Flesch-Kincaid Grade Level, Automated Readability Index, and Coleman-Liau indices. Word counts were also assessed. Across all models, output for basic prompts such as "Explain" and "What is (are)" was at or above the tenth-grade RGL. When prompts specified explaining conditions at the first- through twelfth-grade levels, the LLMs varied in their ability to tailor responses by grade level. ChatGPT-3.5 provided responses ranging from the seventh-grade to the college freshman RGL, while ChatGPT-4 produced responses from the tenth-grade to the college senior RGL. Microsoft Bing provided responses from the ninth- to eleventh-grade RGL, while Google Bard provided responses from the seventh- to tenth-grade RGL. LLMs face challenges in crafting outputs below a sixth-grade RGL. However, their capability to modify outputs above this threshold provides a potential mechanism for adolescents to explore, understand, and engage with information about their health conditions, spanning from simple to complex terms. Future studies are needed to verify the accuracy and efficacy of these tools.
Affiliation(s)
- Linda C. Mayes
- Yale Child Study Center, Yale School of Medicine, New Haven, CT, USA
5
Ciffone N, McNeal CJ, McGowan MP, Ferdinand KC. Lipoprotein(a): An important piece of the ASCVD risk factor puzzle across diverse populations. Am Heart J Plus 2024;38:100350. PMID: 38510747. PMCID: PMC10945898. DOI: 10.1016/j.ahjo.2023.100350.
Abstract
Elevated lipoprotein(a) (Lp[a]) is an independent, genetic risk factor for atherosclerotic cardiovascular disease (ASCVD) that impacts ~1.4 billion people globally. Generally, Lp(a) levels remain stable over time; thus, most individuals need only undergo Lp(a) testing through a non-fasting blood draw once in their lifetime, unless elevated Lp(a) is identified. Despite the convenience of the test for clinicians and patients, routine Lp(a) testing has not been widely adopted. This review provides a guide to the benefits of Lp(a) testing and solutions for overcoming common barriers in practice, including access to testing and lack of awareness. Lp(a) testing provides the opportunity to reclassify ASCVD risk and drive intensive cardiovascular risk factor management in individuals with elevated Lp(a), and to identify patients potentially less likely to respond to statins. Moreover, cascade screening can help to identify elevated Lp(a) in relatives of individuals with a personal or family history of premature ASCVD. Overall, given the profound impact of elevated Lp(a) on cardiovascular risk, Lp(a) testing should be an essential component of risk assessment by primary and specialty care providers.
Affiliation(s)
- Nicole Ciffone
- Arizona Center for Advanced Lipidology, 3925 E Fort Lowell Rd, Tucson, AZ 85712, USA
- Mary P. McGowan
- The Family Heart Foundation, 680 E. Colorado Blvd, Suite 180, Pasadena, CA 91101, USA
- Dartmouth Hitchcock Medical Center, Geisel School of Medicine at Dartmouth, 1 Rope Ferry Rd, Hanover, NH 03755, USA
- Keith C. Ferdinand
- John W. Deming Department of Medicine, Tulane University School of Medicine, 1430 Tulane Avenue, New Orleans, LA 70112, USA
6
Furukawa E, Okuhara T, Okada H, Nishiie Y, Kiuchi T. Evaluating the understandability and actionability of online CKD educational materials. Clin Exp Nephrol 2024;28:31-39. PMID: 37715844. PMCID: PMC10766677. DOI: 10.1007/s10157-023-02401-6.
Abstract
BACKGROUND Previous studies have not fully determined whether online educational materials on chronic kidney disease (CKD) for Japanese patients are easy to understand and help change behavior. Therefore, this study quantitatively assessed the understandability and actionability of online CKD education materials. METHODS In September 2021, we searched Google and Yahoo Japan using the keywords "kidney," "kidney disease," "CKD," "chronic kidney disease," and "renal failure" to identify 538 webpages. We used the Japanese version of the Patient Education Materials Assessment Tool (PEMAT), scored from 0 to 100%, to evaluate the understandability and actionability of webpages, with a cutoff point of 70%. RESULTS Of the 186 materials included, overall understandability and actionability were 61.5% (± 16.3%) and 38.7% (± 30.6%), respectively. The materials used highly technical terminology and lacked clear, concise charts and illustrations to encourage action. Compared with materials on lifestyle modification, materials on CKD overview, symptoms/signs, examination, and treatment scored significantly lower on the PEMAT. In addition, materials produced by medical institutions and academic organizations scored significantly lower than those produced by for-profit companies. CONCLUSION Medical institutions and academic organizations are encouraged to use plain language and to attach explanations of medical terms when preparing materials for patients. They should also improve visual aids to promote healthy behaviors.
Affiliation(s)
- Emi Furukawa
- Department of Health Communication, The University of Tokyo Graduate School of Medicine, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Tsuyoshi Okuhara
- Department of Health Communication, School of Public Health, The University of Tokyo, Tokyo, Japan
- Hiroko Okada
- Department of Health Communication, School of Public Health, The University of Tokyo, Tokyo, Japan
- Yuriko Nishiie
- Department of Health Communication, The University of Tokyo Graduate School of Medicine, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Takahiro Kiuchi
- Department of Health Communication, School of Public Health, The University of Tokyo, Tokyo, Japan
7
Hillmann HAK, Angelini E, Karfoul N, Feickert S, Mueller-Leisse J, Duncker D. Accuracy and comprehensibility of chat-based artificial intelligence for patient information on atrial fibrillation and cardiac implantable electronic devices. Europace 2023;26:euad369. PMID: 38127304. PMCID: PMC10824484. DOI: 10.1093/europace/euad369.
Abstract
AIMS Natural language processing chatbots (NLPCs) can be used to gather information on medical content. However, these tools carry a potential risk of misinformation. This study aims to evaluate different aspects of responses given by different NLPCs to questions about atrial fibrillation (AF) and cardiac implantable electronic devices (CIEDs). METHODS AND RESULTS Questions were entered into three different NLPC interfaces. Responses were evaluated with regard to appropriateness, comprehensibility, appearance of confabulation, absence of relevant content, and recommendations given for clinically relevant decisions. Readability was also assessed by calculating word count and Flesch Reading Ease score. For Google Bard (GB), Bing Chat (BC), and ChatGPT Plus (CGP), respectively, 52, 60, and 84% of responses on AF and 16, 72, and 88% of responses on CIEDs were evaluated as appropriate. Assessment of comprehensibility showed that 96, 88, and 92% of responses on AF and 92, 88, and 100% of responses on CIEDs were comprehensible for GB, BC, and CGP, respectively. Readability varied between NLPCs. Relevant aspects were missing in 52% (GB), 60% (BC), and 24% (CGP) of responses for AF, and in 92% (GB), 88% (BC), and 52% (CGP) for CIEDs. CONCLUSION Responses generated by NLPCs are mostly easy to understand, with readability varying between NLPCs. The appropriateness of responses is limited and varies between NLPCs, and important aspects are often omitted. Thus, chatbots should be used with caution to gather medical information about cardiac arrhythmias and devices.
Affiliation(s)
- Henrike A K Hillmann
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Eleonora Angelini
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Nizar Karfoul
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- Sebastian Feickert
- Department of Cardiology and Internal Intensive Care Unit, Vivantes Clinic Am Urban, Dieffenbachstraße 1, 10967 Berlin, Germany
- Department of Cardiology, University Medical Center Rostock, Ernst-Heydemann-Straße 6, 18057 Rostock, Germany
- Johanna Mueller-Leisse
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
- David Duncker
- Hannover Heart Rhythm Center, Department of Cardiology and Angiology, Hannover Medical School, Carl-Neuberg-Str. 1, 30625 Hannover, Germany
8
Odeh M, Oqal M, AlDroubi H, Al-Omari B. Assessing the competency of pharmacists in writing effective curriculum vitae for job applications: a cross-sectional study and readability index evaluation. BMC Med Educ 2023;23:884. PMID: 37985997. PMCID: PMC10662548. DOI: 10.1186/s12909-023-04870-5.
Abstract
BACKGROUND In today's competitive job market, pharmacists must have a well-crafted curriculum vitae (CV), cover letter, and personal statement. However, non-native English speakers may face challenges in crafting effective job application documents. Jordan is one such country where English is a second language for many, and little is known about the CV and job application writing skills of Jordanian pharmacists. Therefore, this study examined Jordanian pharmacists' ability to write job applications, cover letters, and personal statements in English and investigated the association between several demographic and professional variables and the readability index of cover letters and personal statements. METHODS The data were blindly and independently reviewed by two researchers. The readability of the cover letters and personal statements was assessed using an online calculator that assigns a readability index score. A readability score of 7-12 was considered "target," while scores above 12 or below 7 were considered "complicated" or "simple," respectively. The relationship between readability index scores and other variables was analyzed using the chi-square test with a statistical significance level of 0.05. RESULTS The study recruited 592 pharmacists. Most applicants (62.3%) were female, and 60.0% had graduated more than six months before submitting their job applications. While 78.2% of the applications included a personal statement, only 34.8% included a cover letter, and 27.2% provided both. Of the 206 cover letters written in English, 43.2% were tailored and 80.6% were structured. Providing an official photo was associated with providing a cover letter (P < 0.001, Phi (φ) = 0.14), while providing a structured cover letter was associated with including a personal statement (P < 0.001, Phi (φ) = 0.24). Only 102 cover letters and 65 personal statements had readability index scores within the target range. CONCLUSION Most Jordanian pharmacists in this study undervalued the importance of cover letters and personal statements and lacked job application writing skills. The study also highlights the need to improve pharmacists' English proficiency so they can write effective job application documents in Jordan.
Affiliation(s)
- Mohanad Odeh
- Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmaceutical Sciences, The Hashemite University, P.O. Box 330127, Zarqa, 13133, Jordan
- Muna Oqal
- Department of Pharmaceutics and Pharmaceutical Technology, Faculty of Pharmaceutical Sciences, The Hashemite University, P.O. Box 330127, Zarqa, 13133, Jordan
- Hanan AlDroubi
- Department of Clinical Pharmacy and Pharmacy Practice, Faculty of Pharmaceutical Sciences, The Hashemite University, P.O. Box 330127, Zarqa, 13133, Jordan
- Basem Al-Omari
- Department of Public Health and Epidemiology, College of Medicine and Health Sciences, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi, United Arab Emirates
9
Nattam A, Vithala T, Wu TC, Bindhu S, Bond G, Liu H, Thompson A, Wu DTY. Assessing the Readability of Online Patient Education Materials in Obstetrics and Gynecology Using Traditional Measures: Comparative Analysis and Limitations. J Med Internet Res 2023;25:e46346. PMID: 37647115. PMCID: PMC10500363. DOI: 10.2196/46346.
Abstract
BACKGROUND Patient education materials (PEMs) can be vital sources of information for the general population. However, despite American Medical Association (AMA) and National Institutes of Health (NIH) recommendations to make PEMs easier to read for patients with low health literacy, PEMs often do not adhere to these recommendations. The readability of online PEMs in the obstetrics and gynecology (OB/GYN) field, in particular, has not been thoroughly investigated. OBJECTIVE The study sampled online OB/GYN PEMs and aimed to examine (1) agreement across traditional readability measures (TRMs), (2) adherence of online PEMs to AMA and NIH recommendations, and (3) whether the readability level of online PEMs varied by web-based source and medical topic. This study is not a scoping review; rather, it focused on scoring the readability of OB/GYN PEMs using the traditional measures to add empirical evidence to the literature. METHODS A total of 1576 online OB/GYN PEMs were collected via 3 major search engines. In total, 93 were excluded due to short content (fewer than 100 words), yielding 1483 PEMs for analysis. Each PEM was scored by 4 TRMs: Flesch-Kincaid grade level, Gunning fog index, Simple Measure of Gobbledygook, and Dale-Chall. The PEMs were categorized by publication source and medical topic by 2 research team members, and the readability scores of the categories were compared statistically. RESULTS The 4 TRMs did not agree with each other, leading to the use of an averaged readability (composite) score for comparison. The composite scores across all online PEMs were not normally distributed and had a median at the 11th grade. Governmental PEMs were the easiest to read among source categories, and PEMs about menstruation were the most difficult; however, the differences in readability scores among sources and topics were small. CONCLUSIONS This study found that online OB/GYN PEMs did not meet the AMA and NIH readability recommendations and would be difficult to read and comprehend for patients with low health literacy. Both findings connect well to the literature. This study highlights the need to improve the readability of OB/GYN PEMs to help patients make informed decisions. Research has been done to create more sophisticated readability measures for medical and health documents; once validated, these tools should be used by web-based content creators of health education materials.
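The averaged readability (composite) scoring used in this study can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: it averages only two of the four indexes (Flesch-Kincaid grade level and Gunning fog) and uses a naive vowel-group syllable counter, so its scores will differ from a validated tool's.

```python
import re

def naive_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; every word counts at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _counts(text: str):
    # Sentences approximated by terminal punctuation runs; words by letter runs.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    return sentences, words

def flesch_kincaid_grade(text: str) -> float:
    # FK grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59
    sentences, words = _counts(text)
    n = max(1, len(words))
    syllables = sum(naive_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59

def gunning_fog(text: str) -> float:
    # Fog = 0.4*((words/sentence) + 100*(complex words/words)); complex = 3+ syllables.
    sentences, words = _counts(text)
    n = max(1, len(words))
    complex_words = sum(1 for w in words if naive_syllables(w) >= 3)
    return 0.4 * ((n / sentences) + 100 * (complex_words / n))

def composite_grade(text: str) -> float:
    # Average the individual grade estimates, mirroring the study's composite score.
    return (flesch_kincaid_grade(text) + gunning_fog(text)) / 2
```

On short, monosyllabic sentences the composite lands near the early grades, while dense polysyllabic clinical prose scores far higher, which is the contrast the composite score is meant to capture.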
Affiliation(s)
- Anunita Nattam
- College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- Tripura Vithala
- College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- Tzu-Chun Wu
- College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- Shwetha Bindhu
- College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- Gregory Bond
- Department of Computer Science, University of Cincinnati, Cincinnati, OH, United States
- Hexuan Liu
- School of Criminal Justice, University of Cincinnati, Cincinnati, OH, United States
- Amy Thompson
- College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- Danny T Y Wu
- College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- Department of Computer Science, University of Cincinnati, Cincinnati, OH, United States
10
Ahmadzadeh K, Bahrami M, Zare-Farashbandi F, Adibi P, Boroumand MA, Rahimi A. Patient education information material assessment criteria: A scoping review. Health Info Libr J 2023;40:3-28. PMID: 36637218. DOI: 10.1111/hir.12467.
Abstract
BACKGROUND Patient education information material (PEIM) is an essential component of patient education programs in increasing patients' ability to cope with their diseases. Therefore, it is essential to consider the criteria that will be used to prepare and evaluate these resources. OBJECTIVE This paper aims to identify these criteria and recognize the tools or methods used to evaluate them. METHODS National and international databases and indexing banks, including PubMed, Scopus, Web of Science, ProQuest, the Cochrane Library, Magiran, SID and ISC, were searched for this review. Original or review articles, theses, short surveys, and conference papers published between January 1990 and June 2022 were included. RESULTS Overall, 4688 documents were retrieved, of which 298 documents met the inclusion criteria. The criteria were grouped into 24 overarching criteria. The most frequently used criteria were readability, quality, suitability, comprehensibility and understandability. CONCLUSION This review has provided empirical evidence to identify criteria, tools, techniques or methods for developing or evaluating a PEIM. The authors suggest that developing a comprehensive tool based on these findings is critical for evaluating the overall efficiency of PEIM using effective criteria.
Affiliation(s)
- Khadijeh Ahmadzadeh
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Student Research Committee, Sirjan School of Medical Sciences, Sirjan, Iran
- Masoud Bahrami
- Department of Adult Health Nursing, Nursing and Midwifery Care Research Center, School of Nursing and Midwifery, Isfahan University of Medical Sciences, Isfahan, Iran
- Firoozeh Zare-Farashbandi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Payman Adibi
- Gastroenterology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
- Mohammad Ali Boroumand
- Department of Medical Library and Information Sciences, School of Health Management and Information Sciences, Iran University of Medical Sciences, Tehran, Iran
- Alireza Rahimi
- Health Information Technology Research Center, Isfahan University of Medical Sciences, Isfahan, Iran
11
Health literacy characteristics of over-the-counter rapid antigen COVID-19 test materials. Res Social Adm Pharm 2022;18:4124-4128. PMID: 35987673. PMCID: PMC9376145. DOI: 10.1016/j.sapharm.2022.08.003.
Abstract
Background The United States Food & Drug Administration's emergency use authorization, in December 2020, of over-the-counter (OTC) rapid antigen COVID-19 tests was a pandemic control milestone. Objective To assess health literacy-related characteristics of OTC rapid antigen COVID-19 test materials. Methods Between September and December 2021, we identified eleven (n = 11) OTC rapid antigen COVID-19 tests available for purchase in the US. We assessed the readability (Flesch Reading Ease and Fernández-Huerta), formatting, and layout features of English- and Spanish-language step-by-step package insert instructions. Video-based step-by-step instructions were evaluated for understandability and actionability (Patient Education Materials Assessment Tool for Audiovisual Materials [PEMAT-A/V]), overall quality (Global Quality Scale [GQS]), and cultural diversity and inclusiveness. Descriptive analyses were performed using the IBM Statistical Package for the Social Sciences. Results Nine tests (81.8%) included English-language (≈8th-9th reading grade level) step-by-step instructions, while 4 included Spanish-language (≈10th-12th reading grade level) instructions. On average, instructions were printed on a tabloid-sized sheet of paper, with text size ranging from 4 to 12 points and nearly 20 illustrations. English-language video-based instructions (n = 6) ranged from 1:04 to 5:41 min, with PEMAT-A/V scores ranging from 80% to 100%. As indicated by GQS scores, English-language videos were of high quality (5 videos scored 5/5; 1 video scored 4/5). One test manufacturer's website included Spanish-language video-based instructions (time = 4:59 min; PEMAT-A/V = 100%; GQS = 5). Conclusions OTC COVID-19 test step-by-step instructions, both package inserts and video-based, included features shown to foster patient understanding and facilitate proper use. Moving forward, greater attention should be paid to expanding both Spanish-language and video-based OTC COVID-19 test material availability to improve accessibility across diverse populations.