1. Günay AE, Özer A, Yazıcı A, Sayer G. Comparison of ChatGPT versions in informing patients with rotator cuff injuries. JSES Int 2024; 8:1016-1018. [PMID: 39280147] [PMCID: PMC11401580] [DOI: 10.1016/j.jseint.2024.04.016]
Abstract
Background The aim of this study was to evaluate whether Chat Generative Pretrained Transformer (ChatGPT) can be recommended as a resource for informing patients planning rotator cuff repairs, and to assess the differences between the ChatGPT 3.5 and 4.0 versions in terms of information content and readability. Methods In August 2023, 13 questions commonly asked by patients with rotator cuff disease were posed to the ChatGPT 3.5 and ChatGPT 4.0 programs, on computers with different internet protocol addresses, by 3 surgeons experienced in rotator cuff surgery. After the answers of both versions were converted into text, their quality and readability were examined. Results The average Journal of the American Medical Association score for both versions was 0, and the average DISCERN score was 61.6. A statistically significant and strong correlation was found between the ChatGPT 3.5 and 4.0 DISCERN scores, and there was excellent agreement in DISCERN scores among the 3 evaluators for both versions. ChatGPT 3.5 was found to be less readable than ChatGPT 4.0. Conclusion The information provided by the ChatGPT conversational system was evaluated as high quality, but it had significant shortcomings in reliability owing to the lack of citations. Although ChatGPT 4.0 had higher readability scores, both versions were considered difficult to read.
Affiliation(s)
- Ali Eray Günay
- Department of Orthopedics and Traumatology, Kayseri City Training and Research Hospital, Kayseri, Turkey
- Alper Özer
- Department of Orthopedics and Traumatology, Kayseri City Training and Research Hospital, Kayseri, Turkey
- Alparslan Yazıcı
- Department of Orthopedics and Traumatology, Develi State Hospital, Kayseri, Turkey
- Gökhan Sayer
- Department of Orthopedics and Traumatology, Bursa City Training and Research Hospital, Bursa, Turkey
2. Gaudiani MA, Castle JP, Abbas MJ, Pratt BA, Myles MD, Moutzouros V, Lynch TS. ChatGPT-4 Generates More Accurate and Complete Responses to Common Patient Questions About Anterior Cruciate Ligament Reconstruction Than Google's Search Engine. Arthrosc Sports Med Rehabil 2024; 6:100939. [PMID: 39006779] [PMCID: PMC11240040] [DOI: 10.1016/j.asmr.2024.100939]
Abstract
Purpose To replicate a patient's internet search and evaluate the appropriateness of ChatGPT's answers to common patient questions about anterior cruciate ligament reconstruction compared with a Google web search. Methods A Google web search was performed using the term "anterior cruciate ligament reconstruction." The top 20 frequently asked questions and their responses were recorded. The prompt "What are the 20 most popular patient questions related to 'anterior cruciate ligament reconstruction?'" was input into ChatGPT, and the questions and responses were recorded. Questions were classified according to the Rothwell system, and responses were assessed for Flesch-Kincaid Grade Level, correctness, and completeness for both Google web search and ChatGPT. Results Three of 20 (15%) questions were similar between Google web search and ChatGPT. The most common question types among the Google web search results were value (8/20, 40%), fact (7/20, 35%), and policy (5/20, 25%). The most common question types among the ChatGPT results were fact (12/20, 60%), policy (6/20, 30%), and value (2/20, 10%). The mean Flesch-Kincaid Grade Level for Google web search responses was significantly lower than for ChatGPT responses (11.8 ± 3.8 vs 14.3 ± 2.2; P = .003). The mean correctness for Google web search answers was 1.47 ± 0.5, and mean completeness was 1.36 ± 0.5. Mean correctness for ChatGPT answers was 1.8 ± 0.4 and mean completeness was 1.9 ± 0.3, both significantly greater than for Google web search answers (P = .03 and P = .0003, respectively). Conclusions ChatGPT-4 generated more accurate and complete responses to common patient questions about anterior cruciate ligament reconstruction than Google's search engine. Clinical Relevance The use of artificial intelligence such as ChatGPT is expanding. It is important to understand the quality of this information and how the results of ChatGPT queries compare with those of Google web searches.
Affiliation(s)
- Michael A. Gaudiani
- Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, U.S.A
- Joshua P. Castle
- Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, U.S.A
- Muhammad J. Abbas
- Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, U.S.A
- Brittaney A. Pratt
- Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, U.S.A
- Marquisha D. Myles
- Michigan State University College of Human Medicine, Detroit, Michigan, U.S.A
- Vasilios Moutzouros
- Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, U.S.A
- T. Sean Lynch
- Department of Orthopedic Surgery, Henry Ford Health, Detroit, Michigan, U.S.A
3. Hanci V, Otlu B, Biyikoğlu AS. Assessment of the Readability of the Online Patient Education Materials of Intensive and Critical Care Societies. Crit Care Med 2024; 52:e47-e57. [PMID: 37962133] [DOI: 10.1097/ccm.0000000000006121]
Abstract
OBJECTIVES This study aimed to evaluate the readability of patient education materials (PEMs) on the websites of intensive and critical care societies. DATA SOURCES Websites of intensive and critical care societies that are members of The World Federation of Intensive and Critical Care and The European Society of Intensive Care Medicine. SETTING Cross-sectional, observational, internet-based readability study of website PEMs. STUDY SELECTION The readability of the PEMs available on the societies' sites was evaluated. DATA EXTRACTION The readability formulas used were the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Fog (GFOG). DATA SYNTHESIS One hundred twenty-seven PEMs from 11 different societies were included in our study. In the readability analysis, the FRES was 58.10 (48.85-63.77) (difficult), the mean FKGL and SMOG were 10.19 (8.93-11.72) and 11.10 (10.11-11.87) years, respectively, and the mean GFOG score was 12.73 (11.37-14.15) (very difficult). All readability formula results were significantly higher than the recommended sixth-grade level (p < 0.001). All PEMs were above the sixth-grade level when the societies were evaluated individually according to all readability results (p < 0.05). CONCLUSIONS Compared with the sixth-grade level recommended by the American Medical Association and the National Institutes of Health, the reading level of PEMs from intensive and critical care societies is high. PEMs from intensive and critical care societies should be prepared with attention to recommendations on readability.
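Several of the entries in this list grade materials with the Flesch Reading Ease Score and Flesch-Kincaid Grade Level. As a minimal sketch of how those two scores are computed from word, sentence, and syllable counts (the coefficients below are the standard published ones; the function names are illustrative):

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    # FRES = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    # Higher scores mean easier text; 60-70 corresponds to plain English.
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    # The result approximates the U.S. school grade needed to read the text.
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example passage statistics: 100 words, 5 sentences, 150 syllables.
fres = flesch_reading_ease(100, 5, 150)   # about 59.6, "fairly difficult"
fkgl = flesch_kincaid_grade(100, 5, 150)  # about 9.9, roughly 10th grade
```

In practice the hard part is the input counts, especially syllable counting, which is why the studies above rely on established readability tools rather than hand computation.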
Affiliation(s)
- Volkan Hanci
- Anesthesiology and Reanimation Department, Dokuz Eylul University, Izmir, Turkey
4. Warren E, Hurley ET, Park CN, Crook BS, Lorentz S, Levin JM, Anakwenze O, MacDonald PB, Klifto CS. Evaluation of information from artificial intelligence on rotator cuff repair surgery. JSES Int 2024; 8:53-57. [PMID: 38312282] [PMCID: PMC10837709] [DOI: 10.1016/j.jseint.2023.09.009]
Abstract
Purpose The purpose of this study was to analyze the quality and readability of information regarding rotator cuff repair surgery available from an online AI software. Methods An open AI model (ChatGPT) was used to answer 24 commonly asked patient questions on rotator cuff repair. Questions were stratified into one of three categories based on the Rothwell classification system: fact, policy, or value. The answers for each category were evaluated for reliability, quality, and readability using the Journal of the American Medical Association Benchmark criteria, DISCERN score, Flesch-Kincaid Reading Ease Score, and Flesch-Kincaid Grade Level. Results The Journal of the American Medical Association Benchmark criteria score for all three categories was 0, the lowest possible score, indicating that no reliable resources were cited. The DISCERN score was 51 for fact, 53 for policy, and 55 for value questions, all of which are considered good scores. Across question categories, the reliability portion of the DISCERN score was low due to the lack of cited resources. The Flesch-Kincaid Reading Ease Score (and Flesch-Kincaid Grade Level) was 48.3 (10.3) for the fact class, 42.0 (10.9) for the policy class, and 38.4 (11.6) for the value class. Conclusion The quality of information provided by the open AI chat system was generally high across all question types but had significant shortcomings in reliability due to the absence of source citations. The DISCERN scores of the AI-generated responses matched or exceeded previously published results from studies evaluating the quality of online information about rotator cuff repairs. The responses were at a U.S. 10th-grade or higher reading level, above the AMA and NIH recommendation of a 6th-grade reading level for patient materials. The AI software commonly referred the user to seek advice from orthopedic surgeons to improve the chances of a successful outcome.
Affiliation(s)
- Eric Warren
- Duke University School of Medicine, Duke University, Durham, NC, USA
- Eoghan T. Hurley
- Department of Orthopaedic Surgery, Duke University, Durham, NC, USA
- Caroline N. Park
- Department of Orthopaedic Surgery, Duke University, Durham, NC, USA
- Bryan S. Crook
- Department of Orthopaedic Surgery, Duke University, Durham, NC, USA
- Samuel Lorentz
- Department of Orthopaedic Surgery, Duke University, Durham, NC, USA
- Jay M. Levin
- Department of Orthopaedic Surgery, Duke University, Durham, NC, USA
- Oke Anakwenze
- Department of Orthopaedic Surgery, Duke University, Durham, NC, USA
- Peter B. MacDonald
- Section of Orthopaedic Surgery & The Pan Am Clinic, University of Manitoba, Winnipeg, MB, Canada
5. Yilmaz Hanci S. How readable and quality are online patient education materials about Helicobacter pylori? Assessment of the readability, quality and reliability. Medicine (Baltimore) 2023; 102:e35543. [PMID: 37904459] [PMCID: PMC10615431] [DOI: 10.1097/md.0000000000035543]
Abstract
This study aimed to examine the readability, reliability, quality, and content of patient education materials (PEMs) on the Internet about Helicobacter pylori (H pylori). A search was conducted on March 14, 2023, using the keyword "H pylori" in the Google search engine. The readability of PEMs was assessed using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level (FKGL), Simple Measure of Gobbledygook (SMOG), and Gunning Fog readability formulas. The reliability and quality of the websites were determined using the Journal of American Medical Association (JAMA) score, the Health on the Net Foundation code of conduct (HONcode), the global quality score (GQS), and the DISCERN score. A total of 93 patient education websites were included in the study. In the readability analysis of the PEMs, the FRES was 49.73 (47.46-52.00) (difficult), the mean FKGL and SMOG were 9.69 (9.26-10.12) and 9.28 (8.96-9.61) years, respectively, and the mean Gunning Fog score was 12.47 (12.03-12.91) (very difficult). Most of the evaluated patient education websites were commercial websites (n = 50, 53.8%). It was found that 16.1% of the websites were of high quality according to the GQS, 30.1% were HONcode certified, and 23.7% of the websites were highly reliable according to JAMA scores. There was no statistically significant difference between website typologies and readability (P > .05). However, there was a statistically significant difference between website typologies and quality and reliability scores (P < .005). Compared with the sixth-grade level recommended by the American Medical Association and the National Institutes of Health, the reading level of H pylori-related internet-based PEMs is quite high, while their reliability and quality were moderate to poor. PEMs on issues threatening public health should be prepared with attention to recommendations on readability.
Affiliation(s)
- Sevgi Yilmaz Hanci
- Specialist of Microbiology and Clinical Microbiology, Health Sciences University, İzmir Tepecik Training and Research Hospital, Microbiology and Clinical Microbiology Laboratory, Konak, Izmir, Turkey
6. Guzman AJ, Dela Rueda T, Williams N, Rayos Del Sol S, Jenkins S, Shin C, Bryant S, McGahan P, Chen J. Online Patient Education Resources for Anterior Cruciate Ligament Reconstruction: An Assessment of the Accuracy and Reliability of Information on the Internet Over the Past Decade. Cureus 2023; 15:e46599. [PMID: 37937032] [PMCID: PMC10627413] [DOI: 10.7759/cureus.46599]
Abstract
PURPOSE The purpose of this study was to evaluate the quality of patient education materials accessible through popular online search engines regarding anterior cruciate ligament (ACL) injuries and anterior cruciate ligament reconstruction (ACLR). METHODS Two search terms ("ACL surgery" and "ACL reconstruction") were entered into three search engines (Google, Yahoo, and Bing). The quality of information was scored using a novel scoring system developed and overseen by sports medicine orthopedic clinical research fellows and fellowship-trained orthopedic surgeons. Website quality, credibility, and readability were further assessed with the DISCERN score, the Journal of the American Medical Association (JAMA) benchmark criteria, and the Flesch-Kincaid Reading Grade Level (FKRGL), respectively. Health On the Net Code of Conduct (HONcode) certification was also used to assess the transparency of health information on each website. RESULTS We evaluated 39 websites. The average score for all websites was 11.2 ± 5.6 out of 28 total points. Six out of the 39 websites (41%) were HONcode certified. Websites with HONcode certification had higher average JAMA benchmark (3.5 ± 0.7) and DISCERN (44.6 ± 14.7) scores than websites without the certification (2.2 ± 1.2 and 37.6 ± 15.9, respectively). The mean JAMA benchmark score for all websites was 2.7 ± 1.2 (67.5%) out of a possible four points. The average FKRGL for all 39 websites was 10.0 ± 2.0 (range: 5.4-13). CONCLUSION The quality of patient education materials on ACL injuries and ACLR accessible on the internet over the past decade can be misleading and can directly impact the decision-making process essential to the patient-physician relationship.
CLINICAL RELEVANCE The internet can be a helpful resource; however, surgeon clarification and consultation with qualified healthcare professionals are strongly recommended before clinical decision-making regarding potential treatment options.
Affiliation(s)
- Alvarho J Guzman
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- Orthopedic Surgery, Albany Medical College, Albany, USA
- Therese Dela Rueda
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- Nicholas Williams
- Orthopedic Surgery, University of Connecticut School of Medicine, Farmington, USA
- Shane Rayos Del Sol
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- Sarah Jenkins
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- Caleb Shin
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- Stewart Bryant
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- Patrick McGahan
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
- James Chen, MD, MPH
- Orthopedic Surgery, Advanced Orthopedics & Sports Medicine, San Francisco, USA
7. Martinez VH, Ojo D, Gutierrez-Naranjo JM, Proffitt M, Hartzler RU. The Most Popular YouTube Videos About Shoulder Replacement Are of Poor Quality for Patient Education. Arthrosc Sports Med Rehabil 2023; 5:e623-e628. [PMID: 37388878] [PMCID: PMC10300530] [DOI: 10.1016/j.asmr.2023.03.001]
Abstract
Purpose To characterize the quality of YouTube total shoulder arthroplasty videos as a source of patient information using the DISCERN instrument. Methods An analysis of the YouTube video library was performed using a string of 6 search terms related to "total shoulder replacement" and "total shoulder arthroplasty" in the YouTube search engine. The first 20 videos from each search (n = 120) were selected. The top 25 most-viewed videos were compiled, screened, and evaluated with the DISCERN score in the final analysis. Pearson correlation coefficients were used to assess the correlation between DISCERN scores and video characteristics. Interrater reliability was calculated with Conger's kappa for multiple raters. Results Twenty-five videos met inclusion criteria: 13 (52%) were produced by academic institutions, 7 (28%) by physicians, and 5 (20%) by commercial entities. The median total DISCERN score was 33 out of 80 (IQR: 28-44). The total DISCERN score showed no correlation with video likes or views and was negatively correlated with the video power index (r = -0.75, P = .001). No association between total shoulder arthroplasty video source and DISCERN score could be demonstrated. All videos analyzed scored poorly on the DISCERN instrument. Conclusions The most popular shoulder replacement videos currently on YouTube are low-quality patient education resources. Furthermore, our study found no correlation between video popularity, as measured by the number of views, and the DISCERN score. Clinical Relevance Successful outcomes following total shoulder arthroplasty may be influenced by the quality of information patients receive.
Affiliation(s)
- Victor H. Martinez
- University of the Incarnate Word, School of Osteopathic Medicine, San Antonio, Texas, U.S.A
- Burkhart Research Institute for Orthopaedics, San Antonio, Texas, U.S.A
- Desiree Ojo
- University of the Incarnate Word, School of Osteopathic Medicine, San Antonio, Texas, U.S.A
- Burkhart Research Institute for Orthopaedics, San Antonio, Texas, U.S.A
- Jose M. Gutierrez-Naranjo
- UT Health San Antonio, Department of Orthopaedic Surgery, San Antonio, Texas, U.S.A
- Burkhart Research Institute for Orthopaedics, San Antonio, Texas, U.S.A
- Mike Proffitt
- TSAOG Orthopaedics and Spine, San Antonio, Texas, U.S.A
- Burkhart Research Institute for Orthopaedics, San Antonio, Texas, U.S.A
- Robert U. Hartzler
- TSAOG Orthopaedics and Spine, San Antonio, Texas, U.S.A
- Burkhart Research Institute for Orthopaedics, San Antonio, Texas, U.S.A
8. Erkin Y, Hanci V, Ozduran E. Evaluating the readability, quality and reliability of online patient education materials on transcutaneuous electrical nerve stimulation (TENS). Medicine (Baltimore) 2023; 102:e33529. [PMID: 37083809] [PMCID: PMC10118348] [DOI: 10.1097/md.0000000000033529]
Abstract
Increasing digitization also raises concerns regarding the reliability and comprehensibility of online health information. In this study, we aimed to examine the readability, reliability, and quality of internet-based patient education materials on transcutaneous electrical nerve stimulation (TENS). On September 15, 2022, we used the Google search engine to search the keyword "Transcutaneous Electrical Nerve Stimulation" and obtained information from 200 websites. The readability of the websites was evaluated using the Flesch Reading Ease Score (FRES), Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook, and Gunning Fog. The Journal of American Medical Association (JAMA) score and Health on the Net Foundation code of conduct (HONcode) were used to determine the reliability of the websites, whereas the DISCERN score and Global Quality Score were used to evaluate their quality. In the readability analysis of the 102 websites that met the inclusion criteria, we found that the FRES was 47.91 ± 13.79 (difficult), the average Flesch-Kincaid Grade Level and Simple Measure of Gobbledygook were 11.20 ± 2.85 and 10.53 ± 2.11 years, respectively, and the average Gunning Fog score was 14.04 ± 2.74 (very difficult). Commercial websites constituted the highest proportion (n = 36, 35.5%). Overall, 16.7% of the websites were of high quality according to the Global Quality Score, 16 (15.7%) had HONcode certification, and 8.8% were highly reliable according to JAMA scores. There was a statistically significant difference between website typologies and quality and reliability scores (P < .001). Compared with the sixth-grade level recommended by the American Medical Association and the National Institutes of Health, the reading level of TENS-related internet-based patient education materials was considerably high, and they showed low reliability and moderate-to-poor quality. Thus, the quality, reliability, and readability of websites developed by health professionals play a major role in conveying accurate and easily understandable information.
Affiliation(s)
- Yüksel Erkin
- Anesthesiology and Reanimation, Algology, Dokuz Eylul University, İzmir, Turkey
- Volkan Hanci
- Anesthesiology and Reanimation, Dokuz Eylul University, İzmir, Turkey
- Erkan Ozduran
- Physical Medicine and Rehabilitation, Algology, Dokuz Eylul University, Izmir, Turkey
9. Growing Taller without Hormones? Dr. Consult Google-An Evaluation of Online Information Related to Limb Lengthening. Healthcare (Basel) 2023; 11:172. [PMID: 36673540] [PMCID: PMC9858970] [DOI: 10.3390/healthcare11020172]
Abstract
PURPOSE The aim of this study was to investigate the reliability, content, and readability of the information available on the Internet about limb lengthening surgeries, which have recently grown in popularity. METHODS The three most commonly used Internet browsers were determined, and the search term "Limb Lengthening Surgery" was typed into each. The websites were categorized by type, and their content and quality were evaluated using the DISCERN score, the Journal of American Medical Association (JAMA) benchmark, and the Global Quality Score (GQS). The Flesch-Kincaid Grade Level (FKGL) and the Flesch Reading Ease Score (FRES) were used to evaluate readability. Each website was also assessed for the presence (or absence) of the Health on the Net (HON) code. RESULTS The academic category scored significantly higher than the medical and commercial categories. Mean FKGL and FRES scores, DISCERN scores, and JAMA, GQS, and LLCS scores of websites with the HON code were significantly higher than those of websites without it. CONCLUSIONS The quality of online information related to limb lengthening was low. Although some websites, especially academic resources, were of higher quality, the readability of their content is only about 2.5 grade levels above the sixth-grade reading level.
10. Gao B, Shamrock AG, Gulbrandsen TR, O'Reilly OC, Duchman KR, Westermann RW, Wolf BR. Can Patients Read, Understand, and Act on Online Resources for Anterior Cruciate Ligament Surgery? Orthop J Sports Med 2022; 10:23259671221089977. [PMID: 35928178] [PMCID: PMC9344126] [DOI: 10.1177/23259671221089977]
Abstract
Background: Patients undergoing elective procedures often utilize online educational materials to familiarize themselves with the surgical procedure and expected postoperative recovery. While the Internet is easily accessible and ubiquitous today, the ability of patients to read, understand, and act on these materials is unknown. Purpose: To evaluate online resources about anterior cruciate ligament (ACL) surgery utilizing measures of readability, understandability, and actionability. Study Design: Cross-sectional study; Level of evidence, 4. Methods: Using the term "ACL surgery," 2 independent searches were performed utilizing a public search engine (Google.com). Patient education materials were identified from the top 50 results. Audiovisual materials, news articles, materials intended for advertising or medical professionals, and materials unrelated to ACL surgery were excluded. Readability was quantified using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook, Coleman-Liau Index, Automated Readability Index, and Gunning Fog Index. The Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) was utilized to assess the actionability and understandability of materials. For each online source, the relationship between its Google search rank (from first to last) and its readability, understandability, and actionability was calculated utilizing the Spearman rank correlation coefficient (ρ). Results: Overall, we identified 68 unique websites, of which 39 met inclusion criteria. The mean Flesch-Kincaid Grade Level was 10.08 ± 2.34, with no website scoring at or below the 6th-grade level. Mean understandability and actionability scores were 59.18 ± 10.86 (range, 33.64-79.17) and 34.41 ± 22.31 (range, 0.00-81.67), respectively. Only 5 (12.82%) and 1 (2.56%) resources scored above the 70% adequate PEMAT-P threshold for understandability and actionability, respectively. Readability (lowest P value = .103), understandability (ρ = -0.13; P = .441), and actionability (ρ = 0.28; P = .096) scores were not associated with Google rank. Conclusion: Patient education materials on ACL surgery scored poorly with respect to readability, understandability, and actionability. No online resource scored at the recommended reading level of the American Medical Association or National Institutes of Health. Only 5 resources scored above the proven threshold for understandability, and only 1 resource scored above it for actionability.
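The entry above relates each site's Google search rank to its readability and PEMAT-P scores with the Spearman rank correlation. A minimal sketch of that statistic for tie-free data follows (function names are illustrative; real analyses would use a statistics library routine that also handles tied ranks):

```python
def _ranks(values):
    # Assign 1-based ranks; assumes no tied values (ties need average ranks).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def spearman_rho(x, y):
    # Tie-free Spearman's rho: 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    # where d_i is the difference between the paired ranks of x_i and y_i.
    n = len(x)
    rx, ry = _ranks(x), _ranks(y)
    d_sq = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

# A search rank that climbs while a quality score falls is perfectly
# negatively correlated:
print(spearman_rho([1, 2, 3, 4], [40.0, 30.5, 22.1, 10.0]))  # -1.0
```

Because the statistic depends only on ranks, it captures any monotonic relationship between search position and score, which is why the study's near-zero values indicate no ordering effect.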
Affiliation(s)
- Burke Gao
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Alan G. Shamrock
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Trevor R. Gulbrandsen
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Olivia C. O’Reilly
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Kyle R. Duchman
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Robert W. Westermann
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Brian R. Wolf
- Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
11. Sridharan M, Ulrich M, Thacher R, Swinehart S, Baria MR, Jones GL, Bishop JY, Cvetanovich GL, Rauck RC. The quality and accuracy of direct-to-consumer biologic marketing for shoulder pathology are poor. JSES Int 2022; 6:518-522. [PMID: 35572419] [PMCID: PMC9091716] [DOI: 10.1016/j.jseint.2021.12.013]
Abstract
Background The growing role of biologic therapies as adjunct or standalone procedures in orthopedic practice has led to greater levels of direct-to-consumer biologic marketing. The present study aims to assess the quality, accuracy, and readability of online educational resources available to patients regarding biologic therapies for shoulder pathology. Methods Eight search terms relevant to shoulder biologic therapies (shoulder + BMAC, Bone Marrow Aspirate Concentrate, PRP, Platelet Rich Plasma, Lipogems, Adipose Tissue, Biologic therapy, and Stem cell therapy) were searched across three separate search engines. The first 25 websites from each search were recorded; duplicate websites and those not specific to shoulder pathology were excluded. Three evaluators independently assessed quality using an author-derived scoring rubric (25 possible points) and accuracy (12 possible points). The Flesch-Kincaid readability test was used to quantify reading levels. Websites were further characterized by authorship and the presence of commercial bias. Results Of the 600 results from the initial search, 59 met inclusion criteria. Mean quality was poor, at 7.97 ± 2.3 of 25 points (32%), and mean accuracy was low, at 8.47 ± 1.52 of 12 points (71%). The average reading level was grade 11.2 ± 1.93, with 32% of websites written above a 12th-grade reading level. The search terms "shoulder PRP" and "shoulder Platelet Rich Plasma" yielded the highest-quality results (mean = 8.14 ± 2.63); "shoulder Lipogems" and "shoulder Adipose tissue" yielded the most accurate results (mean = 9.25 ± 0.96); and "shoulder BMAC" and "shoulder bone marrow aspirate concentrate" were the most difficult to read (mean grade level = 12.54 ± 3.73). Sixty-four percent of websites were authored by physicians, hospitals, or medical groups. The accuracy of websites authored by health care professionals was significantly higher than that of websites authored by other industry sources (P = .01). Fifteen percent of websites demonstrated commercial bias. Discussion The online resources available to patients seeking information about biologic therapies for the treatment of shoulder pathologies are of very poor quality, moderately poor accuracy, and advanced readability. Providers should caution patients about the reliability of direct-to-consumer biologic marketing for shoulder pathology. Conclusion The information available to patients online regarding the diagnosis, evaluation, and treatment of shoulder pathology with biologic therapies is of poor quality and accuracy and of difficult readability.
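Several of the studies in this listing quantify readability with the Flesch-Kincaid grade level. As a rough illustration of how that metric works, here is a minimal Python sketch of the published formula (0.39 × words per sentence + 11.8 × syllables per word − 15.59), using a crude vowel-group heuristic for syllable counting; production tools count syllables more carefully, so exact scores will differ.

```python
import re

def flesch_kincaid_grade(text):
    """Approximate Flesch-Kincaid grade level of a text.

    Formula: 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    # Count sentence-ending punctuation runs; assume at least one sentence.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    if not words:
        return 0.0
    # Crude syllable heuristic: groups of consecutive vowels, minimum 1 per word.
    syllables = sum(
        max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words
    )
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59
```

Short, common words yield a low grade level, while long multisyllabic words push the score well above the 12th-grade threshold flagged by these studies.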
Affiliation(s)
- Mathangi Sridharan, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Marisa Ulrich, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Ryan Thacher, Department of Orthopaedic Surgery, Hospital for Special Surgery, New York, NY, USA
- Steven Swinehart, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Michael R. Baria, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Grant L. Jones, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Julie Y. Bishop, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Gregory L. Cvetanovich, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
- Ryan C. Rauck, Department of Orthopaedics, The Ohio State University Wexner Medical Center, Columbus, OH, USA
12
Schwarz I, Houck DA, Belk JW, Hop J, Bravman JT, McCarty E. The Quality and Content of Internet-Based Information on Orthopaedic Sports Medicine Requires Improvement: A Systematic Review. Arthrosc Sports Med Rehabil 2021; 3:e1547-e1555. [PMID: 34712992 PMCID: PMC8527260 DOI: 10.1016/j.asmr.2021.05.007]
Abstract
Purpose To evaluate the quality and content of internet-based information available for some of the most common orthopaedic sports medicine terms. Methods A search of the PubMed, Embase, and Cochrane databases following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines was performed. All English-language literature published from 2010 to 2020 discussing information quality pertaining to orthopaedic sports medicine terms was included. Outcomes included the search engines used, the number and type of websites evaluated, the platform, and the quality scoring metrics. Descriptive statistics are presented. Results This review includes 21 studies. Of these, 3 evaluated both the upper and lower extremity, and 12 focused on either the upper or lower extremity, most commonly rotator cuff tears (3 of 12) and/or anterior cruciate ligament pathologies (7 of 12). The most common search engines were Google (18 of 21), Bing (16 of 21), Yahoo (16 of 21), YouTube (3 of 21), Ask (3 of 21), and AOL (2 of 21). The average number of media files assessed per study was 87 ± 55. Website quality was assessed with DISCERN (7 of 21), Flesch-Kincaid (9 of 21), Health on the Net (7 of 21), and/or Journal of the American Medical Association benchmark (7 of 21) scores. YouTube content was evaluated with Journal of the American Medical Association benchmark scores (1.74 ± 1.00). Image quality was reported in 2 studies and varied with search terminology. Conclusions The results of this systematic review suggest that physicians should improve the quality of online information and encourage patients to access credible sources when conducting their own research. Clinical Relevance Doctors can and should play an active role in closing the gap between their patients' level of health literacy and that of the most common online resources.
Affiliation(s)
- Ilona Schwarz, Division of Sports Medicine and Shoulder Surgery, Department of Orthopedics, University of Colorado School of Medicine, Aurora, Colorado, U.S.A.
- Darby A Houck, Division of Sports Medicine and Shoulder Surgery, Department of Orthopedics, University of Colorado School of Medicine, Aurora, Colorado, U.S.A.
- John W Belk, Division of Sports Medicine and Shoulder Surgery, Department of Orthopedics, University of Colorado School of Medicine, Aurora, Colorado, U.S.A.
- Jack Hop, Division of Sports Medicine and Shoulder Surgery, Department of Orthopedics, University of Colorado School of Medicine, Aurora, Colorado, U.S.A.
- Jonathan T Bravman, Division of Sports Medicine and Shoulder Surgery, Department of Orthopedics, University of Colorado School of Medicine, Aurora, Colorado, U.S.A.
- Eric McCarty, Division of Sports Medicine and Shoulder Surgery, Department of Orthopedics, University of Colorado School of Medicine, Aurora, Colorado, U.S.A.
13
Abstract
Health information on the Internet can have a direct effect on healthcare decision-making, yet the quality of online information has seldom been evaluated. This study aimed to assess the quality of online information on high-risk pregnancies provided by English and Korean Web sites. Through a Google search, 30 English and 30 Korean Web sites were selected on January 2 and 3, 2020, respectively, and assessed using the DISCERN instrument, the Journal of the American Medical Association benchmark criteria, and the Health On the Net Foundation code questionnaire. The data were analyzed using descriptive and nonparametric statistical tests. Overall, the English Web sites presented higher-quality information than the Korean Web sites. Most Web sites did not provide the sources of the information they presented, meet the Journal of the American Medical Association criteria, or provide information on complementarity. Based on these results, nurses need to be competent in assessing the quality of Web sites and the health information presented there, and nursing students need to be prepared to do so as well. Nurses are responsible for educating their patients about the possibility of incorrect information on Internet Web sites and for directing them to reliable Web sites, thus assisting them to make informed decisions regarding their health.
Affiliation(s)
- Shin-Young Lee
- Author Affiliations: Department of Nursing, Chosun University (Dr S.-Y. Lee); and College of Nursing, Chonnam National University (Dr S. Lee), Gwangju, Republic of Korea