1. Sinkler M, Li L, Adelstein J, Strony J. ChatGPT Has Potential to Be an Important Patient Education Tool and May Outperform Google. Arthroscopy 2024:S0749-8063(24)00501-2. PMID: 39029814. DOI: 10.1016/j.arthro.2024.07.005.

2. Hurley ET, Crook BS, Dickens JF. Editorial Commentary: At Present, ChatGPT Cannot Be Relied Upon to Answer Patient Questions and Requires Physician Expertise to Interpret Answers for Patients. Arthroscopy 2024;40:2080-2082. PMID: 38484923. DOI: 10.1016/j.arthro.2024.02.039.
Abstract
ChatGPT is designed to provide accurate and reliable information to the best of its abilities, based on the data input and knowledge available. Thus, ChatGPT is being studied as a patient information tool. This artificial intelligence (AI) tool has been shown to frequently provide technically correct information, but with limitations. ChatGPT gives different answers to similar questions depending on the prompts, and patients may not have the expertise to prompt ChatGPT to elicit the best answer. (Prompting large language models is a skill that can be learned and improved.) Of greater concern, ChatGPT fails to provide sources or references for its answers. At present, ChatGPT cannot be relied upon to address patient questions; in the future, it will improve. Today, AI requires physician expertise to interpret its answers for patients.

3. Oeding JF, Lu AZ, Mazzucco M, Fu MC, Taylor SA, Dines DM, Warren RF, Gulotta LV, Dines JS, Kunze KN. ChatGPT-4 Performs Clinical Information Retrieval Tasks Utilizing Consistently More Trustworthy Resources Than Does Google Search for Queries Concerning the Latarjet Procedure. Arthroscopy 2024:S0749-8063(24)00407-9. PMID: 38936557. DOI: 10.1016/j.arthro.2024.05.025.
Abstract
PURPOSE To assess the ability of ChatGPT-4, an automated chatbot powered by artificial intelligence (AI), to answer common patient questions concerning the Latarjet procedure for patients with anterior shoulder instability, and to compare this performance with that of Google Search Engine. METHODS Using previously validated methods, a Google search was first performed using the query "Latarjet." Subsequently, the top ten frequently asked questions (FAQs) and associated sources were extracted. ChatGPT-4 was then prompted to provide the top ten FAQs and answers concerning the procedure. This process was repeated to identify additional FAQs requiring discrete numeric answers, allowing a comparison between ChatGPT-4 and Google. Discrete numeric answers were then assessed for accuracy based on the clinical judgment of two fellowship-trained sports medicine surgeons blinded to the search platform. RESULTS Mean (± standard deviation) accuracy of numeric answers was 2.9 ± 0.9 for ChatGPT-4 versus 2.5 ± 1.4 for Google (p = 0.65). ChatGPT-4 derived its answers exclusively from academic sources, a significant difference from Google Search Engine (p = 0.003), which used academic sources for only 30% of answers, with the remainder drawn from websites of individual surgeons (50%) and larger medical practices (20%). For general FAQs, 40% of FAQs were identical between ChatGPT-4 and Google Search Engine. In terms of sources used to answer these questions, ChatGPT-4 again used 100% academic resources, whereas Google Search Engine used 60% academic resources, 20% surgeon personal websites, and 20% medical practice websites (p = 0.087). CONCLUSION ChatGPT-4 demonstrated the ability to provide accurate and reliable information about the Latarjet procedure in response to patient queries, drawing on multiple academic sources in all cases. This contrasts with Google Search Engine, which more frequently used single-surgeon and large medical practice websites. Despite differences in the resources accessed to perform information retrieval tasks, the clinical relevance and accuracy of the information provided did not differ significantly between ChatGPT-4 and Google Search Engine.
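
The abstract above reports category-level source counts (p = 0.003 and p = 0.087) without naming the statistical test used. Below is a minimal sketch of one way such a comparison could be reproduced, assuming a chi-square test on a 2 × 3 contingency table built from the reported percentages of ten FAQs per platform; both the choice of test and the counts are assumptions, not the study's published analysis.

```python
# Sketch: comparing source-type distributions between two platforms.
# Counts are derived from the reported percentages, assuming ten FAQs each.
from scipy.stats import chi2_contingency

# Rows: platform; columns: academic, individual surgeon, medical practice.
counts = [
    [10, 0, 0],  # ChatGPT-4: 100% academic sources
    [3, 5, 2],   # Google: 30% academic, 50% surgeon, 20% practice
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

With cell counts this small, an exact test would ordinarily be preferred, which is why the p-value printed here need not match the one the study reports.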

4. Yüce A, Erkurt N, Yerli M, Misir A. The Potential of ChatGPT for High-Quality Information in Patient Education for Sports Surgery. Cureus 2024;16:e58874. PMID: 38800159. PMCID: PMC11116739. DOI: 10.7759/cureus.58874.
Abstract
BACKGROUND AND OBJECTIVE Artificial intelligence (AI) advancements continue to have a profound impact on modern society, driving significant innovation across various fields. We sought to appraise the reliability of the information offered by Chat Generative Pre-Trained Transformer (ChatGPT) regarding diseases commonly associated with sports surgery. We hypothesized that ChatGPT could offer high-quality information on sports-related diseases and be used in patient education. METHODS On September 11, 2023, specific sports surgery-related diseases were identified to ask ChatGPT-4 (personal communication, March 4, 2023). The informative texts provided by ChatGPT were recorded for this study by a senior orthopedic surgeon who did not serve as an observer. Ten ChatGPT texts on sports surgery diseases were then evaluated blindly by two observers, who scored them using the sports surgery-specific scoring (SSSS) and DISCERN criteria. The precision of the disease-related information offered by ChatGPT was evaluated. RESULTS The average DISCERN score of the texts was 44.75 points, and the average SSSS score was 13.3 points. In the intraclass correlation coefficient analysis of the observers' measurements, agreement was excellent (0.989; p < 0.001). CONCLUSION ChatGPT has the potential to be used in patient education for sports surgery-related diseases, and its potential to provide quality information in this regard appears to be an advantage.
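
The agreement figure above (0.989) is an intraclass correlation coefficient (ICC) computed across two observers. Below is a minimal sketch of that kind of analysis, assuming a long-format table and the intraclass_corr function from the pingouin library; the ICC model and the scores below are placeholders, not the study's data.

```python
# Sketch: inter-observer agreement via ICC on ten rated texts.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "text":     list(range(10)) * 2,      # ten ChatGPT texts, rated twice
    "observer": ["A"] * 10 + ["B"] * 10,  # two blinded observers
    "discern":  [44, 46, 43, 45, 47, 44, 42, 45, 46, 44,   # observer A
                 44, 45, 43, 46, 47, 44, 43, 45, 46, 44],  # observer B
})

# Returns one row per ICC model (ICC1, ICC2, ICC3, and their k-rater forms).
icc = pg.intraclass_corr(data=df, targets="text", raters="observer",
                         ratings="discern")
print(icc[["Type", "ICC", "pval"]])
```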

Affiliations:
- Ali Yüce, Nazım Erkurt, Mustafa Yerli: Department of Orthopedics and Traumatology, Prof. Dr. Cemil Taşçıoğlu City Hospital, Istanbul, TUR
- Abdulhamit Misir: Department of Orthopedics and Traumatology, Bahcesehir University Göztepe Medicalpark Hospital, Istanbul, TUR

5. Hurley ET, Crook BS, Lorentz SG, Danilkowicz RM, Lau BC, Taylor DC, Dickens JF, Anakwenze O, Klifto CS. Evaluation High-Quality of Information from ChatGPT (Artificial Intelligence-Large Language Model) Artificial Intelligence on Shoulder Stabilization Surgery. Arthroscopy 2024;40:726-731.e6. PMID: 37567487. DOI: 10.1016/j.arthro.2023.07.048.
Abstract
PURPOSE To analyze the quality and readability of information regarding shoulder stabilization surgery available from online AI software (ChatGPT), using standardized scoring systems, and to report on the answers given by the AI. METHODS An open AI model (ChatGPT) was used to answer 23 questions commonly asked by patients about shoulder stabilization surgery. These answers were evaluated for medical accuracy, quality, and readability using the JAMA benchmark criteria, the DISCERN score, the Flesch Reading Ease Score (FRES), and the Flesch-Kincaid Grade Level (FKGL). RESULTS The JAMA benchmark criteria score was 0, the lowest possible score, indicating that no reliable resources were cited. The DISCERN score was 60, which is considered good. The areas in which the AI model did not achieve full marks were likewise related to the lack of source material used to compile the answers and to some information not fully supported by the literature. The FRES was 26.2, and the FKGL was considered to be that of a college graduate. CONCLUSIONS The answers to questions about shoulder stabilization surgery were generally of high quality, but a high reading level was required to comprehend the information presented, and it is unclear where the answers came from, as no source material was cited. Notably, ChatGPT repeatedly referenced the need to discuss these questions with an orthopaedic surgeon, the importance of shared decision making, and compliance with surgeon treatment recommendations. CLINICAL RELEVANCE As shoulder instability predominantly affects younger individuals, who may use the Internet for information, this study shows what information patients may be getting online.
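
The FRES and FKGL figures above come from the standard Flesch formulas, which depend only on sentence, word, and syllable counts. Below is a minimal sketch of both; the syllable counter is a crude vowel-group heuristic assumed for illustration (published tools use dictionary-based counting).

```python
# Sketch: Flesch Reading Ease (FRES) and Flesch-Kincaid Grade Level (FKGL).
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count runs of vowels; adequate for a demonstration only.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text: str) -> tuple[float, float]:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fres = 206.835 - 1.015 * wps - 84.6 * spw   # higher = easier to read
    fkgl = 0.39 * wps + 11.8 * spw - 15.59      # U.S. school grade level
    return fres, fkgl

fres, fkgl = flesch_scores("The labrum is reattached to the glenoid rim "
                           "with suture anchors. Recovery takes months.")
print(f"FRES = {fres:.1f}, FKGL = {fkgl:.1f}")
```

A FRES of 26.2, as reported above, falls in the "very difficult" band of the scale, consistent with college-graduate-level text.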

Affiliations:
- Brian C Lau: Duke University, Durham, North Carolina, U.S.A.

6. Gao B, Skalitzky MK, Rund J, Shamrock AG, Gulbrandsen TR, Buckwalter J. Carpal Tunnel Surgery: Can Patients Read, Understand, and Act on Online Educational Resources? Iowa Orthop J 2024;44:47-58. PMID: 38919356. PMCID: PMC11195886.
Abstract
Background Patients often access online resources to educate themselves before undergoing elective surgery such as carpal tunnel release (CTR). The purpose of this study was to evaluate available online resources on CTR using objective measures of readability (syntax reading grade level), understandability (ability to convey key messages in a comprehensible manner), and actionability (providing actions the reader may take). Methods Two independent Google searches for "Carpal Tunnel Surgery" were performed, and articles among the top 50 results aimed at educating patients about CTR were analyzed. Readability was assessed using six indices: the Flesch-Kincaid Grade Level, Flesch Reading Ease, Gunning Fog Index, Simple Measure of Gobbledygook (SMOG) Index, Coleman-Liau Index, and Automated Readability Index. The Patient Education Materials Assessment Tool evaluated understandability and actionability on a 0-100% scale. Spearman's correlation assessed relationships between these metrics and Google search ranks, with p < 0.05 indicating statistical significance. Results Of the 39 websites meeting the inclusion criteria, the mean readability grade level exceeded 9 on every index, the lowest being 9.4 ± 1.5 (SMOG Index). Readability did not correlate with Google search ranking (lowest p = 0.25). Mean understandability and actionability were 59% ± 15 and 26% ± 24, respectively. Only 28% of the articles used visual aids, and few provided concise summaries or clear, actionable steps. Notably, lower reading grade levels were linked to higher actionability scores (p ≤ 0.02 on several indices), but no readability metric significantly correlated with understandability. Google search rankings showed no significant association with either understandability or actionability scores. Conclusion Online educational materials for CTR score poorly in readability, understandability, and actionability, and these quality metrics do not appear to affect Google search rankings. The poor scores found in our study highlight a need for hand specialists to improve online patient resources, especially in an era emphasizing shared decision-making in healthcare. Level of Evidence: IV.
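
Two of the instruments above are easy to make concrete: the SMOG Index (the study's lowest-scoring, i.e., most readable, index) and the PEMAT percentage, which per the AHRQ scoring guide is points earned over points possible, with not-applicable items excluded. Below is a minimal sketch; the item responses are hypothetical.

```python
# Sketch: SMOG grade formula and PEMAT percentage scoring.
import math

def smog(polysyllable_words: int, sentences: int) -> float:
    # SMOG grade (McLaughlin, 1969); defined for samples of >= 30 sentences.
    return 1.0430 * math.sqrt(polysyllable_words * (30 / sentences)) + 3.1291

def pemat_score(responses: list[str]) -> float:
    # responses: "agree", "disagree", or "na" for each PEMAT item.
    scored = [r for r in responses if r != "na"]
    return 100 * sum(r == "agree" for r in scored) / len(scored)

print(f"SMOG grade: {smog(polysyllable_words=42, sentences=30):.1f}")
print(f"Understandability: "
      f"{pemat_score(['agree'] * 10 + ['disagree'] * 5 + ['na']):.0f}%")
```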

Affiliations:
- Burke Gao, Mary Kate Skalitzky, Joseph Rund, Alan G. Shamrock, Trevor R. Gulbrandsen, Joseph Buckwalter: Department of Orthopedics and Rehabilitation, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA

7. Crook BS, Park CN, Hurley ET, Richard MJ, Pidgeon TS. Evaluation of Online Artificial Intelligence-Generated Information on Common Hand Procedures. J Hand Surg Am 2023;48:1122-1127. PMID: 37690015. DOI: 10.1016/j.jhsa.2023.08.003.
Abstract
PURPOSE The purpose of this study was to analyze the quality and readability of the information generated by an online artificial intelligence (AI) platform regarding 4 common hand surgeries and to compare AI-generated responses with those provided in the informational articles published on the American Society for Surgery of the Hand (ASSH) HandCare website. METHODS An open AI model (ChatGPT) was used to answer questions commonly asked by patients about 4 common hand surgeries (carpal tunnel release, cubital tunnel release, trigger finger release, and distal radius fracture fixation). These answers were evaluated for medical accuracy, quality, and readability and compared with answers derived from the ASSH HandCare materials. RESULTS For the AI model, the Journal of the American Medical Association benchmark criteria score was 0/4, and the DISCERN score was 58 (considered good). The areas in which the AI model lost points were primarily related to the lack of attribution, reliability, and currency of the source material. For AI responses, the mean Flesch-Kincaid Grade Level was 15, and the Flesch Reading Ease score was 34, which is considered college level. For comparison, ASSH HandCare materials scored 3/4 on the Journal of the American Medical Association benchmark, 71 on DISCERN (excellent), 9 on Flesch-Kincaid Grade Level, and 60 on Flesch Reading Ease score (eighth/ninth grade level). CONCLUSION An AI language model (ChatGPT) provided generally high-quality answers to frequently asked questions about the common hand procedures queried, but without citations to source material it is unclear where these answers came from. Furthermore, a high reading level was required to comprehend the information presented. The AI software repeatedly referenced the need to discuss these questions with a surgeon, the importance of shared decision-making and individualized care, and compliance with surgeon treatment recommendations. CLINICAL RELEVANCE As novel AI applications become increasingly mainstream, hand surgeons must understand the limitations and ramifications these technologies have for patient care.

Affiliations:
- Bryan S Crook, Caroline N Park, Eoghan T Hurley, Marc J Richard, Tyler S Pidgeon: Department of Orthopaedic Surgery, Duke University Hospital, Durham, NC

8. Golgelioglu F, Canbaz SB. From quality to clarity: evaluating the effectiveness of online information related to septic arthritis. J Orthop Surg Res 2023;18:689. PMID: 37715176. PMCID: PMC10503092. DOI: 10.1186/s13018-023-04181-x.
Abstract
BACKGROUND The aim of this study was to assess the content, readability, and quality of online resources on septic arthritis, a crucial orthopedic condition requiring immediate diagnosis and treatment to avert serious complications, with a particular focus on relevance to the general public. METHODS Two search terms ("septic arthritis" and "joint infection") were entered into three Internet search engines (Google, Yahoo, and Bing), and 60 websites were evaluated, comprising the top 20 results from each search engine. The websites were categorized by type, and their content and quality were assessed using the DISCERN score, the Journal of the American Medical Association (JAMA) benchmark, the Global Quality Score (GQS), and the Information Value Score (IVS). Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and the Flesch Reading Ease Score (FRES). The presence or absence of the Health On the Net (HON) code was recorded for each website. RESULTS The DISCERN, JAMA, GQS, FKGL, and IVS scores of the academic category were substantially higher than those of the physician, medical, and commercial categories. At the same time, however, academic sites had high FKGL scores, meaning their text demanded a higher reading grade level. Websites with the HON code had significantly higher average FKGL, FRES, DISCERN, JAMA, GQS, and IVS scores than those without. CONCLUSION The quality of websites providing information on septic arthritis was variable and not optimal. Although the content of the academic group was of higher quality, it could be difficult to understand. One of the key responsibilities of healthcare professionals should be to provide high-quality, comprehensible information concerning joint infections on reputable academic platforms, thereby helping patients attain a fundamental level of health literacy.
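
The abstract reports that HON-code sites scored significantly higher but does not name the test used. Below is a minimal sketch of one plausible analysis, assuming a Mann-Whitney U test on placeholder DISCERN scores; both the choice of test and the data are assumptions.

```python
# Sketch: comparing DISCERN scores of HON-code vs. non-HON-code websites.
from scipy.stats import mannwhitneyu

discern_hon = [58, 62, 55, 60, 57, 63]     # sites displaying the HON code
discern_no_hon = [41, 45, 38, 44, 40, 46]  # sites without it

stat, p = mannwhitneyu(discern_hon, discern_no_hon, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")
```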

Affiliations:
- Fatih Golgelioglu: Department of Orthopedics and Traumatology, Elazığ Fethi Sekin City Hospital, Doğukent Location, 23280 Elazığ, Turkey
- Sebati Baser Canbaz: Department of Orthopedics and Traumatology, Faculty of Medicine, Erciyes University, Kayseri, Turkey

9. Clarke S, Jangid G, Nasr S, Atchade A, Moody BL, Narayan G. Polycystic Ovary Syndrome (PCOS): A Cross-Sectional Observational Study Analyzing the Quality of Content on YouTube. Cureus 2023;15:e45354. PMID: 37849574. PMCID: PMC10578195. DOI: 10.7759/cureus.45354.
Abstract
INTRODUCTION Polycystic ovary syndrome (PCOS), a chronic multifactorial disorder of women of reproductive age, is a major public health problem. Because many women turn to platforms such as YouTube, which serve as a ready source of edutainment, our aim was to analyze the quality of the PCOS content available there. AIMS The aims and objectives of this study were to assess the quality and reliability of PCOS-related content on YouTube by analyzing the DISCERN score, global quality score (GQS), and video power index (VPI). METHODOLOGY This was a facility-based cross-sectional study undertaken on a single day, with each author reviewing 10 YouTube videos on PCOS identified using predetermined keywords. The number of likes, dislikes, views, and comments and the uploader's background were evaluated, and the DISCERN score, GQS, and VPI were calculated for each video. Data entry was done in Microsoft Excel, and analysis was carried out in SPSS Statistics version 16 (SPSS Inc., Chicago, Illinois). Categorical variables were expressed as frequencies and percentages, and statistical significance was determined using the Kruskal-Wallis test/one-way ANOVA. RESULTS A total of 80 videos that fit the inclusion criteria were analyzed. A majority of the videos (80%) had been posted a year or more earlier with no updates. Only 28.8% of the video content was posted by doctors. Although most videos (96.25%) shared information on symptomatology, only 45% addressed prevention. Promotional content was noted in 28.75% of the videos. GQS and VPI were higher for content provided by doctors, hospitals, and healthcare organizations (p = 0.033 and p = 0.006, respectively). CONCLUSIONS With women turning to edutainment platforms such as YouTube to clarify their concerns about lifestyle-associated conditions such as PCOS, it is relevant to evaluate the quality of the content available on such platforms. The findings of this study form a prototype for addressing existing gaps in the knowledge available on YouTube, and they warrant frequent monitoring of such web-based content and its delivery only by qualified wellness experts.
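
The abstract does not define the video power index (VPI). One definition commonly used in YouTube quality studies, assumed here, is like ratio × view ratio / 100, with view ratio taken as views per day online; the Kruskal-Wallis comparison across uploader groups is likewise shown with placeholder GQS data.

```python
# Sketch: a commonly used VPI definition (assumed, not the study's stated
# formula) plus a Kruskal-Wallis comparison of GQS across uploader groups.
from scipy.stats import kruskal

def vpi(likes: int, dislikes: int, views: int, days_online: int) -> float:
    like_ratio = 100 * likes / (likes + dislikes)  # percentage of likes
    view_ratio = views / days_online               # average views per day
    return like_ratio * view_ratio / 100

# Placeholder GQS scores for three uploader categories.
gqs_doctors = [4, 5, 4, 4, 5]
gqs_hospitals = [4, 4, 5, 4, 4]
gqs_other = [2, 3, 2, 3, 2]

stat, p = kruskal(gqs_doctors, gqs_hospitals, gqs_other)
print(f"VPI example: {vpi(1200, 40, 90000, 365):.1f}")
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p:.3f}")
```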

Affiliations:
- Shereece Clarke: Department of Obstetrics and Gynecology, University of the West Indies, Montego Bay, JAM
- Gurusha Jangid: Department of Obstetrics and Gynecology, Dr. Sampurnanand Medical College, Jodhpur, IND
- Summer Nasr, Axelle Atchade, Britney L Moody: Department of Medicine, St. George's University, True Blue, GRD
- Gaurang Narayan: Department of Obstetrics and Gynecology, Indira Gandhi Government Medical College and Hospital, Nagpur, IND

10. Gao B, Shamrock AG, Gulbrandsen TR, O'Reilly OC, Duchman KR, Westermann RW, Wolf BR. Can Patients Read, Understand, and Act on Online Resources for Anterior Cruciate Ligament Surgery? Orthop J Sports Med 2022;10:23259671221089977. PMID: 35928178. PMCID: PMC9344126. DOI: 10.1177/23259671221089977.
Abstract
Background: Patients undergoing elective procedures often utilize online educational materials to familiarize themselves with the surgical procedure and expected postoperative recovery. While the Internet is easily accessible and ubiquitous today, the ability of patients to read, understand, and act on these materials is unknown. Purpose: To evaluate online resources about anterior cruciate ligament (ACL) surgery utilizing measures of readability, understandability, and actionability. Study Design: Cross-sectional study; Level of evidence, 4. Methods: Using the term "ACL surgery," 2 independent searches were performed utilizing a public search engine (Google.com). Patient education materials were identified from the top 50 results. Audiovisual materials, news articles, materials intended for advertising or medical professionals, and materials unrelated to ACL surgery were excluded. Readability was quantified using the Flesch Reading Ease, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook, Coleman-Liau Index, Automated Readability Index, and Gunning Fog Index. The Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P) was utilized to assess the actionability and understandability of materials. For each online source, the relationship between its Google search rank (from first to last) and its readability, understandability, and actionability was calculated utilizing the Spearman rank correlation coefficient (ρ). Results: Overall, we identified 68 unique websites, of which 39 met inclusion criteria. The mean Flesch-Kincaid Grade Level was 10.08 ± 2.34, with no website scoring at or below the 6th-grade level. Mean understandability and actionability scores were 59.18 ± 10.86 (range, 33.64-79.17) and 34.41 ± 22.31 (range, 0.00-81.67), respectively. Only 5 (12.82%) and 1 (2.56%) resources scored above the 70% adequate PEMAT-P threshold for understandability and actionability, respectively. Readability (lowest P = .103), understandability (ρ = -0.13; P = .441), and actionability (ρ = 0.28; P = .096) scores were not associated with Google rank. Conclusion: Patient education materials on ACL surgery scored poorly with respect to readability, understandability, and actionability. No online resource scored at the recommended reading level of the American Medical Association or National Institutes of Health. Only 5 resources scored above the proven threshold for understandability, and only 1 resource scored above it for actionability.
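
The rank correlations above (ρ = -0.13 and ρ = 0.28) are Spearman coefficients between Google search rank and the PEMAT-P scores. Below is a minimal sketch of that computation using scipy.stats.spearmanr; the scores are placeholders, not the study's data.

```python
# Sketch: Spearman rank correlation between search rank and PEMAT-P score.
from scipy.stats import spearmanr

google_rank = list(range(1, 11))  # 1 = top search result
understandability = [62, 55, 70, 48, 66, 51, 59, 45, 63, 50]  # placeholder %

rho, p = spearmanr(google_rank, understandability)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```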

Affiliations:
- Burke Gao, Alan G. Shamrock, Trevor R. Gulbrandsen, Olivia C. O'Reilly, Kyle R. Duchman, Robert W. Westermann, Brian R. Wolf: Department of Orthopaedic Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA