1.
Naja F, Taktouk M, Matbouli D, Khaleel S, Maher A, Uzun B, Alameddine M, Nasreddine L. Artificial intelligence chatbots for the nutrition management of diabetes and the metabolic syndrome. Eur J Clin Nutr 2024. [PMID: 39060542] [DOI: 10.1038/s41430-024-01476-y]
Abstract
BACKGROUND Recently, there has been growing interest in exploring AI-driven chatbots, such as ChatGPT, as a resource for disease management and education. OBJECTIVE The study aims to evaluate ChatGPT's accuracy and quality/clarity in providing nutritional management for type 2 diabetes mellitus (T2DM), the metabolic syndrome (MetS) and its components, in accordance with the Academy of Nutrition and Dietetics' guidelines. METHODS Three nutrition management-related domains were considered: (1) dietary management, (2) the nutrition care process (NCP) and (3) menu planning (1500 kcal). A total of 63 prompts were used. Two experienced dietitians evaluated the concordance of the chatbot's output with the guidelines. RESULTS Both dietitians provided similar assessments for most conditions examined in the study. Gaps in the ChatGPT-derived outputs were identified, including weight loss recommendations, energy deficit, anthropometric assessment, specific nutrients of concern and the adoption of specific dietary interventions. Gaps in physical activity recommendations were also observed, highlighting ChatGPT's limitations in providing holistic lifestyle interventions. Within the NCP, the generated output provided incomplete examples of diagnostic documentation statements and had significant gaps in the monitoring and evaluation step. In the 1500 kcal one-day menus, the amounts of carbohydrates, fat, vitamin D and calcium were discordant with dietary recommendations. Regarding clarity, the dietitians rated the output as either good or excellent. CONCLUSION Although ChatGPT is an increasingly available resource for practitioners, users should weigh the gaps identified in this study when relying on it for the dietary management of T2DM and the MetS.
Affiliation(s)
- Farah Naja: Department of Clinical Nutrition and Dietetics, College of Health Sciences, Research Institute of Medical and Health Sciences (RIMHS), University of Sharjah, Sharjah, United Arab Emirates; Department of Nutrition and Food Sciences, Faculty of Agricultural and Food Sciences, American University of Beirut (AUB), Beirut, Lebanon
- Mandy Taktouk: Department of Nutrition and Food Sciences, Faculty of Agricultural and Food Sciences, American University of Beirut (AUB), Beirut, Lebanon
- Dana Matbouli: Department of Nutrition and Food Sciences, Faculty of Agricultural and Food Sciences, American University of Beirut (AUB), Beirut, Lebanon
- Sharfa Khaleel: Department of Clinical Nutrition and Dietetics, College of Health Sciences, Research Institute of Medical and Health Sciences (RIMHS), University of Sharjah, Sharjah, United Arab Emirates
- Ayah Maher: Department of Clinical Nutrition and Dietetics, College of Health Sciences, Research Institute of Medical and Health Sciences (RIMHS), University of Sharjah, Sharjah, United Arab Emirates
- Berna Uzun: Department of Mathematics, Near East University, Nicosia, Turkey
- Lara Nasreddine: Department of Nutrition and Food Sciences, Faculty of Agricultural and Food Sciences, American University of Beirut (AUB), Beirut, Lebanon
2.
Naz R, Akacı O, Erdoğan H, Açıkgöz A. Can large language models provide accurate and quality information to parents regarding chronic kidney diseases? J Eval Clin Pract 2024. [PMID: 38959373] [DOI: 10.1111/jep.14084]
Abstract
RATIONALE Artificial intelligence (AI) large language models (LLMs) are tools capable of generating human-like text responses to user queries across topics. The use of these language models in various medical contexts is currently being studied. However, their performance and content quality have not been evaluated in specific medical fields. AIMS AND OBJECTIVES This study aimed to compare the performance of the AI LLMs ChatGPT, Gemini and Copilot in providing information to parents about chronic kidney diseases (CKD) and to compare the accuracy and quality of that information with a reference source. METHODS In this study, 40 frequently asked questions about CKD were identified. The accuracy and quality of the answers were evaluated with reference to the Kidney Disease: Improving Global Outcomes guidelines. The accuracy of the responses generated by the LLMs was assessed using F1, precision and recall scores. The quality of the responses was evaluated using a five-point global quality score (GQS). RESULTS ChatGPT and Gemini achieved high F1 scores of 0.89 and 1, respectively, in the diagnosis and lifestyle categories, demonstrating significant success in generating accurate responses. Furthermore, ChatGPT and Gemini generated accurate responses with high precision values in the diagnosis and lifestyle categories. In terms of recall values, all LLMs exhibited strong performance in the diagnosis, treatment and lifestyle categories. Average GQS values were 3.46 ± 0.55, 1.93 ± 0.63 and 2.02 ± 0.69 for Gemini, ChatGPT 3.5 and Copilot, respectively. In all categories, Gemini performed better than ChatGPT and Copilot. CONCLUSION Although LLMs can provide parents with highly accurate information about CKD, their usefulness remains limited compared with a reference source. These performance limitations can lead to misinformation and potential misinterpretations. Therefore, patients and parents should exercise caution when using these models.
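The accuracy metrics named in this abstract (F1, precision, recall) reduce to a simple overlap calculation. A minimal sketch, assuming each answer is scored against a set of expected key points; the key-point sets below are invented for illustration and are not taken from the study:

```python
def precision_recall_f1(expected, found):
    """Compute precision, recall, and F1 for one chatbot response.

    expected: set of key points the reference source requires.
    found: set of key points detected in the chatbot's response.
    """
    true_pos = len(expected & found)
    precision = true_pos / len(found) if found else 0.0
    recall = true_pos / len(expected) if expected else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: the reference lists 4 key points; the response
# covers 3 of them plus 1 unsupported claim.
expected = {"eGFR staging", "proteinuria", "blood pressure", "imaging"}
found = {"eGFR staging", "proteinuria", "blood pressure", "herbal teas"}
p, r, f1 = precision_recall_f1(expected, found)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.75 0.75 0.75
```

Scoring a set of questions this way and averaging per category would yield per-category precision, recall, and F1 as reported above.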
Affiliation(s)
- Rüya Naz: Bursa Yüksek Ihtisas Research and Training Hospital, University of Health Sciences, Bursa, Turkey
- Okan Akacı: Clinic of Pediatric Nephrology, Bursa Yüksek Ihtisas Research and Training Hospital, University of Health Sciences, Bursa, Turkey
- Hakan Erdoğan: Clinic of Pediatric Nephrology, Bursa City Hospital, Bursa, Turkey
- Ayfer Açıkgöz: Department of Pediatric Nursing, Faculty of Health Sciences, Eskişehir Osmangazi University, Eskişehir, Turkey
3.
Liao LL, Chang LC, Lai IJ. Assessing the Quality of ChatGPT's Dietary Advice for College Students from Dietitians' Perspectives. Nutrients 2024; 16:1939. [PMID: 38931294] [PMCID: PMC11206595] [DOI: 10.3390/nu16121939]
Abstract
BACKGROUND As ChatGPT becomes a primary information source for college students, its performance in providing dietary advice is under scrutiny. This study assessed ChatGPT's performance in providing nutritional guidance to college students. METHODS Thirty experienced dietitians were recruited to assess the quality of ChatGPT's dietary advice, including its nutrition literacy (NL) achievement and response quality; ChatGPT was also assessed using an objective NL test. RESULTS The results indicate that ChatGPT's performance varies across scenarios and is suboptimal for achieving NL, with full-achievement rates ranging from 7.50% to 37.56%. While the responses excelled in readability, they lacked understandability, practicality, and completeness. In the NL test, ChatGPT showed an 84.38% accuracy rate, surpassing the NL level of Taiwanese college students. The top concern among the dietitians, cited 52 times in 242 feedback entries, was that the "response information lacks thoroughness or rigor, leading to misunderstandings or misuse". Despite ChatGPT's potential as a supplementary educational tool, significant gaps must be addressed, especially in detailed dietary inquiries. CONCLUSION This study highlights the need for improved AI educational approaches and suggests developing ChatGPT teaching guides or usage instructions to train college students and support dietitians.
Affiliation(s)
- Li-Ling Liao: Department of Public Health, College of Health Science, Kaohsiung Medical University, Kaohsiung City 807378, Taiwan; Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung City 807378, Taiwan
- Li-Chun Chang: School of Nursing, Chang Gung University of Science and Technology, Tao-Yuan 333324, Taiwan; School of Nursing, College of Medicine, Chang Gung University, Tao-Yuan 333323, Taiwan; Department of Nursing, Linkou Chang Gung Memorial Hospital, Linkou 333423, Taiwan
- I-Ju Lai: Department of Nutrition, I-Shou University, Kaohsiung City 824005, Taiwan
4.
Sosa BR, Cung M, Suhardi VJ, Morse K, Thomson A, Yang HS, Iyer S, Greenblatt MB. Capacity for large language model chatbots to aid in orthopedic management, research, and patient queries. J Orthop Res 2024; 42:1276-1282. [PMID: 38245845] [DOI: 10.1002/jor.25782]
Abstract
Large language model (LLM) chatbots possess a remarkable capacity to synthesize complex information into concise, digestible summaries across a wide range of orthopedic subject matter. As LLM chatbots become widely available, they will serve as a powerful, accessible resource that patients, clinicians, and researchers may reference to obtain information about orthopedic science and clinical management. Here, we examined the performance of three well-known and easily accessible chatbots, ChatGPT, Bard, and Bing AI, in responding to inquiries relating to clinical management and orthopedic concepts. Although all three chatbots were capable of generating relevant responses, ChatGPT outperformed Bard and Bing AI in each category owing to its ability to provide accurate and complete responses to orthopedic queries. Despite their promising applications in clinical management, observed shortcomings included incomplete responses, lack of context, and outdated information. The ability of these LLM chatbots to address such inquiries has largely yet to be evaluated, and doing so will be critical for understanding the risks and opportunities of LLM chatbots in orthopedics.
Affiliation(s)
- Branden R Sosa: Department of Pathology and Laboratory Medicine, Weill Cornell Medical College, New York, New York, USA
- Michelle Cung: Department of Pathology and Laboratory Medicine, Weill Cornell Medical College, New York, New York, USA
- Vincentius J Suhardi: Research Division and Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York, USA
- Kyle Morse: Department of Spine Surgery, Hospital for Special Surgery, New York, New York, USA
- Andrew Thomson: Research Division and Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York, USA
- He S Yang: Department of Pathology and Laboratory Medicine, Weill Cornell Medical College, New York, New York, USA
- Sravisht Iyer: Department of Spine Surgery, Hospital for Special Surgery, New York, New York, USA
- Matthew B Greenblatt: Department of Pathology and Laboratory Medicine, Weill Cornell Medical College, New York, New York, USA; Research Division and Department of Orthopedic Surgery, Hospital for Special Surgery, New York, New York, USA
5.
Miao J, Thongprayoon C, Craici IM, Cheungpasitporn W. How to improve ChatGPT performance for nephrologists: a technique guide. J Nephrol 2024. [PMID: 38771519] [DOI: 10.1007/s40620-024-01974-z]
Abstract
BACKGROUND The integration of ChatGPT into nephrology presents opportunities for enhanced decision-making and patient care. However, refining its performance to meet the specific needs of nephrologists remains a challenge. This guide offers a strategic roadmap for advancing ChatGPT's effectiveness in nephrological applications. METHODS Utilizing the advanced capabilities of GPT-4, we customized user profiles to optimize the model's response quality for nephrological inquiries. We assessed the efficacy of chain-of-thought prompting versus standard prompting in delineating the diagnostic pathway for nephrogenic diabetes insipidus-associated hypernatremia and polyuria. Additionally, we explored the influence of integrating retrieval-augmented generation on the model's proficiency in detailing pharmacological interventions to decelerate the progression from chronic kidney disease (CKD) G3 to end-stage kidney disease (ESKD), comparing it to responses without retrieval-augmented generation. RESULTS In contrast to standard prompting, the chain-of-thought method offers a step-by-step diagnostic process that mirrors the intricate thought processes needed for diagnosing nephrogenic diabetes insipidus-related hypernatremia and polyuria. This begins with an initial assessment, notably including a water deprivation test. After evaluating the outcomes of this test, the approach continues by identifying potential causes. Furthermore, if a patient's history suggests lithium usage, the chain-of-thought model adjusts by proposing a more customized course of action. In response to "List medication treatment to help slow progression of CKD G3 to ESKD?", GPT-4 provides only a general summary of medication options. Nevertheless, a specialized GPT-4 model equipped with a retrieval-augmented generation system delivers more precise responses, including renin-angiotensin system inhibitors, sodium-glucose cotransporter-2 inhibitors, and mineralocorticoid receptor antagonists, aligning well with the 2024 KDIGO guidelines. CONCLUSIONS GPT-4, when integrated with chain-of-thought prompting and retrieval-augmented generation techniques, demonstrates enhanced performance in the nephrology domain. This guide underscores the transformative potential of chain-of-thought and retrieval-augmented generation techniques in optimizing ChatGPT for nephrology and highlights the ongoing need for innovative, tailored AI solutions in specialized medical fields.
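The prompting strategies compared in this guide can be illustrated as plain prompt assembly. This is a hedged sketch only: the exact wording, the step hints, and the guideline snippets below are hypothetical placeholders, not the authors' actual prompts or the KDIGO text.

```python
def standard_prompt(question: str) -> str:
    # Standard prompting: pass the question through unchanged.
    return question

def chain_of_thought_prompt(question: str) -> str:
    # Chain-of-thought prompting: ask the model to reason stepwise
    # before answering (step hints here are illustrative).
    return (f"{question}\n"
            "Work through this step by step: first the initial assessment "
            "(e.g., a water deprivation test), then interpret the results, "
            "then list possible causes, then tailor the plan to the history.")

def rag_prompt(question: str, retrieved_passages: list[str]) -> str:
    # Retrieval-augmented generation: prepend retrieved guideline text
    # so the answer is grounded in it.
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return ("Use only the guideline excerpts below to answer.\n"
            f"Guideline excerpts:\n{context}\n\nQuestion: {question}")

question = "List medication treatment to help slow progression of CKD G3 to ESKD?"
prompt = rag_prompt(question, [
    "Placeholder snippet about RAS inhibitors.",    # hypothetical retrieval
    "Placeholder snippet about SGLT2 inhibitors.",  # hypothetical retrieval
])
print(prompt.splitlines()[0])  # Use only the guideline excerpts below to answer.
```

In a real pipeline the retrieved passages would come from a vector search over guideline documents; here they are hard-coded to keep the sketch self-contained.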
Affiliation(s)
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Iasmina M Craici: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
6.
Jin H, Lin Q, Lu J, Hu C, Lu B, Jiang N, Wu S, Li X. Evaluating the Effectiveness of a Generative Pretrained Transformer-Based Dietary Recommendation System in Managing Potassium Intake for Hemodialysis Patients. J Ren Nutr 2024. [PMID: 38615701] [DOI: 10.1053/j.jrn.2024.04.001]
Abstract
OBJECTIVE Despite adequate dialysis, the prevalence of hyperkalemia in Chinese hemodialysis (HD) patients remains elevated. This study aims to evaluate the effectiveness of a dietary recommendation system driven by generative pretrained transformers (GPTs) in managing potassium levels in HD patients. METHODS We implemented a bespoke dietary guidance tool utilizing GPT technology. Patients undergoing HD at our center were enrolled in the study from October 2023 to November 2023. The intervention comprised two distinct phases. Initially, patients were provided with conventional dietary education focused on potassium management in HD. Subsequently, in the second phase, they were introduced to a novel GPT-based dietary guidance tool. This artificial intelligence (AI)-powered tool offered real-time insights into the potassium content of various foods and personalized dietary suggestions. The effectiveness of the AI tool was evaluated by assessing the precision of its dietary recommendations. Additionally, we compared predialysis serum potassium levels and the proportion of patients with hyperkalemia before and after the implementation of the GPT-based dietary guidance system. RESULTS In our analysis of 324 food photographs uploaded by 88 HD patients, the GPT-based system evaluated potassium content with an overall accuracy of 65%. Notably, accuracy was higher for high-potassium foods at 85%, while it stood at 48% for low-potassium foods. Furthermore, the study examined the effect of GPT-based dietary advice on patients' serum potassium levels, revealing a significant reduction in those adhering to the GPT-based recommendations compared with recipients of traditional dietary guidance (4.57 ± 0.76 mmol/L vs. 4.84 ± 0.94 mmol/L, P = .004). Importantly, compared with traditional dietary education, dietary education based on the GPT tool reduced the proportion of hyperkalemia in HD patients from 39.8% to 25% (P = .036). CONCLUSION These results underscore the promising role of AI in improving dietary management for HD patients. Nonetheless, the study also points to the need for improved accuracy in identifying low-potassium foods. It paves the way for future research, suggesting the incorporation of extensive nutritional databases and the assessment of long-term outcomes. This could potentially lead to more refined and effective dietary management strategies in HD care.
Affiliation(s)
- Haijiao Jin: Department of Nephrology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Nephrology, Ningbo Hangzhou Bay Hospital, China; Molecular Cell Lab for Kidney Disease, Shanghai, China; Shanghai Peritoneal Dialysis Research Center, Shanghai, China; Uremia Diagnosis and Treatment Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qisheng Lin: Department of Nephrology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Molecular Cell Lab for Kidney Disease, Shanghai, China; Shanghai Peritoneal Dialysis Research Center, Shanghai, China; Uremia Diagnosis and Treatment Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jifang Lu: Department of Nephrology, Ningbo Hangzhou Bay Hospital, China
- Cuirong Hu: Department of Nephrology, Ningbo Hangzhou Bay Hospital, China
- Bohan Lu: Department of Nephrology, Ningbo Hangzhou Bay Hospital, China
- Na Jiang: Department of Nephrology, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Department of Nephrology, Ningbo Hangzhou Bay Hospital, China; Molecular Cell Lab for Kidney Disease, Shanghai, China; Shanghai Peritoneal Dialysis Research Center, Shanghai, China; Uremia Diagnosis and Treatment Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shaun Wu: WORK Medical Technology Group LTD, Hangzhou, China
- Xiaoyang Li: Department of Medical Education, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
7.
Sallam M, Barakat M, Sallam M. A Preliminary Checklist (METRICS) to Standardize the Design and Reporting of Studies on Generative Artificial Intelligence-Based Models in Health Care Education and Practice: Development Study Involving a Literature Review. Interact J Med Res 2024; 13:e54704. [PMID: 38276872] [PMCID: PMC10905357] [DOI: 10.2196/54704]
Abstract
BACKGROUND Adherence to evidence-based practice is indispensable in health care. Recently, the utility of generative artificial intelligence (AI) models in health care has been evaluated extensively. However, the lack of consensus guidelines on the design and reporting of findings of these studies poses a challenge for the interpretation and synthesis of evidence. OBJECTIVE This study aimed to develop a preliminary checklist to standardize the reporting of generative AI-based studies in health care education and practice. METHODS A literature review was conducted in Scopus, PubMed, and Google Scholar. Published records with "ChatGPT," "Bing," or "Bard" in the title were retrieved. Careful examination of the methodologies employed in the included records was conducted to identify the common pertinent themes and the possible gaps in reporting. A panel discussion was held to establish a unified and thorough checklist for the reporting of AI studies in health care. The finalized checklist was used to evaluate the included records by 2 independent raters, and Cohen κ was used to evaluate the interrater reliability. RESULTS The final data set that formed the basis for pertinent theme identification and analysis comprised a total of 34 records. The finalized checklist included 9 pertinent themes collectively referred to as METRICS (Model; Evaluation; Timing and Transparency; Range and Randomization; Individual factors; Count; Specificity of prompts and language). Their details are as follows: (1) Model used and its exact settings; (2) Evaluation approach for the generated content; (3) Timing of testing the model; (4) Transparency of the data source; (5) Range of tested topics; (6) Randomization of selecting the queries; (7) Individual factors in selecting the queries and interrater reliability; (8) Count of queries executed to test the model; and (9) Specificity of the prompts and language used. The overall mean METRICS score was 3.0 (SD 0.58). Interrater reliability was acceptable, with Cohen κ ranging from 0.558 to 0.962 (P<.001 for all 9 items). Per item, the highest average score was recorded for the "Model" item, followed by the "Specificity" item, while the lowest scores were recorded for the "Randomization" item (classified as suboptimal) and the "Individual factors" item (classified as satisfactory). CONCLUSIONS The METRICS checklist can facilitate the design of studies, guiding researchers toward best practices in reporting results. The findings highlight the need for standardized reporting algorithms for generative AI-based studies in health care, considering the variability observed in methodologies and reporting. The proposed METRICS checklist could be a helpful preliminary base for establishing a universally accepted approach to standardize the design and reporting of generative AI-based studies in health care, which is a swiftly evolving research topic.
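Interrater reliability figures like the Cohen κ range reported here come from a standard calculation over the two raters' item-level labels. A minimal sketch; the rating sequences below are invented for illustration, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of 8 records on one checklist item (5-point scale).
a = [5, 4, 4, 3, 5, 2, 4, 3]
b = [5, 4, 3, 3, 5, 2, 4, 4]
print(round(cohens_kappa(a, b), 3))  # 0.652
```

κ corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement for checklist validation.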
Affiliation(s)
- Malik Sallam: Department of Pathology, Microbiology and Forensic Medicine, School of Medicine, The University of Jordan, Amman, Jordan; Department of Clinical Laboratories and Forensic Medicine, Jordan University Hospital, Amman, Jordan; Department of Translational Medicine, Faculty of Medicine, Lund University, Malmo, Sweden
- Muna Barakat: Department of Clinical Pharmacy and Therapeutics, Faculty of Pharmacy, Applied Science Private University, Amman, Jordan
- Mohammed Sallam: Department of Pharmacy, Mediclinic Parkview Hospital, Mediclinic Middle East, Dubai, United Arab Emirates
8.
Aiumtrakul N, Thongprayoon C, Arayangkool C, Vo KB, Wannaphut C, Suppadungsuk S, Krisanapan P, Garcia Valencia OA, Qureshi F, Miao J, Cheungpasitporn W. Personalized Medicine in Urolithiasis: AI Chatbot-Assisted Dietary Management of Oxalate for Kidney Stone Prevention. J Pers Med 2024; 14:107. [PMID: 38248809] [PMCID: PMC10817681] [DOI: 10.3390/jpm14010107]
Abstract
Accurate information regarding oxalate levels in foods is essential for managing patients with hyperoxaluria or oxalate nephropathy, or those susceptible to calcium oxalate stones. This study aimed to assess the reliability of chatbots in categorizing foods based on their oxalate content. We assessed the accuracy of ChatGPT-3.5, ChatGPT-4, Bard AI, and Bing Chat in classifying dietary oxalate content per serving into low (<5 mg), moderate (5-8 mg), and high (>8 mg) categories. A total of 539 food items were processed through each chatbot. Accuracy was compared between chatbots and stratified by oxalate content category. Bard AI had the highest accuracy at 84%, followed by Bing (60%), GPT-4 (52%), and GPT-3.5 (49%) (p < 0.001). All pairwise differences between chatbots were significant, except between GPT-4 and GPT-3.5 (p = 0.30). The accuracy of all the chatbots decreased for higher oxalate content categories, but Bard remained the most accurate regardless of category. There was considerable variation in the accuracy of AI chatbots in classifying dietary oxalate content. Bard AI consistently showed the highest accuracy, followed by Bing Chat, GPT-4, and GPT-3.5. These results underline the potential of AI in dietary management for at-risk patient groups and the need for enhancements in chatbot algorithms to reach clinical accuracy.
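The three-way classification evaluated in this study follows directly from the cutoffs stated in the abstract (low <5 mg, moderate 5-8 mg, high >8 mg per serving). A minimal sketch of the categorization and the accuracy comparison; the food values and chatbot labels below are hypothetical:

```python
def oxalate_category(mg_per_serving: float) -> str:
    """Classify oxalate content per serving using the study's cutoffs."""
    if mg_per_serving < 5:
        return "low"
    if mg_per_serving <= 8:
        return "moderate"
    return "high"

def accuracy(reference: dict, chatbot: dict) -> float:
    """Fraction of foods where the chatbot's category matches the reference."""
    matches = sum(chatbot.get(food) == cat for food, cat in reference.items())
    return matches / len(reference)

# Hypothetical reference values (mg oxalate per serving) and chatbot labels.
reference = {food: oxalate_category(mg) for food, mg in
             {"banana": 3, "carrot": 7, "spinach": 755}.items()}
chatbot = {"banana": "low", "carrot": "moderate", "spinach": "moderate"}
print(round(accuracy(reference, chatbot), 2))  # 0.67
```

Computing this per category (only low foods, only moderate foods, and so on) yields the stratified accuracies the study reports.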
Affiliation(s)
- Noppawit Aiumtrakul: Department of Medicine, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI 96813, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chinnawat Arayangkool: Department of Medicine, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI 96813, USA
- Kristine B. Vo: Department of Medicine, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI 96813, USA
- Chalothorn Wannaphut: Department of Medicine, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI 96813, USA
- Supawadee Suppadungsuk: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Pajaree Krisanapan: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Division of Nephrology, Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand
- Oscar A. Garcia Valencia: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
9.
Miao J, Thongprayoon C, Suppadungsuk S, Krisanapan P, Radhakrishnan Y, Cheungpasitporn W. Chain of Thought Utilization in Large Language Models and Application in Nephrology. Medicina (Kaunas) 2024; 60:148. [PMID: 38256408] [PMCID: PMC10819595] [DOI: 10.3390/medicina60010148]
Abstract
Chain-of-thought prompting enhances the abilities of large language models (LLMs) significantly. It not only makes these models more specific and context-aware but also impacts the wider field of artificial intelligence (AI). This approach broadens the usability of AI, increases its efficiency, and aligns it more closely with human thinking and decision-making processes. As the method improves, it is set to become a key element in the future of AI, adding more purpose, precision, and ethical consideration to these technologies. In medicine, chain-of-thought prompting is especially beneficial. Its capacity to handle complex information, its logical and sequential reasoning, and its suitability for ethically and context-sensitive situations make it an invaluable tool for healthcare professionals. Its role in enhancing medical care and research is expected to grow as the technique is further developed and used. Chain-of-thought prompting bridges the gap between AI's traditionally obscure decision-making process and the clear, accountable standards required in healthcare. It does this by emulating a reasoning style familiar to medical professionals, fitting well into their existing practices and ethical codes. While AI transparency remains a complex challenge, the chain-of-thought approach is a significant step toward making AI more comprehensible and trustworthy in medicine. This review focuses on understanding the workings of LLMs, particularly how chain-of-thought prompting can be adapted to nephrology's unique requirements. It also aims to thoroughly examine the ethical aspects, clarity, and future possibilities, offering an in-depth view of the convergence of these areas.
Affiliation(s)
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Pajaree Krisanapan: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Division of Nephrology, Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand; Division of Nephrology, Department of Internal Medicine, Thammasat University Hospital, Pathum Thani 12120, Thailand
- Yeshwanter Radhakrishnan: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
10
Miao J, Thongprayoon C, Suppadungsuk S, Garcia Valencia OA, Qureshi F, Cheungpasitporn W. Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review. Clin Pract 2023; 14:89-105. [PMID: 38248432 PMCID: PMC10801601 DOI: 10.3390/clinpract14010008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 12/23/2023] [Accepted: 12/28/2023] [Indexed: 01/23/2024] Open
Abstract
The emergence of artificial intelligence (AI) has greatly propelled progress across various sectors including the field of nephrology academia. However, this advancement has also given rise to ethical challenges, notably in scholarly writing. AI's capacity to automate labor-intensive tasks like literature reviews and data analysis has created opportunities for unethical practices, with scholars incorporating AI-generated text into their manuscripts, potentially undermining academic integrity. This situation gives rise to a range of ethical dilemmas that not only question the authenticity of contemporary academic endeavors but also challenge the credibility of the peer-review process and the integrity of editorial oversight. Instances of this misconduct are highlighted, spanning from lesser-known journals to reputable ones, and even infiltrating graduate theses and grant applications. This subtle AI intrusion hints at a systemic vulnerability within the academic publishing domain, exacerbated by the publish-or-perish mentality. The solutions aimed at mitigating the unethical employment of AI in academia include the adoption of sophisticated AI-driven plagiarism detection systems, a robust augmentation of the peer-review process with an "AI scrutiny" phase, comprehensive training for academics on ethical AI usage, and the promotion of a culture of transparency that acknowledges AI's role in research. This review underscores the pressing need for collaborative efforts among academic nephrology institutions to foster an environment of ethical AI application, thus preserving the esteemed academic integrity in the face of rapid technological advancements. It also makes a plea for rigorous research to assess the extent of AI's involvement in the academic literature, evaluate the effectiveness of AI-enhanced plagiarism detection tools, and understand the long-term consequences of AI utilization on academic integrity. 
An example framework has been proposed to outline a comprehensive approach to integrating AI into Nephrology academic writing and peer review. Using proactive initiatives and rigorous evaluations, a harmonious environment that harnesses AI's capabilities while upholding stringent academic standards can be envisioned.
Affiliation(s)
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bang Phli 10540, Samut Prakan, Thailand
- Oscar A. Garcia Valencia: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
11
Miao J, Thongprayoon C, Suppadungsuk S, Garcia Valencia OA, Qureshi F, Cheungpasitporn W. Innovating Personalized Nephrology Care: Exploring the Potential Utilization of ChatGPT. J Pers Med 2023; 13:1681. [PMID: 38138908 PMCID: PMC10744377 DOI: 10.3390/jpm13121681] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 12/02/2023] [Accepted: 12/02/2023] [Indexed: 12/24/2023] Open
Abstract
The rapid advancement of artificial intelligence (AI) technologies, particularly machine learning, has brought substantial progress to the field of nephrology, enabling significant improvements in the management of kidney diseases. ChatGPT, a revolutionary language model developed by OpenAI, is a versatile AI model designed to engage in meaningful and informative conversations. Its applications in healthcare have been notable, with demonstrated proficiency in various medical knowledge assessments. However, ChatGPT's performance varies across different medical subfields, posing challenges in nephrology-related queries. At present, comprehensive reviews regarding ChatGPT's potential applications in nephrology remain lacking despite the surge of interest in its role in various domains. This article seeks to fill this gap by presenting an overview of the integration of ChatGPT in nephrology. It discusses the potential benefits of ChatGPT in nephrology, encompassing dataset management, diagnostics, treatment planning, and patient communication and education, as well as medical research and education. It also explores ethical and legal concerns regarding the utilization of AI in medical practice. The continuous development of AI models like ChatGPT holds promise for the healthcare realm but also underscores the necessity of thorough evaluation and validation before implementing AI in real-world medical scenarios. This review serves as a valuable resource for nephrologists and healthcare professionals interested in fully utilizing the potential of AI in innovating personalized nephrology care.
Affiliation(s)
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Oscar A. Garcia Valencia: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
12
Ittarat M, Cheungpasitporn W, Chansangpetch S. Personalized Care in Eye Health: Exploring Opportunities, Challenges, and the Road Ahead for Chatbots. J Pers Med 2023; 13:1679. [PMID: 38138906 PMCID: PMC10744965 DOI: 10.3390/jpm13121679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 11/29/2023] [Accepted: 11/30/2023] [Indexed: 12/24/2023] Open
Abstract
In modern eye care, the adoption of ophthalmology chatbots stands out as a pivotal technological progression. These digital assistants present numerous benefits, such as better access to vital information, heightened patient interaction, and streamlined triaging. Recent evaluations have highlighted their performance in both the triage of ophthalmology conditions and ophthalmology knowledge assessment, underscoring their potential and areas for improvement. However, assimilating these chatbots into the prevailing healthcare infrastructures brings challenges. These encompass ethical dilemmas, legal compliance, seamless integration with electronic health records (EHR), and fostering effective dialogue with medical professionals. Addressing these challenges necessitates the creation of bespoke standards and protocols for ophthalmology chatbots. The horizon for these chatbots is illuminated by advancements and anticipated innovations, poised to redefine the delivery of eye care. The synergy of artificial intelligence (AI) and machine learning (ML) with chatbots amplifies their diagnostic prowess. Additionally, their capability to adapt linguistically and culturally ensures they can cater to a global patient demographic. In this article, we explore in detail the utilization of chatbots in ophthalmology, examining their accuracy, reliability, data protection, security, transparency, potential algorithmic biases, and ethical considerations. We provide a comprehensive review of their roles in the triage of ophthalmology conditions and knowledge assessment, emphasizing their significance and future potential in the field.
Affiliation(s)
- Mantapond Ittarat: Surin Hospital and Surin Medical Education Center, Suranaree University of Technology, Surin 32000, Thailand
- Sunee Chansangpetch: Center of Excellence in Glaucoma, Chulalongkorn University, Bangkok 10330, Thailand; Department of Ophthalmology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok 10330, Thailand
13
Verejan V. Contrast sensitivity and aspects of binocular vision alteration in school-aged children after head injury. Rom J Ophthalmol 2023; 67:394-397. [PMID: 38239421 PMCID: PMC10793377 DOI: 10.22336/rjo.2023.62] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/03/2023] [Indexed: 01/22/2024] Open
Abstract
Aim: The research aimed to establish whether contrast sensitivity is a reliable method of evaluation in school-aged children after head injury, and to characterize binocular vision alteration in the acute phase of traumatic brain injury (TBI). Materials and methods: Forty-eight individuals with persisting visual symptoms after brain injury were examined. Results: Contrast sensitivity in the 61%-100% range was found in 56.3%-58.3% of cases in the research group, compared with 93.7%-95.8% of cases in the control group. Repeated evaluation during the 4 months after the head trauma revealed an incidence of 83.3%-89.6% for the research group and 97.9% for the control group in the same 61%-100% interval. Binocular vision proved to be unchanged in 79.17% of patients and was absent in only 4.16% of patients, who later presented a secondary divergent strabismus. Conclusions: Contrast sensitivity testing is easily performed in school-aged children after head injury. Although children often consider it an interesting game, its results should be taken into consideration when traumatic optic neuropathy is suspected. Since most pediatric patients aged 7-18 years show a slight decrease in contrast sensitivity after head trauma, this examination should be performed as part of the ophthalmological evaluation of pediatric patients following head injury.
Affiliation(s)
- Victoria Verejan: Department of Ophthalmology, "N. Testemițanu" State University of Medicine and Pharmacy, Chişinău, Republic of Moldova
14
Kaushik J, Bhatta S, Singh A, Jha R. Assessing the Competence of Artificial Intelligence Programs in Pediatric Ophthalmology and Strabismus and Comparing their Relative Advantages. Rom J Ophthalmol 2023; 67:389-393. [PMID: 38239420 PMCID: PMC10793362 DOI: 10.22336/rjo.2023.61] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/26/2023] [Indexed: 01/22/2024] Open
Abstract
Objective: The aim of the study was to determine the knowledge levels of the ChatGPT, Bing, and Bard artificial intelligence programs, produced by three different manufacturers, regarding pediatric ophthalmology and strabismus, and to compare their strengths and weaknesses. Methods: Forty-four questions testing knowledge of pediatric ophthalmology and strabismus were posed to the ChatGPT, Bing, and Bard artificial intelligence programs. Answers were classified as correct or incorrect, and the accuracy rates were statistically compared. Results: The ChatGPT chatbot answered 59.1% of the questions correctly, the Bing chatbot 70.5%, and the Bard chatbot 72.7%. No significant difference was observed among the correct-answer rates of the three artificial intelligence programs (p=0.343, Pearson's chi-square test). Conclusion: Although information about pediatric ophthalmology and strabismus can be accessed using current artificial intelligence programs, the answers given may not always be accurate. Care should always be taken when evaluating this information.
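The reported p-value can be reproduced from the abstract's figures. The correct-answer counts below (26, 31, and 32 of 44) are reconstructed from the stated percentages, an inference rather than raw study data; with a 2×3 contingency table the test has 2 degrees of freedom, for which the chi-square survival function reduces to exp(-x/2), so no statistics library is needed.

```python
import math

# Correct/incorrect counts reconstructed from 59.1%, 70.5%, 72.7% of 44
# questions (an assumption inferred from the abstract, not raw data)
observed = [
    [26, 31, 32],  # correct answers: ChatGPT, Bing, Bard
    [18, 13, 12],  # incorrect answers
]

row_totals = [sum(row) for row in observed]        # [89, 43]
col_totals = [sum(col) for col in zip(*observed)]  # [44, 44, 44]
grand_total = sum(row_totals)                      # 132

# Pearson chi-square statistic for the 2x3 contingency table
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# df = (2-1)*(3-1) = 2, so the p-value is simply exp(-chi2/2)
p_value = math.exp(-chi2 / 2)
print(round(p_value, 3))  # 0.343, matching the reported result
```

The recomputed p of 0.343 confirms the abstract's conclusion that the three chatbots' accuracy rates do not differ significantly on this sample size.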
Affiliation(s)
- Jaya Kaushik: Department of Ophthalmology, Command Hospital (Lucknow), U.P., India
- Sunandan Bhatta: Department of Ophthalmology, Military Hospital (Agra), U.P., India
- Ankita Singh: Department of Ophthalmology, Military Hospital (Bathinda), Punjab, India
- Rakesh Jha: Department of Ophthalmology, Command Hospital (Lucknow), U.P., India
15
Aiumtrakul N, Thongprayoon C, Suppadungsuk S, Krisanapan P, Miao J, Qureshi F, Cheungpasitporn W. Navigating the Landscape of Personalized Medicine: The Relevance of ChatGPT, BingChat, and Bard AI in Nephrology Literature Searches. J Pers Med 2023; 13:1457. [PMID: 37888068 PMCID: PMC10608326 DOI: 10.3390/jpm13101457] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2023] [Revised: 09/29/2023] [Accepted: 09/29/2023] [Indexed: 10/28/2023] Open
Abstract
BACKGROUND AND OBJECTIVES Literature reviews are foundational to understanding medical evidence. With AI tools such as ChatGPT, Bing Chat, and Bard AI emerging as potential aids in this domain, this study aimed to individually assess their citation accuracy within Nephrology, comparing their performance in providing precise references. MATERIALS AND METHODS We generated a prompt soliciting 20 references in Vancouver style for each of 12 Nephrology topics, using ChatGPT, Bing Chat, and Bard. We verified the existence and accuracy of the provided references using PubMed, Google Scholar, and Web of Science, and categorized each reference from the AI chatbots as (1) incomplete, (2) fabricated, (3) inaccurate, or (4) accurate. RESULTS A total of 199 (83%), 158 (66%), and 112 (47%) unique references were provided by ChatGPT, Bing Chat, and Bard, respectively. ChatGPT provided 76 (38%) accurate, 82 (41%) inaccurate, 32 (16%) fabricated, and 9 (5%) incomplete references. Bing Chat provided 47 (30%) accurate, 77 (49%) inaccurate, 21 (13%) fabricated, and 13 (8%) incomplete references. In contrast, Bard provided 3 (3%) accurate, 26 (23%) inaccurate, 71 (63%) fabricated, and 12 (11%) incomplete references. The most common error type across platforms was incorrect DOIs. CONCLUSIONS In medicine, faultless adherence to research integrity is essential; even small errors cannot be tolerated. The outcomes of this investigation draw attention to inconsistent citation accuracy across the AI tools evaluated. Despite some promising results, the discrepancies identified call for cautious and rigorous vetting of AI-sourced references in medicine. Before becoming standard tools, such chatbots need substantial refinement to assure unwavering precision in their outputs.
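The per-category counts in this abstract can be cross-checked arithmetically; the short script below (illustrative only, using the counts as reported) recomputes each chatbot's total of unique references and its rounded percentage shares, which agree with the figures quoted above.

```python
# Reference counts per chatbot as reported in the abstract:
# (accurate, inaccurate, fabricated, incomplete)
counts = {
    "ChatGPT": (76, 82, 32, 9),
    "Bing Chat": (47, 77, 21, 13),
    "Bard": (3, 26, 71, 12),
}

# Totals should reproduce the 199 / 158 / 112 unique references reported
totals = {bot: sum(cats) for bot, cats in counts.items()}

# Percentage shares rounded to whole numbers, as in the abstract
shares = {
    bot: [round(100 * c / totals[bot]) for c in cats]
    for bot, cats in counts.items()
}

print(totals)          # per-chatbot unique reference totals
print(shares["Bard"])  # Bard's category percentages
```

Recomputing the shares makes the headline finding concrete: fabricated references are Bard's largest category (63%), whereas for ChatGPT and Bing Chat the largest category is inaccurate-but-existing references.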
Affiliation(s)
- Noppawit Aiumtrakul: Department of Medicine, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI 96813, USA
- Charat Thongprayoon: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Pajaree Krisanapan: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA; Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand
- Jing Miao: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn: Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA