1. Bedi S, Liu Y, Orr-Ewing L, Dash D, Koyejo S, Callahan A, Fries JA, Wornow M, Swaminathan A, Lehmann LS, Hong HJ, Kashyap M, Chaurasia AR, Shah NR, Singh K, Tazbaz T, Milstein A, Pfeffer MA, Shah NH. Testing and Evaluation of Health Care Applications of Large Language Models: A Systematic Review. JAMA 2025;333:319-328. PMID: 39405325; PMCID: PMC11480901; doi:10.1001/jama.2024.21700
Abstract
Importance: Large language models (LLMs) can assist in various health care activities, but current evaluation approaches may not adequately identify the most useful application areas.
Objective: To summarize existing evaluations of LLMs in health care in terms of 5 components: (1) evaluation data type, (2) health care task, (3) natural language processing (NLP) and natural language understanding (NLU) tasks, (4) dimension of evaluation, and (5) medical specialty.
Data Sources: A systematic search of PubMed and Web of Science for studies published between January 1, 2022, and February 19, 2024.
Study Selection: Studies evaluating 1 or more LLMs in health care.
Data Extraction and Synthesis: Three independent reviewers categorized studies via keyword searches based on the data used, the health care tasks, the NLP and NLU tasks, the dimensions of evaluation, and the medical specialty.
Results: Of 519 studies reviewed, only 5% used real patient care data for LLM evaluation. The most common health care tasks were assessing medical knowledge, such as answering medical licensing examination questions (44.5%), and making diagnoses (19.5%). Administrative tasks such as assigning billing codes (0.2%) and writing prescriptions (0.2%) were less studied. For NLP and NLU tasks, most studies focused on question answering (84.2%), while tasks such as summarization (8.9%) and conversational dialogue (3.3%) were infrequent. Almost all studies (95.4%) used accuracy as the primary dimension of evaluation; fairness, bias, and toxicity (15.8%), deployment considerations (4.6%), and calibration and uncertainty (1.2%) were infrequently measured. By medical specialty, most studies addressed generic health care applications (25.6%), internal medicine (16.4%), surgery (11.4%), and ophthalmology (6.9%), with nuclear medicine (0.6%), physical medicine (0.4%), and medical genetics (0.2%) the least represented.
Conclusions and Relevance: Existing evaluations of LLMs mostly focus on accuracy of question answering for medical examinations, without considering real patient care data. Dimensions such as fairness, bias, and toxicity, and deployment considerations received limited attention. Future evaluations should adopt standardized applications and metrics, use clinical data, and broaden focus to a wider range of tasks and specialties.
Affiliation(s)
- Suhana Bedi
- Department of Biomedical Data Science, Stanford School of Medicine, Stanford, California
- Yutong Liu
- Clinical Excellence Research Center, Stanford University, Stanford, California
- Lucy Orr-Ewing
- Clinical Excellence Research Center, Stanford University, Stanford, California
- Dev Dash
- Clinical Excellence Research Center, Stanford University, Stanford, California
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
- Sanmi Koyejo
- Department of Computer Science, Stanford University, Stanford, California
- Alison Callahan
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
- Jason A Fries
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
- Michael Wornow
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
- Akshay Swaminathan
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
- Hyo Jung Hong
- Department of Anesthesiology, Stanford University, Stanford, California
- Mehr Kashyap
- Stanford University School of Medicine, Stanford, California
- Akash R Chaurasia
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
- Nirav R Shah
- Clinical Excellence Research Center, Stanford University, Stanford, California
- Karandeep Singh
- Digital Health Innovation, University of California San Diego Health, San Diego
- Troy Tazbaz
- Digital Health Center of Excellence, US Food and Drug Administration, Washington, DC
- Arnold Milstein
- Clinical Excellence Research Center, Stanford University, Stanford, California
- Michael A Pfeffer
- Department of Medicine, Stanford University School of Medicine, Stanford, California
- Nigam H Shah
- Clinical Excellence Research Center, Stanford University, Stanford, California
- Center for Biomedical Informatics Research, Stanford University, Stanford, California
2. Başaran M, Duman C. Dialogues with artificial intelligence: Exploring medical students' perspectives on ChatGPT. Med Teach 2024:1-10. PMID: 39692300; doi:10.1080/0142159x.2024.2438766
Abstract
ChatGPT has initiated a new era of inquiry into sources of information within the scientific community. Studies leveraging ChatGPT in the medical field have demonstrated notable performance in academic processes and healthcare applications. This study reports how medical students benefited from ChatGPT during their education and the challenges they encountered, as told through their personal experiences. An explanatory case study, a qualitative research method, was adopted to characterize user experiences with ChatGPT. Content analysis of the students' experiences indicates that ChatGPT may offer advantages in health education as a resource for scientific research activities. However, adverse reports were also identified, including ethical issues, lack of personal data protection, and potential misuse in scientific research. The study emphasizes the need for comprehensive steps to integrate AI tools such as ChatGPT effectively into medical education.
Affiliation(s)
- Mehmet Başaran
- Curriculum and Instruction, Gaziantep University, Gaziantep, Turkey
- Cevahir Duman
- Curriculum and Instruction, Gaziantep University, Gaziantep, Turkey
3. Akyol Onder EN, Ensari E, Ertan P. ChatGPT-4o's performance on pediatric vesicoureteral reflux. J Pediatr Urol 2024:S1477-5131(24)00619-3. PMID: 39694777; doi:10.1016/j.jpurol.2024.12.002
Abstract
Introduction: Vesicoureteral reflux (VUR) is a common congenital or acquired urinary disorder in children. Chat Generative Pre-trained Transformer (ChatGPT) is an artificial intelligence-driven platform offering medical information. This study assessed the reliability and readability of ChatGPT-4o's answers regarding pediatric VUR for a general, non-medical audience.
Materials and Methods: Twenty of the most frequently asked English-language questions about VUR in children were used to evaluate ChatGPT-4o's responses. Two independent reviewers rated reliability and quality using the Global Quality Scale (GQS) and a modified version of the DISCERN tool (mDISCERN). Readability was assessed with the Flesch Reading Ease (FRE) score, Flesch-Kincaid Grade Level (FKGL), Gunning Fog Index (GFI), Coleman-Liau Index (CLI), and Simple Measure of Gobbledygook (SMOG).
Results: Median mDISCERN and GQS scores were 4 (4-5) and 5 (3-5), respectively. Most ChatGPT responses had moderate (55%) or good (45%) reliability by mDISCERN and high quality (95%) by GQS. Mean ± standard deviation scores for FRE, FKGL, SMOG, GFI, and CLI were 26 ± 12, 15 ± 2.5, 16.3 ± 2, 18.8 ± 2.9, and 15.3 ± 2.2, respectively, indicating a high level of reading difficulty.
Discussion: While ChatGPT-4o offers accurate, high-quality information about pediatric VUR, its readability poses challenges: the content is difficult for a general audience to understand.
Conclusion: ChatGPT provides high-quality, accessible information about VUR. However, improving readability should be a priority to make this information more user-friendly for a broader audience.
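The readability indices used in studies like this one are simple functions of sentence, word, and syllable counts. As a rough illustration (not the study's code), the two Flesch formulas can be sketched with a naive vowel-group syllable counter; published tools use more careful syllable and word tokenization, so exact values will differ:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels (y included).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def _counts(text: str) -> tuple[int, int, int]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return len(sentences), len(words), syllables

def flesch_reading_ease(text: str) -> float:
    # FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words);
    # higher scores mean easier text.
    s, w, syl = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def flesch_kincaid_grade(text: str) -> float:
    # FKGL = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59;
    # approximates the US school grade needed to read the text.
    s, w, syl = _counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59
```

Run on a plain sentence versus a dense clinical one, the scores diverge in the expected direction, which is all these indices measure: longer sentences and more syllables per word read as harder.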
Affiliation(s)
- Esra Nagehan Akyol Onder
- Aksaray University Training and Research Hospital, Department of Paediatric Nephrology, Aksaray, TR-68200, Turkey
- Esra Ensari
- Antalya City Hospital, Department of Paediatric Nephrology, Antalya, TR-07080, Turkey
- Pelin Ertan
- Manisa Celal Bayar University, School of Medicine, Department of Paediatric Nephrology, Manisa, TR-45010, Turkey
4. Yu H, Fan L, Li L, Zhou J, Ma Z, Xian L, Hua W, He S, Jin M, Zhang Y, Gandhi A, Ma X. Large Language Models in Biomedical and Health Informatics: A Review with Bibliometric Analysis. J Healthc Inform Res 2024;8:658-711. PMID: 39463859; PMCID: PMC11499577; doi:10.1007/s41666-024-00171-8
Abstract
Large language models (LLMs) have rapidly become important tools in Biomedical and Health Informatics (BHI), potentially enabling new ways to analyze data, treat patients, and conduct research. This study aims to provide a comprehensive overview of LLM applications in BHI, highlighting their transformative potential and addressing the associated ethical and practical challenges. We reviewed 1698 research articles from January 2022 to December 2023, categorizing them by research themes and diagnostic categories. Additionally, we conducted network analysis to map scholarly collaborations and research dynamics. Our findings reveal a substantial increase in the potential applications of LLMs to a variety of BHI tasks, including clinical decision support, patient interaction, and medical document analysis. Notably, LLMs are expected to be instrumental in enhancing the accuracy of diagnostic tools and patient care protocols. The network analysis highlights dense and dynamically evolving collaborations across institutions, underscoring the interdisciplinary nature of LLM research in BHI. A significant trend was the application of LLMs in managing specific disease categories, such as mental health and neurological disorders, demonstrating their potential to influence personalized medicine and public health strategies. LLMs hold promising potential to further transform biomedical research and healthcare delivery. While promising, the ethical implications and challenges of model validation call for rigorous scrutiny to optimize their benefits in clinical settings. This survey serves as a resource for stakeholders in healthcare, including researchers, clinicians, and policymakers, to understand the current state and future potential of LLMs in BHI.
Affiliation(s)
- Huizi Yu
- University of Michigan, Ann Arbor, MI, USA
- Lizhou Fan
- University of Michigan, Ann Arbor, MI, USA
- Lingyao Li
- University of Michigan, Ann Arbor, MI, USA
- Zihui Ma
- University of Maryland, College Park, MD, USA
- Lu Xian
- University of Michigan, Ann Arbor, MI, USA
- Sijia He
- University of Michigan, Ann Arbor, MI, USA
- Ashvin Gandhi
- University of California, Los Angeles, Los Angeles, CA, USA
- Xin Ma
- Shandong University, Jinan, Shandong, China
5. Miao J, Thongprayoon C, Garcia Valencia O, Craici IM, Cheungpasitporn W. Navigating Nephrology's Decline Through a GPT-4 Analysis of Internal Medicine Specialties in the United States: Qualitative Study. JMIR Med Educ 2024;10:e57157. PMID: 39388702; PMCID: PMC11486450; doi:10.2196/57157
Abstract
Background: The 2024 nephrology fellowship match data show declining interest in nephrology in the United States, with an 11% drop in candidates and only 66% (321/488) of positions filled.
Objective: To discern the factors influencing this trend using ChatGPT, a leading chatbot model, for insights into the comparative appeal of nephrology versus other internal medicine specialties.
Methods: Using the GPT-4 model, the study compared nephrology with 13 other internal medicine specialties, evaluating each on 7 criteria: intellectual complexity, work-life balance, procedural involvement, research opportunities, patient relationships, career demand, and financial compensation. Each criterion was scored from 1 to 10, with the cumulative score determining the ranking. To counteract potential bias, GPT-4 was also instructed to favor other specialties over nephrology in reverse scenarios.
Results: GPT-4 ranked nephrology above only sleep medicine. While nephrology scored higher than hospice and palliative medicine, it fell short on key criteria such as work-life balance, patient relationships, and career demand. In the 2024 appointment year match, nephrology's fill rate of 66% was higher only than the 45% (155/348) of geriatric medicine. Nephrology's score decreased by 4%-14% in 5 criteria: intellectual challenge and complexity, procedural involvement, career opportunity and demand, research and academic opportunities, and financial compensation.
Conclusions: ChatGPT does not favor nephrology over most internal medicine specialties, highlighting its diminishing appeal as a career choice. This trend raises significant concerns, especially given the overall physician shortage, and prompts a reevaluation of the factors affecting specialty choice among medical residents.
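The cumulative-score ranking described in this abstract is simple to reproduce in outline. A minimal sketch, with the caveat that the per-criterion numbers below are illustrative placeholders, not the study's GPT-4 scores:

```python
# Each specialty is scored 1-10 on 7 criteria (intellectual complexity,
# work-life balance, procedural involvement, research opportunities,
# patient relationships, career demand, financial compensation); the
# cumulative sum determines the ranking. Scores here are made up.
scores = {
    "cardiology":     [9, 4, 9, 8, 7, 9, 9],
    "nephrology":     [8, 5, 4, 7, 7, 5, 5],
    "sleep medicine": [5, 8, 2, 5, 6, 4, 5],
}

def rank_specialties(scores: dict[str, list[int]]) -> list[str]:
    # Sort specialties by cumulative score, highest first.
    return sorted(scores, key=lambda s: sum(scores[s]), reverse=True)
```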
Affiliation(s)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN 55905, United States
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN 55905, United States
- Oscar Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN 55905, United States
- Iasmina M Craici
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN 55905, United States
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, 200 1st St SW, Rochester, MN 55905, United States
6. Cheungpasitporn W, Thongprayoon C, Ronco C, Kashani KB. Generative AI in Critical Care Nephrology: Applications and Future Prospects. Blood Purif 2024;53:871-883. PMID: 39217985; doi:10.1159/000541168
Abstract
Background: Generative artificial intelligence (AI) is rapidly transforming various aspects of healthcare, including critical care nephrology. Large language models (LLMs), a key technology in generative AI, show promise in enhancing patient care, streamlining workflows, and advancing research in this field.
Summary: This review analyzes the current applications and future prospects of generative AI in critical care nephrology, focusing on clinical decision support, patient education, research, and medical education. Recent studies demonstrate the capabilities of LLMs in diagnostic accuracy, clinical reasoning, and continuous renal replacement therapy (CRRT) alarm troubleshooting. As we enter an era of multiagent models and automation, integrating generative AI into critical care nephrology holds promise for improving patient care, optimizing clinical processes, and accelerating research. However, careful consideration of ethical implications and continued refinement of these technologies are essential for responsible implementation; the review therefore also examines challenges such as privacy concerns, potential bias, and the necessity for human oversight.
Key Messages: (i) LLMs have shown potential in enhancing diagnostic accuracy, clinical reasoning, and CRRT alarm troubleshooting in critical care nephrology. (ii) Generative AI offers promising applications in patient education, literature review, and academic writing within nephrology. (iii) Integrating AI into electronic health records and clinical workflows presents both opportunities and challenges for improving patient care and research. (iv) Addressing ethical concerns, ensuring data privacy, and maintaining human oversight are crucial for responsible implementation of AI in critical care nephrology.
Affiliation(s)
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Claudio Ronco
- Department of Nephrology, San Bortolo Hospital and International Renal Research Institute of Vicenza (IRRIV), Vicenza, Italy
- Kianoush B Kashani
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
7. Acharya PC, Alba R, Krisanapan P, Acharya CM, Suppadungsuk S, Csongradi E, Mao MA, Craici IM, Miao J, Thongprayoon C, Cheungpasitporn W. AI-Driven Patient Education in Chronic Kidney Disease: Evaluating Chatbot Responses against Clinical Guidelines. Diseases 2024;12:185. PMID: 39195184; doi:10.3390/diseases12080185
Abstract
Chronic kidney disease (CKD) patients can benefit from personalized education on lifestyle and nutrition management strategies to enhance healthcare outcomes. The potential of chatbots, introduced in 2022, as tools for educating CKD patients has been explored. A set of 15 questions on lifestyle modification and nutrition, derived from a thorough review of three specific KDIGO guidelines, was developed and posed in various formats, including the original wording, paraphrases with different adverbs, incomplete sentences, and misspellings. Four chatbot versions answered these questions: ChatGPT 3.5 (March and September 2023 versions), ChatGPT 4, and Bard AI. Additionally, 20 questions on lifestyle modification and nutrition were derived from the NKF KDOQI guidelines for nutrition in CKD (2020 update) and answered by the four chatbot versions. Nephrologists reviewed all answers for accuracy. ChatGPT 3.5 produced largely accurate responses across the different question complexities, with occasional misleading statements from the March version. The September 2023 version frequently cited its last update as September 2021 and did not provide specific references, while the November 2023 version did not provide any misleading information. ChatGPT 4 gave answers similar to 3.5 but with improved reference citations, though not always directly relevant ones. Bard AI, while largely accurate and at times offering pictorial representations, occasionally produced misleading statements and had inconsistent reference quality, although an improvement was noted over time. Bing AI (November 2023) gave short answers without detailed elaboration and sometimes answered simply "YES". Chatbots demonstrate potential as personalized educational tools for CKD: they use layman's terms, deliver timely and rapid responses in multiple languages, and offer a conversational pattern advantageous for patient engagement. Despite improvements observed from March to November 2023, some answers remained potentially misleading. ChatGPT 4 offers some advantages over 3.5, although the differences are limited. Collaboration between healthcare professionals and AI developers is essential to improve healthcare delivery and ensure the safe incorporation of chatbots into patient care.
Affiliation(s)
- Prakrati C Acharya
- Division of Nephrology, Texas Tech Health Sciences Center El Paso, El Paso, TX 79905, USA
- Raul Alba
- Division of Nephrology, Texas Tech Health Sciences Center El Paso, El Paso, TX 79905, USA
- Pajaree Krisanapan
- Division of Nephrology, Thammasat University Hospital, Pathum Thani 12120, Thailand
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, MN 55905, USA
- Chirag M Acharya
- Division of Nephrology, Texas Tech Health Sciences Center El Paso, El Paso, TX 79905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Eva Csongradi
- Faculty of Medicine, University of Debrecen, 4032 Debrecen, Hungary
- Michael A Mao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Iasmina M Craici
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Mayo Clinic, Rochester, MN 55905, USA
8. Nicikowski J, Szczepański M, Miedziaszczyk M, Kudliński B. The potential of ChatGPT in medicine: an example analysis of nephrology specialty exams in Poland. Clin Kidney J 2024;17:sfae193. PMID: 39099569; PMCID: PMC11295106; doi:10.1093/ckj/sfae193
Abstract
Background: In November 2022, OpenAI released a chatbot named ChatGPT, capable of processing natural language to create human-like conversational dialogue. It has generated considerable interest, including from the scientific and medical communities. Recent publications have shown that ChatGPT can correctly answer questions from medical exams such as the United States Medical Licensing Examination and other specialty exams. To date, ChatGPT had not been tested anywhere in the world on specialty questions in the field of nephrology.
Methods: Using the ChatGPT-3.5 and -4.0 algorithms in this comparative cross-sectional study, the authors analysed 1560 single-answer questions from the Polish national specialty exam in nephrology from 2017 to 2023, available with answer keys in the Polish Medical Examination Center's question database.
Results: Of the 1556 questions posed to ChatGPT-4.0, correct answers were obtained with an accuracy of 69.84%, compared with ChatGPT-3.5 (45.70%, P = .0001) and with the top results of medical doctors (85.73%, P = .0001). ChatGPT-4.0 exceeded the required ≥60% pass threshold in 11 of the 13 tests and scored higher than the average human exam result.
Conclusion: ChatGPT-3.5 was not notably successful on nephrology exams. ChatGPT-4.0 was able to pass most of the analysed nephrology specialty exams, achieving results similar to those of humans, although the best human results remain better than those of ChatGPT-4.0.
Affiliation(s)
- Jan Nicikowski
- University of Zielona Góra, Faculty of Medicine and Health Sciences, Student Scientific Section of Clinical Nutrition, Zielona Góra, Poland
- University of Zielona Góra, Faculty of Medicine and Health Sciences, Department of Anaesthesiology, Intensive Care and Emergency Medicine, Zielona Góra, Poland
- Mikołaj Szczepański
- University of Zielona Góra, Faculty of Medicine and Health Sciences, Student Scientific Section of Clinical Nutrition, Zielona Góra, Poland
- University of Zielona Góra, Faculty of Medicine and Health Sciences, Department of Anaesthesiology, Intensive Care and Emergency Medicine, Zielona Góra, Poland
- Miłosz Miedziaszczyk
- Poznan University of Medical Sciences, Department of General and Transplant Surgery, Poznan, Poland
- Bartosz Kudliński
- University of Zielona Góra, Faculty of Medicine and Health Sciences, Department of Anaesthesiology, Intensive Care and Emergency Medicine, Zielona Góra, Poland
9. Aljamaan F, Temsah MH, Altamimi I, Al-Eyadhy A, Jamal A, Alhasan K, Mesallam TA, Farahat M, Malki KH. Reference Hallucination Score for Medical Artificial Intelligence Chatbots: Development and Usability Study. JMIR Med Inform 2024;12:e54345. PMID: 39083799; PMCID: PMC11325115; doi:10.2196/54345
Abstract
Background: Artificial intelligence (AI) chatbots have recently been adopted in medical practice by health care practitioners. However, their output has been found to contain varying degrees of hallucination in content and references, generating doubts about both the output and its implementation.
Objective: To propose a reference hallucination score (RHS) for evaluating the authenticity of AI chatbots' citations.
Methods: Six AI chatbots were challenged with the same 10 medical prompts, requesting 10 references per prompt. The RHS is composed of 6 bibliographic items plus the reference's relevance to the prompt's keywords. The RHS was calculated for each reference, prompt, and type of prompt (basic vs complex), and the average RHS was compared across prompt types and AI chatbots.
Results: Bard failed to generate any references. ChatGPT 3.5 and Bing generated the highest RHS (score=11), Elicit and SciSpace the lowest (score=1), and Perplexity an intermediate RHS (score=7). The highest degree of hallucination was observed for reference relevance to the prompt keywords (308/500, 61.6%), and the lowest for reference titles (169/500, 33.8%). ChatGPT and Bing had comparable RHS (β coefficient=-0.069; P=.32), while Perplexity had a significantly lower RHS than ChatGPT (β coefficient=-0.345; P<.001). Chatbots generally had significantly higher RHS when prompted with scenario or complex-format prompts (β coefficient=0.486; P<.001).
Conclusions: The variation in RHS underscores the need for a robust reference evaluation tool and highlights the importance of verifying chatbot output and citations. Elicit and SciSpace had negligible hallucination, while ChatGPT and Bing had critical hallucination levels. The proposed RHS could contribute to ongoing efforts to enhance AI's general reliability in medical research.
Affiliation(s)
- Fadi Aljamaan
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Ayman Al-Eyadhy
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Amr Jamal
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Khalid Alhasan
- College of Medicine, King Saud University, Riyadh, Saudi Arabia
- Tamer A Mesallam
- Department of Otolaryngology, College of Medicine, Research Chair of Voice, Swallowing, and Communication Disorders, King Saud University, Riyadh, Saudi Arabia
- Mohamed Farahat
- Department of Otolaryngology, College of Medicine, Research Chair of Voice, Swallowing, and Communication Disorders, King Saud University, Riyadh, Saudi Arabia
- Khalid H Malki
- Department of Otolaryngology, College of Medicine, Research Chair of Voice, Swallowing, and Communication Disorders, King Saud University, Riyadh, Saudi Arabia
10. Gencer A. Readability analysis of ChatGPT's responses on lung cancer. Sci Rep 2024;14:17234. PMID: 39060365; PMCID: PMC11282056; doi:10.1038/s41598-024-67293-2
Abstract
For common diseases such as lung cancer, patients often use the internet to obtain medical information. With advances in artificial intelligence and large language models such as ChatGPT, patients and health professionals increasingly use these tools as well. The aim of this study was to evaluate the readability of ChatGPT-generated responses on lung cancer using several readability scales. The most common questions in the lung cancer section of Medscape® were reviewed, and questions on the definition, etiology, risk factors, diagnosis, treatment, and prognosis of lung cancer (both NSCLC and SCLC) were selected. A set of 80 questions was asked 10 times each to ChatGPT via the OpenAI API, and the responses were tested with various readability formulas. The mean Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Scale, SMOG Index, Automated Readability Index, Coleman-Liau Index, Linsear Write Formula, Dale-Chall Readability Score, and Spache Readability Formula scores (mean ± standard deviation: 40.52 ± 9.81, 12.56 ± 1.66, 13.63 ± 1.54, 14.61 ± 1.45, 15.04 ± 1.97, 14.24 ± 1.90, 11.96 ± 2.55, 10.03 ± 0.63, and 5.93 ± 0.50, respectively) indicate that the answers generated by ChatGPT are written at a "college" reading level or above and are difficult to read. In the near future, ChatGPT could perhaps be tuned to produce responses appropriate for readers of different educational levels and age groups.
Affiliation(s)
- Adem Gencer
- Department of Thoracic Surgery, Faculty of Medicine, Afyonkarahisar Health Sciences University, Zafer Sağlık Külliyesi, Dörtyol Mah. 2078 Sok. No:3 A Blok Afyonkarahisar, Afyonkarahisar, Turkey.
11
Miao J, Thongprayoon C, Craici IM, Cheungpasitporn W. How to improve ChatGPT performance for nephrologists: a technique guide. J Nephrol 2024; 37:1397-1403. [PMID: 38771519 DOI: 10.1007/s40620-024-01974-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Accepted: 04/26/2024] [Indexed: 05/22/2024]
Abstract
BACKGROUND The integration of ChatGPT into nephrology presents opportunities for enhanced decision-making and patient care. However, refining its performance to meet the specific needs of nephrologists remains a challenge. This guide offers a strategic roadmap for advancing ChatGPT's effectiveness in nephrological applications. METHODS Utilizing the advanced capabilities of GPT-4, we customized user profiles to optimize the model's response quality for nephrological inquiries. We assessed the efficacy of chain-of-thought prompting versus standard prompting in delineating the diagnostic pathway for nephrogenic diabetes insipidus-associated hypernatremia and polyuria. Additionally, we explored the influence of retrieval-augmented generation on the model's proficiency in detailing pharmacological interventions to slow the progression from chronic kidney disease (CKD) G3 to end-stage kidney disease (ESKD), comparing it with responses generated without retrieval augmentation. RESULTS In contrast to standard prompting, the chain-of-thought method offers a step-by-step diagnostic process that mirrors the reasoning needed to diagnose nephrogenic diabetes insipidus-related hypernatremia and polyuria. This begins with an initial assessment, notably including a water deprivation test; after evaluating the outcome of this test, the approach continues by identifying potential causes. Furthermore, if a patient's history suggests lithium use, the chain-of-thought model adjusts by proposing a more tailored course of action. In response to "List medication treatments to help slow progression of CKD G3 to ESKD," GPT-4 alone provides only a general summary of medication options, whereas a specialized GPT-4 model equipped with a retrieval-augmented generation system delivers more precise responses, including renin-angiotensin system inhibitors, sodium-glucose cotransporter-2 inhibitors, and mineralocorticoid receptor antagonists, in line with the 2024 KDIGO guidelines. CONCLUSIONS GPT-4, when combined with chain-of-thought prompting and retrieval-augmented generation, demonstrates enhanced performance in the nephrology domain. This guide underscores the transformative potential of these techniques in optimizing ChatGPT for nephrology and highlights the ongoing need for innovative, tailored AI solutions in specialized medical fields.
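The prompting strategies the guide compares reduce to different prompt templates. The sketch below is illustrative only, not the authors' implementation; the instruction wording and helper names are assumptions:

```python
def standard_prompt(question: str) -> str:
    # Baseline: send the question as-is.
    return question

def chain_of_thought_prompt(question: str) -> str:
    # Chain of thought: ask for intermediate clinical reasoning before the answer.
    return (
        f"{question}\n"
        "Reason step by step: state the initial assessment (e.g. a water "
        "deprivation test), interpret its result, account for clues such as "
        "lithium use, and only then give a final diagnosis."
    )

def rag_prompt(question: str, passages: list[str]) -> str:
    # Retrieval-augmented generation: prepend retrieved guideline text so the
    # model grounds its answer in it rather than in parametric memory alone.
    context = "\n".join(f"- {p}" for p in passages)
    return (
        f"Answer using only the context below.\nContext:\n{context}\n\n"
        f"Question: {question}"
    )

cot = chain_of_thought_prompt(
    "A patient presents with hypernatremia and polyuria. What is the diagnosis?"
)
rag = rag_prompt(
    "List medication treatments to help slow progression of CKD G3 to ESKD.",
    ["KDIGO 2024: RAS inhibitors, SGLT2 inhibitors, and MRAs slow CKD progression."],
)
```

Either string would then be sent to the model via the usual chat-completion API call.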
Affiliation(s)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Iasmina M Craici
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN, USA
12
Nasef H, Patel H, Amin Q, Baum S, Ratnasekera A, Ang D, Havron WS, Nakayama D, Elkbuli A. Evaluating the Accuracy, Comprehensiveness, and Validity of ChatGPT Compared to Evidence-Based Sources Regarding Common Surgical Conditions: Surgeons' Perspectives. Am Surg 2024:31348241256075. [PMID: 38794965 DOI: 10.1177/00031348241256075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/27/2024]
Abstract
BACKGROUND This study aimed to assess the accuracy, comprehensiveness, and validity of ChatGPT compared with evidence-based sources regarding the diagnosis and management of common surgical conditions by surveying the perceptions of US board-certified practicing surgeons. METHODS An anonymous cross-sectional survey was distributed to US practicing surgeons from June 2023 to March 2024. The survey comprised 94 multiple-choice questions evaluating diagnostic and management information for five common surgical conditions, drawn from evidence-based sources or generated by ChatGPT. Statistical analysis included descriptive statistics and paired-sample t-tests. RESULTS Participating surgeons were primarily aged 40-50 years (43%), male (86%), White (57%), and had 5-10 years or >15 years of experience (86%). Most surgeons had no prior experience with ChatGPT in surgical practice (86%). For material discussing acute cholecystitis and upper gastrointestinal hemorrhage, evidence-based sources were rated as significantly more comprehensive (3.57 ± 0.535 vs 2.00 ± 1.16, P = .025; 4.14 ± 0.69 vs 2.43 ± 0.98, P < .001) and valid (3.71 ± 0.488 vs 2.86 ± 1.07, P = .045; 3.71 ± 0.76 vs 2.71 ± 0.95, P = .038) than ChatGPT. However, there was no significant difference in accuracy between the two sources (3.71 vs 3.29, P = .289; 3.57 vs 2.71, P = .111). CONCLUSION Surveyed US board-certified practicing surgeons rated evidence-based sources as significantly more comprehensive and valid than ChatGPT across the majority of surveyed surgical conditions, with no significant difference in accuracy for most conditions. While ChatGPT may offer potential benefits in surgical practice, further refinement and validation are necessary to enhance its utility and acceptance among surgeons.
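The paired-sample t-test used for these rating comparisons is simple to reproduce. The sketch below uses synthetic Likert ratings (not the survey's data) and computes the t statistic by hand; in practice one would call scipy.stats.ttest_rel:

```python
import math

def paired_t(x: list[float], y: list[float]) -> float:
    """t statistic for a paired-sample t-test (df = n - 1)."""
    assert len(x) == len(y)
    d = [a - b for a, b in zip(x, y)]            # per-rater differences
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Synthetic 1-5 Likert ratings: evidence-based source vs ChatGPT, same 7 raters.
evidence = [4, 4, 3, 4, 4, 3, 4]
chatgpt  = [2, 3, 1, 2, 3, 2, 1]
t = paired_t(evidence, chatgpt)
```

The resulting t is compared against the t distribution with n − 1 degrees of freedom to obtain the P values reported above.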
Affiliation(s)
- Hazem Nasef
- NOVA Southeastern University, Kiran Patel College of Allopathic Medicine, Fort Lauderdale, FL, USA
- Heli Patel
- NOVA Southeastern University, Kiran Patel College of Allopathic Medicine, Fort Lauderdale, FL, USA
- Quratulain Amin
- NOVA Southeastern University, Kiran Patel College of Allopathic Medicine, Fort Lauderdale, FL, USA
- Samuel Baum
- Louisiana State University Health Science Center, College of Medicine, New Orleans, LA, USA
- Darwin Ang
- Department of Surgery, Ocala Regional Medical Center, Ocala, FL, USA
- William S Havron
- Department of Surgical Education, Orlando Regional Medical Center, Orlando, FL, USA
- Department of Surgery, Division of Trauma and Surgical Critical Care, Orlando Regional Medical Center, Orlando, FL, USA
- Don Nakayama
- Mercer University School of Medicine, Columbus, GA, USA
- Adel Elkbuli
- Department of Surgical Education, Orlando Regional Medical Center, Orlando, FL, USA
- Department of Surgery, Division of Trauma and Surgical Critical Care, Orlando Regional Medical Center, Orlando, FL, USA
13
Ho YS, Fülöp T, Krisanapan P, Soliman KM, Cheungpasitporn W. Artificial intelligence and machine learning trends in kidney care. Am J Med Sci 2024; 367:281-295. [PMID: 38281623 DOI: 10.1016/j.amjms.2024.01.018] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 12/12/2023] [Accepted: 01/23/2024] [Indexed: 01/30/2024]
Abstract
BACKGROUND The integration of artificial intelligence (AI) and machine learning (ML) in kidney care has risen significantly in recent years. This study analyzed AI and ML research publications related to kidney care to identify leading authors, institutions, and countries in this area; to examine publication trends and patterns; and to explore the impact of collaborative efforts on citation metrics. METHODS The study used the Science Citation Index Expanded (SCI-EXPANDED) of the Clarivate Analytics Web of Science Core Collection to search for AI and ML publications related to nephrology from 1992 to 2021. The authors used quotation marks and the Boolean operator "or" to search for keywords in the title, abstract, author keywords, and Keywords Plus, and applied the 'front page' filter. A total of 5425 documents were identified and analyzed. RESULTS Articles represented 75% of the analyzed documents, with an average authors-to-publication ratio of 7.4 and an average of 18 citations per publication in 2021. English articles had a higher citation rate than non-English articles. The USA dominated all publication indicators, followed by China. Notably, collaborative efforts tended to result in higher citation rates. A significant portion of the publications appeared in urology journals, emphasizing the broader scope of kidney care beyond traditional nephrology. CONCLUSIONS The findings underscore the importance of AI and ML in enhancing kidney care, offering a roadmap for future research and implementation in this expanding field.
Affiliation(s)
- Yuh-Shan Ho
- Trend Research Centre, Asia University, Wufeng, Taichung, Taiwan
- Tibor Fülöp
- Medical Services, Ralph H. Johnson VA Medical Center, Charleston, SC, USA; Department of Medicine, Division of Nephrology, Medical University of South Carolina, Charleston, SC, USA
- Pajaree Krisanapan
- Division of Nephrology, Department of Internal Medicine, Thammasat University, Pathum Thani 12120, Thailand
- Karim M Soliman
- Medical Services, Ralph H. Johnson VA Medical Center, Charleston, SC, USA; Department of Medicine, Division of Nephrology, Medical University of South Carolina, Charleston, SC, USA
14
Miao J, Thongprayoon C, Suppadungsuk S, Krisanapan P, Radhakrishnan Y, Cheungpasitporn W. Chain of Thought Utilization in Large Language Models and Application in Nephrology. Medicina (Kaunas) 2024; 60:148. [PMID: 38256408 PMCID: PMC10819595 DOI: 10.3390/medicina60010148] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2023] [Revised: 12/31/2023] [Accepted: 01/11/2024] [Indexed: 01/24/2024]
Abstract
Chain-of-thought prompting significantly enhances the abilities of large language models (LLMs). It not only makes these models more specific and context-aware but also impacts the wider field of artificial intelligence (AI). This approach broadens the usability of AI, increases its efficiency, and aligns it more closely with human thinking and decision-making processes. As the method improves, it is set to become a key element in the future of AI, adding more purpose, precision, and ethical consideration to these technologies. In medicine, chain-of-thought prompting is especially beneficial: its capacity to handle complex information, its logical and sequential reasoning, and its suitability for ethically and context-sensitive situations make it an invaluable tool for healthcare professionals, and its role in enhancing medical care and research is expected to grow. Chain-of-thought prompting bridges the gap between AI's traditionally opaque decision-making and the clear, accountable standards required in healthcare by emulating a reasoning style familiar to medical professionals, fitting well into their existing practices and ethical codes. While AI transparency remains a complex challenge, the chain-of-thought approach is a significant step toward making AI more comprehensible and trustworthy in medicine. This review focuses on the workings of LLMs, particularly how chain-of-thought prompting can be adapted to nephrology's unique requirements, and thoroughly examines the ethical aspects, clarity, and future possibilities of this convergence.
Affiliation(s)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Pajaree Krisanapan
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Division of Nephrology, Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand
- Division of Nephrology, Department of Internal Medicine, Thammasat University Hospital, Pathum Thani 12120, Thailand
- Yeshwanter Radhakrishnan
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
15
Miao J, Thongprayoon C, Garcia Valencia OA, Krisanapan P, Sheikh MS, Davis PW, Mekraksakit P, Suarez MG, Craici IM, Cheungpasitporn W. Performance of ChatGPT on Nephrology Test Questions. Clin J Am Soc Nephrol 2024; 19:35-43. [PMID: 37851468 PMCID: PMC10843340 DOI: 10.2215/cjn.0000000000000330] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 10/12/2023] [Indexed: 10/19/2023]
Abstract
BACKGROUND ChatGPT is a novel tool that allows people to engage in conversations with an advanced machine learning model. ChatGPT's performance on the US Medical Licensing Examination is comparable with that of a successful candidate, but its performance in the nephrology field remained undetermined. This study assessed ChatGPT's capabilities in answering nephrology test questions. METHODS Questions were sourced from the Nephrology Self-Assessment Program and the Kidney Self-Assessment Program, each consisting of multiple-choice, single-answer questions; questions containing visual elements were excluded. Each question bank was run twice using GPT-3.5 and GPT-4. Performance was assessed by the total accuracy rate, defined as the percentage of correct answers obtained by ChatGPT in either the first or second run, and the total concordance, defined as the percentage of identical answers provided by ChatGPT during both runs, regardless of their correctness. RESULTS A comprehensive assessment was conducted on a set of 975 questions: 508 from the Nephrology Self-Assessment Program and 467 from the Kidney Self-Assessment Program. GPT-3.5 achieved a total accuracy rate of 51%; notably, the Nephrology Self-Assessment Program yielded a higher accuracy rate than the Kidney Self-Assessment Program (58% versus 44%; P < 0.001). The total concordance rate across all questions was 78%, with correct answers exhibiting a higher concordance rate (84%) than incorrect answers (73%) (P < 0.001). Across nephrology subfields, total accuracy rates were relatively lower in electrolyte and acid-base disorders, glomerular disease, and kidney-related bone and stone disorders. The total accuracy rate of GPT-4 was 74%, higher than that of GPT-3.5 (P < 0.001) but still below the passing threshold and the average score of nephrology examinees (77%). CONCLUSIONS ChatGPT exhibited limitations in accuracy and repeatability when addressing nephrology-related questions, with performance varying across subfields.
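The two metrics defined in the Methods are straightforward to compute from two answer runs and an answer key. A minimal sketch with toy answer strings (not the study's data):

```python
def accuracy_and_concordance(run1, run2, key):
    """Score two ChatGPT runs over the same question set.

    Total accuracy: fraction of questions answered correctly in either run.
    Total concordance: fraction where both runs gave the same answer,
    regardless of correctness (the paper's definitions).
    """
    n = len(key)
    correct_either = sum(1 for a, b, k in zip(run1, run2, key) if k in (a, b))
    same = sum(1 for a, b in zip(run1, run2) if a == b)
    return correct_either / n, same / n

# Toy multiple-choice answers for 5 questions.
run1 = ["A", "B", "C", "D", "A"]
run2 = ["A", "C", "C", "B", "A"]
key  = ["A", "B", "D", "B", "A"]
acc, conc = accuracy_and_concordance(run1, run2, key)
```

Note that the two metrics can diverge: a model can be highly concordant (repeatable) while consistently wrong, which is why the study reports both.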
Affiliation(s)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, Minnesota
16
Ray PP. ChatGPT's competence in addressing urolithiasis: myth or reality? Int Urol Nephrol 2024; 56:149-150. [PMID: 37726510 DOI: 10.1007/s11255-023-03802-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Accepted: 09/09/2023] [Indexed: 09/21/2023]
17
Kayaalp ME, Ollivier M, Winkler PW, Dahmen J, Musahl V, Hirschmann MT, Karlsson J. Embrace responsible ChatGPT usage to overcome language barriers in academic writing. Knee Surg Sports Traumatol Arthrosc 2024; 32:5-9. [PMID: 38226673 DOI: 10.1002/ksa.12014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/21/2023] [Accepted: 11/08/2023] [Indexed: 01/17/2024]
Affiliation(s)
- M Enes Kayaalp
- Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Department for Orthopaedics and Traumatology, Istanbul Kartal Research and Training Hospital, Istanbul, Turkiye
- Matthieu Ollivier
- CNRS, Institute of Movement Sciences (ISM), Aix Marseille University, Marseille, France
- Philipp W Winkler
- Department for Orthopaedics and Traumatology, Kepler University Hospital GmbH, Linz, Austria
- Jari Dahmen
- Department of Orthopaedic Surgery and Sports Medicine, Amsterdam Movement Sciences, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Academic Center for Evidence Based Sports Medicine (ACES), Amsterdam, The Netherlands
- Amsterdam Collaboration for Health and Safety in Sports (ACHSS), International Olympic Committee (IOC) Research Center Amsterdam UMC, Amsterdam, The Netherlands
- Volker Musahl
- Department of Orthopaedic Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Michael T Hirschmann
- Department of Orthopedic Surgery and Traumatology, Head Knee Surgery and DKF Head of Research, Kantonsspital Baselland, Bruderholz, Bottmingen, Switzerland
- University of Basel, Basel, Switzerland
- Jon Karlsson
- Department for Orthopaedics, Sahlgrenska University Hospital, Institute of Clinical Sciences, Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
18
Miao J, Thongprayoon C, Suppadungsuk S, Garcia Valencia OA, Qureshi F, Cheungpasitporn W. Ethical Dilemmas in Using AI for Academic Writing and an Example Framework for Peer Review in Nephrology Academia: A Narrative Review. Clin Pract 2023; 14:89-105. [PMID: 38248432 PMCID: PMC10801601 DOI: 10.3390/clinpract14010008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2023] [Revised: 12/23/2023] [Accepted: 12/28/2023] [Indexed: 01/23/2024] Open
Abstract
The emergence of artificial intelligence (AI) has greatly propelled progress across various sectors, including nephrology academia. However, this advancement has also given rise to ethical challenges, notably in scholarly writing. AI's capacity to automate labor-intensive tasks such as literature reviews and data analysis has created opportunities for unethical practices, with scholars incorporating AI-generated text into their manuscripts and potentially undermining academic integrity. This situation gives rise to a range of ethical dilemmas that question the authenticity of contemporary academic work and challenge the credibility of the peer-review process and the integrity of editorial oversight. Instances of this misconduct are highlighted, spanning from lesser-known journals to reputable ones, and even infiltrating graduate theses and grant applications. This subtle AI intrusion hints at a systemic vulnerability within the academic publishing domain, exacerbated by the publish-or-perish mentality. Proposed solutions for mitigating the unethical employment of AI in academia include the adoption of sophisticated AI-driven plagiarism detection systems, a robust augmentation of the peer-review process with an "AI scrutiny" phase, comprehensive training for academics on ethical AI usage, and the promotion of a culture of transparency that acknowledges AI's role in research. This review underscores the pressing need for collaborative efforts among academic nephrology institutions to foster ethical AI application, preserving academic integrity in the face of rapid technological advancement. It also calls for rigorous research to assess the extent of AI's involvement in the academic literature, evaluate the effectiveness of AI-enhanced plagiarism detection tools, and understand the long-term consequences of AI utilization for academic integrity. An example framework is proposed to outline a comprehensive approach to integrating AI into nephrology academic writing and peer review. Through proactive initiatives and rigorous evaluation, a harmonious environment that harnesses AI's capabilities while upholding stringent academic standards can be envisioned.
Affiliation(s)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bang Phli 10540, Samut Prakan, Thailand
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
19
Miao J, Thongprayoon C, Suppadungsuk S, Garcia Valencia OA, Qureshi F, Cheungpasitporn W. Innovating Personalized Nephrology Care: Exploring the Potential Utilization of ChatGPT. J Pers Med 2023; 13:1681. [PMID: 38138908 PMCID: PMC10744377 DOI: 10.3390/jpm13121681] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 12/02/2023] [Accepted: 12/02/2023] [Indexed: 12/24/2023] Open
Abstract
The rapid advancement of artificial intelligence (AI) technologies, particularly machine learning, has brought substantial progress to the field of nephrology, enabling significant improvements in the management of kidney diseases. ChatGPT, a revolutionary language model developed by OpenAI, is a versatile AI model designed to engage in meaningful and informative conversations. Its applications in healthcare have been notable, with demonstrated proficiency in various medical knowledge assessments. However, ChatGPT's performance varies across different medical subfields, posing challenges in nephrology-related queries. At present, comprehensive reviews regarding ChatGPT's potential applications in nephrology remain lacking despite the surge of interest in its role in various domains. This article seeks to fill this gap by presenting an overview of the integration of ChatGPT in nephrology. It discusses the potential benefits of ChatGPT in nephrology, encompassing dataset management, diagnostics, treatment planning, and patient communication and education, as well as medical research and education. It also explores ethical and legal concerns regarding the utilization of AI in medical practice. The continuous development of AI models like ChatGPT holds promise for the healthcare realm but also underscores the necessity of thorough evaluation and validation before implementing AI in real-world medical scenarios. This review serves as a valuable resource for nephrologists and healthcare professionals interested in fully utilizing the potential of AI in innovating personalized nephrology care.
Affiliation(s)
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
20
Ittarat M, Cheungpasitporn W, Chansangpetch S. Personalized Care in Eye Health: Exploring Opportunities, Challenges, and the Road Ahead for Chatbots. J Pers Med 2023; 13:1679. [PMID: 38138906 PMCID: PMC10744965 DOI: 10.3390/jpm13121679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 11/29/2023] [Accepted: 11/30/2023] [Indexed: 12/24/2023] Open
Abstract
In modern eye care, the adoption of ophthalmology chatbots stands out as a pivotal technological progression. These digital assistants present numerous benefits, such as better access to vital information, heightened patient interaction, and streamlined triaging. Recent evaluations have highlighted their performance in both the triage of ophthalmology conditions and ophthalmology knowledge assessment, underscoring their potential and areas for improvement. However, assimilating these chatbots into the prevailing healthcare infrastructures brings challenges. These encompass ethical dilemmas, legal compliance, seamless integration with electronic health records (EHR), and fostering effective dialogue with medical professionals. Addressing these challenges necessitates the creation of bespoke standards and protocols for ophthalmology chatbots. The horizon for these chatbots is illuminated by advancements and anticipated innovations, poised to redefine the delivery of eye care. The synergy of artificial intelligence (AI) and machine learning (ML) with chatbots amplifies their diagnostic prowess. Additionally, their capability to adapt linguistically and culturally ensures they can cater to a global patient demographic. In this article, we explore in detail the utilization of chatbots in ophthalmology, examining their accuracy, reliability, data protection, security, transparency, potential algorithmic biases, and ethical considerations. We provide a comprehensive review of their roles in the triage of ophthalmology conditions and knowledge assessment, emphasizing their significance and future potential in the field.
Affiliation(s)
- Mantapond Ittarat
- Surin Hospital and Surin Medical Education Center, Suranaree University of Technology, Surin 32000, Thailand
- Sunee Chansangpetch
- Center of Excellence in Glaucoma, Chulalongkorn University, Bangkok 10330, Thailand
- Department of Ophthalmology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok 10330, Thailand
21
Suppadungsuk S, Thongprayoon C, Miao J, Krisanapan P, Qureshi F, Kashani K, Cheungpasitporn W. Exploring the Potential of Chatbots in Critical Care Nephrology. Medicines (Basel) 2023; 10:58. [PMID: 37887265 PMCID: PMC10608511 DOI: 10.3390/medicines10100058] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/05/2023] [Revised: 10/17/2023] [Accepted: 10/18/2023] [Indexed: 10/28/2023]
Abstract
The exponential growth of artificial intelligence (AI) has allowed for its integration into multiple sectors, including, notably, healthcare. Chatbots have emerged as a pivotal resource for improving patient outcomes and assisting healthcare practitioners through various AI-based technologies. In critical care, kidney-related conditions play a significant role in determining patient outcomes. This article examines the potential for integrating chatbots into the workflows of critical care nephrology to optimize patient care. We detail their specific applications in critical care nephrology, such as managing acute kidney injury, alert systems, and continuous renal replacement therapy (CRRT); facilitating discussions around palliative care; and bolstering collaboration within a multidisciplinary team. Chatbots have the potential to augment real-time data availability, evaluate renal health, identify potential risk factors, build predictive models, and monitor patient progress. Moreover, they provide a platform for enhancing communication and education for both patients and healthcare providers, paving the way for enriched knowledge and honed professional skills. However, it is vital to recognize the inherent challenges and limitations when using chatbots in this domain. Here, we provide an in-depth exploration of the concerns tied to chatbots' accuracy, dependability, data protection and security, transparency, potential algorithmic biases, and ethical implications in critical care nephrology. While human discernment and intervention are indispensable, especially in complex medical scenarios or intricate situations, the sustained advancements in AI signal that the integration of precision-engineered chatbot algorithms within critical care nephrology has considerable potential to elevate patient care and pivotal outcome metrics in the future.
Affiliation(s)
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Pajaree Krisanapan
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Division of Nephrology and Hypertension, Thammasat University Hospital, Pathum Thani 12120, Thailand
- Fawad Qureshi
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Kianoush Kashani
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
22
Qarajeh A, Tangpanithandee S, Thongprayoon C, Suppadungsuk S, Krisanapan P, Aiumtrakul N, Garcia Valencia OA, Miao J, Qureshi F, Cheungpasitporn W. AI-Powered Renal Diet Support: Performance of ChatGPT, Bard AI, and Bing Chat. Clin Pract 2023; 13:1160-1172. [PMID: 37887080 PMCID: PMC10605499 DOI: 10.3390/clinpract13050104] [Citation(s) in RCA: 19]
Abstract
Patients with chronic kidney disease (CKD) require specialized renal diets to prevent complications such as hyperkalemia and hyperphosphatemia. A comprehensive assessment of food components is pivotal, yet burdensome for healthcare providers. With evolving artificial intelligence (AI) technology, models such as ChatGPT, Bard AI, and Bing Chat can be instrumental in educating patients and assisting professionals. To gauge the efficacy of different AI models in discerning potassium and phosphorus content in foods, four AI models (ChatGPT 3.5, ChatGPT 4, Bard AI, and Bing Chat) were evaluated. A total of 240 food items, curated from the Mayo Clinic Renal Diet Handbook for CKD patients, were input into each model. These items were characterized by their potassium (149 items) and phosphorus (91 items) content. Each model was tasked with categorizing the items as high or low in potassium and as high in phosphorus. The results were compared against the Mayo Clinic Renal Diet Handbook's recommendations, and concordance between repeated sessions was evaluated to assess model consistency. Among the models tested, ChatGPT 4 displayed superior performance in identifying potassium content, correctly classifying 81% of the foods; it accurately discerned 60% of low-potassium and 99% of high-potassium foods. In comparison, ChatGPT 3.5 exhibited a 66% accuracy rate, while Bard AI and Bing Chat achieved 79% and 81%, respectively. Regarding phosphorus content, Bard AI stood out with a flawless 100% accuracy rate; ChatGPT 3.5 and Bing Chat correctly recognized 85% and 89% of the high-phosphorus foods, while ChatGPT 4 registered a 77% accuracy rate. Emerging AI models manifest a diverse range of accuracy in discerning potassium and phosphorus content in foods suitable for CKD patients. ChatGPT 4 in particular showed a marked improvement over its predecessor, especially in detecting potassium content, and Bard AI exhibited exceptional precision for phosphorus identification. This study underscores the potential of AI models as efficient tools in renal dietary planning, though refinements are warranted for optimal utility.
Affiliation(s)
- Ahmad Qarajeh
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Faculty of Medicine, University of Jordan, Amman 11942, Jordan
- Supawit Tangpanithandee
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Pajaree Krisanapan
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand
- Noppawit Aiumtrakul
- Department of Medicine, John A. Burns School of Medicine, University of Hawaii, Honolulu, HI 96813, USA
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Fawad Qureshi
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
23
Garcia Valencia OA, Thongprayoon C, Jadlowiec CC, Mao SA, Miao J, Cheungpasitporn W. Enhancing Kidney Transplant Care through the Integration of Chatbot. Healthcare (Basel) 2023; 11:2518. [PMID: 37761715 PMCID: PMC10530762 DOI: 10.3390/healthcare11182518] [Citation(s) in RCA: 7]
Abstract
Kidney transplantation is a critical treatment option for patients with end-stage kidney disease, offering improved quality of life and increased survival. However, the complexities of kidney transplant care necessitate continuous advancements in decision making, patient communication, and operational efficiency. This article explores the integration of a sophisticated chatbot, an AI-powered conversational agent, to enhance kidney transplant practice and potentially improve patient outcomes. Chatbots and generative AI have shown promising applications in various domains, including healthcare, by simulating human-like interactions and generating contextually appropriate responses. Noteworthy AI models such as ChatGPT by OpenAI, BingChat by Microsoft, and Bard AI by Google exhibit significant potential in supporting evidence-based research and healthcare decision making. The integration of chatbots in kidney transplant care offers transformative possibilities. As a clinical decision support tool, a chatbot could provide healthcare professionals with real-time access to medical literature and guidelines, enabling informed decision making and improved knowledge dissemination. It could also facilitate patient education by offering personalized, understandable information, addressing queries, and providing guidance on post-transplant care. Furthermore, under the supervision of a clinician or transplant pharmacist, it could support post-transplant care and medication management by analyzing patient data, leading to tailored recommendations on dosages, monitoring schedules, and potential drug interactions. Further studies and validation are required, however, to fully ascertain its effectiveness and safety in these roles. Integration with existing clinical decision support systems may enhance risk stratification and treatment planning, contributing to more informed and efficient decision making in kidney transplant care. Given the importance of ethical considerations and bias mitigation in AI integration, future studies should evaluate long-term patient outcomes, cost-effectiveness, user experience, and the generalizability of chatbot recommendations. By addressing these factors and leveraging AI capabilities responsibly, the integration of chatbots in kidney transplant care holds promise for improving patient outcomes, enhancing decision making, and fostering the equitable and responsible use of AI in healthcare.
Affiliation(s)
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Caroline C. Jadlowiec
- Division of Transplant Surgery, Department of Surgery, Mayo Clinic, Phoenix, AZ 85054, USA
- Shennen A. Mao
- Division of Transplant Surgery, Department of Transplantation, Mayo Clinic, Jacksonville, FL 32224, USA
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
24
Garcia Valencia OA, Suppadungsuk S, Thongprayoon C, Miao J, Tangpanithandee S, Craici IM, Cheungpasitporn W. Ethical Implications of Chatbot Utilization in Nephrology. J Pers Med 2023; 13:1363. [PMID: 37763131 PMCID: PMC10532744 DOI: 10.3390/jpm13091363] [Citation(s) in RCA: 7]
Abstract
This comprehensive review critically examines the ethical implications of integrating chatbots into nephrology, aiming to identify concerns, propose policies, and offer potential solutions. Acknowledging the transformative potential of chatbots in healthcare, responsible implementation guided by ethical considerations is of the utmost importance. The review underscores the significance of establishing robust guidelines for data collection, storage, and sharing to safeguard privacy and ensure data security. Future research should prioritize defining appropriate levels of data access, exploring anonymization techniques, and implementing encryption methods. Transparent data usage practices and obtaining informed consent are fundamental ethical considerations. Effective security measures, including encryption technologies and secure data transmission protocols, are indispensable for maintaining the confidentiality and integrity of patient data. To address potential biases and discrimination, the review suggests regular algorithm reviews, diversity strategies, and ongoing monitoring. Enhancing the clarity of chatbot capabilities, developing user-friendly interfaces, and establishing explicit consent procedures are essential for informed consent. Striking a balance between automation and human intervention is vital to preserving the doctor-patient relationship. Cultural sensitivity and multilingual support should be incorporated through chatbot training. To ensure ethical chatbot utilization in nephrology, it is imperative to develop comprehensive ethical frameworks encompassing data handling, security, bias mitigation, informed consent, and collaboration. Continuous research and innovation in this field are crucial for maximizing the potential of chatbot technology and ultimately improving patient outcomes.
Affiliation(s)
- Oscar A. Garcia Valencia
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawadee Suppadungsuk
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Charat Thongprayoon
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Jing Miao
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Supawit Tangpanithandee
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Chakri Naruebodindra Medical Institute, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Samut Prakan 10540, Thailand
- Iasmina M. Craici
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA
- Wisit Cheungpasitporn
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA