1
Dimitriadis F, Alkagiet S, Tsigkriki L, Kleitsioti P, Sidiropoulos G, Efstratiou D, Askalidi T, Tsaousidis A, Siarkos M, Giannakopoulou P, Mavrogianni AD, Zarifis J, Koulaouzidis G. ChatGPT and Patients With Heart Failure. Angiology 2024:33197241238403. [PMID: 38451243] [DOI: 10.1177/00033197241238403]
Abstract
ChatGPT (Generative Pre-trained Transformer) is a large-scale language model with the potential to provide professional patient support in a patient-friendly way. The aim of this study was to examine the accuracy and reproducibility of ChatGPT in answering questions about the knowledge and management of heart failure (HF). First, we recorded the 47 questions most frequently asked by patients about HF. Two researchers independently assessed ChatGPT's answers to these questions. ChatGPT rendered the definition of the disease in a simple, explanatory way. It listed a number of the most important causes of HF and the most important risk factors for its occurrence. It provided correct answers about the most important diagnostic tests and why they are recommended. In addition, it answered health and dietary questions, such as those on daily fluid and alcohol intake. ChatGPT listed the most important drug classes in HF and their mechanisms of action. It also gave reasoned answers to questions about patients' sex lives and whether they could work, drive, or travel by plane. The performance of ChatGPT was rated as very good, as it adequately answered all questions posed to it.
Affiliation(s)
- Fotis Dimitriadis
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- Stelina Alkagiet
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- Lamprini Tsigkriki
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- George Sidiropoulos
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- Dimitris Efstratiou
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- Taisa Askalidi
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- Adam Tsaousidis
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- Michail Siarkos
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- John Zarifis
- Cardiology Department, General Hospital G. Papanikolaou, Thessaloniki, Greece
- George Koulaouzidis
- Department of Biochemical Sciences, Pomeranian Medical University, Szczecin, Poland
2
Agathokleous E, Rillig MC, Peñuelas J, Yu Z. One hundred important questions facing plant science derived using a large language model. Trends Plant Sci 2024; 29:210-218. [PMID: 37394309] [DOI: 10.1016/j.tplants.2023.06.008]
Abstract
Artificial intelligence (AI) is advancing rapidly and continually evolving in various fields. Recently, the release of ChatGPT has sparked significant public interest. In this study, we revisit the '100 Important Questions Facing Plant Science' by leveraging ChatGPT as a valuable tool for generating thought-provoking questions relevant to plant science. These questions primarily revolve around the utilization of plants in product development, understanding plant mechanisms, plant-environment interactions, and enhancing plant traits, with an emphasis on sustainable product development. While ChatGPT may not capture certain crucial aspects highlighted by scientists, it offers valuable insights into the questions generated by experts. Our analysis demonstrates that ChatGPT can be cautiously employed as a supportive tool to facilitate, streamline, and expedite specific tasks in plant science.
Affiliation(s)
- Evgenios Agathokleous
- Key Laboratory of Ecosystem Carbon Source and Sink, China Meteorological Administration (ECSS-CMA), School of Applied Meteorology, Nanjing University of Information Science and Technology, Nanjing 210044, China; Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD), Nanjing University of Information Science & Technology, Nanjing, China
- Matthias C Rillig
- Freie Universität Berlin, Institut für Biologie, Altensteinstr. 6, D-14195 Berlin, Germany; Berlin-Brandenburg Institute of Advanced Biodiversity Research (BBIB), D-14195 Berlin, Germany
- Josep Peñuelas
- CSIC, Global Ecology Unit CREAF-CSIC-UAB, Bellaterra, Catalonia 08193, Spain; CREAF, Cerdanyola del Vallès, Catalonia 08193, Spain
- Zhen Yu
- Key Laboratory of Ecosystem Carbon Source and Sink, China Meteorological Administration (ECSS-CMA), School of Applied Meteorology, Nanjing University of Information Science and Technology, Nanjing 210044, China; Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD), Nanjing University of Information Science & Technology, Nanjing, China
3
Fear K, Gleber C. Shaping the Future of Older Adult Care: ChatGPT, Advanced AI, and the Transformation of Clinical Practice. JMIR Aging 2023; 6:e51776. [PMID: 37703085] [PMCID: PMC10534283] [DOI: 10.2196/51776]
Abstract
As the older adult population in the United States grows, new approaches to managing and streamlining clinical work are needed to accommodate their increased demand for health care. Deep learning and generative artificial intelligence (AI) have the potential to transform how care is delivered and how clinicians practice in geriatrics. In this editorial, we explore the opportunities and limitations of these technologies.
Affiliation(s)
- Kathleen Fear
- UR Health Lab, University of Rochester Medical Center, Rochester, NY, United States
- Conrad Gleber
- UR Health Lab, University of Rochester Medical Center, Rochester, NY, United States
- Department of Medicine, University of Rochester Medical Center, Rochester, NY, United States
4
Rao A, Pang M, Kim J, Kamineni M, Lie W, Prasad AK, Landman A, Dreyer K, Succi MD. Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study. J Med Internet Res 2023; 25:e48659. [PMID: 37606976] [PMCID: PMC10481210] [DOI: 10.2196/48659]
Abstract
BACKGROUND Large language model (LLM)-based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks, as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated. OBJECTIVE This study aimed to evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes. METHODS We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured as the proportion of correct responses to the questions posed within the clinical vignettes, as calculated by human scorers. We further conducted linear regression to assess the factors contributing to ChatGPT's performance on clinical tasks. RESULTS ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis, with an accuracy of 76.9% (95% CI 67.8%-86.1%), and the lowest performance in generating an initial differential diagnosis, with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared with questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%; P<.001) and clinical management (β=-7.4%; P=.02) question types. CONCLUSIONS ChatGPT achieves impressive accuracy in clinical decision-making, with increasing strength as more clinical information is at its disposal. In particular, ChatGPT demonstrates the greatest accuracy in tasks of final diagnosis as compared with initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT's training data set.
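The interval estimates reported above are standard confidence intervals for a proportion. As a minimal sketch (the abstract does not report the underlying question counts, so the counts below are hypothetical), a Wald normal-approximation 95% CI can be computed as follows:

```python
import math

def proportion_ci(successes: int, trials: int, z: float = 1.96):
    """Wald (normal-approximation) confidence interval for a proportion."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Hypothetical counts for illustration: 258 of 360 scored responses correct.
p, lo, hi = proportion_ci(258, 360)
print(f"accuracy {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The hypothetical counts were chosen only so the point estimate matches the 71.7% headline figure; the study's actual denominators, and whether its CIs used this or another method, are not stated in the abstract.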
Affiliation(s)
- Arya Rao
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Michael Pang
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- John Kim
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Meghana Kamineni
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Winston Lie
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Anoop K Prasad
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Adam Landman
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Brigham and Women's Hospital, Boston, MA, United States
- Keith Dreyer
- Harvard Medical School, Boston, MA, United States
- Data Science Office, Mass General Brigham, Boston, MA, United States
- Marc D Succi
- Medically Engineered Solutions in Healthcare Incubator, Innovation in Operations Research Center (MESH IO), Massachusetts General Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Mass General Brigham Innovation, Mass General Brigham, Boston, MA, United States
5
Shao CY, Li H, Liu XL, Li C, Yang LQ, Zhang YJ, Luo J, Zhao J. Appropriateness and Comprehensiveness of Using ChatGPT for Perioperative Patient Education in Thoracic Surgery in Different Language Contexts: Survey Study. Interact J Med Res 2023; 12:e46900. [PMID: 37578819] [PMCID: PMC10463083] [DOI: 10.2196/46900]
Abstract
BACKGROUND ChatGPT, a dialogue-based artificial intelligence language model, has shown promise in assisting clinical workflows and patient-clinician communication. However, there is a lack of feasibility assessments regarding its use for perioperative patient education in thoracic surgery. OBJECTIVE This study aimed to assess the appropriateness and comprehensiveness of using ChatGPT for perioperative patient education in thoracic surgery in both English and Chinese contexts. METHODS This pilot study was conducted in February 2023. A total of 37 questions focused on perioperative patient education in thoracic surgery were created based on guidelines and clinical experience. Two sets of inquiries were made to ChatGPT for each question, one in English and the other in Chinese. The responses generated by ChatGPT were evaluated separately by experienced thoracic surgical clinicians for appropriateness and comprehensiveness, based on a hypothetical draft response to a patient's question on the electronic information platform. For a response to be qualified, at least 80% of reviewers had to deem it appropriate and at least 50% had to deem it comprehensive. Statistical analyses were performed using the unpaired chi-square test or Fisher exact test, with a significance level set at P<.05. RESULTS The set of 37 commonly asked questions covered topics such as disease information, diagnostic procedures, perioperative complications, treatment measures, disease prevention, and perioperative care considerations. In both the English and Chinese contexts, 34 (92%) of the 37 responses were qualified in terms of both appropriateness and comprehensiveness; the remaining 3 (8%) responses were unqualified in both contexts. The unqualified responses primarily involved the diagnosis of disease symptoms and the symptoms of surgery-related complications, and the reasons for judging responses unqualified were similar in both contexts. There was no statistically significant difference in the qualification rate between the two language sets (34/37, 92% vs 34/37, 92%; P=.99). CONCLUSIONS This pilot study demonstrates the potential feasibility of using ChatGPT for perioperative patient education in thoracic surgery in both English and Chinese contexts. ChatGPT is expected to enhance patient satisfaction, reduce anxiety, and improve compliance during the perioperative period. In the future, there will be remarkable potential for using artificial intelligence, in conjunction with human review, for patient education and health consultation after patients have provided their informed consent.
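The between-language comparison above is a Fisher exact test on a 2x2 table of qualified vs unqualified responses. As a stdlib-only sketch (not the study's actual analysis code, which may use a slightly different two-sided convention):

```python
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].
    Sums hypergeometric probabilities of all tables no more probable
    than the observed one (the common two-sided convention)."""
    row1, col1, n = a + b, a + c, a + b + c + d

    def pmf(x: int) -> float:
        # Probability of x successes in the first row, margins fixed.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = pmf(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(pmf(x) for x in range(lo, hi + 1)
               if pmf(x) <= p_obs * (1 + 1e-9))

# Qualified vs unqualified responses: 34/3 in English, 34/3 in Chinese.
print(f"two-sided P = {fisher_exact_two_sided(34, 3, 34, 3):.2f}")
```

With identical counts in both rows this convention gives an exact P of 1.0; the abstract's P=.99 presumably reflects a different software convention or rounding, and either way indicates no difference between the two languages.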
Affiliation(s)
- Chen-Ye Shao
- Department of Thoracic Surgery, The First Affiliated Hospital of Soochow University, Suzhou, China
- Hui Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Soochow University, Suzhou, China
- Xiao-Long Liu
- Department of Cardiothoracic Surgery, Jinling Hospital, Medical School of Nanjing University, Nanjing, China
- Chang Li
- Department of Thoracic Surgery, The First Affiliated Hospital of Soochow University, Suzhou, China
- Li-Qin Yang
- Department of Thoracic Surgery, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yue-Juan Zhang
- Department of Thoracic Surgery, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jing Luo
- Department of Thoracic Surgery, The First Affiliated Hospital of Soochow University, Suzhou, China
- Jun Zhao
- Department of Thoracic Surgery, The First Affiliated Hospital of Soochow University, Suzhou, China
6
Agathokleous E, Saitanis CJ, Fang C, Yu Z. Use of ChatGPT: What does it mean for biology and environmental science? Sci Total Environ 2023; 888:164154. [PMID: 37201835] [DOI: 10.1016/j.scitotenv.2023.164154]
Abstract
Artificial intelligence (AI) large language models (LLMs) have emerged as important technologies. Recently, ChatGPT (Generative Pre-trained Transformer) was released and attracted massive public interest, owing to its unique capability to simplify many daily tasks for people from diverse backgrounds and social statuses. Here, we discuss how ChatGPT (and similar AI technologies) can impact biology and environmental science, providing examples obtained through interactive sessions with ChatGPT. The benefits ChatGPT offers are ample and can affect many aspects of biology and environmental science, including education, research, scientific publishing, outreach, and societal translation. Among other things, ChatGPT can simplify and expedite highly complex and challenging tasks; to illustrate this, we provide 100 important questions for biology and 100 important questions for environmental science. Although ChatGPT offers a plethora of benefits, its use carries several risks and potential harms, which we analyze here, and awareness of these should be raised. Nevertheless, understanding and overcoming the current limitations could let these recent technological advances push biology and environmental science to their limits.
Affiliation(s)
- Evgenios Agathokleous
- School of Applied Meteorology, Nanjing University of Information Science & Technology (NUIST), Nanjing 210044, China
- Costas J Saitanis
- Lab of Ecology and Environmental Science, Agricultural University of Athens, Iera Odos 75, Athens 11855, Greece
- Chao Fang
- School of Applied Meteorology, Nanjing University of Information Science & Technology (NUIST), Nanjing 210044, China
- Zhen Yu
- School of Applied Meteorology, Nanjing University of Information Science & Technology (NUIST), Nanjing 210044, China