1. Olver IN. Ethics of artificial intelligence in supportive care in cancer. Med J Aust 2024; 220:499-501. PMID: 38714360; DOI: 10.5694/mja2.52297.

2. Moll M, Heilemann G, Georg D, Kauer-Dorner D, Kuess P. The role of artificial intelligence in informed patient consent for radiotherapy treatments - a case report. Strahlenther Onkol 2024; 200:544-548. PMID: 38180493; DOI: 10.1007/s00066-023-02190-7.

Abstract
Recent advancements in large language models (LLMs; e.g., ChatGPT (OpenAI, San Francisco, California, USA)) have led to their widespread use in many fields, including healthcare. This case study reports the first use of an LLM in a pretreatment discussion and in obtaining informed consent for a radiation oncology treatment; the reproducibility of the replies from ChatGPT 3.5 was also analyzed. A breast cancer patient, following legal consultation, conversed with ChatGPT 3.5 about her radiotherapy treatment, posing questions about side effects, prevention, activities, medications, and late effects. Although some answers contained inaccuracies, the responses closely resembled doctors' replies. In a final evaluation discussion, however, the patient stated that she preferred the presence of a physician and expressed concerns about the source of the provided information. Reproducibility was tested over ten iterations. Future guidelines for using such models in radiation oncology should be driven by medical professionals: while artificial intelligence (AI) can support essential tasks, human interaction remains crucial.

Affiliations
- M Moll, G Heilemann, D Georg, D Kauer-Dorner, P Kuess: Department of Radiation Oncology, Comprehensive Cancer Center Vienna, Medical University Vienna, Vienna, Austria

3. Nguyen TP, Carvalho B, Sukhdeo H, Joudi K, Guo N, Chen M, Wolpaw JT, Kiefer JJ, Byrne M, Jamroz T, Mootz AA, Reale SC, Zou J, Sultan P. Comparison of artificial intelligence large language model chatbots in answering frequently asked questions in anaesthesia. BJA Open 2024; 10:100280. PMID: 38764485; PMCID: PMC11099318; DOI: 10.1016/j.bjao.2024.100280.

Abstract
Background: Patients are increasingly using artificial intelligence (AI) chatbots to seek answers to medical queries. Methods: Ten frequently asked questions in anaesthesia were posed to three AI chatbots: ChatGPT4 (OpenAI), Bard (Google), and Bing Chat (Microsoft). Each chatbot's answers were evaluated in a randomised, blinded order by five residency programme directors from 15 medical institutions in the USA. Three medical content quality categories (accuracy, comprehensiveness, safety) and three communication quality categories (understandability, empathy/respect, and ethics) were scored from 1 to 5 (1 representing worst, 5 representing best). Results: ChatGPT4 and Bard outperformed Bing Chat (median [inter-quartile range] scores: 4 [3-4], 4 [3-4], and 3 [2-4], respectively; P<0.001 with all metrics combined). All AI chatbots performed poorly in accuracy (score ≥4 by 58%, 48%, and 36% of experts for ChatGPT4, Bard, and Bing Chat, respectively), comprehensiveness (score ≥4 by 42%, 30%, and 12%, respectively), and safety (score ≥4 by 50%, 40%, and 28%, respectively). Notably, answers from ChatGPT4, Bard, and Bing Chat differed statistically in comprehensiveness (ChatGPT4, 3 [2-4] vs Bing Chat, 2 [2-3], P<0.001; and Bard 3 [2-4] vs Bing Chat, 2 [2-3], P=0.002). All large language model chatbots performed well with no statistical difference for understandability (P=0.24), empathy (P=0.032), and ethics (P=0.465). Conclusions: In answering anaesthesia patient frequently asked questions, the chatbots perform well on communication metrics but are suboptimal for medical content metrics. Overall, ChatGPT4 and Bard were comparable to each other, both outperforming Bing Chat.

Affiliations
- Teresa P. Nguyen, Brendan Carvalho, Hannah Sukhdeo, Kareem Joudi, Nan Guo, Marianne Chen, Pervez Sultan: Department of Anesthesiology, Perioperative and Pain Medicine, Stanford School of Medicine, Stanford, CA, USA
- Jed T. Wolpaw: Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jesse J. Kiefer: Department of Anesthesiology and Critical Care Medicine, University of Pennsylvania School of Medicine, Philadelphia, PA, USA
- Melissa Byrne: Department of Anesthesiology, Perioperative and Pain Medicine, University of Michigan Ann Arbor School of Medicine, Ann Arbor, MI, USA
- Tatiana Jamroz: Department of Anesthesiology, Perioperative and Pain Medicine, Cleveland Clinic Foundation and Hospitals, Cleveland, OH, USA
- Allison A. Mootz, Sharon C. Reale: Department of Anesthesiology, Perioperative and Pain Medicine, Brigham and Women's Hospital, Harvard School of Medicine, Boston, MA, USA
- James Zou: Department of Biomedical Data Science, Stanford University, Stanford, CA, USA

|
4
|
Haber Y, Levkovich I, Hadar-Shoval D, Elyoseph Z. The Artificial Third: A Broad View of the Effects of Introducing Generative Artificial Intelligence on Psychotherapy. JMIR Ment Health 2024; 11:e54781. [PMID: 38787297 PMCID: PMC11137430 DOI: 10.2196/54781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/22/2023] [Revised: 03/24/2024] [Accepted: 04/18/2024] [Indexed: 05/25/2024] Open
Abstract
Unlabelled This paper explores a significant shift in the field of mental health in general and psychotherapy in particular following generative artificial intelligence's new capabilities in processing and generating humanlike language. Following Freud, this lingo-technological development is conceptualized as the "fourth narcissistic blow" that science inflicts on humanity. We argue that this narcissistic blow has a potentially dramatic influence on perceptions of human society, interrelationships, and the self. We should, accordingly, expect dramatic changes in perceptions of the therapeutic act following the emergence of what we term the artificial third in the field of psychotherapy. The introduction of an artificial third marks a critical juncture, prompting us to ask the following important core questions that address two basic elements of critical thinking, namely, transparency and autonomy: (1) What is this new artificial presence in therapy relationships? (2) How does it reshape our perception of ourselves and our interpersonal dynamics? and (3) What remains of the irreplaceable human elements at the core of therapy? Given the ethical implications that arise from these questions, this paper proposes that the artificial third can be a valuable asset when applied with insight and ethical consideration, enhancing but not replacing the human touch in therapy.
Collapse
Affiliation(s)
- Yuval Haber
- The PhD Program of Hermeneutics and Cultural Studies, Interdisciplinary Studies Unit, Bar-Ilan University, Ramat Gan, Israel
| | | | - Dorit Hadar-Shoval
- Department of Psychology and Educational Counseling, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
| | - Zohar Elyoseph
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- The Center for Psychobiological Research, Department of Psychology and Educational Counseling, The Max Stern Yezreel Valley College, Emek Yezreel, Israel
| |
Collapse
|
5. Ye Q, Lu M, Min L, Tu C. Does ChatGPT have the potential to be a qualified orthopedic oncologist? Asian J Surg 2024; 47:2535-2537. PMID: 38388277; DOI: 10.1016/j.asjsur.2024.02.053.

Affiliations
- Qiang Ye, Minxun Lu, Li Min, Chongqi Tu: Department of Orthopedic Surgery and Orthopedic Research Institute, and Department of Model Worker and Innovative Craftsman, West China Hospital, Sichuan University, No. 37 Guoxuexiang, 610041, Chengdu, Sichuan, People's Republic of China

6. Wang L, Wan Z, Ni C, Song Q, Li Y, Clayton EW, Malin BA, Yin Z. A Systematic Review of ChatGPT and Other Conversational Large Language Models in Healthcare. medRxiv [Preprint] 2024:2024.04.26.24306390. PMID: 38712148; PMCID: PMC11071576; DOI: 10.1101/2024.04.26.24306390.

Abstract
Background: The launch of the Chat Generative Pre-trained Transformer (ChatGPT) in November 2022 has attracted public attention and academic interest to large language models (LLMs), facilitating the emergence of many other innovative LLMs. These LLMs have been applied in various fields, including healthcare, and numerous studies have since examined how to employ state-of-the-art LLMs in health-related scenarios to assist patients, doctors, and public health administrators. Objective: This review aims to summarize the applications and concerns of applying conversational LLMs in healthcare and to provide an agenda for future research on LLMs in healthcare. Methods: We used the PubMed, ACM, and IEEE digital libraries as primary sources for this review. Following the guidance of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we screened and selected peer-reviewed research articles that (1) were related to both healthcare applications and conversational LLMs and (2) were published before September 1st, 2023, the date when we started paper collection and screening. We investigated these papers and classified them according to their applications and concerns. Results: Our search initially identified 820 papers according to targeted keywords, of which 65 met our criteria and were included in the review. The most popular conversational LLM was ChatGPT from OpenAI (60), followed by Bard from Google (1), Large Language Model Meta AI (LLaMA) from Meta (1), and other LLMs (5). These papers were classified into four categories of applications: 1) summarization, 2) medical knowledge inquiry, 3) prediction, and 4) administration; and four categories of concerns: 1) reliability, 2) bias, 3) privacy, and 4) public acceptability. Forty-nine (75%) of the papers used LLMs for summarization and/or medical knowledge inquiry, and 58 (89%) expressed concerns about reliability and/or bias. We found that conversational LLMs exhibit promising results in summarization and in providing medical knowledge to patients with relatively high accuracy. However, conversational LLMs like ChatGPT are not able to provide reliable answers to complex health-related tasks that require specialized domain expertise. Additionally, none of the reviewed papers reported experiments that thoughtfully examined how conversational LLMs lead to bias or privacy issues in healthcare research. Conclusions: Future studies should focus on improving the reliability of LLM applications in complex health-related tasks, as well as on investigating the mechanisms by which LLM applications introduce bias and privacy issues. Considering the broad accessibility of LLMs, legal, social, and technical efforts are all needed to address these concerns and to promote, improve, and regulate the application of LLMs in healthcare.

Affiliations
- Leyao Wang, Congning Ni, Qingyuan Song, Yang Li: Department of Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Zhiyu Wan: Department of Biomedical Informatics, Vanderbilt University Medical Center, TN 37203, USA
- Ellen Wright Clayton: Department of Pediatrics, and Center for Biomedical Ethics and Society, Vanderbilt University Medical Center, Nashville, TN 37203, USA
- Bradley A. Malin: Department of Computer Science, Vanderbilt University, Nashville, TN 37212, USA; Departments of Biomedical Informatics and Biostatistics, Vanderbilt University Medical Center, TN 37203, USA
- Zhijun Yin: Department of Computer Science, Vanderbilt University, Nashville, TN 37212, USA; Department of Biomedical Informatics, Vanderbilt University Medical Center, TN 37203, USA

7. Feng ML, Gao X. Can ChatGPT assist urologists in managing overactive bladders? Int J Surg 2024; 110:1296-1297. PMID: 38085847; PMCID: PMC10871627; DOI: 10.1097/js9.0000000000000887.

Affiliations
- Mei-Lin Feng: School of Food and Biological Engineering, Chengdu University, Chengdu
- Xiaoshuai Gao: Department of Urology and Institute of Urology (Laboratory of Reconstructive Urology), West China Hospital, Sichuan University, Sichuan, People's Republic of China

8. Socol Y, Richardson A, Garali-Zineddine I, Grison S, Vares G, Klokov D. Artificial intelligence in biology and medicine, and radioprotection research: perspectives from Jerusalem. Front Artif Intell 2024; 6:1291136. PMID: 38282906; PMCID: PMC10812117; DOI: 10.3389/frai.2023.1291136.

Abstract
While AI is widely used in biomedical research and medical practice, its use is constrained to a few specific practical areas, e.g., radiomics. Participants of the workshop on "Artificial Intelligence in Biology and Medicine" (Jerusalem, Feb 14-15, 2023), both researchers and practitioners, aimed to build a holistic picture by exploring AI advancements, challenges, and perspectives, and to suggest new fields for AI applications. Presentations showcased the potential of large language models (LLMs) in generating molecular structures, predicting protein-ligand interactions, and promoting the democratization of AI development. Ethical concerns in medical decision making were also addressed. In biological applications, AI integration of multi-omics and clinical data elucidated the health-relevant effects of low doses of ionizing radiation, and Bayesian latent modeling identified statistical associations between unobserved variables. Medical applications highlighted liquid biopsy methods for non-invasive diagnostics, routine laboratory tests to identify overlooked illnesses, and AI's role in oral and maxillofacial imaging. Explainable AI and diverse image-processing tools improved diagnostics, while text classification detected anorexic behavior in blog posts. The workshop fostered knowledge sharing and discussion, and emphasized the need for further AI development in radioprotection research in support of emerging public health issues. The organizers plan to continue the initiative as an annual event, promoting collaboration and addressing issues and perspectives in AI applications with a focus on low-dose radioprotection research. Researchers involved in radioprotection research and experts in relevant public policy domains are invited to explore the utility of AI in low-dose radiation research at the next workshop.

Affiliations
- Yehoshua Socol: Department of Electrical and Electronics Engineering, Jerusalem College of Technology, Jerusalem, Israel
- Ariella Richardson: Department of Data Mining, Jerusalem College of Technology, Jerusalem, Israel
- Imene Garali-Zineddine, Stephane Grison, Guillaume Vares: Health and Environment Division, Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Fontenay-aux-Roses, France
- Dmitry Klokov: Health and Environment Division, Institut de Radioprotection et de Sûreté Nucléaire (IRSN), Fontenay-aux-Roses, France; Department of Biochemistry, Microbiology and Immunology, University of Ottawa, Ottawa, ON, Canada

9. Younis HA, Eisa TAE, Nasser M, Sahib TM, Noor AA, Alyasiri OM, Salisu S, Hayder IM, Younis HA. A Systematic Review and Meta-Analysis of Artificial Intelligence Tools in Medicine and Healthcare: Applications, Considerations, Limitations, Motivation and Challenges. Diagnostics (Basel) 2024; 14:109. PMID: 38201418; PMCID: PMC10802884; DOI: 10.3390/diagnostics14010109.

Abstract
Artificial intelligence (AI) has emerged as a transformative force in various sectors, including medicine and healthcare. Large language models like ChatGPT showcase AI's potential by generating human-like text from prompts, and ChatGPT's adaptability holds promise for reshaping medical practices, improving patient care, and enhancing interactions among healthcare professionals, patients, and data. In pandemic management, ChatGPT rapidly disseminates vital information; it also serves as a virtual assistant in surgical consultations, supports dental practices, simplifies medical education, and aids disease diagnosis. A systematic literature review using the PRISMA approach explored AI's transformative potential in healthcare, highlighting ChatGPT's versatile applications, limitations, motivation, and challenges. A total of 82 papers were categorised into eight major areas: G1: treatment and medicine; G2: buildings and equipment; G3: parts of the human body and areas of disease; G4: patients; G5: citizens; G6: cellular imaging, radiology, pulse, and medical images; G7: doctors and nurses; and G8: tools, devices, and administration. Balancing AI's role with human judgment remains a challenge. In conclusion, ChatGPT's diverse medical applications demonstrate its potential for innovation, and this study can serve as a guide and a valuable resource for students, academics, and researchers in medicine and healthcare.

Affiliations
- Hussain A. Younis: College of Education for Women, University of Basrah, Basrah 61004, Iraq
- Maged Nasser: Computer & Information Sciences Department, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Thaeer Mueen Sahib: Kufa Technical Institute, Al-Furat Al-Awsat Technical University, Kufa 54001, Iraq
- Ameen A. Noor: Computer Science Department, College of Education, University of Almustansirya, Baghdad 10045, Iraq
- Sani Salisu: Department of Information Technology, Federal University Dutse, Dutse 720101, Nigeria
- Israa M. Hayder: Qurna Technique Institute, Southern Technical University, Basrah 61016, Iraq
- Hameed AbdulKareem Younis: Department of Cybersecurity, College of Computer Science and Information Technology, University of Basrah, Basrah 61016, Iraq

10. Elyoseph Z, Hadar Shoval D, Levkovich I. Beyond Personhood: Ethical Paradigms in the Generative Artificial Intelligence Era. Am J Bioeth 2024; 24:57-59. PMID: 38236857; DOI: 10.1080/15265161.2023.2278546.

Affiliations
- Zohar Elyoseph: The Max Stern Yezreel Valley College; Imperial College London

11. van Manen M. What Does ChatGPT Mean for Qualitative Health Research? Qual Health Res 2023; 33:1135-1139. PMID: 37897694; DOI: 10.1177/10497323231210816.

Affiliations
- Michael van Manen: Department of Pediatrics, University of Alberta, Edmonton, AB, Canada; John Dossetor Health Ethics Centre, University of Alberta, Edmonton, AB, Canada

12. Barnhart AJ, Barnhart JEM, Dierickx K. Why ChatGPT Means Communication Ethics Problems for Bioethics. Am J Bioeth 2023; 23:80-82. PMID: 37812099; DOI: 10.1080/15265161.2023.2250278.

13. Ho A, Perry J. What We Owe Those Who Chat Woe: A Relational Lens for Mental Health Apps. Am J Bioeth 2023; 23:77-80. PMID: 37812122; DOI: 10.1080/15265161.2023.2250306.

Affiliations
- Anita Ho: University of British Columbia; University of California, San Francisco; CommonSpirit Health

14. Spector-Bagdady K. Generative-AI-Generated Challenges for Health Data Research. Am J Bioeth 2023; 23:1-5. PMID: 37831940; PMCID: PMC11024895; DOI: 10.1080/15265161.2023.2252311.

15. Skorburg JA, Kupferschmidt KL, Taylor GW. "Large Language Models" Do Much More than Just Language: Some Bioethical Implications of Multi-Modal AI. Am J Bioeth 2023; 23:110-113. PMID: 37812107; DOI: 10.1080/15265161.2023.2250318.

Affiliations
- Graham W Taylor: University of Guelph; Vector Institute for Artificial Intelligence

16. Zheng EL, Lee SSJ. The Epistemological Danger of Large Language Models. Am J Bioeth 2023; 23:102-104. PMID: 37812104; DOI: 10.1080/15265161.2023.2250294.

17. Ho CWL. Generative AI and the Foregrounding of Epistemic Injustice in Bioethics. Am J Bioeth 2023; 23:99-102. PMID: 37812106; DOI: 10.1080/15265161.2023.2250319.

18. Laacke S, Gauckler C. Why Personalized Large Language Models Fail to Do What Ethics is All About. Am J Bioeth 2023; 23:60-63. PMID: 37812095; DOI: 10.1080/15265161.2023.2250292.

19. Binkley CE, Pilkington BC. Informed Consent for Clinician-AI Collaboration and Patient Data Sharing: Substantive, Illusory, or Both. Am J Bioeth 2023; 23:83-85. PMID: 37812116; DOI: 10.1080/15265161.2023.2250289.

20. Nagappan A. Moving from Models to Responsible AI as a Moat. Am J Bioeth 2023; 23:113-115. PMID: 37812105; PMCID: PMC10622256; DOI: 10.1080/15265161.2023.2250293.

Affiliations
- Ashwini Nagappan: Department of Health Policy & Management, University of California, Los Angeles, Los Angeles, CA, USA

21. Du L, Kamenova K. China's New Regulations on Generative AI: Implications for Bioethics. Am J Bioeth 2023; 23:52-54. PMID: 37812110; DOI: 10.1080/15265161.2023.2249854.

22. Bak M. AI Can Show You the World. Am J Bioeth 2023; 23:107-110. PMID: 37812112; DOI: 10.1080/15265161.2023.2250312.

Affiliations
- Marieke Bak: Amsterdam UMC, University of Amsterdam; Technical University of Munich

23. Cheung ATM, Nasir-Moin M, Oermann EK. ChatGPT and the Law of the Horse. Am J Bioeth 2023; 23:55-57. PMID: 37812113; DOI: 10.1080/15265161.2023.2250279.

24. Alvarado R, Morar N. ChatGPT's Relevance for Bioethics: A Novel Challenge to the Intrinsically Relational, Critical, and Reason-Giving Aspect of Healthcare. Am J Bioeth 2023; 23:71-73. PMID: 37812123; DOI: 10.1080/15265161.2023.2250305.

25. McMillan J. Generative AI and Ethical Analysis. Am J Bioeth 2023; 23:42-44. PMID: 37812114; DOI: 10.1080/15265161.2023.2249852.

26. Vukov JM, Joseph TL, Lebkuecher G, Ramirez M, Burns MB. The Ouroboros Threat. Am J Bioeth 2023; 23:58-60. PMID: 37812118; DOI: 10.1080/15265161.2023.2250284.

27. Klugman CM, Erwin CJ. Machines Like Me: 4 Corollaries for Responsible Use of AI in the Bioethics Classroom. Am J Bioeth 2023; 23:86-88. PMID: 37812108; DOI: 10.1080/15265161.2023.2250317.

28. Meier LJ. ChatGPT's Responses to Dilemmas in Medical Ethics: The Devil is in the Details. Am J Bioeth 2023; 23:63-65. PMID: 37812097; DOI: 10.1080/15265161.2023.2250290.

29. Tal A, Elyoseph Z, Haber Y, Angert T, Gur T, Simon T, Asman O. The Artificial Third: Utilizing ChatGPT in Mental Health. Am J Bioeth 2023; 23:74-77. PMID: 37812102; DOI: 10.1080/15265161.2023.2250297.

Affiliations
- Amir Tal: Faculty of Medicine, Tel Aviv University; "The Artificial Third" Research Community
- Zohar Elyoseph: "The Artificial Third" Research Community; Department of Educational Psychology and Counseling, Max Stern Yezreel Valley College; Department of Brain Sciences, Faculty of Medicine, Imperial College London
- Tal Angert: "The Artificial Third" Research Community
- Tamar Gur: Faculty of Medicine, Tel Aviv University; "The Artificial Third" Research Community; Department of Psychology, The Hebrew University of Jerusalem
- Tomer Simon: "The Artificial Third" Research Community; Microsoft Israel R&D Center
- Oren Asman: Faculty of Medicine, Tel Aviv University; "The Artificial Third" Research Community; Sagol School of Neuroscience, Tel Aviv University

30. Victor G, Bélisle-Pipon JC, Ravitsky V. Generative AI, Specific Moral Values: A Closer Look at ChatGPT's New Ethical Implications for Medical AI. Am J Bioeth 2023; 23:65-68. PMID: 37812098; PMCID: PMC10575680; DOI: 10.1080/15265161.2023.2250311.

31. Garg RK, Urs VL, Agarwal AA, Chaudhary SK, Paliwal V, Kar SK. Exploring the role of ChatGPT in patient care (diagnosis and treatment) and medical research: A systematic review. Health Promot Perspect 2023; 13:183-191. PMID: 37808939; PMCID: PMC10558973; DOI: 10.34172/hpp.2023.22.

Abstract
Background: ChatGPT is an artificial intelligence-based tool developed by OpenAI (California, USA). This systematic review examines the potential of ChatGPT in patient care and its role in medical research. Methods: The systematic review was conducted according to the PRISMA guidelines. The Embase, Scopus, PubMed, and Google Scholar databases, as well as preprint databases, were searched using the term "ChatGPT". Our search aimed to identify all kinds of publications, without restriction, on ChatGPT and its application in medical research, medical publishing, and patient care, including original articles, reviews, editorials/commentaries, and letters to the editor. Each selected record was analysed using ChatGPT, and the generated responses were compiled in a table; the table was then converted to a PDF and further analysed using ChatPDF. Results: We reviewed the full texts of 118 articles. ChatGPT can assist with patient enquiries, note writing, decision-making, trial enrolment, data management, decision support, research support, and patient education. However, the solutions it offers are often insufficient and contradictory, raising questions about their originality, privacy, correctness, bias, and legality. Because it lacks human-like qualities, ChatGPT's legitimacy as an author is questioned when it is used for academic writing, and ChatGPT-generated content raises concerns about bias and possible plagiarism. Conclusion: Although ChatGPT can help with patient treatment and research, there are issues with accuracy, authorship, and bias. It can serve as a "clinical assistant" and aid research and scholarly writing.

Affiliations
- Vijeth L Urs: Department of Neurology, King George's Medical University, Lucknow, India
- Vimal Paliwal: Department of Neurology, Sanjay Gandhi Institute of Medical Sciences, Lucknow, India
- Sujita Kumar Kar: Department of Psychiatry, King George's Medical University, Lucknow, India
