1. Santa-Rosario JC, Gustafson EA, Sanabria Bellassai DE, Gustafson PE, de Socarraz M. Validation and three years of clinical experience in using an artificial intelligence algorithm as a second read system for prostate cancer diagnosis-real-world experience. J Pathol Inform 2024;15:100378. [PMID: 38868487; PMCID: PMC11166872; DOI: 10.1016/j.jpi.2024.100378]
Abstract
Background Prostate cancer ranks as the most frequently diagnosed cancer in men in the USA, with significant mortality rates. Early detection is pivotal for optimal patient outcomes, providing increased treatment options and potentially less invasive interventions. There remain significant challenges in prostate cancer histopathology, including the potential for missed diagnoses due to pathologist variability and subjective interpretations. Methods To address these challenges, this study investigates the ability of artificial intelligence (AI) to enhance diagnostic accuracy. The Galen™ Prostate AI algorithm was validated on a cohort of Puerto Rican men to demonstrate its efficacy in cancer detection and Gleason grading. Subsequently, the AI algorithm was integrated into routine clinical practice during a 3-year period at a CLIA-certified precision pathology laboratory. Results The Galen™ Prostate AI algorithm showed a 96.7% (95% CI 95.6-97.8) specificity and a 96.6% (95% CI 93.3-98.8) sensitivity for prostate cancer detection, and an 82.1% specificity (95% CI 73.9-88.5) and 81.1% sensitivity (95% CI 73.7-87.2) for distinguishing Gleason Grade Group 1 from Grade Group 2+. The subsequent AI integration into routine clinical use examined prostate cancer diagnoses on >122,000 slides and 9,200 cases over 3 years, with an overall AI Impact™ factor of 1.8%. Conclusions The potential of AI to be a powerful, reliable, and effective diagnostic tool for pathologists is highlighted, while the AI Impact™ in a real-world setting demonstrates the ability of AI to standardize prostate cancer diagnosis at a high level of performance across pathologists.
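The sensitivity and specificity figures above are proportions reported with 95% confidence intervals. A minimal sketch of how such intervals can be derived from a confusion matrix, using hypothetical counts (not the study's data) and the Wilson score interval, one common choice for binomial proportions:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion-matrix counts for illustration only
tp, fn, tn, fp = 290, 10, 1160, 40
sens, spec = sens_spec(tp, fn, tn, fp)
lo, hi = wilson_ci(tp, tp + fn)  # CI around sensitivity
```

The same `wilson_ci` call on `(tn, tn + fp)` would bound the specificity estimate.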
Affiliation(s)
- Juan Carlos Santa-Rosario
- CorePlus Servicios Clínicos y Patológicos; Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq, PR-190, Carolina, PR 00983, USA
- Erik A. Gustafson
- CorePlus Servicios Clínicos y Patológicos; Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq, PR-190, Carolina, PR 00983, USA
- Dario E. Sanabria Bellassai
- CorePlus Servicios Clínicos y Patológicos; Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq, PR-190, Carolina, PR 00983, USA
- Phillip E. Gustafson
- CorePlus Servicios Clínicos y Patológicos; Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq, PR-190, Carolina, PR 00983, USA
- Mariano de Socarraz
- CorePlus Servicios Clínicos y Patológicos; Plazoleta la Cerámica, Suite 2-6 Ave. Sánchez Vilella, Esq, PR-190, Carolina, PR 00983, USA
2. Sanchez DF, Oliveira P. Pathology of Squamous Cell Carcinoma of the Penis: Back to Square One. Urol Clin North Am 2024;51:313-325. [PMID: 38925734; DOI: 10.1016/j.ucl.2024.03.003]
Abstract
The landscape of squamous cell carcinoma of the penis (SCC-P) has undergone a significant transformation since the new World Health Organization classification of genitourinary cancers and the recent European Association of Urology/American Society of Clinical Oncology guidelines. These changes emphasize the necessity to categorize SCC-P into 2 groups based on its association with human papillomavirus (HPV) infection. This shift has major implications, considering that prior knowledge was derived from a mix of both groups. Given the distinct prognosis, treatment options, and staging systems observed for HPV-associated tumors in other body areas, the question now arises: will similar patterns emerge for SCC-P?
Affiliation(s)
- Diego F Sanchez
- Translational Oncogenomics Group, Manchester Cancer Research Centre & CRUK-MI, Wilmslow Road, Manchester M20 4GJ, UK
- Pedro Oliveira
- Department of Pathology, Christie NHS Foundation Trust, Wilmslow Road, Manchester M20 4BX, UK
3. Butt MA, Kaleem MF, Bilal M, Hanif MS. Using multi-label ensemble CNN classifiers to mitigate labelling inconsistencies in patch-level Gleason grading. PLoS One 2024;19:e0304847. [PMID: 38968206; PMCID: PMC11226137; DOI: 10.1371/journal.pone.0304847]
Abstract
This paper presents a novel approach to enhance the accuracy of patch-level Gleason grading in prostate histopathology images, a critical task in the diagnosis and prognosis of prostate cancer. The study shows that Gleason grading accuracy can be improved by addressing the prevalent issue of label inconsistencies in the SICAPv2 prostate dataset, which employs a majority voting scheme for patch-level labels. We propose a multi-label ensemble deep-learning classifier that effectively mitigates these inconsistencies and yields more accurate results than state-of-the-art approaches. Specifically, our approach leverages the strengths of three different one-vs-all deep learning models in an ensemble to learn diverse features from the histopathology images and individually indicate the presence of one or more Gleason grades (G3, G4, and G5) in each patch. These deep learning models were trained using transfer learning to fine-tune a variant of the ResNet18 CNN classifier chosen after an extensive ablation study. Experimental results demonstrate that our multi-label ensemble classifier significantly outperforms traditional single-label classifiers reported in the literature by at least 14% on accuracy and 4% on F1-score. These results underscore the potential of the proposed machine learning approach to improve the accuracy and consistency of prostate cancer grading.
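The ensemble described above combines three one-vs-all heads whose outputs are thresholded independently, so a single patch can carry several Gleason grades at once (or none, i.e., benign). A minimal sketch of that decision rule, with made-up probabilities standing in for the trained CNN outputs:

```python
# Hypothetical per-grade probabilities from three independent one-vs-all
# models (G3, G4, G5) for four patches; illustrative values only.
patch_probs = [
    {"G3": 0.91, "G4": 0.12, "G5": 0.03},
    {"G3": 0.45, "G4": 0.78, "G5": 0.10},
    {"G3": 0.05, "G4": 0.62, "G5": 0.71},
    {"G3": 0.08, "G4": 0.09, "G5": 0.04},
]

def multilabel_predict(probs, threshold=0.5):
    """Each one-vs-all head votes independently: every grade whose
    probability clears the threshold is assigned to the patch."""
    return sorted(g for g, p in probs.items() if p >= threshold)

labels = [multilabel_predict(p) for p in patch_probs]
```

Unlike a single softmax classifier, this rule never forces a majority-vote label onto a patch containing a mixture of grades, which is the inconsistency the paper targets.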
Affiliation(s)
- Muhammad Asim Butt
- Department of Electrical Engineering, University of Management and Technology, Lahore, Pakistan
- Muhammad Bilal
- Center of Excellence in Intelligent Engineering Systems, King Abdulaziz University, Jeddah, Saudi Arabia
- Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Muhammad Shehzad Hanif
- Center of Excellence in Intelligent Engineering Systems, King Abdulaziz University, Jeddah, Saudi Arabia
- Department of Electrical and Computer Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
4. Hershenhouse JS, Mokhtar D, Eppler MB, Rodler S, Storino Ramacciotti L, Ganjavi C, Hom B, Davis RJ, Tran J, Russo GI, Cocci A, Abreu A, Gill I, Desai M, Cacciamani GE. Accuracy, readability, and understandability of large language models for prostate cancer information to the public. Prostate Cancer Prostatic Dis 2024 (online ahead of print). [PMID: 38744934; DOI: 10.1038/s41391-024-00826-y]
Abstract
BACKGROUND Generative Pre-trained Transformer (GPT) chatbots have gained popularity since the public release of ChatGPT. Studies have evaluated the ability of different GPT models to provide information about medical conditions. To date, no study has assessed the quality of ChatGPT outputs to prostate cancer-related questions from both the physician and public perspective while optimizing outputs for patient consumption. METHODS Nine prostate cancer-related questions, identified through Google Trends (Global), were categorized into diagnosis, treatment, and postoperative follow-up. These questions were processed using ChatGPT 3.5, and the responses were recorded. Subsequently, these responses were fed back into ChatGPT to create simplified summaries understandable at a sixth-grade level. Readability of both the original ChatGPT responses and the layperson summaries was evaluated using validated readability tools. A survey was conducted among urology providers (urologists and urologists in training) to rate the original ChatGPT responses for accuracy, completeness, and clarity using a 5-point Likert scale. Furthermore, two independent reviewers evaluated the layperson summaries on a correctness trifecta: accuracy, completeness, and decision-making sufficiency. Public assessment of the simplified summaries' clarity and understandability was carried out through Amazon Mechanical Turk (MTurk). Participants rated the clarity and demonstrated their understanding through a multiple-choice question. RESULTS GPT-generated output was deemed correct by 71.7% to 94.3% of raters (36 urologists, 17 urology residents) across 9 scenarios. GPT-generated simplified layperson summaries of this output were rated as accurate in 8 of 9 (88.9%) scenarios and sufficient for a patient to make a decision in 8 of 9 (88.9%) scenarios. Readability of the layperson summaries was better than that of the original GPT outputs (original ChatGPT vs. simplified ChatGPT, mean (SD): Flesch Reading Ease 36.5 (9.1) vs. 70.2 (11.2), p < 0.0001; Gunning Fog 15.8 (1.7) vs. 9.5 (2.0), p < 0.0001; Flesch-Kincaid Grade Level 12.8 (1.2) vs. 7.4 (1.7), p < 0.0001; Coleman-Liau 13.7 (2.1) vs. 8.6 (2.4), p = 0.0002; SMOG Index 11.8 (1.2) vs. 6.7 (1.8), p < 0.0001; Automated Readability Index 13.1 (1.4) vs. 7.5 (2.1), p < 0.0001). MTurk workers (n = 514) rated the layperson summaries as correct (89.5-95.7%) and correctly understood the content (63.0-87.4%). CONCLUSION GPT shows promise for accurate patient education on prostate cancer-related content, but the technology is not designed for delivering information to patients. Prompting the model to respond with accuracy, completeness, clarity, and readability may enhance its utility in GPT-powered medical chatbots.
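The readability scores compared above (e.g., Flesch Reading Ease, Flesch-Kincaid Grade Level) are closed-form formulas over sentence, word, and syllable counts. A rough sketch using the published formulas and a crude vowel-group syllable heuristic; dedicated tools use dictionaries and handle many more edge cases:

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_metrics(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences      # words per sentence
    spw = syllables / len(words)      # syllables per word
    ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade = 0.39 * wps + 11.8 * spw - 15.59
    return round(ease, 1), round(grade, 1)

easy_ease, easy_grade = flesch_metrics("The cat sat on the mat.")
hard_ease, hard_grade = flesch_metrics(
    "Comprehensive multidisciplinary evaluation facilitates "
    "individualized therapeutic recommendations."
)
```

Higher Reading Ease means simpler text, while grade-level indices move the opposite way, which matches the direction of the study's simplified-summary results.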
Affiliation(s)
- Jacob S Hershenhouse
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Daniel Mokhtar
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Michael B Eppler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Severin Rodler
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Lorenzo Storino Ramacciotti
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Conner Ganjavi
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Brian Hom
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Ryan J Davis
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- John Tran
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Andrea Cocci
- Urology Section, University of Florence, Florence, Italy
- Andre Abreu
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Inderbir Gill
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Mihir Desai
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Giovanni E Cacciamani
- USC Institute of Urology and Catherine and Joseph Aresty Department of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Artificial Intelligence Center, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
5. Nicoletti R, Nicoletti G, Giannini V, Teoh JYC. Developers-Doctor-patients: the artificial intelligence's trifecta. Prostate Cancer Prostatic Dis 2024;27:3-4. [PMID: 37723251; DOI: 10.1038/s41391-023-00718-7]
Affiliation(s)
- Rossella Nicoletti
- Department of Experimental and Clinical Biomedical Science, University of Florence, Florence, Italy
- S.H.Ho Urology Centre, Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
- Giulia Nicoletti
- Department of Electronics and Telecommunications, Polytechnic University of Turin, Turin (TO), Italy
- Department of Surgical Sciences, University of Turin, Turin (TO), Italy
- Jeremy Yuen Chun Teoh
- S.H.Ho Urology Centre, Department of Surgery, The Chinese University of Hong Kong, Hong Kong, China
6. Abd-Alrazaq A, Alajlani M, Ahmad R, AlSaad R, Aziz S, Ahmed A, Alsahli M, Damseh R, Sheikh J. The Performance of Wearable AI in Detecting Stress Among Students: Systematic Review and Meta-Analysis. J Med Internet Res 2024;26:e52622. [PMID: 38294846; PMCID: PMC10867751; DOI: 10.2196/52622]
Abstract
BACKGROUND Students usually encounter stress throughout their academic path. Ongoing stressors may lead to chronic stress, adversely affecting their physical and mental well-being. Thus, early detection and monitoring of stress among students are crucial. Wearable artificial intelligence (AI) has emerged as a valuable tool for this purpose. It offers an objective, noninvasive, nonobtrusive, automated approach to continuously monitor biomarkers in real time, thereby addressing the limitations of traditional approaches such as self-reported questionnaires. OBJECTIVE This systematic review and meta-analysis aims to assess the performance of wearable AI in detecting and predicting stress among students. METHODS Search sources in this review included 7 electronic databases (MEDLINE, Embase, PsycINFO, ACM Digital Library, Scopus, IEEE Xplore, and Google Scholar). We also checked the reference lists of the included studies and screened studies that cited them. The search was conducted on June 12, 2023. This review included research articles centered on the creation or application of AI algorithms for the detection or prediction of stress among students using data from wearable devices. In total, 2 independent reviewers performed study selection, data extraction, and risk-of-bias assessment. The Quality Assessment of Diagnostic Accuracy Studies-Revised tool was adapted and used to examine the risk of bias in the included studies. Evidence synthesis was conducted using narrative and statistical techniques. RESULTS This review included 5.8% (19/327) of the studies retrieved from the search sources. A meta-analysis of 37 accuracy estimates derived from 32% (6/19) of the studies revealed a pooled mean accuracy of 0.856 (95% CI 0.70-0.93). Subgroup analyses demonstrated that the accuracy of wearable AI was moderated by the number of stress classes (P=.02), type of wearable device (P=.049), location of the wearable device (P=.02), data set size (P=.009), and ground truth (P=.001). The average estimates of sensitivity, specificity, and F1-score were 0.755 (SD 0.181), 0.744 (SD 0.147), and 0.759 (SD 0.139), respectively. CONCLUSIONS Wearable AI shows promise in detecting student stress but currently has suboptimal performance. The results of the subgroup analyses should be interpreted with caution, given that many of these findings may be due to confounding factors rather than the underlying grouping characteristics. Thus, wearable AI should be used alongside other assessments (eg, clinical questionnaires) until further evidence is available. Future research should explore the ability of wearable AI to differentiate types of stress, distinguish stress from other mental health issues, predict future occurrences of stress, consider factors such as the placement of the wearable device and the methods used to assess the ground truth, and report detailed results to facilitate the conduct of meta-analyses. TRIAL REGISTRATION PROSPERO CRD42023435051; http://tinyurl.com/3fzb5rnp.
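Pooling per-study accuracy estimates into a single figure with a confidence interval, as in the meta-analysis above, is typically done on a transformed scale with studies weighted by inverse variance. A simplified fixed-effect sketch on the logit scale with invented study data; the review itself used a more elaborate model (random effects with moderator analyses):

```python
from math import exp, log, sqrt

# Hypothetical (accuracy, sample size) pairs; not the review's data.
studies = [(0.92, 60), (0.81, 120), (0.88, 45), (0.79, 200), (0.90, 75)]

def pool_logit(studies):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale; returns the pooled proportion and its 95% CI."""
    num = den = 0.0
    for p, n in studies:
        y = log(p / (1 - p))              # logit transform
        var = 1.0 / (n * p * (1 - p))     # delta-method variance of logit(p)
        w = 1.0 / var                     # inverse-variance weight
        num += w * y
        den += w
    y_bar = num / den
    se = sqrt(1.0 / den)
    expit = lambda x: exp(x) / (1 + exp(x))
    return expit(y_bar), (expit(y_bar - 1.96 * se), expit(y_bar + 1.96 * se))

pooled, (ci_lo, ci_hi) = pool_logit(studies)
```

Working on the logit scale keeps the pooled estimate and its interval inside (0, 1), which a naive weighted average of proportions does not guarantee for the interval bounds.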
Affiliation(s)
- Alaa Abd-Alrazaq
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Mohannad Alajlani
- Institute of Digital Healthcare, WMG, University of Warwick, Warwick, United Kingdom
- Reham Ahmad
- Institute of Digital Healthcare, WMG, University of Warwick, Warwick, United Kingdom
- Rawan AlSaad
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Sarah Aziz
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Arfan Ahmed
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
- Mohammed Alsahli
- Health Informatics Department, College of Health Science, Saudi Electronic University, Riyadh, Saudi Arabia
- Rafat Damseh
- Department of Computer Science and Software Engineering, United Arab Emirates University, Al Ain, Abu Dhabi, United Arab Emirates
- Javaid Sheikh
- AI Center for Precision Health, Weill Cornell Medicine-Qatar, Qatar Foundation, Doha, Qatar
7. Lombardo R, Gallo G, Stira J, Turchi B, Santoro G, Riolo S, Romagnoli M, Cicione A, Tema G, Pastore A, Al Salhi Y, Fuschi A, Franco G, Nacchia A, Tubaro A, De Nunzio C. Quality of information and appropriateness of Open AI outputs for prostate cancer. Prostate Cancer Prostatic Dis 2024 (online ahead of print). [PMID: 38228809; DOI: 10.1038/s41391-024-00789-0]
Abstract
ChatGPT, a natural language processing (NLP) tool created by OpenAI, can potentially be used as a quick source for obtaining information related to prostate cancer. This study aims to analyze the quality and appropriateness of ChatGPT's responses to inquiries related to prostate cancer compared with the European Association of Urology's (EAU) 2023 prostate cancer guidelines. Overall, 195 questions were prepared according to the recommendations gathered in the prostate cancer section of the EAU 2023 guideline. All questions were systematically presented to ChatGPT's August 3 version, and two expert urologists independently assessed and assigned scores ranging from 1 to 4 to each response (1: completely correct; 2: correct but inadequate; 3: a mix of correct and misleading information; 4: completely incorrect). Sub-analyses per chapter and per grade of recommendation were performed. Overall, 195 recommendations were evaluated: 50/195 (26%) were completely correct, 51/195 (26%) correct but inadequate, 47/195 (24%) a mix of correct and misleading information, and 47/195 (24%) completely incorrect. Across chapters, ChatGPT was particularly accurate in answering questions on follow-up and quality of life. The worst performance was recorded for the diagnosis and treatment chapters, with 19% and 30% of the answers completely incorrect, respectively. When looking at the strength of recommendation, no difference in accuracy was recorded between weak and strong recommendations (p > 0.05). ChatGPT has poor accuracy when answering questions on the EAU prostate cancer guideline recommendations. Future studies should assess its performance after adequate training.
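With two urologists scoring every answer independently on the 1-4 scale, inter-rater agreement is the natural check before aggregating the scores. The abstract does not report one, but Cohen's kappa is a standard choice; a minimal sketch with hypothetical ratings:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(r1) == len(r2)
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # Expected agreement if both raters assigned categories independently
    p_exp = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 1-4 scores from two raters on ten answers (not study data)
rater1 = [1, 2, 2, 3, 4, 1, 2, 3, 4, 1]
rater2 = [1, 2, 3, 3, 4, 1, 2, 2, 4, 1]
kappa = cohens_kappa(rater1, rater2)
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance, which would argue for consensus re-scoring before reporting pooled proportions.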
Affiliation(s)
- Giacomo Gallo
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Jordi Stira
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Beatrice Turchi
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Giuseppe Santoro
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Sara Riolo
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Matteo Romagnoli
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Antonio Cicione
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Giorgia Tema
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Antonio Pastore
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Yazan Al Salhi
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Andrea Fuschi
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Giorgio Franco
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Antonio Nacchia
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Andrea Tubaro
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
- Cosimo De Nunzio
- Department of Urology, 'Sapienza' University of Rome, Rome, Italy
8. Cacciamani GE, Chen A, Gill IS, Hung AJ. Artificial intelligence and urology: ethical considerations for urologists and patients. Nat Rev Urol 2024;21:50-59. [PMID: 37524914; DOI: 10.1038/s41585-023-00796-1]
Abstract
The use of artificial intelligence (AI) in medicine and in urology specifically has increased over the past few years, during which time it has enabled optimization of patient workflow, increased diagnostic accuracy and enhanced computer analysis of radiological and pathological images. However, before further use of AI is undertaken, possible ethical issues need to be evaluated to improve understanding of this technology and to protect patients and providers. Possible ethical issues that require consideration when applying AI in clinical practice include patient safety, cybersecurity, transparency and interpretability of the data, inclusivity and equity, fostering responsibility and accountability, and the preservation of providers' decision-making and autonomy. Ethical principles for the application of AI to health care and in urology are proposed to guide urologists, patients and regulators to improve use of AI technologies and guide policy-making.
Affiliation(s)
- Giovanni E Cacciamani
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- Andrew Chen
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Inderbir S Gill
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA
- Andrew J Hung
- The Catherine and Joseph Aresty Department of Urology, USC Institute of Urology, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
- AI Center at USC Urology, USC Institute of Urology, University of Southern California, Los Angeles, CA, USA