1
Hatherley J. Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients? J Med Ethics 2024:jme-2024-109905. [PMID: 39117396] [DOI: 10.1136/jme-2024-109905]
Abstract
It is commonly accepted that clinicians are ethically obligated to disclose their use of medical machine learning systems to patients, and that failure to do so would amount to a moral fault for which clinicians ought to be held accountable. Call this 'the disclosure thesis.' Four main arguments have been, or could be, given to support the disclosure thesis in the ethics literature: the risk-based argument, the rights-based argument, the materiality argument and the autonomy argument. In this article, I argue that each of these four arguments is unconvincing and, therefore, that the disclosure thesis ought to be rejected. I suggest that mandating disclosure may even risk harming patients by providing stakeholders with a way to avoid accountability for harm that results from improper applications or uses of these systems.
Affiliation(s)
- Joshua Hatherley
- Department of Philosophy and History of Ideas, Aarhus University, Aarhus, Denmark
2
Daher H, Punchayil SA, Ismail AAE, Fernandes RR, Jacob J, Algazzar MH, Mansour M. Advancements in Pancreatic Cancer Detection: Integrating Biomarkers, Imaging Technologies, and Machine Learning for Early Diagnosis. Cureus 2024; 16:e56583. [PMID: 38646386] [PMCID: PMC11031195] [DOI: 10.7759/cureus.56583]
Abstract
Artificial intelligence (AI) has come to play a pivotal role in revolutionizing medical practices, particularly in the field of pancreatic cancer detection and management. As a leading cause of cancer-related deaths, pancreatic cancer warrants innovative approaches due to its typically advanced stage at diagnosis and dismal survival rates. Present detection methods, constrained by limitations in accuracy and efficiency, underscore the necessity for novel solutions. AI-driven methodologies present promising avenues for enhancing early detection and prognosis forecasting. Through the analysis of imaging data, biomarker profiles, and clinical information, AI algorithms excel in discerning subtle abnormalities indicative of pancreatic cancer with remarkable precision. Moreover, machine learning (ML) algorithms facilitate the amalgamation of diverse data sources to optimize patient care. However, despite its huge potential, the implementation of AI in pancreatic cancer detection faces various challenges. Issues such as the scarcity of comprehensive datasets, biases in algorithm development, and concerns regarding data privacy and security necessitate thorough scrutiny. While AI offers immense promise in transforming pancreatic cancer detection and management, ongoing research and collaborative efforts are indispensable in overcoming technical hurdles and ethical dilemmas. This review delves into the evolution of AI, its application in pancreatic cancer detection, and the challenges and ethical considerations inherent in its integration.
Affiliation(s)
- Hisham Daher
- Internal Medicine, University of Debrecen, Debrecen, HUN
- Sneha A Punchayil
- Internal Medicine, University Hospital of North Tees, Stockton-on-Tees, GBR
- Joel Jacob
- General Medicine, Diana Princess of Wales Hospital, Grimsby, GBR
- Mohammad Mansour
- General Medicine, University of Debrecen, Debrecen, HUN
- General Medicine, Jordan University Hospital, Amman, JOR
3
Amin A, Cardoso SA, Suyambu J, Abdus Saboor H, Cardoso RP, Husnain A, Isaac NV, Backing H, Mehmood D, Mehmood M, Maslamani ANJ. Future of Artificial Intelligence in Surgery: A Narrative Review. Cureus 2024; 16:e51631. [PMID: 38318552] [PMCID: PMC10839429] [DOI: 10.7759/cureus.51631]
Abstract
Artificial intelligence (AI) is the capability of a machine to execute cognitive processes that are typically considered functions of the human brain. It is the study of algorithms that enable machines to reason and perform mental tasks, including problem-solving, object and word recognition, and decision-making. Once considered science fiction, AI is today a reality and an increasingly prevalent subject in both academic and popular literature. It is expected to reshape medicine, benefiting both healthcare professionals and patients. Machine learning (ML) is a subset of AI that allows machines to learn and make predictions by recognizing patterns, thus empowering the medical team to deliver better care to patients through accurate diagnosis and treatment. ML is expanding its footprint in a variety of surgical specialties, including general surgery, ophthalmology, cardiothoracic surgery, and vascular surgery. In recent years, AI has made its way into the operating theatre. Though it has not yet replaced the surgeon, it has the potential to become a highly valuable surgical tool, and it is likely to play a significant intraoperative role before long, a projection currently tempered by safety concerns. This review explores the present application of AI in various surgical disciplines, how it benefits both patients and physicians, and the current obstacles and limitations facing its seemingly unstoppable rise.
Affiliation(s)
- Aamir Amin
- Cardiothoracic Surgery, Harefield Hospital, Guy's and St Thomas' NHS Foundation Trust, London, GBR
- Swizel Ann Cardoso
- Major Trauma Services, University Hospital Birmingham NHS Foundation Trust DC, Birmingham, GBR
- Jenisha Suyambu
- Medicine, University of Perpetual Help System Data - Jonelta Foundation School of Medicine, Las Piñas, PHL
- Rayner P Cardoso
- Medicine and Surgery, All India Institute of Medical Sciences, Jodhpur, Jodhpur, IND
- Ali Husnain
- Radiology, Northwestern University, Lahore, PAK
- Natasha Varghese Isaac
- Medicine and Surgery, St John's Medical College Hospital, Rajiv Gandhi University of Health Sciences, Bengaluru, IND
- Haydee Backing
- Medicine, Universidad de San Martin de Porres, Lima, PER
- Dalia Mehmood
- Community Medicine, Fatima Jinnah Medical University, Lahore, PAK
- Maria Mehmood
- Internal Medicine, Shalamar Medical and Dental College, Lahore, PAK
4
Park HJ. Patient perspectives on informed consent for medical AI: A web-based experiment. Digit Health 2024; 10:20552076241247938. [PMID: 38698829] [PMCID: PMC11064747] [DOI: 10.1177/20552076241247938]
Abstract
Objective: Despite the increasing use of AI applications as clinical decision support tools in healthcare, patients are often unaware of their use in the physician's decision-making process. This study aims to determine whether doctors should disclose the use of AI tools in diagnosis and what kind of information should be provided. Methods: A survey experiment with 1000 respondents in South Korea estimated the importance patients attach to information about the use of an AI tool in diagnosis when deciding whether to receive treatment. Results: The use of an AI tool increased the perceived importance of information related to its use, compared with a physician consulting a human radiologist. When AI was used, participants rated information about the AI tool as at least as important as the routinely disclosed information about short-term treatment effects. Further analysis revealed that gender, age, and income had a statistically significant effect on the perceived importance of every piece of AI-related information. Conclusions: This study supports disclosing the use of AI in diagnosis during the informed consent process. However, disclosure should be tailored to the individual patient's needs, as patient preferences for information about AI use vary across gender, age, and income levels. Ethical guidelines that go beyond mere legal requirements should be developed for informed consent when using AI in diagnosis.
Affiliation(s)
- Hai Jin Park
- Center for AI and Law, Hanyang University Law School, Seoul, South Korea
5
Staes CJ, Beck AC, Chalkidis G, Scheese CH, Taft T, Guo JW, Newman MG, Kawamoto K, Sloss EA, McPherson JP. Design of an interface to communicate artificial intelligence-based prognosis for patients with advanced solid tumors: a user-centered approach. J Am Med Inform Assoc 2023; 31:174-187. [PMID: 37847666] [PMCID: PMC10746322] [DOI: 10.1093/jamia/ocad201]
Abstract
OBJECTIVES To design an interface to support communication of machine learning (ML)-based prognosis for patients with advanced solid tumors, incorporating oncologists' needs and feedback throughout design. MATERIALS AND METHODS Using an interdisciplinary user-centered design approach, we performed 5 rounds of iterative design to refine an interface, involving expert review based on usability heuristics, input from a color-blind adult, and 13 individual semi-structured interviews with oncologists. Individual interviews included patient vignettes and a series of interfaces populated with representative patient data and predicted survival for each treatment decision point when a new line of therapy (LoT) was being considered. Ongoing feedback informed design decisions, and directed qualitative content analysis of interview transcripts was used to evaluate usability and identify enhancement requirements. RESULTS Design processes resulted in an interface with 7 sections, each addressing user-focused questions, supporting oncologists to "tell a story" as they discuss prognosis during a clinical encounter. The iteratively enhanced interface both triggered and reflected design decisions relevant when attempting to communicate ML-based prognosis, and exposed mistaken assumptions. Clinicians requested enhancements that emphasized interpretability over explainability. Qualitative findings confirmed that previously identified issues were resolved and clarified necessary enhancements (eg, use months not days) and concerns about usability and trust (eg, address LoT received elsewhere). Appropriate use should be in the context of a conversation with an oncologist. CONCLUSION User-centered design, ongoing clinical input, and a visualization to communicate ML-related outcomes are important elements for designing any decision support tool enabled by artificial intelligence, particularly when communicating prognosis risk.
Affiliation(s)
- Catherine J Staes
- College of Nursing, University of Utah, Salt Lake City, UT 84112, United States
- Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, United States
- Anna C Beck
- Department of Internal Medicine, Huntsman Cancer Institute, University of Utah, Salt Lake City, UT 84112, United States
- George Chalkidis
- Healthcare IT Research Department, Center for Digital Services, Hitachi Ltd., Tokyo, Japan
- Carolyn H Scheese
- College of Nursing, University of Utah, Salt Lake City, UT 84112, United States
- Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, United States
- Teresa Taft
- Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, United States
- Jia-Wen Guo
- College of Nursing, University of Utah, Salt Lake City, UT 84112, United States
- Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, United States
- Michael G Newman
- Department of Population Sciences, Huntsman Cancer Institute, Salt Lake City, UT 84112, United States
- Kensaku Kawamoto
- Department of Biomedical Informatics, School of Medicine, University of Utah, Salt Lake City, UT 84108, United States
- Elizabeth A Sloss
- College of Nursing, University of Utah, Salt Lake City, UT 84112, United States
- Jordan P McPherson
- Department of Pharmacotherapy, College of Pharmacy, University of Utah, Salt Lake City, UT 84108, United States
- Department of Pharmacy, Huntsman Cancer Institute, Salt Lake City, UT 84112, United States
6
Shi J, Bendig D, Vollmar HC, Rasche P. Mapping the Bibliometrics Landscape of AI in Medicine: Methodological Study. J Med Internet Res 2023; 25:e45815. [PMID: 38064255] [PMCID: PMC10746970] [DOI: 10.2196/45815]
Abstract
BACKGROUND Artificial intelligence (AI), conceived in the 1950s, has permeated numerous industries, intensifying in tandem with advancements in computing power. Despite the widespread adoption of AI, its integration into medicine trails other sectors. However, medical AI research has experienced substantial growth, attracting considerable attention from researchers and practitioners. OBJECTIVE In the absence of an existing framework, this study aims to outline the current landscape of medical AI research and provide insights into its future development by examining all AI-related studies within PubMed over the past 2 decades. We also propose potential data acquisition and analysis methods, developed using Python (version 3.11) and executed in Spyder IDE (version 5.4.3), for future analogous research. METHODS Our dual-pronged approach involved (1) retrieving publication metadata related to AI from PubMed (spanning 2000-2022) via Python, including titles, abstracts, authors, journals, countries, and publication years, followed by keyword frequency analysis, and (2) classifying relevant topics using latent Dirichlet allocation, an unsupervised machine learning approach, to define the research scope of AI in medicine. In the absence of a universal medical AI taxonomy, we used an AI dictionary based on the European Commission Joint Research Centre AI Watch report, which emphasizes 8 domains: reasoning, planning, learning, perception, communication, integration and interaction, service, and AI ethics and philosophy. RESULTS A comprehensive analysis of 307,701 AI-related publications from PubMed between 2000 and 2022 revealed a 36-fold increase. The United States emerged as the clear frontrunner, producing 68,502 of these articles. Despite its substantial contribution in volume, China lagged in citation impact. Among the specific AI domains categorized by the Joint Research Centre AI Watch report, the learning domain emerged as dominant. Our classification analysis traced the nuanced research trajectories within each domain, revealing the multifaceted and evolving nature of AI's application in medicine. CONCLUSIONS Research topics have evolved as the volume of AI studies has grown annually. Machine learning remains central to medical AI research, with deep learning expected to maintain its fundamental role. Empowered by predictive algorithms, pattern recognition, and imaging analysis capabilities, future AI research in medicine is anticipated to concentrate on medical diagnosis, robotic intervention, and disease management. Our topic modeling outcomes provide clear insight into the focus of AI research in medicine over the past decades and lay the groundwork for predicting future directions. The domains that have attracted considerable research attention, primarily the learning domain, will continue to shape the trajectory of AI in medicine. Given the observed growing interest, the domain of AI ethics and philosophy also stands out as a prospective area of increased focus.
Affiliation(s)
- Jin Shi
- Institute for Entrepreneurship, University of Münster, Münster, Germany
- David Bendig
- Institute for Entrepreneurship, University of Münster, Münster, Germany
- Peter Rasche
- Department of Healthcare, University of Applied Science - Hochschule Niederrhein, Krefeld, Germany
7
Gruber K. The growing threat of cyberwarfare in cancer healthcare. Nat Cancer 2023; 4:1615-1617. [PMID: 38102356] [DOI: 10.1038/s43018-023-00659-z]
8
Katta MR, Kalluru PKR, Bavishi DA, Hameed M, Valisekka SS. Artificial intelligence in pancreatic cancer: diagnosis, limitations, and the future prospects-a narrative review. J Cancer Res Clin Oncol 2023. [PMID: 36739356] [DOI: 10.1007/s00432-023-04625-1]
Abstract
PURPOSE This review aims to explore the role of AI in the application of pancreatic cancer management and make recommendations to minimize the impact of the limitations to provide further benefits from AI use in the future. METHODS A comprehensive review of the literature was conducted using a combination of MeSH keywords, including "Artificial intelligence", "Pancreatic cancer", "Diagnosis", and "Limitations". RESULTS The beneficial implications of AI in the detection of biomarkers, diagnosis, and prognosis of pancreatic cancer have been explored. In addition, current drawbacks of AI use have been divided into subcategories encompassing statistical, training, and knowledge limitations; data handling, ethical and medicolegal aspects; and clinical integration and implementation. CONCLUSION Artificial intelligence (AI) refers to computational machine systems that accomplish a set of given tasks by imitating human intelligence in an exponential learning pattern. AI in gastrointestinal oncology has continued to provide significant advancements in the clinical, molecular, and radiological diagnosis and intervention techniques required to improve the prognosis of many gastrointestinal cancer types, particularly pancreatic cancer.
Affiliation(s)
- Maha Hameed
- Clinical Research Department, King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia
9
Chaddad A, Peng J, Xu J, Bouridane A. Survey of Explainable AI Techniques in Healthcare. Sensors (Basel) 2023; 23:634. [PMID: 36679430] [PMCID: PMC9862413] [DOI: 10.3390/s23020634]
Abstract
Artificial intelligence (AI) with deep learning models has been widely applied in numerous domains, including medical imaging and healthcare tasks. In the medical field, any judgment or decision is fraught with risk. A doctor will carefully judge whether a patient is sick before forming a reasonable explanation based on the patient's symptoms and/or an examination. Therefore, to be a viable and accepted tool, AI needs to mimic human judgment and interpretation skills. Specifically, explainable AI (XAI) aims to explain the information behind the black-box model of deep learning that reveals how the decisions are made. This paper provides a survey of the most recent XAI techniques used in healthcare and related medical imaging applications. We summarize and categorize the XAI types, and highlight the algorithms used to increase interpretability in medical imaging topics. In addition, we focus on the challenging XAI problems in medical applications and provide guidelines to develop better interpretations of deep learning models using XAI concepts in medical image and text analysis. Furthermore, this survey provides future directions to guide developers and researchers for future prospective investigations on clinical topics, particularly on applications with medical imaging.
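As a concrete instance of the model-agnostic techniques such a survey covers, occlusion sensitivity masks parts of an input image and measures how the model's output changes; regions whose occlusion lowers the score most are the ones the decision depends on. The tiny "classifier" below is a hypothetical stand-in for illustration, not a model from the survey.

```python
# Minimal occlusion-sensitivity sketch: slide a mask over the image and
# record the drop in the model's score for each occluded patch.
import numpy as np

def model_score(img):
    # Hypothetical classifier: responds to brightness in the centre region
    return float(img[2:6, 2:6].mean())

def occlusion_map(img, patch=2):
    base = model_score(img)
    heat = np.zeros_like(img, dtype=float)
    for r in range(0, img.shape[0], patch):
        for c in range(0, img.shape[1], patch):
            masked = img.copy()
            masked[r:r+patch, c:c+patch] = 0.0  # occlude one patch
            heat[r:r+patch, c:c+patch] = base - model_score(masked)
    return heat  # high values = regions the score depends on

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0  # bright "lesion" in the centre
heat = occlusion_map(img)
```

The resulting heat map highlights the central patch and leaves the background near zero, which is the kind of post-hoc explanation clinicians can inspect against the anatomy.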
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- The Laboratory for Imagery Vision and Artificial Intelligence, Ecole de Technologie Superieure, 1100 Rue Notre Dame O, Montreal, QC H3C 1K3, Canada
- Jihao Peng
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- Jian Xu
- School of Artificial Intelligence, Guilin University of Electronic Technology, Jinji Road, Guilin 541004, China
- Ahmed Bouridane
- Centre for Data Analytics and Cybersecurity, University of Sharjah, Sharjah 27272, United Arab Emirates
10
Müller S. Is there a civic duty to support medical AI development by sharing electronic health records? BMC Med Ethics 2022; 23:134. [PMID: 36496427] [PMCID: PMC9736708] [DOI: 10.1186/s12910-022-00871-z]
Abstract
Medical artificial intelligence (AI) is considered to be one of the most important assets for the future of innovative individual and public health care. To develop innovative medical AI, it is necessary to repurpose data that are primarily generated in and for the health care context. Usually, health data can only be put to a secondary use if data subjects provide their informed consent (IC). This regulation, however, is believed to slow down or even prevent vital medical research, including AI development. For this reason, a number of scholars advocate a moral civic duty to share electronic health records (EHRs) that overrides IC requirements in certain contexts. In the medical AI context, the common arguments for such a duty have not been subjected to a comprehensive challenge. This article sheds light on the correlation between two normative discourses concerning informed consent for secondary health record use and the development and use of medical AI. There are three main arguments in favour of a civic duty to support certain developments in medical AI by sharing EHRs: the 'rule to rescue argument', the 'low risks, high benefits argument', and the 'property rights argument'. This article critiques all three arguments because they either derive a civic duty from premises that do not apply to the medical AI context, or they rely on inappropriate analogies, or they ignore significant risks entailed by the EHR sharing process and the use of medical AI. Given this result, the article proposes an alternative civic responsibility approach that can attribute different responsibilities to different social groups and individuals and that can contextualise those responsibilities for the purpose of medical AI development.
Affiliation(s)
- Sebastian Müller
- Center for Life Ethics/Heinrich Hertz Chair TRA4, University of Bonn, Schaumburg-Lippe-Straße 5-7, 53113 Bonn, Germany
11
Hantel A, Clancy DD, Kehl KL, Marron JM, Van Allen EM, Abel GA. A Process Framework for Ethically Deploying Artificial Intelligence in Oncology. J Clin Oncol 2022; 40:3907-3911. [PMID: 35849792] [PMCID: PMC9746763] [DOI: 10.1200/jco.22.01113]
12
Hatherley J, Sparrow R. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges. J Am Med Inform Assoc 2022; 30:361-366. [PMID: 36377970] [PMCID: PMC9846684] [DOI: 10.1093/jamia/ocac218]
Abstract
OBJECTIVES Machine learning (ML) has the potential to facilitate "continual learning" in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such "adaptive" ML systems in medicine that have, thus far, been neglected in the literature. TARGET AUDIENCE The target audiences for this tutorial are the developers of ML AI systems, healthcare regulators, the broader medical informatics community, and practicing clinicians. SCOPE Discussions of adaptive ML systems to date have overlooked the distinction between 2 sorts of variance that such systems may exhibit-diachronic evolution (change over time) and synchronic variation (difference between cotemporaneous instantiations of the algorithm at different sites)-and underestimated the significance of the latter. We highlight the challenges that diachronic evolution and synchronic variation present for the quality of patient care, informed consent, and equity, and discuss the complex ethical trade-offs involved in the design of such systems.
Affiliation(s)
- Joshua Hatherley
- Corresponding Author: Joshua Hatherley, MBioethics, Philosophy Department, School of Philosophical, Historical and International Studies, Monash University, Level 6, 20 Chancellor's Walk (Menzies Building), Wellington Road, Clayton, VIC 3800, Australia;
- Robert Sparrow
- Philosophy Department, School of Philosophical, Historical and International Studies, Monash University, Clayton, Victoria 3800, Australia
13
Schwarz GM, Simon S, Mitterer JA, Frank BJH, Aichmair A, Dominkus M, Hofstaetter JG. Artificial intelligence enables reliable and standardized measurements of implant alignment in long leg radiographs with total knee arthroplasties. Knee Surg Sports Traumatol Arthrosc 2022; 30:2538-2547. [PMID: 35819465] [DOI: 10.1007/s00167-022-07037-9]
Abstract
PURPOSE The purpose of this study was to evaluate the reliability of a newly developed AI algorithm for the evaluation of long leg radiographs (LLR) after total knee arthroplasty (TKA). METHODS In the validation cohort, 200 calibrated LLRs of eight common unconstrained and constrained knee systems were analysed. Accuracy and reproducibility of the AI algorithm were compared to manual reads with respect to the hip-knee-ankle (HKA), femoral component (FCA), and tibial component (TCA) angles. In the evaluation cohort, all institutional LLRs with TKAs in 2018 (n = 1312) were evaluated to assess the algorithm's ability to handle large data sets. The intraclass correlation coefficient (ICC) and mean absolute deviation (sMAD) were calculated to assess conformity between the AI software and manual reads. RESULTS Validation cohort: the AI software was reproducible on 96% and reliable on 92.1% of LLRs with an output, and showed excellent reliability in all measured angles (ICC > 0.97) compared to manual measurements. Excellent results were found for primary unconstrained TKAs. In constrained TKAs, landmark setting on the femoral and tibial components failed in 12.5% of LLRs (n = 9). Evaluation cohort: mean measurements for all postoperative TKAs (n = 1240) were 0.2° varus ± 2.5° (HKA), 89.3° ± 1.9° (FCA), and 89.1° ± 1.6° (TCA). Mean measurements on preoperative revision TKAs (n = 74) were 1.6° varus ± 6.4° (HKA), 90.5° ± 3.1° (FCA), and 88.9° ± 4.1° (TCA). CONCLUSIONS AI-powered applications are reliable for automated analysis of lower limb alignment on LLRs with TKAs. They are capable of handling large data sets and could therefore lead to more standardized and efficient postoperative quality controls. LEVEL OF EVIDENCE Diagnostic Level III.
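The agreement statistics reported in this abstract can be sketched as follows. The ICC(2,1) form (two-way random effects, single measurement) and the toy angle values are assumptions for illustration; they are not the study's data or exact formulation.

```python
# Sketch of agreement between AI and manual angle readings: a two-way
# random-effects intraclass correlation, ICC(2,1), plus the mean
# absolute deviation. Toy HKA values are illustrative assumptions.
import numpy as np

def icc_2_1(ratings):
    """ratings: (n_subjects, k_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                       # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical hip-knee-ankle angles in degrees: AI read vs manual read
ai     = np.array([0.5, -2.1, 3.0, 1.2, -0.4, 4.8])
manual = np.array([0.7, -1.9, 3.2, 1.0, -0.6, 5.1])

icc = icc_2_1(np.column_stack([ai, manual]))
mad = np.abs(ai - manual).mean()
```

With near-identical readings and subject-to-subject spread much larger than the read-to-read differences, the ICC approaches 1, which is the pattern behind the abstract's ICC > 0.97.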
Affiliation(s)
- Gilbert M Schwarz
- Department of Orthopedics and Trauma-Surgery, Medical University of Vienna, Währinger Gürtel 18-20, 1090, Vienna, Austria
- Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Center for Anatomy and Cell Biology, Medical University Vienna, Währinger Straße 13, 1090, Vienna, Austria
- Sebastian Simon
- Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- 2nd Department, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Jennyfer A Mitterer
- Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Bernhard J H Frank
- Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Alexander Aichmair
- Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- 2nd Department, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- Martin Dominkus
- 2nd Department, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- School of Medicine, Sigmund Freud University Vienna, Freudplatz 3, 1020, Vienna, Austria
- Jochen G Hofstaetter
- Michael Ogon Laboratory for Orthopaedic Research, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
- 2nd Department, Orthopaedic Hospital Vienna Speising, Speisinger Straße 109, 1130, Vienna, Austria
14
Wellnhofer E. Real-World and Regulatory Perspectives of Artificial Intelligence in Cardiovascular Imaging. Front Cardiovasc Med 2022; 9:890809. [PMID: 35935648] [PMCID: PMC9354141] [DOI: 10.3389/fcvm.2022.890809]
Abstract
Recent progress in digital health data recording, advances in computing power, and methodological approaches such as artificial intelligence that extract information from data are expected to have a disruptive impact on medical technology. One potential benefit is the ability to extract new and essential insights from the vast amount of data generated during health care delivery every day. Cardiovascular imaging is boosted by new intelligent automatic methods to manage, process, segment, and analyze petabytes of image data, exceeding historical manual capacities. Algorithms that learn from data raise new challenges for regulatory bodies. Partially autonomous behavior, adaptive modifications, and a lack of transparency in deriving evidence from complex data pose considerable problems. Controlling new technologies requires new oversight techniques and ongoing regulatory research. All stakeholders must participate in the quest to find a fair balance between innovation and regulation. The regulatory approach to artificial intelligence must be risk-based and resilient. A focus on unknown emerging risks demands continuous surveillance and clinical evaluation during the total product life cycle. Since learning algorithms are data-driven, high-quality data are fundamental for good machine learning practice. Mining, processing, validation, governance, and data control must account for bias, error, inappropriate use, drift, and shift, particularly in real-world data. Regulators worldwide are tackling the twenty-first-century challenges raised by "learning" medical devices. Ethical concerns and regulatory approaches are presented. The paper concludes with a discussion of the future of responsible artificial intelligence.
Collapse
Affiliation(s)
- Ernst Wellnhofer
- Institute of Computer-Assisted Cardiovascular Medicine, Charité University Medicine Berlin, Berlin, Germany
| |
Collapse
|
15
|
Bhatia S, Bansal D, Patil S, Pandya S, Ilyas QM, Imran S. A Retrospective Study of Climate Change Affecting Dengue: Evidences, Challenges and Future Directions. Front Public Health 2022; 10:884645. [PMID: 35712272 PMCID: PMC9197220 DOI: 10.3389/fpubh.2022.884645] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2022] [Accepted: 04/26/2022] [Indexed: 11/30/2022] Open
Abstract
Climate change produces unexpected weather patterns that can create alarming situations. Various sectors are affected by climate change, and one of them is healthcare. As a result of climate change, the geographic range of several vector-borne human infectious diseases will expand. Currently, dengue is taking its toll, and climate change is one of the key reasons contributing to the intensification of dengue disease transmission. The most important climatic factors linked to dengue transmission are temperature, rainfall, and relative humidity. The present study carries out a systematic literature review on surveillance systems that predict dengue outbreaks using machine learning modeling techniques. The review discusses the methodology and objectives, the number of studies carried out in different regions and periods, and the association between climatic factors and the increase in positive dengue cases. This study also includes a detailed investigation of meteorological data, dengue-positive patient data, and the pre-processing techniques used for data cleaning. Furthermore, the correlation techniques used in several studies to determine the relationship between dengue incidence and meteorological parameters, as well as machine learning models for predictive analysis, are discussed. Finally, several research challenges and limitations of current work are discussed as future directions for creating a dengue surveillance system.
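The rank-correlation analyses surveyed in this review can be illustrated with a minimal, self-contained sketch. The temperature and case values below are invented for illustration; the studies reviewed use full meteorological and case time series.

```python
def rank(values):
    # Average ranks (1-based); tied values share the mean of their positions
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the ranks
    return pearson(rank(x), rank(y))

# Hypothetical monthly mean temperature (°C) vs. dengue case counts
temperature = [24, 26, 28, 30, 31, 29]
cases = [12, 20, 35, 60, 80, 55]
print(round(spearman(temperature, cases), 3))
```

A perfectly monotone relationship, as in this toy series, yields a rho of 1; real climate-dengue associations are weaker and typically lagged, which is why the reviewed studies also examine lagged correlations.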
Collapse
Affiliation(s)
- Surbhi Bhatia
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
| | - Dhruvisha Bansal
- Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
| | - Seema Patil
- Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
| | - Sharnil Pandya
- Symbiosis Institute of Technology, Symbiosis International (Deemed) University, Pune, India
| | - Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
| | - Sajida Imran
- Department of Computer Engineering, College of Computer Sciences and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
| |
Collapse
|
16
|
Fully automated deep learning for knee alignment assessment in lower extremity radiographs: a cross-sectional diagnostic study. Skeletal Radiol 2022; 51:1249-1259. [PMID: 34773485 DOI: 10.1007/s00256-021-03948-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 10/27/2021] [Accepted: 10/27/2021] [Indexed: 02/02/2023]
Abstract
OBJECTIVES Knee alignment and leg length discrepancy are currently measured manually from standing long-leg radiographs (LLR), a process that is both time consuming and poorly reproducible. The aim was to assess the performance of a commercially available AI software tool by comparing its outputs with manually performed measurements. MATERIALS AND METHODS The AI was trained on over 15,000 radiographs to measure various clinical angles and lengths from LLRs. We performed a retrospective single-center analysis on 295 LLRs obtained between 2015 and 2020 from male and female patients over 18 years of age. AI and expert measurements were performed independently. Kellgren-Lawrence score and reading time were assessed. All measurements were compared, and non-inferiority, mean absolute deviation (sMAD), and intraclass correlation (ICC) were calculated. RESULTS A total of 295 LLRs from 284 patients (mean age, 65 years (18; 90); 97 (34.2%) men) were analyzed. The AI model produced outputs on 98.0% of the LLRs. Manual annotations were considered 100% accurate. The divergence of each measurement was calculated, resulting in an overall accuracy of 89.2% when comparing the AI outputs to the manual measurements. AI vs. the mean observer revealed an sMAD between 0.39 and 2.19° for angles and 1.45-5.00 mm for lengths. AI showed good reliability in all lengths and angles (ICC ≥ 0.87). Non-inferiority testing of AI against the mean observer revealed an equivalence index (γ) between 0.54 and 3.03° for angles and -0.70 to 1.95 mm for lengths. On average, AI was 130 s faster than clinicians. CONCLUSION Automated knee alignment and length measurements produced with an AI tool are reproducible and accurate, with time savings compared to manually acquired measurements.
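The agreement statistics reported above can be sketched with a small example. The sketch computes a plain mean absolute deviation and a two-way absolute-agreement ICC(2,1) for paired AI and observer measurements; the angle values are invented, and the study's exact sMAD standardization is not reproduced here.

```python
def mean_abs_dev(a, b):
    # Mean absolute deviation between paired measurements
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def icc2_1(ratings):
    # Two-way random-effects, absolute-agreement, single-rater ICC(2,1).
    # `ratings` is a list of rows: one row per subject, one column per rater.
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)               # between-subjects mean square
    msc = ss_cols / (k - 1)               # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical hip-knee-ankle angles (degrees): [AI, human observer] per knee
angles = [[178.2, 178.5], [182.1, 181.8], [175.4, 175.9], [179.8, 180.1]]
print(round(mean_abs_dev([a for a, _ in angles], [b for _, b in angles]), 2))
print(round(icc2_1(angles), 3))
```

ICC(2,1) is one of several ICC variants; which variant a study reports matters, since consistency-type forms ignore systematic offsets between raters that absolute-agreement forms penalize.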
Collapse
|
17
|
Funer F. The Deception of Certainty: how Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A deliberative-relational Approach. MEDICINE, HEALTH CARE AND PHILOSOPHY 2022; 25:167-178. [PMID: 35538267 PMCID: PMC9089291 DOI: 10.1007/s11019-022-10076-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 03/03/2022] [Accepted: 03/03/2022] [Indexed: 02/06/2023]
Abstract
Developments in Machine Learning (ML) have attracted attention in a wide range of healthcare fields as a means to improve medical practice and benefit patients. In particular, this is to be achieved by providing more or less automated decision recommendations to the treating physician. However, some of the hopes placed in ML for healthcare seem to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism stems primarily from the fact that the physician, as the person responsible for diagnosis, therapy, and care, has no or insufficient insight into how such recommendations are reached. The following paper aims to make understandable the specificity of the deliberative model of the physician-patient relationship that has been achieved over decades. By outlining the (social-)epistemic and inherently normative relationship between physicians and patients, I want to show how this relationship might be altered by non-traceable ML recommendations. With respect to some healthcare decisions, such changes in deliberative practice may create normatively far-reaching challenges. Therefore, in the future, a differentiation of decision-making situations in healthcare with respect to the necessary depth of insight into the process of outcome generation seems essential.
Collapse
|
18
|
Kiseleva A, Kotzinos D, De Hert P. Transparency of AI in Healthcare as a Multilayered System of Accountabilities: Between Legal Requirements and Technical Limitations. Front Artif Intell 2022; 5:879603. [PMID: 35707765 PMCID: PMC9189302 DOI: 10.3389/frai.2022.879603] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2022] [Accepted: 03/31/2022] [Indexed: 11/13/2022] Open
Abstract
The lack of transparency is one of artificial intelligence (AI)'s fundamental challenges, but the concept of transparency might be even more opaque than AI itself. Researchers in different fields who attempt to provide solutions to improve AI's transparency articulate different but neighboring concepts that include, besides transparency, explainability and interpretability. Yet there is no common taxonomy, either within one field (such as data science) or between different fields (law and data science). In certain areas like healthcare, the requirements of transparency are crucial since decisions directly affect people's lives. In this paper, we suggest an interdisciplinary vision of how to tackle the issue of AI's transparency in healthcare, and we propose a single point of reference for both legal scholars and data scientists on transparency and related concepts. Based on an analysis of European Union (EU) legislation and the literature in computer science, we submit that transparency shall be considered a “way of thinking” and an umbrella concept characterizing the process of AI's development and use. Transparency shall be achieved through a set of measures such as interpretability and explainability, communication, auditability, traceability, information provision, record-keeping, data governance and management, and documentation. This approach to dealing with transparency is general in nature, but transparency measures shall always be contextualized. By analyzing transparency in the healthcare context, we submit that it shall be viewed as a system of accountabilities of the involved subjects (AI developers, healthcare professionals, and patients) distributed at different layers (the insider, internal, and external layers, respectively). The transparency-related accountabilities shall be built into the existing accountability picture, which justifies the need to investigate the relevant legal frameworks. These frameworks correspond to different layers of the transparency system. The requirement of informed medical consent correlates to the external layer of transparency, and the Medical Devices Framework is relevant to the insider and internal layers. We investigate the said frameworks to inform AI developers of what is already expected from them with regard to transparency. We also identify gaps in the existing legislative frameworks concerning AI's transparency in healthcare and suggest solutions to fill them.
Collapse
Affiliation(s)
- Anastasiya Kiseleva
- LSTS Research Group (Law, Science, Technology and Society), Faculty of Law, Vrije Universiteit Brussels, Brussels, Belgium
- ETIS Research Lab, Faculty of Computer Science, CY Cergy Paris University, Cergy-Pontoise, France
- *Correspondence: Anastasiya Kiseleva
| | - Dimitris Kotzinos
- ETIS Research Lab, Faculty of Computer Science, CY Cergy Paris University, Cergy-Pontoise, France
| | - Paul De Hert
- LSTS Research Group (Law, Science, Technology and Society), Faculty of Law, Vrije Universiteit Brussels, Brussels, Belgium
| |
Collapse
|
19
|
Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int J Med Inform 2022; 161:104738. [PMID: 35299098 DOI: 10.1016/j.ijmedinf.2022.104738] [Citation(s) in RCA: 45] [Impact Index Per Article: 22.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 02/11/2022] [Accepted: 03/10/2022] [Indexed: 10/18/2022]
Abstract
INTRODUCTION Recent developments in the field of Artificial Intelligence (AI) applied to healthcare promise to solve many of the existing global issues in advancing human health and managing global health challenges. This comprehensive review aims to surface not only the underlying ethical and legal but also the social implications (ELSI) that have been overlooked in recent reviews while deserving equal attention in the development stage, and certainly ahead of implementation in healthcare. It is intended to guide various stakeholders (e.g., designers, engineers, clinicians) in addressing the ELSI of AI at the design stage using the Ethics by Design (EbD) approach. METHODS The authors followed a systematised scoping methodology and searched the following databases for the ELSI of AI in healthcare through January 2021: PubMed, Web of Science, Ovid, Scopus, IEEE Xplore, and EBSCO Search (Academic Search Premier, CINAHL, PsycINFO, APA PsycArticles, ERIC). Data were charted and synthesised, and the authors conducted a descriptive and thematic analysis of the collected data. RESULTS After reviewing 1108 papers, 94 were included in the final analysis. Our results show a growing interest in the academic community in ELSI in the field of AI. The main issues of concern identified in our analysis fall into four main clusters of impact: AI algorithms, physicians, patients, and healthcare in general. The most prevalent issues are patient safety, algorithmic transparency, lack of proper regulation, liability and accountability, impact on the patient-physician relationship, and governance of AI-empowered healthcare. CONCLUSIONS The results of our review confirm the potential of AI to significantly improve patient care, but the drawbacks to its implementation relate to complex ELSI that have yet to be addressed. Most ELSI refer to the impact on and extension of the reciprocal and fiduciary patient-physician relationship. With the integration of AI-based decision-making tools, a bilateral patient-physician relationship may shift into a trilateral one.
Collapse
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
| | - Ana Tomičić
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
| | - Elvira Lazić Mosler
- School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; General Hospital Dr. Ivo Pedišić, Sisak, Croatia.
| |
Collapse
|
20
|
Iqbal JD, Christen M. The use of artificial intelligence applications in medicine and the standard required for healthcare provider-patient briefings-an exploratory study. Digit Health 2022; 8:20552076221147423. [PMID: 36601281 PMCID: PMC9806394 DOI: 10.1177/20552076221147423] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Accepted: 12/08/2022] [Indexed: 12/28/2022] Open
Abstract
Introduction Digital Health Technologies (DHTs) are currently being funneled through legacy regulatory processes that are not adapted to the unique particularities of this new technology class. In the absence of adequate regulation of DHTs, the briefing of a patient by their healthcare provider (HCP) as a component of informed consent can represent the last line of defense before potentially harmful technologies are employed on a patient. Methods This exploratory study utilizes a case vignette of a machine learning-based technology for the diagnosis of ischemic heart disease that is presented to a group of medical students, physicians, and bioethicists. What constitutes the necessary standard and content of HCP-patient briefings is explored using a survey (N = 34). Whether participants actually provide a sufficient HCP-patient briefing is evaluated based on audio recordings. Results and Conclusions We find that participants deem that artificial intelligence use in a medical context should be disclosed to patients and argue that the explanation should currently follow the standard required of other experimental procedures. Further, since our study indicates that the implementation of HCP-patient briefings lags behind the identified standard, opportunities for incorporating training on the use of DHTs into medical curricula and continuous training schedules should be considered.
Collapse
Affiliation(s)
- Jeffrey David Iqbal
- Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Digital Society Initiative, University of Zurich, Zürich, Switzerland
| | - Markus Christen
- Faculty of Medicine, University of Zurich, Zurich, Switzerland
- Digital Society Initiative, University of Zurich, Zürich, Switzerland
| |
Collapse
|
21
|
Babaei Rikan S, Sorayaie Azar A, Ghafari A, Bagherzadeh Mohasefi J, Pirnejad H. COVID-19 Diagnosis from Routine Blood Tests using Artificial Intelligence Techniques. Biomed Signal Process Control 2021; 72:103263. [PMID: 34745318 PMCID: PMC8559794 DOI: 10.1016/j.bspc.2021.103263] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Revised: 09/30/2021] [Accepted: 10/15/2021] [Indexed: 12/21/2022]
Abstract
Coronavirus disease (COVID-19) is a unique worldwide pandemic. With new mutations of the virus showing higher transmission rates, it is imperative to diagnose positive cases as quickly and accurately as possible. A fast, accurate, and automatic system for COVID-19 diagnosis can therefore be very useful for clinicians. In this study, seven machine learning and four deep learning models were presented to diagnose positive cases of COVID-19 from three routine laboratory blood test datasets. Three correlation coefficient methods, i.e., Pearson, Spearman, and Kendall, were used to demonstrate the relevance among samples. A four-fold cross-validation method was used to train, validate, and test the proposed models. In all three datasets, the proposed deep neural network (DNN) model achieved the highest values of accuracy, precision, recall (sensitivity), specificity, F1-score, AUC, and MCC. On average, accuracy of 92.11%, specificity of 84.56%, and AUC of 92.20% were obtained in the first dataset; accuracy of 93.16%, specificity of 93.02%, and AUC of 93.20% in the second; and accuracy of 92.5%, specificity of 85%, and AUC of 92.20% in the third. A statistical t-test was used to validate the results. Finally, using artificial intelligence interpretation methods, important and impactful features in the developed model were presented. The proposed DNN model can be used as a supplementary tool for diagnosing COVID-19, quickly providing clinicians with highly accurate diagnoses of positive cases.
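The evaluation protocol described in this abstract (k-fold splitting plus per-fold accuracy and specificity) can be sketched in outline. The toy "blood test" values and the single-threshold classifier below are illustrative stand-ins, not the study's models or data.

```python
import random

def k_fold_indices(n, k, seed=0):
    # Shuffle sample indices and deal them into k roughly equal folds
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def specificity(y_true, y_pred):
    # True-negative rate: TN / (TN + FP)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp)

def accuracy(y_true, y_pred):
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# Toy single-feature dataset; a real pipeline would train a model per fold
values = [3.1, 2.8, 7.2, 6.9, 2.5, 7.8, 3.3, 6.5]
labels = [0, 0, 1, 1, 0, 1, 0, 1]

scores = []
for test_idx in k_fold_indices(len(values), k=4):
    train_idx = [i for i in range(len(values)) if i not in test_idx]
    # "Training": place a threshold midway between the training class means
    pos = [values[i] for i in train_idx if labels[i] == 1]
    neg = [values[i] for i in train_idx if labels[i] == 0]
    thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    y_true = [labels[i] for i in test_idx]
    y_pred = [1 if values[i] > thr else 0 for i in test_idx]
    scores.append(accuracy(y_true, y_pred))
print(sum(scores) / len(scores))
```

Averaging per-fold metrics, as done here and in the abstract, gives a more stable estimate than a single train/test split, since every sample is used for testing exactly once.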
Collapse
Affiliation(s)
| | | | - Ali Ghafari
- Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran
| | | | - Habibollah Pirnejad
- Patient Safety Research Center, Clinical Research Institute, Urmia University of Medical Sciences, Urmia, Iran
- Erasmus School of Health Policy & Management (ESHPM), Erasmus University Rotterdam, Rotterdam, The Netherlands
| |
Collapse
|
22
|
Coppola F, Faggioni L, Gabelloni M, De Vietro F, Mendola V, Cattabriga A, Cocozza MA, Vara G, Piccinino A, Lo Monaco S, Pastore LV, Mottola M, Malavasi S, Bevilacqua A, Neri E, Golfieri R. Human, All Too Human? An All-Around Appraisal of the "Artificial Intelligence Revolution" in Medical Imaging. Front Psychol 2021; 12:710982. [PMID: 34650476 PMCID: PMC8505993 DOI: 10.3389/fpsyg.2021.710982] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Accepted: 09/02/2021] [Indexed: 12/22/2022] Open
Abstract
Artificial intelligence (AI) has seen dramatic growth over the past decade, evolving from a niche super specialty computer application into a powerful tool which has revolutionized many areas of our professional and daily lives, and the potential of which seems to be still largely untapped. The field of medicine and medical imaging, as one of its various specialties, has gained considerable benefit from AI, including improved diagnostic accuracy and the possibility of predicting individual patient outcomes and options of more personalized treatment. It should be noted that this process can actively support the ongoing development of advanced, highly specific treatment strategies (e.g., target therapies for cancer patients) while enabling faster workflow and more efficient use of healthcare resources. The potential advantages of AI over conventional methods have made it attractive for physicians and other healthcare stakeholders, raising much interest in both the research and the industry communities. However, the fast development of AI has unveiled its potential for disrupting the work of healthcare professionals, spawning concerns among radiologists that, in the future, AI may outperform them, thus damaging their reputations or putting their jobs at risk. Furthermore, this development has raised relevant psychological, ethical, and medico-legal issues which need to be addressed for AI to be considered fully capable of patient management. The aim of this review is to provide a brief, hopefully exhaustive, overview of the state of the art of AI systems regarding medical imaging, with a special focus on how AI and the entire healthcare environment should be prepared to accomplish the goal of a more advanced human-centered world.
Collapse
Affiliation(s)
- Francesca Coppola
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
- SIRM Foundation, Italian Society of Medical and Interventional Radiology, Milan, Italy
| | - Lorenzo Faggioni
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
| | - Michela Gabelloni
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
| | - Fabrizio De Vietro
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
| | - Vincenzo Mendola
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
| | - Arrigo Cattabriga
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| | - Maria Adriana Cocozza
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| | - Giulio Vara
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| | - Alberto Piccinino
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| | - Silvia Lo Monaco
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| | - Luigi Vincenzo Pastore
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| | - Margherita Mottola
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
| | - Silvia Malavasi
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
| | - Alessandro Bevilacqua
- Department of Computer Science and Engineering, University of Bologna, Bologna, Italy
| | - Emanuele Neri
- SIRM Foundation, Italian Society of Medical and Interventional Radiology, Milan, Italy
- Academic Radiology, Department of Translational Research, University of Pisa, Pisa, Italy
| | - Rita Golfieri
- Department of Radiology, IRCCS Azienda Ospedaliero Universitaria di Bologna, Bologna, Italy
| |
Collapse
|
23
|
Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing Diabetic Retinopathy With Artificial Intelligence: What Information Should Be Included to Ensure Ethical Informed Consent? Front Med (Lausanne) 2021; 8:695217. [PMID: 34368192 PMCID: PMC8333706 DOI: 10.3389/fmed.2021.695217] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 06/22/2021] [Indexed: 11/13/2022] Open
Abstract
Purpose: The method of diagnosing diabetic retinopathy (DR) through artificial intelligence (AI)-based systems has been commercially available since 2018. This introduces new ethical challenges with regard to obtaining informed consent from patients. The purpose of this work is to develop a checklist of items to be disclosed when diagnosing DR with AI systems in a primary care setting. Methods: Two systematic literature searches were conducted in the PubMed and Web of Science databases: a narrow search focusing on DR and a broad search on general issues of AI-based diagnosis. An ethics content analysis was conducted inductively to extract two features of the included publications: (1) novel information content for AI-aided diagnosis and (2) the ethical justification for its disclosure. Results: The narrow search yielded n = 537 records, of which n = 4 met the inclusion criteria. The information process was scarcely addressed for the primary care setting. The broad search yielded n = 60 records, of which n = 11 were included. In total, eight novel elements were identified for inclusion in the information process for ethical reasons, all of which stem from the technical specifics of medical AI. Conclusions: The implications for the general practitioner are two-fold: First, doctors need to be better informed about the ethical implications of novel technologies and must understand them to properly inform patients. Second, patients' overconfidence or fears can be countered by communicating the risks, limitations, and potential benefits of diagnostic AI systems. If patients accept and are aware of the limitations of AI-aided diagnosis, they increase their chances of being diagnosed and treated in time.
Collapse
|