1. Zwanenburg A, Price G, Löck S. Artificial intelligence for response prediction and personalisation in radiation oncology. Strahlenther Onkol 2025;201:266-273. PMID: 39212687; PMCID: PMC11839704; DOI: 10.1007/s00066-024-02281-z.
Abstract
Artificial intelligence (AI) systems may personalise radiotherapy by assessing complex and multifaceted patient data and predicting tumour and normal tissue responses to radiotherapy. Here we describe three distinct generations of AI systems, namely personalised radiotherapy based on pretreatment data, response-driven radiotherapy and dynamically optimised radiotherapy. Finally, we discuss the main challenges in clinical translation of AI systems for radiotherapy personalisation.
Affiliation(s)
- Alex Zwanenburg
- OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Fetscherstr. 74, PF 41, 01307, Dresden, Germany.
- National Center for Tumor Diseases Dresden (NCT/UCC), Germany: German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany; Helmholtz-Zentrum Dresden-Rossendorf (HZDR), Dresden, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany.
- Gareth Price
- Division of Cancer Sciences, University of Manchester, Manchester, UK
- The Christie NHS Foundation Trust, Manchester, UK
- Steffen Löck
- OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Fetscherstr. 74, PF 41, 01307, Dresden, Germany
- Department of Radiotherapy and Radiation Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany

2. Grosek Š, Štivić S, Borovečki A, Ćurković M, Lajovic J, Marušić A, Mijatović A, Miksić M, Mimica S, Škrlep E, Lah Tomulić K, Erčulj V. Ethical attitudes and perspectives of AI use in medicine between Croatian and Slovenian faculty members of school of medicine: Cross-sectional study. PLoS One 2024;19:e0310599. PMID: 39637041; PMCID: PMC11620630; DOI: 10.1371/journal.pone.0310599.
Abstract
BACKGROUND Artificial intelligence (AI) is present in preclinical, clinical and research work across various branches of medicine. Researchers and teachers at schools of medicine may have different ethical attitudes and perspectives on the implementation of AI systems in medicine. METHODS We conducted an online survey among researchers and teachers (RTs) at the departments and institutes of two Slovenian and four Croatian schools of medicine. RESULTS The sample included 165 and 214 researchers and teachers in Slovenia and Croatia, respectively, and the two national samples were comparable in demographic characteristics. All participants reported placing high emphasis on bioethical principles when using artificial intelligence in medicine and on its usefulness in certain circumstances, but also caution regarding companies providing AI systems and tools. Slovenian and Croatian researchers and teachers shared three similar perspectives on the use of AI in medicine: complying with the highest ethical principles, explainability and transparency, and the usefulness of AI tools. Greater caution towards the use of AI in medicine and its effect on the autonomy of physicians was expressed in Croatia, whereas in Slovenia high emphasis was placed on understanding how AI works, alongside concerns regarding the willingness and time of physicians to learn about AI. CONCLUSION Slovenian and Croatian researchers and teachers share ethical attitudes and perspectives with international researchers and physicians. It is important to facilitate understanding of the implications of AI use in medicine and to set a solid evidence-based ground for tackling ethical and legal issues.
Affiliation(s)
- Štefan Grosek
- Neonatology Section, Department of Perinatology, Division of Gynaecology and Obstetrics, University Medical Centre Ljubljana, Ljubljana, Slovenia
- Stjepan Štivić
- Institute of Bioethics, Faculty of Theology, University of Ljubljana, Ljubljana, Slovenia
- Ana Borovečki
- School of Medicine, ‘A. Štampar’ School of Public Health, University of Zagreb, Zagreb, Croatia
- Rho Sigma Research & Consulting, Ljubljana, Slovenia
- Ana Marušić
- Center for Evidence-based Medicine, Department of Research in Biomedicine and Health, School of Medicine, University of Split, Split, Croatia
- Antonija Mijatović
- Center for Evidence-based Medicine, Department of Research in Biomedicine and Health, School of Medicine, University of Split, Split, Croatia
- Mirjana Miksić
- University Medical Centre Maribor, Clinic for Gynecology and Perinatology, Maribor, Slovenia
- Eva Škrlep
- Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
- Kristina Lah Tomulić
- Department of Pediatrics, Faculty of Medicine, University of Rijeka, Rijeka, Croatia; Pediatric Intensive Care Unit, Department of Pediatrics, Clinical Hospital Centre Rijeka, Rijeka, Croatia
- Vanja Erčulj
- Faculty of Criminal Justice and Security, University of Maribor, Ljubljana, Slovenia

3. Ursin F, Müller R, Funer F, Liedtke W, Renz D, Wiertz S, Ranisch R. Non-empirical methods for ethics research on digital technologies in medicine, health care and public health: a systematic journal review. Med Health Care Philos 2024;27:513-528. PMID: 39120780; PMCID: PMC11519279; DOI: 10.1007/s11019-024-10222-x.
Abstract
Bioethics has developed approaches to address ethical issues in health care, similar to how technology ethics provides guidelines for ethical research on artificial intelligence, big data, and robotic applications. As these digital technologies are increasingly used in medicine, health care and public health, it is plausible that the approaches of technology ethics have influenced bioethical research. Similar to the "empirical turn" in bioethics, which led to intense debates about appropriate moral theories, ethical frameworks and meta-ethics due to the increased use of empirical methodologies from the social sciences, the proliferation of health-related subtypes of technology ethics might have a comparable impact on current bioethical research. This systematic journal review analyses the reporting of ethical frameworks and non-empirical methods in argument-based research articles on digital technologies in medicine, health care and public health that have been published in high-impact bioethics journals. We focus on articles reporting non-empirical research in original contributions. Our aim is to describe the methods currently used for the ethical analysis of issues arising from the application of digital technologies in medicine, health care and public health. We confine our analysis to non-empirical methods because empirical methods have been well researched elsewhere. Finally, we discuss our findings against the background of established methods for health technology assessment, the lack of a typology for non-empirical methods, and conceptual and methodological change in bioethics. Our descriptive results may serve as a starting point for reflecting on whether current ethical frameworks and non-empirical methods are appropriate for researching ethical issues deriving from the application of digital technologies in medicine, health care and public health.
Affiliation(s)
- Frank Ursin
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Carl-Neuberg-Strasse 1, 30625, Hannover, Germany.
- Regina Müller
- Institute of Philosophy, University of Bremen, Enrique-Schmidt-Straße 7, 28359, Bremen, Germany
- Florian Funer
- Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstrasse 47, 72074, Tübingen, Germany
- Wenke Liedtke
- Faculty of Theology, University of Greifswald, Am Rubenowplatz 2-3, 17489, Greifswald, Germany
- David Renz
- Faculty of Protestant Theology, University of Bonn, Am Hofgarten 8, 53113, Bonn, Germany
- Svenja Wiertz
- Department of Medical Ethics and the History of Medicine, University of Freiburg, Stefan-Meier-Str. 26, 79104, Freiburg, Germany
- Robert Ranisch
- Junior Professorship for Medical Ethics with a Focus on Digitization, Faculty of Health Sciences Brandenburg, University of Potsdam, Am Mühlenberg 9, 14476, Potsdam, Golm, Germany

4. Sparrow R, Hatherley J, Oakley J, Bain C. Should the Use of Adaptive Machine Learning Systems in Medicine be Classified as Research? Am J Bioeth 2024;24:58-69. PMID: 38662360; DOI: 10.1080/15265161.2024.2337429.
Abstract
A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called "update problem," which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory approval. In this paper, we draw attention to a prior ethical question: whether the continuous learning that will occur in such systems after their initial deployment should be classified, and regulated, as medical research. We argue that there is a strong prima facie case that the use of continuous learning in medical ML systems should be categorized, and regulated, as research and that individuals whose treatment involves such systems should be treated as research subjects.

5. Vogt RL, Heck PR, Mestechkin RM, Heydari P, Chabris CF, Meyer MN. Aversion to pragmatic randomised controlled trials: three survey experiments with clinicians and laypeople in the USA. BMJ Open 2024;14:e084699. PMID: 39289015; PMCID: PMC11459322; DOI: 10.1136/bmjopen-2024-084699.
Abstract
OBJECTIVES Pragmatic randomised controlled trials (pRCTs) are essential for determining the real-world safety and effectiveness of healthcare interventions. However, both laypeople and clinicians often demonstrate experiment aversion: preferring to implement either of two interventions for everyone rather than comparing them to determine which is best. We studied whether clinician and layperson views of pRCTs for COVID-19, as well as non-COVID-19, interventions became more positive during the pandemic, which increased both the urgency and public discussion of pRCTs. DESIGN Randomised survey experiments. SETTING Geisinger, a network of hospitals and clinics in central and northeastern Pennsylvania, USA; Amazon Mechanical Turk, a research participant platform used to recruit online participants residing across the USA. Data were collected between August 2020 and February 2021. PARTICIPANTS 2149 clinicians (the types of people who conduct or make decisions about conducting pRCTs) and 2909 laypeople (the types of people who are included in pRCTs as patients). The clinician sample was primarily female (81%) and comprised doctors (15%), physician assistants (9%), registered nurses (54%) and other medical professionals, including other nurses, genetic counsellors and medical students (23%); the majority of clinicians (62%) had more than 10 years of experience. The layperson sample ranged in age from 18 to 88 years (mean=38, SD=13), and the majority were white (75%) and female (56%). OUTCOME MEASURES Participants read vignettes in which a hypothetical decision-maker who sought to improve health could choose to implement intervention A for all, implement intervention B for all, or experimentally compare A and B and implement the superior intervention. Participants rated and ranked the appropriateness of each decision. Experiment aversion was defined as the degree to which a participant rated the experiment below their lowest-rated intervention. RESULTS In a survey of laypeople administered during the pandemic, we found significant aversion to experiments involving catheterisation checklists and hypertension drugs unrelated to the treatment of COVID-19 (Cohen's d=0.25-0.46, p<0.001). Similarly, among both laypeople and clinicians, we found significant aversion to most (comparing different checklist, proning and mask protocols; Cohen's d=0.17-0.56, p<0.001) but not all (comparing school reopening protocols; Cohen's d=0.03, p=0.64) non-pharmaceutical COVID-19 experiments. Interestingly, we found the lowest experiment aversion to pharmaceutical COVID-19 experiments (comparing new drugs and new vaccine protocols for treating the novel coronavirus; Cohen's d=0.04-0.12, p=0.12-0.55). Across all vignettes and samples, 28%-57% of participants expressed experiment aversion, whereas only 6%-35% expressed experiment appreciation by rating the trial higher than their highest-rated intervention. CONCLUSIONS Advancing evidence-based medicine through pRCTs will require anticipating and addressing experiment aversion among patients and healthcare professionals. STUDY REGISTRATION http://osf.io/6p5c7/.
Affiliation(s)
- Randi L Vogt
- Bioethics & Decision Sciences, Geisinger, Danville, Pennsylvania, USA
- Patrick R Heck
- Bioethics & Decision Sciences, Geisinger, Danville, Pennsylvania, USA
- Pedram Heydari
- Economics, Northeastern University—Boston Campus, Boston, Massachusetts, USA
- Michelle N Meyer
- Bioethics & Decision Sciences, Geisinger, Danville, Pennsylvania, USA

6. Youssef A, Nichol AA, Martinez-Martin N, Larson DB, Abramoff M, Wolf RM, Char D. Ethical Considerations in the Design and Conduct of Clinical Trials of Artificial Intelligence. JAMA Netw Open 2024;7:e2432482. PMID: 39240560; PMCID: PMC11380101; DOI: 10.1001/jamanetworkopen.2024.32482.
Abstract
Importance Safe integration of artificial intelligence (AI) into clinical settings often requires randomized clinical trials (RCTs) to compare AI efficacy with conventional care. Diabetic retinopathy (DR) screening is at the forefront of clinical AI applications, marked by the first US Food and Drug Administration (FDA) De Novo authorization for an autonomous AI for such use. Objective To determine the generalizability of the 7 ethical research principles for clinical trials endorsed by the National Institutes of Health (NIH), and to identify ethical concerns unique to clinical trials of AI. Design, Setting, and Participants This qualitative study included semistructured interviews conducted with 11 investigators engaged in the design and implementation of clinical trials of AI for DR screening from November 11, 2022, to February 20, 2023. The study was a collaboration with the ACCESS (AI for Children's Diabetic Eye Exams) trial, the first clinical trial of autonomous AI in pediatrics. Participant recruitment initially utilized purposeful sampling and was later expanded with snowball sampling. The study methodology combined a deductive approach to explore investigators' perspectives on the 7 ethical principles for clinical research endorsed by the NIH and an inductive approach to uncover the broader ethical considerations of implementing clinical trials of AI within care delivery. Results A total of 11 participants (mean [SD] age, 47.5 [12.0] years; 7 male [64%], 4 female [36%]; 3 Asian [27%], 8 White [73%]) were included, with diverse expertise in ethics, ophthalmology, translational medicine, biostatistics, and AI development. Key themes revealed several ethical challenges unique to clinical trials of AI. These themes included difficulties in measuring social value, establishing scientific validity, ensuring fair participant selection, evaluating risk-benefit ratios across various patient subgroups, and addressing the complexities inherent in the data use terms of informed consent. Conclusions and Relevance This qualitative study identified practical ethical challenges that investigators need to consider and negotiate when conducting AI clinical trials, exemplified by the DR screening use case. These considerations call for further guidance on where to focus empirical and normative ethical efforts to best support the conduct of clinical trials of AI and minimize unintended harm to trial participants.
Affiliation(s)
- Alaa Youssef
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Ariadne A. Nichol
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California
- Nicole Martinez-Martin
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California
- Department of Psychiatry, Stanford University School of Medicine, Stanford, California
- David B. Larson
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Michael Abramoff
- Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City
- Electrical and Computer Engineering, University of Iowa, Iowa City
- Risa M. Wolf
- Division of Endocrinology, Department of Pediatrics, The Johns Hopkins School of Medicine, Baltimore, Maryland
- Danton Char
- Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California
- Department of Anesthesiology, Division of Pediatric Cardiac Anesthesia, Stanford, California

7. Webb J. Machine learning, healthcare resource allocation, and patient consent. New Bioeth 2024;30:206-227. PMID: 39545564; DOI: 10.1080/20502877.2024.2416858.
Abstract
The impact of machine learning in healthcare on patient informed consent is now the subject of significant inquiry in bioethics. However, the topic has predominantly been considered in the context of black box diagnostic or treatment recommendation algorithms. The impact on patient consent of machine learning involved in healthcare resource allocation remains undertheorized. This paper will establish where patient consent is relevant in healthcare resource allocation, before exploring the impact on informed consent of the introduction of black box machine learning into resource allocation. It will then consider the arguments for informing patients about the use of machine learning in resource allocation, before exploring the challenge of whether individual patients could, in principle, contest algorithmic prioritization decisions involving black box machine learning. Finally, this paper will examine how different forms of opacity in machine learning involved in resource allocation could be a barrier to patient consent to clinical decision-making in different healthcare contexts.
Affiliation(s)
- Jamie Webb
- Centre for Technomoral Futures, University of Edinburgh, Edinburgh, UK

8. Campagner A, Milella F, Banfi G, Cabitza F. Second opinion machine learning for fast-track pathway assignment in hip and knee replacement surgery: the use of patient-reported outcome measures. BMC Med Inform Decis Mak 2024;24:203. PMID: 39044277; PMCID: PMC11267678; DOI: 10.1186/s12911-024-02602-3.
Abstract
BACKGROUND The frequency of hip and knee arthroplasty surgeries has been rising steadily in recent decades. This trend is attributed to an aging population, leading to increased demands on healthcare systems. Fast Track (FT) surgical protocols, perioperative procedures designed to expedite patient recovery and early mobilization, have demonstrated efficacy in reducing hospital stays, convalescence periods, and associated costs. However, the criteria for selecting patients for FT procedures have not fully capitalized on the available patient data, including patient-reported outcome measures (PROMs). METHODS Our study focused on developing machine learning (ML) models to support decision making in assigning patients to FT procedures, utilizing data from patients' self-reported health status. These models are specifically designed to predict the potential health status improvement in patients initially selected for FT. Our approach drew on techniques inspired by the concept of controllable AI. This includes eXplainable AI (XAI), which aims to make the model's recommendations comprehensible to clinicians, and cautious prediction, a method used to alert clinicians about potential control losses, thereby enhancing the models' trustworthiness and reliability. RESULTS Our models were trained and tested using a dataset comprising 899 records from individual patients admitted to the FT program at IRCCS Ospedale Galeazzi-Sant'Ambrogio. After training and hyperparameter selection, the models were assessed using a separate internal test set. The interpretable models demonstrated performance on par with or even better than the most effective 'black-box' model (Random Forest). These models achieved sensitivity, specificity, and positive predictive value (PPV) exceeding 70%, with an area under the curve (AUC) greater than 80%. The cautious prediction models exhibited enhanced performance while maintaining satisfactory coverage (over 50%). Further, when externally validated on a separate cohort from the same hospital, comprising patients from a subsequent time period, the models showed no pragmatically notable decline in performance. CONCLUSIONS Our results demonstrate the effectiveness of utilizing PROMs as a basis for developing ML models to plan assignment to FT procedures. Notably, the application of controllable AI techniques, particularly those based on XAI and cautious prediction, emerges as a promising approach. These techniques provide reliable and interpretable support, essential for informed decision-making in clinical processes.
Affiliation(s)
- Frida Milella
- Department of Computer Science, Systems and Communication, University of Milano-Bicocca, Milan, Italy
- Giuseppe Banfi
- IRCCS Ospedale Galeazzi Sant'Ambrogio, Milan, Italy
- Faculty of Medicine and Surgery, Università Vita-Salute San Raffaele, Milan, Italy
- Federico Cabitza
- IRCCS Ospedale Galeazzi Sant'Ambrogio, Milan, Italy
- Department of Computer Science, Systems and Communication, University of Milano-Bicocca, Milan, Italy

9. Bouhouita-Guermech S, Gogognon P, Bélisle-Pipon JC. Specific challenges posed by artificial intelligence in research ethics. Front Artif Intell 2023;6:1149082. PMID: 37483869; PMCID: PMC10358356; DOI: 10.3389/frai.2023.1149082.
Abstract
Background The twenty-first century is often defined as the era of Artificial Intelligence (AI), which raises many questions regarding its impact on society. It is already significantly changing many practices in different fields. Research ethics (RE) is no exception. Many challenges, including responsibility, privacy, and transparency, are encountered. Research ethics boards (REBs) have been established to ensure that ethical practices are adequately followed during research projects. This scoping review aims to bring out the challenges of AI in research ethics and to investigate whether REBs are equipped to evaluate them. Methods Three electronic databases were selected to collect peer-reviewed articles that fit the inclusion criteria (English or French, published between 2016 and 2021, containing AI, RE, and REB). Two investigators independently reviewed each article by screening with Covidence and then coding with NVivo. Results From a total of 657 articles identified for review, we were left with a final sample of 28 relevant papers for our scoping review. The selected literature described AI in research ethics (i.e., views on current guidelines, key ethical concepts and approaches, key issues of the current state of AI-specific RE guidelines) and REBs regarding AI (i.e., their roles, scope and approaches, key practices and processes, limitations and challenges, stakeholder perceptions). However, the literature often described REBs' ethical assessment practices for AI research projects as lacking knowledge and tools. Conclusion Ethical reflection is taking a step forward, while the adaptation of normative guidelines to the reality of AI is still lagging. This impacts REBs and most stakeholders involved with AI. Indeed, REBs are not sufficiently equipped to adequately evaluate AI research ethics and require standard guidelines to help them do so.
Affiliation(s)
- Jean-Christophe Bélisle-Pipon
- School of Public Health, Université de Montréal, Montréal, QC, Canada
- Faculty of Health Sciences, Simon Fraser University, Burnaby, BC, Canada

10. Zohny H, McMillan J, King M. Ethics of generative AI. J Med Ethics 2023;49:79-80. PMID: 36693706; DOI: 10.1136/jme-2023-108909.
Affiliation(s)
- Hazem Zohny
- Bioethics Centre, The University of Otago, Dunedin, New Zealand
- John McMillan
- Bioethics Centre, The University of Otago, Dunedin, New Zealand
- Mike King
- Bioethics Centre, The University of Otago, Dunedin, New Zealand

11. Hatherley J, Sparrow R, Howard M. The Virtues of Interpretable Medical Artificial Intelligence. Camb Q Healthc Ethics 2022:1-10. PMID: 36524245; DOI: 10.1017/s0963180122000305.
Abstract
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are "black boxes." The initial response in the literature was a demand for "explainable AI." However, recently, several authors have suggested that making AI more explainable or "interpretable" is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a "lethal prejudice." In this article, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realized. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems, over less accurate but more interpretable systems, may itself constitute a form of lethal prejudice that may diminish the benefits of AI to, and perhaps even harm, patients.
Affiliation(s)
- Joshua Hatherley
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia
- Robert Sparrow
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia
- Mark Howard
- School of Philosophical, Historical, and International Studies, Monash University, Clayton, Victoria 3168, Australia