1. Debono B, Lonjon G, Guillain A, Moncany AH, Hamel O, Challier V, Diebo B. Spine surgeons facing second opinions: a qualitative study. Spine J 2024; 24:1485-1494. PMID: 38556219. DOI: 10.1016/j.spinee.2024.03.013.
Abstract
BACKGROUND CONTEXT The social and technological transformations of the contemporary period are disrupting the traditional dyad that prevails in the physician-patient relationship. PURPOSE A patient's solicitation of a second opinion may alter this dyad and degrade the mutual trust between the stakeholders concerned. The doctor-patient relationship has often been studied from the patient's perspective, but data from the spine surgeon's point of view are scarce. STUDY DESIGN/SETTING This qualitative study used the grounded theory approach, an inductive methodology that emphasizes field data and rejects predetermined assumptions. PATIENT SAMPLE We interviewed spine surgeons of different ages, levels of experience, and practice locations. We initially contacted 30 practitioners, but the final number (24 interviews; 11 orthopedists and 13 neurosurgeons) was determined by data saturation (the point at which no new topics appeared). OUTCOME MEASURES Themes and subthemes were analyzed using semistructured interviews until saturation was reached. METHODS Data were collected through individual interviews, independently analyzed thematically using specialized software, and triangulated by three researchers (an anthropologist, a psychiatrist, and a neurosurgeon). RESULTS Index surgeons were defined as surgeons whose patients went for a second opinion, and recourse surgeons as surgeons who were asked for one. Data analysis identified five overarching themes based on recurring elements in the interviews: (1) analysis of the patient's motivations for seeking a second opinion; (2) impaired trust and disloyalty; (3) ego, authority, and surgeon image; (4) management of a recourse consultation (measurement and ethics); and (5) the second opinion as an avoidance strategy. Despite the inherent asymmetry of the doctor-patient relationship, surgeons and patients share two symmetrical continua according to their perspective (professional or consumerist), involving power and control on the one hand and loyalty and autonomy on the other. These shared elements can be found in index consultations (seeking high-level care/respecting trust/closing the loyalty gap/managing disengagement) and recourse consultations (objective and independent advice/trust in the index advice/avoiding negative and anxiety-provoking situations). CONCLUSIONS The second opinion often carries a negative connotation for spine surgeons, who see it as a breach of loyalty and trust, not to mention an injury to the ego, in their relationship with the patient. A paradigm shift would allow the second opinion to be perceived as a valuable resource that broadens the physician-patient relationship and optimizes the shared surgical decision-making process.
Affiliation(s)
- Bertrand Debono
- Paris-Versailles Spine Center (Centre Francilien du Dos), Paris, France; Ramsay Santé-Hôpital Privé de Versailles, Versailles, France.
- Guillaume Lonjon
- Department of Orthopedic Surgery, Orthosud, Clinique St-Jean-Sud de France, Santecite Group, St Jean de Vedas, Montpellier Metropole, France
- Antoine Guillain
- AMADES (Medical Anthropology, Development and Health), Centre de la Vieille Charité, Marseille, France
- Anne-Hélène Moncany
- Department of Psychiatry and Addictive Behaviour, Gerard Marchant Hospital Center, Toulouse, France
- Olivier Hamel
- Department of Neurosurgery, Ramsay Santé-Clinique des Cèdres, Cornebarrieu, France
- Vincent Challier
- Department of Orthopedic Surgery, Hôpital privé du dos Francheville, Périgueux, France
- Bassel Diebo
- Department of Orthopedic Surgery, Brown University Warren Alpert Medical School, East Providence, RI, USA
2. Salloch S, Eriksen A. What Are Humans Doing in the Loop? Co-Reasoning and Practical Judgment When Using Machine Learning-Driven Decision Aids. Am J Bioeth 2024:1-12. PMID: 38767971. DOI: 10.1080/15265161.2024.2353800.
Abstract
Within the ethical debate on Machine Learning-driven decision support systems (ML_CDSS), notions such as "human in the loop" or "meaningful human control" are often cited as being necessary for ethical legitimacy. In addition, ethical principles usually serve as the major point of reference in ethical guidance documents, stating that conflicts between principles need to be weighed and balanced against each other. Starting from a neo-Kantian viewpoint inspired by Onora O'Neill, this article makes a concrete suggestion of how to interpret the role of the "human in the loop" and to overcome the perspective of rivaling ethical principles in the evaluation of AI in health care. We argue that patients should be perceived as "fellow workers" and epistemic partners in the interpretation of ML_CDSS outputs. We further highlight that a meaningful process of integrating (rather than weighing and balancing) ethical principles is most appropriate in the evaluation of medical AI.
3. Cheng Y, Li L, Bi Y, Su S, Zhang B, Feng X, Wang N, Zhang W, Yao Y, Ru N, Xiang J, Sun L, Hu K, Wen F, Wang Z, Bai L, Wang X, Wang R, Lv X, Wang P, Meng F, Xiao W, Linghu E, Chai N. Computer-aided diagnosis system for optical diagnosis of colorectal polyps under white light imaging. Dig Liver Dis 2024:S1590-8658(24)00723-0. PMID: 38744557. DOI: 10.1016/j.dld.2024.04.023.
Abstract
OBJECTIVES This study presents a novel computer-aided diagnosis system (CADx) designed for optically diagnosing colorectal polyps under white light imaging (WLI). We aimed to evaluate the effectiveness of the CADx and its auxiliary role among endoscopists with different levels of expertise. METHODS We collected 2,324 neoplastic and 3,735 nonneoplastic polyp WLI images for model training, and 838 colorectal polyp images from 740 patients for model validation. We compared the diagnostic accuracy of the CADx with that of 15 endoscopists under WLI and narrow band imaging (NBI). The auxiliary benefits of the CADx for endoscopists of different experience levels and for identifying different types of colorectal polyps were also evaluated. RESULTS The CADx demonstrated an optical diagnostic accuracy of 84.49%, showing considerable superiority over all endoscopists, irrespective of whether WLI or NBI was used (P < 0.001). Assistance from the CADx significantly improved the diagnostic accuracy of the endoscopists from 68.84% to 77.49% (P = 0.001), with the most significant impact observed among novice endoscopists. Notably, novices using CADx-assisted WLI outperformed junior and expert endoscopists without such assistance. CONCLUSIONS The CADx played a crucial role in substantially enhancing the precision of optical diagnosis for colorectal polyps under WLI and showed the greatest auxiliary benefits for novice endoscopists.
Affiliation(s)
- Yaxuan Cheng
- Chinese PLA Medical School, Beijing, 100853, PR China; Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Longsong Li
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Yawei Bi
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Song Su
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Bo Zhang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Xiuxue Feng
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Nanjun Wang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Wengang Zhang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Yi Yao
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Nan Ru
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Jingyuan Xiang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Lihua Sun
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Kang Hu
- Department of Gastroenterology, The 987 Hospital of PLA Joint Logistic Support Force, Baoji, 721004, PR China
- Feng Wen
- Department of Gastroenterology, General Hospital of Central Theater Command of PLA, Wuhan, 430070, PR China
- Zixin Wang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Lu Bai
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Xueting Wang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Runzi Wang
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Xingping Lv
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Pengju Wang
- Chinese PLA Medical School, Beijing, 100853, PR China; Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Fanqi Meng
- Medical Department, HighWise Medical Technology Co, Ltd, Changsha, 410000, PR China
- Wen Xiao
- Medical Department, HighWise Medical Technology Co, Ltd, Changsha, 410000, PR China
- Enqiang Linghu
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
- Ningli Chai
- Department of Gastroenterology, The First Medical Center of Chinese PLA General Hospital, Beijing, 100853, PR China
4. Heredia-Negrón F, Tosado-Rodríguez EL, Meléndez-Berrios J, Nieves B, Amaya-Ardila CP, Roche-Lima A. Assessing the Impact of AI Education on Hispanic Healthcare Professionals' Perceptions and Knowledge. Educ Sci 2024; 14:339. PMID: 38818527. PMCID: PMC11138866. DOI: 10.3390/educsci14040339.
Abstract
This study investigates the awareness and perceptions of artificial intelligence (AI) among Hispanic healthcare-related professionals, focusing on integrating AI in healthcare. The study participants were recruited from an asynchronous course offered twice within a year at the University of Puerto Rico Medical Science Campus, titled "Artificial Intelligence and Machine Learning Applied to Health Disparities Research", which aimed to bridge the gaps in AI knowledge among participants. The participants were divided into Experimental (n = 32; data-illiterate) and Control (n = 18; data-literate) groups, and pre-test and post-test surveys were administered to assess knowledge and attitudes toward AI. Descriptive statistics, power analysis, and the Mann-Whitney U test were employed to determine the influence of the course on participants' comprehension and perspectives regarding AI. Results indicate significant improvements in knowledge and attitudes among participants, emphasizing the effectiveness of the course in enhancing understanding and fostering positive attitudes toward AI. Findings also reveal limited practical exposure to AI applications, highlighting the need for improved integration into education. This research highlights the significance of educating healthcare professionals about AI to enable its advantageous incorporation into healthcare procedures. The study provides valuable perspectives from a broad spectrum of healthcare workers, serving as a basis for future investigations and educational endeavors aimed at AI implementation in healthcare.
Affiliation(s)
- Frances Heredia-Negrón
- CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Joshua Meléndez-Berrios
- CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Brenda Nieves
- CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Claudia P. Amaya-Ardila
- Department of Biostatistics and Epidemiology, Medical Science Campus, University of Puerto Rico, San Juan, PR 00934, USA
- Abiel Roche-Lima
- CCRHD RCMI-Program, Medical Sciences Campus, University of Puerto Rico, San Juan, PR 00934, USA
5. Funer F, Wiesing U. Physician's autonomy in the face of AI support: walking the ethical tightrope. Front Med (Lausanne) 2024; 11:1324963. PMID: 38606162. PMCID: PMC11007068. DOI: 10.3389/fmed.2024.1324963.
Abstract
The introduction of AI support tools raises questions about the normative orientation of medical practice and the need to rethink its basic concepts. One concept central to this discussion is the physician's autonomy and its appropriateness in the face of high-powered AI applications. In this essay, a differentiation of the physician's autonomy is made on the basis of a conceptual analysis. It is argued that the physician's decision-making autonomy is a purposeful autonomy: it is fundamentally anchored in the medical ethos for the purpose of promoting the patient's health and well-being and protecting him or her from harm. It follows from this purposefulness that the physician's autonomy is not to be protected for its own sake, but only insofar as it serves this end better than alternative means. We argue that today, given the existing limitations of AI support tools, physicians still need decision-making autonomy. For physicians to be able to exercise decision-making autonomy in the face of AI support, we elaborate three conditions: (1) sufficient information about the AI support and its statements, (2) sufficient competencies to integrate AI statements into clinical decision-making, and (3) a context of voluntariness that allows, in justified cases, deviations from AI support. If physicians are to fulfill their moral obligation to promote the health and well-being of the patient, then the use of AI should be designed in such a way that it promotes, or at least maintains, the physician's decision-making autonomy.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, University Hospital and Faculty of Medicine, University of Tübingen, Tübingen, Germany
6. Tsai AY, Carter SR, Greene AC. Artificial intelligence in pediatric surgery. Semin Pediatr Surg 2024; 33:151390. PMID: 38242061. DOI: 10.1016/j.sempedsurg.2024.151390.
Abstract
Artificial intelligence (AI) is rapidly changing the landscape of medicine and is already being utilized in medical diagnostics and imaging analysis. We explore AI applications in surgery and examine their relevance to pediatric surgery, covering the field's evolution, current state, and promising future. The various fields of AI are explored, including machine learning and its applications to predictive analytics and decision support in surgery; computer vision and image analysis in preoperative planning, image segmentation, and surgical navigation; and, finally, natural language processing to assist in expediting clinical documentation, identification of clinical indications, quality improvement, outcome research, and other types of automated data extraction. The purpose of this review is to familiarize the pediatric surgical community with the rise of AI and to highlight the ongoing advancements and challenges in its adoption, including data privacy, regulatory considerations, and the imperative for interdisciplinary collaboration. We hope this review serves as a comprehensive guide to AI's transformative influence on surgery, demonstrating its potential to enhance pediatric surgical patient outcomes, improve precision, and usher in a new era of surgical excellence.
Affiliation(s)
- Anthony Y Tsai
- Division of Pediatric Surgery, Penn State Health Children's Hospital, 500 University Drive, Hershey, PA 17033, United States.
- Stewart R Carter
- Division of Pediatric Surgery, University of Louisville School of Medicine, Louisville, KY, United States
- Alicia C Greene
- Division of Pediatric Surgery, Penn State Health Children's Hospital, 500 University Drive, Hershey, PA 17033, United States
7. Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. J Med Ethics 2023; 50:6-11. PMID: 37217277. PMCID: PMC10803986. DOI: 10.1136/jme-2022-108814.
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes of responsibility and decision-making authority when using ML-CDSSs. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority, and the need for (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and its structural and epistemic preconditions, which must be met for clinicians to fulfil their responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSSs.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
- Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
8. Qureshi R, Irfan M, Gondal TM, Khan S, Wu J, Hadi MU, Heymach J, Le X, Yan H, Alam T. AI in drug discovery and its clinical relevance. Heliyon 2023; 9:e17575. PMID: 37396052. PMCID: PMC10302550. DOI: 10.1016/j.heliyon.2023.e17575.
Abstract
The COVID-19 pandemic has emphasized the need for novel drug discovery processes. However, the journey from conceptualizing a drug to its eventual implementation in clinical settings is a long, complex, and expensive process, with many potential points of failure. Over the past decade, a vast growth in medical information has coincided with advances in computational hardware (cloud computing, GPUs, and TPUs) and the rise of deep learning. Medical data generated from large molecular screening profiles, personal health or pathology records, and public health organizations could benefit from analysis by Artificial Intelligence (AI) approaches to speed up and prevent failures in the drug discovery pipeline. We present applications of AI at various stages of drug discovery pipelines, including the inherently computational approaches of de novo design and prediction of a drug's likely properties. Open-source databases and AI-based software tools that facilitate drug design are discussed, along with their associated problems of molecule representation, data collection, complexity, labeling, and disparities among labels. We also explore how contemporary AI methods, such as graph neural networks, reinforcement learning, and generative models, along with structure-based methods (i.e., molecular dynamics simulations and molecular docking), can contribute to drug discovery applications and the analysis of drug responses. Finally, recent developments in, and investments into, AI-based start-up companies for biotechnology and drug design are discussed, together with their current progress, hopes, and promises.
Affiliation(s)
- Rizwan Qureshi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Department of Imaging Physics, MD Anderson Cancer Center, The University of Texas, Houston, USA
- Muhammad Irfan
- Faculty of Electrical Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Swabi, Pakistan
- Sheheryar Khan
- School of Professional Education & Executive Development, The Hong Kong Polytechnic University, Hong Kong
- Jia Wu
- Department of Imaging Physics, MD Anderson Cancer Center, The University of Texas, Houston, USA
- John Heymach
- Department of Thoracic Head and Neck Medical Oncology, Division of Cancer Medicine, The University of Texas, MD Anderson Cancer Center, Houston, USA
- Xiuning Le
- Department of Thoracic Head and Neck Medical Oncology, Division of Cancer Medicine, The University of Texas, MD Anderson Cancer Center, Houston, USA
- Hong Yan
- Department of Electrical Engineering, City University of Hong Kong, Kowloon, Hong Kong
- Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
9. Funer F, Salloch S. 'Can I trust my patient?' Machine Learning support for predicting patient behaviour. J Med Ethics 2023:jme-2023-109094. PMID: 37188507. DOI: 10.1136/jme-2023-109094.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, Eberhard Karls Universität Tübingen, Tübingen, Baden-Württemberg, Germany
- Sabine Salloch
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Niedersachsen, Germany
10. Ursin F, Lindner F, Ropinski T, Salloch S, Timmermann C. Ebenen der Explizierbarkeit für medizinische künstliche Intelligenz: Was brauchen wir normativ und was können wir technisch erreichen? [Levels of explicability for medical artificial intelligence: what do we need normatively, and what can we achieve technically?]. Ethik Med 2023. DOI: 10.1007/s00481-023-00761-x.
Abstract
Definition of the problem
The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which levels of explicability are needed to obtain informed consent when utilizing medical AI?
Arguments
We proceed in five steps: First, we map the terms commonly associated with explicability as described in the ethics and computer science literature, i.e., disclosure, intelligibility, interpretability, and explainability. Second, we conduct a conceptual analysis of the ethical requirements for explicability when it comes to informed consent. Third, we distinguish hurdles for explicability in terms of epistemic and explanatory opacity. Fourth, this allows us to conclude which level of explicability physicians must reach and what patients can expect. In a final step, we show how the identified levels of explicability can be met technically from the perspective of computer science. Throughout our work, we take diagnostic AI systems in radiology as an example.
Conclusion
We determined four levels of explicability that need to be distinguished for ethically defensible informed consent processes and showed how developers of medical AI can technically meet these requirements.
11. Zohny H, McMillan J, King M. Ethics of generative AI. J Med Ethics 2023; 49:79-80. PMID: 36693706. DOI: 10.1136/jme-2023-108909.
Affiliation(s)
- Hazem Zohny
- Bioethics Centre, The University of Otago, Dunedin, New Zealand
- John McMillan
- Bioethics Centre, The University of Otago, Dunedin, New Zealand
- Mike King
- Bioethics Centre, The University of Otago, Dunedin, New Zealand
12. Grote T, Keeling G. Enabling Fairness in Healthcare Through Machine Learning. Ethics Inf Technol 2022; 24:39. PMID: 36060496. PMCID: PMC9428374. DOI: 10.1007/s10676-022-09658-7.
Abstract
The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.
Affiliation(s)
- Thomas Grote
- Ethics and Philosophy Lab; Cluster of Excellence: Machine Learning: New Perspectives for Science, University of Tübingen, Maria von Linden Str. 6, D-72076 Tübingen, Germany
- Geoff Keeling
- Institute for Human-Centered AI and McCoy Family Center for Ethics in Society, Stanford University, 450 Serra Mall, 94305 Stanford, CA USA
13. Van Cauwenberge D, Van Biesen W, Decruyenaere J, Leune T, Sterckx S. "Many roads lead to Rome and the Artificial Intelligence only shows me one road": an interview study on physician attitudes regarding the implementation of computerised clinical decision support systems. BMC Med Ethics 2022; 23:50. PMID: 35524301. PMCID: PMC9077861. DOI: 10.1186/s12910-022-00787-8.
Abstract
Research regarding the drivers of acceptance of clinical decision support systems (CDSS) by physicians is still rather limited. The literature that does exist, however, tends to focus on problems regarding the user-friendliness of CDSS. We have performed a thematic analysis of 24 interviews with physicians concerning specific clinical case vignettes, in order to explore their underlying opinions and attitudes regarding the introduction of CDSS in clinical practice, to allow a more in-depth analysis of factors underlying (non-)acceptance of CDSS. We identified three general themes from the results. First, 'the perceived role of the AI', including items referring to the tasks that may properly be assigned to the CDSS according to the respondents. Second, 'the perceived role of the physician', referring to the aspects of clinical practice that were seen as being fundamentally 'human' or non-automatable. Third, 'concerns regarding AI', including items referring to more general issues that were raised by the respondents regarding the introduction of CDSS in general and/or in clinical medicine in particular. Apart from the overall concerns expressed by the respondents regarding user-friendliness, we will explain how our results indicate that our respondents were primarily occupied by distinguishing between parts of their job that should be automated and aspects that should be kept in human hands. We refer to this distinction as 'the division of clinical labor.' This division is not based on knowledge regarding AI or medicine, but rather on which parts of a physician's job were seen by the respondents as being central to who they are as physicians and as human beings. Often the respondents' view that certain core parts of their job ought to be shielded from automation was closely linked to claims concerning the uniqueness of medicine as a domain. Finally, although almost all respondents claimed that they highly value their final responsibility, a closer investigation of this concept suggests that their view of 'final responsibility' was not that demanding after all.
Affiliation(s)
- Daan Van Cauwenberge: Department of Philosophy and Moral Sciences, Bioethics Institute Ghent, Ghent University, Ghent, Belgium; Consortium for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium
- Wim Van Biesen: Consortium for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium; Department of Nephrology, Ghent University Hospital, Ghent, Belgium
- Johan Decruyenaere: Consortium for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium; Department of Intensive Care Medicine, Ghent University Hospital, Ghent, Belgium
- Tamara Leune: Consortium for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium; Department of Nephrology, Ghent University Hospital, Ghent, Belgium
- Sigrid Sterckx: Department of Philosophy and Moral Sciences, Bioethics Institute Ghent, Ghent University, Ghent, Belgium; Consortium for Justifiable Digital Healthcare, Ghent University Hospital, Ghent, Belgium
14
Fritz Z. When the frameworks don't work: data protection, trust and artificial intelligence. J Med Ethics 2022; 48:213-214. [PMID: 35321903] [DOI: 10.1136/medethics-2022-108263]
Affiliation(s)
- Zoë Fritz: THIS Institute (The Healthcare Improvement Studies Institute), University of Cambridge School of Clinical Medicine, Cambridge, UK; Acute Medicine, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
15
Jongsma KR, Sand M. Agree to disagree: the symmetry of burden of proof in human-AI collaboration. J Med Ethics 2022; 48:230-231. [PMID: 35321904] [DOI: 10.1136/medethics-2022-108242]
Affiliation(s)
- Martin Sand: Department of Values, Technology and Innovation, TU Delft, Delft, Netherlands
16
Lang BH. Are physicians requesting a second opinion really engaging in a reason-giving dialectic? Normative questions on the standards for second opinions and AI. J Med Ethics 2022; 48:234-235. [PMID: 35321906] [DOI: 10.1136/medethics-2022-108246]
17
Luxton DD. AI decision-support: a dystopian future of machine paternalism? J Med Ethics 2022; 48:232-233. [PMID: 35321905] [DOI: 10.1136/medethics-2022-108243]
Affiliation(s)
- David D Luxton: Department of Psychiatry & Behavioral Sciences, University of Washington School of Medicine, Seattle, Washington, USA