1
Kara MA. Clouds on the horizon: clinical decision support systems, the control problem, and physician-patient dialogue. Medicine, Health Care and Philosophy 2024 (online ahead of print). [PMID: 39644445] [DOI: 10.1007/s11019-024-10241-8]
Abstract
Artificial intelligence-based clinical decision support systems have the potential to improve clinical practice, but they may also harm the physician-patient dialogue because of the control problem. Physician-patient dialogue depends on human qualities such as compassion, trust, and empathy, which are shared by both parties. These qualities are necessary for the parties to reach a shared understanding, a merging of horizons, about clinical decisions. The patient attends the clinical encounter not only with a malfunctioning body, but also with an 'unhomelike' experience of illness that is related to a world of values and meanings, a life-world. Making wise individual decisions in accordance with the patient's life-world requires not only scientific analysis of causal relationships, but also listening with empathy to the patient's concerns. For a decision to be made, clinical information must be interpreted in the light of the patient's life-world. This side of clinical practice is not a job for computers, and they cannot be final decision-makers. In the control problem, by contrast, users blindly accept system output out of over-reliance rather than evaluating it with their own judgement; over-reliant parties thus leave their place in the dialogue to the system. The dialogue may then be disrupted and mutual trust lost. It is therefore necessary to design decision support systems so as to avoid the control problem, and to limit their use where this is not possible, in order to protect the physician-patient dialogue.
Affiliation(s)
- Mahmut Alpertunga Kara
- Medicine School, History of Medicine and Ethics Department, Istanbul Medeniyet University, Kuzey Kampus - Unalan Mahallesi, Unalan Sok D-100 Karayolu Yanyol, 34700, Uskudar/Istanbul, Turkey.
2
Funer F, Tinnemeyer S, Liedtke W, Salloch S. Clinicians' roles and necessary levels of understanding in the use of artificial intelligence: a qualitative interview study with German medical students. BMC Med Ethics 2024; 25:107. [PMID: 39375660] [PMCID: PMC11457475] [DOI: 10.1186/s12910-024-01109-w]
Abstract
BACKGROUND Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are increasingly being introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed to ensure responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders' viewpoints on these issues is scarce so far. The present study complements the empirical-ethical body of research by, on the one hand, investigating in depth the requirements for understanding and explicability and the rationale behind them. On the other hand, it surveys medical students at the end of their studies as stakeholders about whom few data are available so far, but for whom AI-CDSS will be an important part of their medical practice. METHODS Fifteen semi-structured qualitative interviews (each lasting an average of 56 minutes) were conducted with German medical students to investigate their perspectives and attitudes on the use of AI-CDSS. The problem-centred interviews drew on two hypothetical case vignettes of AI-CDSS employed in nephrology and surgery. Interviewees' perceptions and convictions of their own clinical role and responsibilities in dealing with AI-CDSS were elicited, together with viewpoints on explicability and on the level of understanding and competencies needed on the clinicians' side. The qualitative data were analysed according to key principles of qualitative content analysis (Kuckartz). RESULTS In response to the central question about the necessary understanding of AI-CDSS tools and the emergence of their outputs, and about the reasons for the requirements placed on them, two types of argumentation could be differentiated inductively from the interviewees' statements. The first type, the clinician as a systemic trustee (or "the one relying"), holds that there must be empirical evidence and adequate approval processes that guarantee minimised harm and a clinical benefit from the employment of an AI-CDSS. Once these requirements are met, the use of an AI-CDSS would be appropriate, as according to "the one relying", clinicians should choose those measures that statistically cause the least harm. The second type, the clinician as an individual expert (or "the one controlling"), sets higher prerequisites that go beyond empirical evidence and approval processes: the clinician must have the competence and understanding of how a specific AI-CDSS works and how to use it properly in order to evaluate its outputs and to mitigate potential risks for the individual patient. Both types are unified in their high esteem for evidence-based clinical practice and the need to communicate with the patient about the use of medical AI. However, the interviewees' different conceptions of the clinician's role and responsibilities lead them to different requirements regarding the clinician's understanding of an AI-CDSS and its explicability beyond the proof of benefit. CONCLUSIONS The study results highlight two different types among (future) clinicians regarding their view of the necessary levels of understanding and competence. These findings should inform the debate on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. While current approaches seek appropriate minimum requirements for understanding and competence, the differences between (future) clinicians in their information and understanding needs described here can lead to more differentiated solutions.
Affiliation(s)
- F Funer
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany
- Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstr. 47, 72074, Tübingen, Germany
- S Tinnemeyer
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany
- W Liedtke
- Faculty of Theology, University of Greifswald, Am Rubenowplatz 2/3, 17489, Greifswald, Germany
- S Salloch
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany.
3
Freyer N, Groß D, Lipprandt M. The ethical requirement of explainability for AI-DSS in healthcare: a systematic review of reasons. BMC Med Ethics 2024; 25:104. [PMID: 39354512] [PMCID: PMC11443763] [DOI: 10.1186/s12910-024-01103-2]
Abstract
BACKGROUND Despite continuous performance improvements, especially in clinical contexts, a major challenge of Artificial Intelligence-based Decision Support Systems (AI-DSS) remains their degree of epistemic opacity. The conditions of, and the solutions for, the justified use of this occasionally unexplainable technology in healthcare are an active field of research. In March 2024, the European Union agreed upon the Artificial Intelligence Act (AIA), requiring medical AI-DSS to be ad-hoc explainable or to use post-hoc explainability methods. The ethical debate has not yet settled on this requirement. This systematic review aims to outline and categorize the positions and arguments in the ethical debate. METHODS We conducted a literature search on PubMed, BASE, and Scopus for English-language, peer-reviewed scientific publications from 2016 to 2024. The inclusion criterion was that a publication state explicit requirements for the explainability of AI-DSS in healthcare and give reasons for them. Non-domain-specific documents, as well as surveys, reviews, and meta-analyses, were excluded. The ethical requirements for explainability outlined in the documents were qualitatively analyzed with respect to the arguments for the requirement of explainability and the required level of explainability. RESULTS The literature search yielded 1662 documents; 44 documents were included in the review after eligibility screening of the remaining full texts. Our analysis showed that 17 records argue in favor of requiring explainable AI methods (xAI) or ad-hoc explainable models, providing 9 categories of arguments, while the other 27 records argue against a general requirement, providing 11 categories of arguments. We also found that 14 works advocate context-dependent levels of explainability, as opposed to 30 documents arguing for context-independent, absolute standards. CONCLUSIONS The systematic review of reasons shows no clear agreement on the requirement of post-hoc explainability methods or ad-hoc explainable models for AI-DSS in healthcare. The arguments found in the debate were referenced and responded to from different perspectives, demonstrating an interactive discourse. Policymakers and researchers should follow the development of the debate closely, and ethicists, in turn, should stay well informed by empirical and technical research, given the pace of advancements in the field.
Affiliation(s)
- Nils Freyer
- Institute of Medical Informatics, Medical Faculty, RWTH Aachen University, Aachen, Germany.
- Dominik Groß
- Institute for the History, Theory and Ethics of Medicine, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Myriam Lipprandt
- Institute of Medical Informatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
4
Wabro A, Herrmann M, Winkler EC. When time is of the essence: ethical reconsideration of XAI in time-sensitive environments. Journal of Medical Ethics 2024 (online ahead of print). [PMID: 39299730] [DOI: 10.1136/jme-2024-110046]
Abstract
Explainable artificial intelligence systems designed for clinical decision support (XAI-CDSS) aim to enhance physicians' diagnostic performance, confidence and trust through interpretable methods, thereby providing a superior epistemic position, a robust foundation for critical reflection, and trustworthiness in times of heightened technological dependence. However, recent studies have revealed shortcomings in achieving these goals, calling into question the widespread endorsement of XAI by medical professionals, ethicists and policy-makers alike. Based on a surgical use case, this article challenges generalising calls for XAI-CDSS and emphasises the significance of time-sensitive clinical environments, which frequently preclude adequate consideration of system explanations. XAI-CDSS may therefore be unable to meet expectations of augmenting clinical decision-making where time is of the essence. Employing a principled ethical balancing methodology, the article highlights several fallacies associated with XAI deployment in time-sensitive clinical situations and recommends endorsing XAI only where scientific evidence or stakeholder assessments do not contradict such deployment in specific target settings.
Affiliation(s)
- Andreas Wabro
- National Center for Tumor Diseases (NCT) Heidelberg, NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Germany, Heidelberg University, Medical Faculty Heidelberg, Heidelberg University Hospital, Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg, Germany
- Markus Herrmann
- National Center for Tumor Diseases (NCT) Heidelberg, NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Germany, Heidelberg University, Medical Faculty Heidelberg, Heidelberg University Hospital, Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg, Germany
- Eva C Winkler
- National Center for Tumor Diseases (NCT) Heidelberg, NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Germany, Heidelberg University, Medical Faculty Heidelberg, Heidelberg University Hospital, Department of Medical Oncology, Section Translational Medical Ethics, Heidelberg, Germany
5
Kaplan H, Kostick-Quenet K, Lang B, Volk RJ, Blumenthal-Barby J. Impact of personalized risk scores on shared decision making in left ventricular assist device implantation: findings from a qualitative study. Patient Education and Counseling 2024; 130:108418. [PMID: 39288559] [DOI: 10.1016/j.pec.2024.108418]
Abstract
OBJECTIVE To assess stakeholders' perspectives on integrating personalized risk scores (PRS) into left ventricular assist device (LVAD) implantation decisions and how these perspectives might impact shared decision making (SDM). METHODS We conducted 40 in-depth interviews with physicians, nurse coordinators, patients, and caregivers about integrating PRS into LVAD implantation decisions. A codebook was developed to identify thematic patterns, and quotations were consolidated for analysis. We used thematic content analysis in MAXQDA software to identify themes by abstracting relevant quotes. RESULTS Clinicians had varying preferences regarding PRS integration into LVAD decision making, while patients and caregivers preferred real-time discussions about PRS with their physicians. Physicians voiced concerns about time constraints and suggested delegating PRS discussions to advanced practice providers or nurse coordinators. CONCLUSIONS Integrating PRS information into LVAD decision aids presents both opportunities and challenges for SDM. Given variable preferences among clinicians and patients, clinicians should elicit patients' desired role in the decision-making process. Addressing time constraints and ensuring patient-centered care will be crucial for optimizing SDM. PRACTICE IMPLICATIONS Clinicians should elicit patient preferences for PRS information disclosure and address challenges such as time constraints and the delegation of PRS discussions to other team members.
Affiliation(s)
- Holland Kaplan
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA; Section of General Internal Medicine, Baylor College of Medicine, Houston, TX, USA.
- Kristin Kostick-Quenet
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
- Benjamin Lang
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA
6
Grimbly MJ, Koopowitz SM, Chen R, Sun Z, Foster PJ, He M, Stein DJ, Ipser J, Zhu Z. Estimating biological age from retinal imaging: a scoping review. BMJ Open Ophthalmol 2024; 9:e001794. [PMID: 39181547] [PMCID: PMC11344507] [DOI: 10.1136/bmjophth-2024-001794]
Abstract
BACKGROUND/AIMS The emerging concept of retinal age, a biomarker derived from retinal images, holds promise for estimating biological age. The retinal age gap (RAG), the difference between retinal age and chronological age, serves as an indicator of deviations from normal ageing. This scoping review collates studies on retinal age to determine its potential clinical utility and to identify knowledge gaps for future research. METHODS Using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist, eligible non-review human studies were identified, selected and appraised. The PubMed, Scopus, SciELO, PsycINFO, Google Scholar, Cochrane, CINAHL, Africa Wide EBSCO, MedRxiv and BioRxiv databases were searched for literature pertaining to retinal age, the RAG and their associations. No restrictions were imposed on publication date. RESULTS Thirteen articles published between 2022 and 2023 were analysed, revealing four models capable of determining biological age from retinal images. Three models, 'Retinal Age', 'EyeAge' and a 'convolutional network-based model', achieved comparable mean absolute errors of 3.55, 3.30 and 3.97 years, respectively. A fourth model, 'RetiAGE', which predicts the probability of being older than 65 years, also demonstrated strong predictive ability with respect to clinical outcomes. Across the models identified, a higher predicted RAG was associated with adverse outcomes, notably mortality and poor cardiovascular health. CONCLUSION This review highlights the potential clinical application of retinal age and the RAG, emphasising the need for further research to establish their generalisability for clinical use, particularly in neuropsychiatry. The identified models show promising accuracy in estimating biological age, suggesting its viability for evaluating health status.
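The two quantities reported in this abstract are simple to state precisely. A minimal sketch in Python, assuming hypothetical values (in practice the predicted ages would come from a trained retinal-image model such as those reviewed; nothing below is taken from the studies themselves):

def retinal_age_gap(predicted_age, chronological_age):
    # RAG = predicted retinal age minus chronological age, in years;
    # a positive gap indicates apparent accelerated ageing.
    return predicted_age - chronological_age

def mean_absolute_error(predicted, actual):
    # MAE over a validation set, in years; the models above report
    # roughly 3.3-4.0 years on this metric.
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical example values, for illustration only:
predicted = [68.2, 54.9, 61.0]
chronological = [63.5, 57.0, 60.2]
print([retinal_age_gap(p, c) for p, c in zip(predicted, chronological)])  # per-subject RAG
print(mean_absolute_error(predicted, chronological))  # overall MAE in years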
Affiliation(s)
- Michaela Joan Grimbly
- SAMRC Unit on Risk & Resilience in Mental Disorders, Department of Psychiatry and Neuroscience Institute, University of Cape Town, Cape Town, South Africa
- Sheri-Michelle Koopowitz
- SAMRC Unit on Risk & Resilience in Mental Disorders, Department of Psychiatry and Neuroscience Institute, University of Cape Town, Cape Town, South Africa
- Ruiye Chen
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- Zihan Sun
- NIHR Biomedical Research Centre, Moorfields NHS Foundation Trust and The UCL Institute of Ophthalmology, London, United Kingdom
- Paul J Foster
- NIHR Biomedical Research Centre, Moorfields NHS Foundation Trust and The UCL Institute of Ophthalmology, London, United Kingdom
- Mingguang He
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
- School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Dan J Stein
- SAMRC Unit on Risk & Resilience in Mental Disorders, Department of Psychiatry and Neuroscience Institute, University of Cape Town, Cape Town, South Africa
- Jonathan Ipser
- SAMRC Unit on Risk & Resilience in Mental Disorders, Department of Psychiatry and Neuroscience Institute, University of Cape Town, Cape Town, South Africa
- Zhuoting Zhu
- Centre for Eye Research Australia, Ophthalmology, University of Melbourne, Melbourne, Victoria, Australia
- Ophthalmology, Department of Surgery, University of Melbourne, Melbourne, Victoria, Australia
7
Funer F, Schneider D, Heyen NB, Aichinger H, Klausen AD, Tinnemeyer S, Liedtke W, Salloch S, Bratan T. Impacts of clinical decision support systems on the relationship, communication, and shared decision-making between health care professionals and patients: multistakeholder interview study. J Med Internet Res 2024; 26:e55717. [PMID: 39178023] [PMCID: PMC11380058] [DOI: 10.2196/55717]
Abstract
BACKGROUND Clinical decision support systems (CDSSs) are increasingly being introduced into various domains of health care. Little is known so far about the impact of such systems on the health care professional-patient relationship, and there is a lack of agreement about whether and how patients should be informed about the use of CDSSs. OBJECTIVE This study aims to explore, in an empirically informed manner, the potential implications of CDSSs for the health care professional-patient relationship and to underline, for both patients and future professionals, the importance of this relationship when CDSSs are used. METHODS Using methodological triangulation, 15 medical students and 12 trainee nurses were interviewed in semistructured interviews, and 18 patients were involved in focus groups, between April 2021 and April 2022. All participants came from Germany. Three examples of CDSSs covering different areas of health care (i.e., surgery, nephrology, and intensive home care) were used as stimuli to identify similarities and differences regarding the use of CDSSs in different fields of application. The interview and focus group transcripts were analysed using a structured qualitative content analysis. RESULTS From the interviews and focus groups analysed, three topics were identified that interdependently address the interactions between patients and health care professionals: (1) CDSSs and their impact on the roles of and requirements for health care professionals, (2) CDSSs and their impact on the relationship between health care professionals and patients (including communication requirements for shared decision-making), and (3) stakeholders' expectations for patient education and information about CDSSs and their use. CONCLUSIONS The results indicate that using CDSSs could restructure established power and decision-making relationships between (future) health care professionals and patients. In addition, respondents expected that the use of CDSSs would involve more communication, so they anticipated an increased time commitment. The results shed new light on the existing discourse by demonstrating that the anticipated impact of CDSSs on the health care professional-patient relationship appears to stem less from the function of a CDSS and more from its integration in the relationship. Therefore, the anticipated effects on the relationship between health care professionals and patients could be specifically addressed in patient information about the use of CDSSs.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, Eberhard Karls University Tuebingen, Tuebingen, Germany
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Nils B Heyen
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Heike Aichinger
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Andrea Diana Klausen
- Institute for Medical Informatics, University Medical Center - RWTH Aachen, Aachen, Germany
- Sara Tinnemeyer
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences Rhineland-Westphalia-Lippe, Bochum, Germany
- Ethics and its Didactics, Faculty of Theology, University of Greifswald, Greifswald, Germany
- Sabine Salloch
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Tanja Bratan
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
8
Grzybowski A, Jin K, Wu H. Challenges of artificial intelligence in medicine and dermatology. Clin Dermatol 2024; 42:210-215. [PMID: 38184124] [DOI: 10.1016/j.clindermatol.2023.12.013]
Abstract
Artificial intelligence (AI) in medicine and dermatology brings additional challenges related to bias, transparency, ethics, security, and inequality. Bias in AI algorithms can arise from biased training data or decision-making processes, leading to disparities in health care outcomes. Addressing bias requires careful examination of the data used to train AI models and implementation of strategies to mitigate bias during algorithm development. Transparency is another critical challenge, as AI systems often operate as black boxes, making it difficult to understand how decisions are reached. Ensuring transparency in AI algorithms is vital to gaining trust from both patients and health care providers. Ethical considerations arise when using AI in health care, including issues such as informed consent, privacy, and the responsibility for the decisions made by AI systems. It is essential to establish clear guidelines and frameworks that govern the ethical use of AI, including maintaining patient autonomy and protecting sensitive health information. Security is a significant concern in AI systems, as they rely on vast amounts of sensitive patient data. Protecting these data from unauthorized access, breaches, or malicious attacks is paramount to maintaining patient privacy and trust in AI technologies. Lastly, the potential for inequality arises if AI technologies are not accessible to all populations, leading to a digital divide in health care. Efforts should be made to ensure that AI solutions are affordable, accessible, and tailored to the needs of diverse communities, mitigating the risk of exacerbating existing health care disparities. Addressing these challenges is crucial for AI's responsible and equitable integration in medicine and dermatology.
Affiliation(s)
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland.
- Kai Jin
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Hongkang Wu
- Eye Center, Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
9
Funer F, Wiesing U. Physician's autonomy in the face of AI support: walking the ethical tightrope. Front Med (Lausanne) 2024; 11:1324963. [PMID: 38606162] [PMCID: PMC11007068] [DOI: 10.3389/fmed.2024.1324963]
Abstract
The introduction of AI support tools raises questions about the normative orientation of medical practice and the need to rethink its basic concepts. One concept central to this discussion is the physician's autonomy and its appropriateness in the face of high-powered AI applications. In this essay, the physician's autonomy is differentiated on the basis of a conceptual analysis. It is argued that the physician's decision-making autonomy is a purposeful autonomy: it is anchored in the medical ethos for the purpose of promoting the patient's health and well-being and protecting him or her from harm. It follows from this purposefulness that the physician's autonomy is not to be protected for its own sake, but only insofar as it serves this end better than alternative means. We argue that today, given the existing limitations of AI support tools, physicians still need decision-making autonomy. For physicians to be able to exercise decision-making autonomy in the face of AI support, we elaborate three conditions: (1) sufficient information about the AI support and its statements, (2) sufficient competencies to integrate AI statements into clinical decision-making, and (3) a context of voluntariness that allows, in justified cases, deviations from the AI support. If the physician is to fulfill his or her moral obligation to promote the health and well-being of the patient, then the use of AI should be designed in such a way that it promotes, or at least maintains, the physician's decision-making autonomy.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, University Hospital and Faculty of Medicine, University of Tübingen, Tübingen, Germany
10
Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. Journal of Medical Ethics 2023; 50:6-11. [PMID: 37217277] [PMCID: PMC10803986] [DOI: 10.1136/jme-2022-108814]
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) appear highly promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges, and the preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research may help to clarify the conceptual debate and establish which of its aspects are relevant for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes of responsibility and decision-making authority when using ML-CDSSs. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees themselves describe as closely related: (self-)attribution of responsibility, decision-making authority and the need for (professional) experience. The results illustrate how professional responsibility is conceptually interconnected with the structural and epistemic preconditions that clinicians need in order to fulfil their responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSSs.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
- Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
11
Tokgöz P, Hafner J, Dockweiler C. [Factors influencing the implementation of AI-based decision support systems for antibiotic prescription in hospitals: a qualitative analysis from the perspective of health professionals]. Das Gesundheitswesen 2023; 85:1220-1228. [PMID: 37451276] [PMCID: PMC10713341] [DOI: 10.1055/a-2098-3108]
Abstract
BACKGROUND Decision support systems based on artificial intelligence might optimize antibiotic prescribing in hospitals and prevent the development of antimicrobial resistance. The aim of this study was to identify impeding and facilitating factors for successful implementation from the perspective of health professionals. METHODS Problem-centered individual interviews were conducted with health professionals working in hospitals. Data evaluation was based on structured qualitative content analysis according to Kuckartz. RESULTS The attitudes of health professionals are presented along the Human-Organization-Technology-fit model. Technological and organizational themes were the most important factors for system implementation. In particular, compatibility with existing systems and user-friendliness were seen as playing a major role in successful implementation. Additionally, the training of potential users and the technical equipment of the organization were considered essential. Finally, the importance of promoting the technical skills of potential users in the long term and of creating trust in the benefits of the system was highlighted. CONCLUSION The identified factors provide a basis for prioritizing and quantifying needs and attitudes in a next step. It becomes clear that, besides technological factors, attention to context-specific and user-related conditions is of fundamental importance to ensure successful implementation and trust in the system in the long term.
Affiliation(s)
- Pinar Tokgöz
- Department für Digitale Gesundheitswissenschaften und Biomedizin, Professur für Digital Public Health, Universität Siegen, Fakultät V Lebenswissenschaftliche Fakultät, Germany
- Jessica Hafner
- Department für Digitale Gesundheitswissenschaften und Biomedizin, Professur für Digital Public Health, Universität Siegen, Fakultät V Lebenswissenschaftliche Fakultät, Germany
- Christoph Dockweiler
- Department für Digitale Gesundheitswissenschaften und Biomedizin, Professur für Digital Public Health, Universität Siegen, Fakultät V Lebenswissenschaftliche Fakultät, Germany
12
Ten Have H, Gordijn B. Medicine and machines. Medicine, Health Care and Philosophy 2022; 25:165-166. [PMID: 35366171] [PMCID: PMC8976455] [DOI: 10.1007/s11019-022-10080-5]
Affiliation(s)
- Henk Ten Have
- Duquesne University, Pittsburgh, USA.
- Anahuac University, Mexico City, Mexico.