1. Funer F, Tinnemeyer S, Liedtke W, Salloch S. Clinicians' roles and necessary levels of understanding in the use of artificial intelligence: A qualitative interview study with German medical students. BMC Med Ethics 2024; 25:107. PMID: 39375660; PMCID: PMC11457475; DOI: 10.1186/s12910-024-01109-w.
Abstract
BACKGROUND Artificial intelligence-driven Clinical Decision Support Systems (AI-CDSS) are increasingly being introduced into various domains of health care for diagnostic, prognostic, therapeutic and other purposes. A significant part of the discourse on ethically appropriate conditions relates to the levels of understanding and explicability needed to ensure responsible clinical decision-making when using AI-CDSS. Empirical evidence on stakeholders' viewpoints on these issues is so far scarce. The present study complements the empirical-ethical body of research by, on the one hand, investigating the requirements for understanding and explicability in depth with regard to the rationale behind them. On the other hand, it surveys medical students at the end of their studies as stakeholders, about whom little data are available so far, but for whom AI-CDSS will be an important part of their medical practice. METHODS Fifteen semi-structured qualitative interviews (each lasting an average of 56 min) were conducted with German medical students to investigate their perspectives and attitudes on the use of AI-CDSS. The problem-centred interviews drew on two hypothetical case vignettes of AI-CDSS employed in nephrology and surgery. Interviewees' perceptions and convictions regarding their own clinical role and responsibilities in dealing with AI-CDSS were elicited, as were their viewpoints on explicability and on the level of understanding and competencies needed on the clinicians' side. The qualitative data were analysed according to key principles of qualitative content analysis (Kuckartz). RESULTS In response to the central question about the necessary understanding of AI-CDSS tools and the emergence of their outputs, as well as the reasons for the requirements placed on them, two types of argumentation could be differentiated inductively from the interviewees' statements. The first type, the clinician as a systemic trustee (or "the one relying"), highlights that there needs to be empirical evidence and adequate approval processes that guarantee minimised harm and a clinical benefit from the employment of an AI-CDSS. Once these requirements are proven, the use of an AI-CDSS would be appropriate, as according to "the one relying", clinicians should choose those measures that statistically cause the least harm. The second type, the clinician as an individual expert (or "the one controlling"), sets higher prerequisites that go beyond ensuring empirical evidence and adequate approval processes. These prerequisites relate to the clinician's necessary level of competence and understanding of how a specific AI-CDSS works and how to use it properly, in order to evaluate its outputs and to mitigate potential risks for the individual patient. Both types are unified in their high esteem for evidence-based clinical practice and the need to communicate with the patient about the use of medical AI. However, the interviewees' different conceptions of the clinician's role and responsibilities lead them to different requirements regarding the clinician's understanding and the explicability of an AI-CDSS beyond the proof of benefit. CONCLUSIONS The study results highlight two different types among (future) clinicians regarding their view of the necessary levels of understanding and competence. These findings should inform the debate on appropriate training programmes and professional standards (e.g. clinical practice guidelines) that enable the safe and effective clinical employment of AI-CDSS in various clinical fields. While current approaches search for appropriate minimum requirements for understanding and competence, the differences described here between (future) clinicians' information and understanding needs could lead to more differentiated solutions.
Affiliation(s)
- F Funer
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany
- Institute for Ethics and History of Medicine, Eberhard Karls University Tübingen, Gartenstr. 47, 72074, Tübingen, Germany
- S Tinnemeyer
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany
- W Liedtke
- Faculty of Theology, University of Greifswald, Am Rubenowplatz 2/3, 17489, Greifswald, Germany
- S Salloch
- Institute for Ethics, History and Philosophy of Medicine, Hannover Medical School (MHH), Carl-Neuberg-Str. 1, 30625, Hannover, Germany
2. Marques M, Almeida A, Pereira H. The Medicine Revolution Through Artificial Intelligence: Ethical Challenges of Machine Learning Algorithms in Decision-Making. Cureus 2024; 16:e69405. PMID: 39411643; PMCID: PMC11473215; DOI: 10.7759/cureus.69405.
Abstract
The integration of artificial intelligence (AI) and its autonomous learning processes (machine learning) into medicine has revolutionized the global health landscape, providing faster and more accurate diagnoses, personalization of medical treatment, and efficient management of clinical information. However, this transformation is not without ethical challenges, which require a comprehensive and responsible approach. There are many fields where AI and medicine intersect, such as health education, the patient-doctor interface, data management, diagnosis, intervention, and decision-making processes, and for some of these fields guidelines already exist to regulate them. AI has numerous applications in medicine, including medical imaging analysis, diagnosis, predictive analytics for patient outcomes, drug discovery and development, virtual health assistants, and remote patient monitoring. It is also used in robotic surgery, clinical decision support systems, AI-powered chatbots for triage, administrative workflow automation, and treatment recommendations. Despite these numerous applications, the literature identifies several problems related to the use of AI in general and in medicine in particular: data privacy and security, bias and discrimination, lack of transparency (the black box problem), integration with existing systems, cost and accessibility disparities, risk of overconfidence in AI, technical limitations, accountability for AI errors, algorithmic interpretability, data standardization issues, unemployment, and challenges in clinical validation. Of these, the most worrying are data bias, the black box phenomenon, questions about data privacy, responsibility for decision-making, security issues for the human species, and technological unemployment. Several ethical problems remain associated with the use of AI autonomous learning algorithms, namely epistemic, normative, and overarching (comprehensive) ethical problems. Addressing all these issues is crucial to ensure that the use of AI in healthcare is implemented ethically and responsibly, providing benefits to populations without compromising fundamental values. Ongoing dialogue between healthcare providers and industry, the establishment of ethical guidelines and regulations, and consideration of not only current ethical dilemmas but also future perspectives are fundamental to the application of AI in medical practice. The purpose of this review is to discuss the ethical issues of AI algorithms used mainly in data management, diagnosis, intervention, and decision-making processes.
Affiliation(s)
- Marta Marques
- Anesthesiology, Centro Hospitalar Universitário São João, Porto, PRT
- Ana Almeida
- Anesthesiology, Centro Hospitalar Universitário São João, Porto, PRT
- Helder Pereira
- Surgery and Physiology, Faculty of Medicine, Universidade do Porto, Porto, PRT
3. Bratan T, Schneider D, Funer F, Heyen NB, Klausen A, Liedtke W, Lipprandt M, Salloch S, Langanke M. [Supporting medical and nursing activities with AI: recommendations for responsible design and use]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024; 67:1039-1046. PMID: 39017712; PMCID: PMC11349829; DOI: 10.1007/s00103-024-03918-1.
Abstract
Clinical decision support systems (CDSS) based on artificial intelligence (AI) are complex socio-technical innovations and are increasingly being used in medicine and nursing to improve the overall quality and efficiency of care, while also addressing limited financial and human resources. However, in addition to such intended clinical and organisational effects, far-reaching ethical, social and legal implications of AI-based CDSS for patient care and nursing are to be expected. To date, these normative-social implications have not been sufficiently investigated. The BMBF-funded project DESIREE (DEcision Support In Routine and Emergency HEalth Care: Ethical and Social Implications) has developed recommendations for the responsible design and use of clinical decision support systems. This article focuses primarily on ethical and social aspects of AI-based CDSS that could have a negative impact on patient health. Our recommendations are intended as additions to existing recommendations and are divided into the following action fields with relevance across all stakeholder groups: development, clinical use, information and consent, education and training, and (accompanying) research.
Affiliation(s)
- Tanja Bratan
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Germany
- Diana Schneider
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Germany
- Florian Funer
- Institut für Ethik, Geschichte und Philosophie der Medizin, Medizinische Hochschule Hannover (MHH), Hannover, Germany
- Institut für Ethik und Geschichte der Medizin, Eberhard Karls Universität Tübingen, Tübingen, Germany
- Nils B Heyen
- Competence Center Neue Technologien, Fraunhofer-Institut für System- und Innovationsforschung ISI, Breslauer Straße 48, 76139, Karlsruhe, Germany
- Andrea Klausen
- Uniklinik RWTH Aachen, Institut für Medizinische Informatik, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen, Germany
- Wenke Liedtke
- Theologische Fakultät, Universität Greifswald, Greifswald, Germany
- Myriam Lipprandt
- Uniklinik RWTH Aachen, Institut für Medizinische Informatik, Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen, Aachen, Germany
- Sabine Salloch
- Institut für Ethik, Geschichte und Philosophie der Medizin, Medizinische Hochschule Hannover (MHH), Hannover, Germany
- Martin Langanke
- Angewandte Ethik/Fachbereich Soziale Arbeit, Evangelische Hochschule Rheinland-Westfalen-Lippe, Bochum, Germany
4. Earp BD, Porsdam Mann S, Allen J, Salloch S, Suren V, Jongsma K, Braun M, Wilkinson D, Sinnott-Armstrong W, Rid A, Wendler D, Savulescu J. A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Am J Bioeth 2024; 24:13-26. PMID: 38226965; PMCID: PMC11248995; DOI: 10.1080/15265161.2023.2296402.
Abstract
When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such a PPP were more accurate, on average, than human surrogates in identifying patient preferences, the proposed algorithm would nevertheless fail to respect the patient's (former) autonomy since it draws on the 'wrong' kind of data: namely, data that are not specific to the individual patient and which therefore may not reflect their actual values, or their reasons for having the preferences they do. Taking such criticisms on board, we here propose a new approach: the Personalized Patient Preference Predictor (P4). The P4 is based on recent advances in machine learning, which allow technologies including large language models to be more cheaply and efficiently 'fine-tuned' on person-specific data. The P4, unlike the PPP, would be able to infer an individual patient's preferences from material (e.g., prior treatment decisions) that is in fact specific to them. Thus, we argue, in addition to being potentially more accurate at the individual level than the previously proposed PPP, the predictions of a P4 would also more directly reflect each patient's own reasons and values. In this article, we review recent discoveries in artificial intelligence research that suggest a P4 is technically feasible, and argue that, if it is developed and appropriately deployed, it should assuage some of the main autonomy-based concerns of critics of the original PPP. We then consider various objections to our proposal and offer some tentative replies.
Affiliation(s)
- Brian D. Earp
- University of Oxford
- National University of Singapore
- Yale University and The Hastings Center
- Karin Jongsma
- Julius Center of the University Medical Center Utrecht
- Dominic Wilkinson
- University of Oxford
- National University of Singapore
- John Radcliffe Hospital
- Murdoch Children’s Research Institute
5. Bouhouita-Guermech S, Haidar H. Scoping Review Shows the Dynamics and Complexities Inherent to the Notion of "Responsibility" in Artificial Intelligence within the Healthcare Context. Asian Bioeth Rev 2024; 16:315-344. PMID: 39022380; PMCID: PMC11250714; DOI: 10.1007/s41649-024-00292-7.
Abstract
The increasing integration of artificial intelligence (AI) in healthcare presents a host of ethical, legal, social, and political challenges involving various stakeholders. These challenges have prompted various studies proposing frameworks and guidelines to tackle them, emphasizing distinct phases of AI development, deployment, and oversight. As a result, the notion of responsible AI has become widespread, incorporating ethical principles such as transparency, fairness, responsibility, and privacy. This paper explores the existing literature on AI use in healthcare to examine how it addresses, defines, and discusses the concept of responsibility. We conducted a scoping review of literature related to AI responsibility in healthcare, searching databases and reference lists for publications between January 2017 and January 2022 using terms related to "responsibility" and "AI in healthcare", and their derivatives. Following screening, 136 articles were included. Data were grouped into four thematic categories: (1) the variety of terminology used to describe and address responsibility; (2) principles and concepts associated with responsibility; (3) stakeholders' responsibilities in AI clinical development, use, and deployment; and (4) recommendations for addressing responsibility concerns. The results show the lack of a clear definition of AI responsibility in healthcare and highlight the importance of ensuring the responsible development and implementation of AI in this field. Further research is necessary to clarify this notion and to contribute to developing frameworks regarding the types of responsibility (ethical/moral/professional, legal, and causal) of the various stakeholders involved in the AI lifecycle.
Affiliation(s)
- Hazar Haidar
- Ethics Programs, Department of Letters and Humanities, University of Quebec at Rimouski, Rimouski, Québec, Canada
6. Zeiser J. Owning Decisions: AI Decision-Support and the Attributability-Gap. Sci Eng Ethics 2024; 30:27. PMID: 38888795; PMCID: PMC11189344; DOI: 10.1007/s11948-024-00485-1.
Abstract
Artificial intelligence (AI) has long been recognised as a challenge to responsibility. Much of this discourse has been framed around robots, such as autonomous weapons or self-driving cars, where we arguably lack control over a machine's behaviour and therefore struggle to identify an agent that can be held accountable. However, most of today's AI is based on machine-learning technology that does not act on its own, but rather serves as a decision-support tool, automatically analysing data to help human agents make better decisions. I argue that decision-support tools pose a challenge to responsibility that goes beyond the familiar problem of finding someone to blame or punish for the behaviour of agent-like systems. Namely, they pose a problem for what we might call "decision ownership": they make it difficult to identify human agents to whom we can attribute value-judgements that are reflected in decisions. Drawing on recent philosophical literature on responsibility and its various facets, I argue that this is primarily a problem of attributability rather than of accountability. This particular responsibility problem comes in different forms and degrees, most obviously when an AI provides direct recommendations for actions, but also, less obviously, when it provides mere descriptive information on the basis of which a decision is made.
Affiliation(s)
- Jannik Zeiser
- Leibniz Universität Hannover, Institut für Philosophie, Im Moore 21, 30167, Hannover, Germany
7. Farah L, Borget I, Martelli N, Vallee A. Suitability of the Current Health Technology Assessment of Innovative Artificial Intelligence-Based Medical Devices: Scoping Literature Review. J Med Internet Res 2024; 26:e51514. PMID: 38739911; PMCID: PMC11130781; DOI: 10.2196/51514.
Abstract
BACKGROUND Artificial intelligence (AI)-based medical devices have garnered attention due to their potential to revolutionize medicine, yet a health technology assessment (HTA) framework suited to them is lacking. OBJECTIVE This study aims to analyze the suitability of each HTA domain for the assessment of AI-based medical devices. METHODS We conducted a scoping literature review following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology. We searched databases (PubMed, Embase, and Cochrane Library), gray literature, and HTA agency websites. RESULTS A total of 10.1% (78/775) of the references were included. Data quality and integration are vital aspects to consider when describing and assessing the technical characteristics of AI-based medical devices during an HTA process. When implementing specialized HTA for AI-based medical devices, several practical challenges and potential barriers should be taken into account: the pace of AI technological evolution, data requirements, complexity and transparency, clinical validation and safety requirements, regulatory and ethical considerations, and economic evaluation. CONCLUSIONS Adapting the HTA process through a methodological framework for AI-based medical devices enhances the comparability of results across different evaluations and jurisdictions. By defining the necessary expertise, the framework supports the development of a skilled workforce capable of conducting robust and reliable HTAs of AI-based medical devices. A comprehensive adapted HTA framework can provide valuable insights into the effectiveness, cost-effectiveness, and societal impact of these devices, guiding their responsible implementation and maximizing their benefits for patients and health care systems.
Affiliation(s)
- Line Farah
- Innovation Center for Medical Devices Department, Foch Hospital, Suresnes, France
- Groupe de Recherche et d'accueil en Droit et Economie de la Santé Department, University Paris-Saclay, Orsay, France
- Isabelle Borget
- Groupe de Recherche et d'accueil en Droit et Economie de la Santé Department, University Paris-Saclay, Orsay, France
- Department of Biostatistics and Epidemiology, Gustave Roussy, University Paris-Saclay, Villejuif, France
- Oncostat U1018, Inserm, Équipe Labellisée Ligue Contre le Cancer, University Paris-Saclay, Villejuif, France
- Nicolas Martelli
- Groupe de Recherche et d'accueil en Droit et Economie de la Santé Department, University Paris-Saclay, Orsay, France
- Pharmacy Department, Georges Pompidou European Hospital, Paris, France
- Alexandre Vallee
- Department of Epidemiology and Public Health, Foch Hospital, Suresnes, France
8. Funer F, Wiesing U. Physician's autonomy in the face of AI support: walking the ethical tightrope. Front Med (Lausanne) 2024; 11:1324963. PMID: 38606162; PMCID: PMC11007068; DOI: 10.3389/fmed.2024.1324963.
Abstract
The introduction of AI support tools raises questions about the normative orientation of medical practice and the need to rethink its basic concepts. One concept central to this discussion is the physician's autonomy and its appropriateness in the face of high-powered AI applications. In this essay, a differentiation of the physician's autonomy is made on the basis of a conceptual analysis. It is argued that the physician's decision-making autonomy is a purposeful autonomy: it is fundamentally anchored in the medical ethos for the purpose of promoting the patient's health and well-being and protecting him or her from harm. It follows from this purposefulness that the physician's autonomy is not to be protected for its own sake, but only insofar as it serves this end better than alternative means. We argue that today, given the existing limitations of AI support tools, physicians still need decision-making autonomy. For physicians to be able to exercise decision-making autonomy in the face of AI support, we elaborate three conditions: (1) sufficient information about the AI support and its statements, (2) sufficient competencies to integrate AI statements into clinical decision-making, and (3) a context of voluntariness that allows, in justified cases, deviations from AI support. If physicians are to fulfill their moral obligation to promote the health and well-being of the patient, then the use of AI should be designed in such a way that it promotes or at least maintains the physician's decision-making autonomy.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, University Hospital and Faculty of Medicine, University of Tübingen, Tübingen, Germany
9. Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. PMID: 38398374; PMCID: PMC10889757; DOI: 10.3390/jcm13041061.
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, namely those with a strong imaging component. Gynecology relies heavily on imaging, since it offers useful visual data on the female reproductive system and leads to a deeper understanding of pathophysiological concepts. So far, the applicability of artificial intelligence technologies has not been as noticeable in gynecologic imaging as in other medical fields. However, due to growing interest in this area, some studies have been performed with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to transform care across women's reproductive health. In this review, we aim to establish the current status of AI in gynecology and upcoming developments in this area, and to discuss the challenges facing its clinical implementation, namely the technological and ethical concerns surrounding technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Francisco Mendes
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
10. Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. J Med Ethics 2023; 50:6-11. PMID: 37217277; PMCID: PMC10803986; DOI: 10.1136/jme-2022-108814.
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) appear highly promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes of responsibility and decision-making authority when using ML-CDSSs. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and the need for (professional) experience. The results illustrate how professional responsibility is conceptually interconnected with the structural and epistemic preconditions that clinicians need in order to fulfil their responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSSs.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
- Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
11. El Naqa I, Karolak A, Luo Y, Folio L, Tarhini AA, Rollison D, Parodi K. Translation of AI into oncology clinical practice. Oncogene 2023; 42:3089-3097. PMID: 37684407; DOI: 10.1038/s41388-023-02826-z.
Abstract
Artificial intelligence (AI) is a transformative technology that is capturing the popular imagination and can revolutionize biomedicine. AI and machine learning (ML) algorithms have the potential to break through existing barriers in oncology research and practice, such as automating workflow processes, personalizing care, and reducing healthcare disparities. Emerging applications of AI/ML in the literature include screening and early detection of cancer, disease diagnosis, response prediction, prognosis, and accelerated drug discovery. Despite this excitement, only a few AI/ML models have been properly validated, and fewer still have become regulated products for routine clinical use. In this review, we highlight the main challenges impeding AI/ML clinical translation. We present different clinical use cases from the domains of radiology, radiation oncology, immunotherapy, and drug discovery in oncology, and dissect the unique challenges and opportunities associated with each. Finally, we summarize the general requirements for successful AI/ML implementation in the clinic, highlighting specific examples and points of emphasis, including the importance of multidisciplinary stakeholder collaboration, the role of domain experts in AI augmentation, the transparency of AI/ML models, and the establishment of a comprehensive quality assurance program to mitigate the risks of training bias and data drift, all culminating in safer and more beneficial AI/ML applications in oncology labs and clinics.
Affiliation(s)
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, 33612, USA
- Aleksandra Karolak
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, 33612, USA
- Yi Luo
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, 33612, USA
- Les Folio
- Diagnostic Imaging & Interventional Radiology, Moffitt Cancer Center, Tampa, FL, 33612, USA
- Ahmad A Tarhini
- Cutaneous Oncology and Immunology, Moffitt Cancer Center, Tampa, FL, 33612, USA
- Dana Rollison
- Department of Cancer Epidemiology, Moffitt Cancer Center, Tampa, FL, 33612, USA
- Katia Parodi
- Department of Medical Physics, Ludwig-Maximilians-Universität München, Munich, Germany
12. Hille EM, Hummel P, Braun M. Meaningful Human Control over AI for Health? A Review. J Med Ethics 2023:jme-2023-109095. PMID: 37730418; DOI: 10.1136/jme-2023-109095.
Abstract
Artificial intelligence is currently changing many areas of society. Especially in health, where critical decisions are made, questions of control must be renegotiated: who is in control when an automated system makes clinically relevant decisions? Increasingly, the concept of meaningful human control (MHC) is being invoked for this purpose. However, it is unclear exactly how this concept is to be understood in health. Through a systematic review, we present the current state of the concept of MHC in health. The results show that there is not yet a robust MHC concept for health. We propose a broader understanding of MHC along three strands of action: enabling, exercising and evaluating control. Taking into account these strands of action and the established rules and processes in the different health sectors, the MHC concept needs to be further developed to avoid falling into two gaps, which we have described as theoretical and labelling gaps.
Affiliation(s)
- Eva Maria Hille
- Chair of Social Ethics & Ethics of Technology, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
- Patrik Hummel
- Department of Industrial Engineering and Innovation Sciences, Eindhoven University of Technology, Eindhoven, The Netherlands
- Matthias Braun
- Chair of Social Ethics & Ethics of Technology, Rheinische Friedrich-Wilhelms-Universität Bonn, Bonn, Germany
13. Nair M, Andersson J, Nygren JM, Lundgren LE. Barriers and Enablers for Implementation of an Artificial Intelligence-Based Decision Support Tool to Reduce the Risk of Readmission of Patients With Heart Failure: Stakeholder Interviews. JMIR Form Res 2023; 7:e47335. PMID: 37610799; PMCID: PMC10483295; DOI: 10.2196/47335.
Abstract
BACKGROUND Artificial intelligence (AI) applications in health care are expected to provide value for health care organizations, professionals, and patients. However, the implementation of such systems should be carefully planned and organized in order to ensure quality, safety, and acceptance. The gathered views of different stakeholders are a valuable source of information for understanding the barriers and enablers to implementation in a specific context. OBJECTIVE This study aimed to understand the context and stakeholder perspectives related to the future implementation of a clinical decision support system for predicting readmissions of patients with heart failure. The study was part of a larger project involving model development, interface design, and implementation planning of the system. METHODS Interviews were held with 12 stakeholders from the regional and municipal health care organizations to gather their views on the potential effects of implementing such a decision support system, as well as the barriers and enablers for implementation. Data were analyzed based on the categories defined in the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) framework. RESULTS Stakeholders generally had a positive attitude toward and curiosity about AI-based decision support systems, and mentioned several barriers and enablers based on their experiences of previous implementations of information technology systems. Central aspects to consider for the proposed clinical decision support system were design aspects, access to information throughout the care process, and integration into the clinical workflow. The implementation of such a system could lead to a number of effects related to both clinical outcomes and resource allocation, all of which are important to address in implementation planning. Stakeholders nevertheless saw value in several aspects of implementing such a system, emphasizing the increased quality of life for those patients who can avoid being hospitalized. CONCLUSIONS Several ideas were put forward on how the proposed AI system could affect and provide value for patients, professionals, and the organization, and implementation aspects were an important part of that. A successful system can help clinicians prioritize the need for different types of treatments, but it can also be used for planning purposes within the hospital. However, the system needs not only technological and clinical precision but also a carefully planned implementation process, one that takes into consideration all the categories of the NASSS framework. This study further highlights the importance of studying stakeholder needs early in the development, design, and implementation of decision support systems, as the data revealed new information on the potential use of the system and the placement of the application in the care process.
Affiliation(s)
- Monika Nair
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens M Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Lina E Lundgren
- School of Business, Innovation and Sustainability, Halmstad University, Halmstad, Sweden
14. Liu X, Barreto EF, Dong Y, Liu C, Gao X, Tootooni MS, Song X, Kashani KB. Discrepancy between perceptions and acceptance of clinical decision support systems: implementation of artificial intelligence for vancomycin dosing. BMC Med Inform Decis Mak 2023; 23:157. PMID: 37568134; PMCID: PMC10416522; DOI: 10.1186/s12911-023-02254-9.
Abstract
BACKGROUND Artificial intelligence (AI) tools are more effective if accepted by clinicians. We developed an AI-based clinical decision support system (CDSS) to facilitate vancomycin dosing. This qualitative study assesses clinicians' perceptions regarding its implementation. METHODS Thirteen semi-structured interviews were conducted with critical care pharmacists at Mayo Clinic (Rochester, MN) from March through April 2020. Eight clinical cases were discussed with each pharmacist (N = 104). After the pharmacists' initial responses, we revealed the CDSS recommendations to assess their reactions and feedback. Interviews were audio-recorded, transcribed, and summarized. RESULTS The participants reported considerable time and effort invested daily in individualizing vancomycin therapy for hospitalized patients. Most pharmacists agreed that such a CDSS could favorably affect (8/13, 62%) or enhance (9/13, 69%) their ability to make vancomycin dosing decisions. In the case-based evaluations, pharmacists' empiric doses differed from the CDSS recommendation in most cases (88/104, 85%). After the CDSS recommendations were revealed, 78% (69/88) of doses remained discrepant, and in these discrepant cases the pharmacists indicated they would not alter their recommendations. The reasons for declining the CDSS recommendation were general distrust of CDSSs, lack of dynamic evaluation and in-depth analysis, inability to integrate all clinical data, and lack of a risk index. CONCLUSION While pharmacists acknowledged enthusiasm about the advantages of AI-based models for improving drug dosing, they were reluctant to integrate the tool into clinical practice. Additional research is necessary to determine how to implement CDSSs at the point of care in a way that is acceptable to clinicians and effective at improving patient outcomes.
Affiliation(s)
- Xinyan Liu
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN, 55905, USA
- ICU, DongE Hospital Affiliated to Shandong First Medical University, Liaocheng, Shandong, 252200, China
- Erin F Barreto
- Department of Pharmacy, Mayo Clinic, Rochester, MN, 55905, USA
- Yue Dong
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN, 55905, USA
- Chang Liu
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Critical Care Medicine, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, 430071, China
- Xiaolan Gao
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Critical Care Medicine, Division of Life Sciences and Medicine, The First Affiliated Hospital of USTC, University of Science and Technology of China, Hefei, Anhui, 230001, China
- Mohammad Samie Tootooni
- Health Informatics and Data Science, Health Sciences Campus, Loyola University, Chicago, IL, 60611, USA
- Xuan Song
- ICU, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, 250098, China
- Kianoush B Kashani
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Mayo Clinic, Rochester, MN, 55905, USA
- Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
15. Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023; 15:e43262. PMID: 37692617; PMCID: PMC10492220; DOI: 10.7759/cureus.43262.
Abstract
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility for AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities, as biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients is needed to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
Affiliation(s)
- Madhan Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sangeetha Balaji
- Orthopedics, Government Medical College, Omandurar Government Estate, Chennai, IND
- Naveen Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
16. Braun M, Bleher H, Hille EM, Krutzinna J. Tackling Structural Injustices: On the Entanglement of Visibility and Justice in Emerging Technologies. Am J Bioeth 2023; 23:100-102. PMID: 37339313; DOI: 10.1080/15265161.2023.2207514.
17. Bleher H, Braun M. Reflections on Putting AI Ethics into Practice: How Three AI Ethics Approaches Conceptualize Theory and Practice. Sci Eng Ethics 2023; 29:21. PMID: 37237246; DOI: 10.1007/s11948-023-00443-3.
Abstract
Critics currently argue that applied ethics approaches to artificial intelligence (AI) are too principles-oriented and entail a theory-practice gap. Several applied ethical approaches try to prevent such a gap by conceptually translating ethical theory into practice. In this article, we explore how the currently most prominent approaches of AI ethics translate ethics into practice. To this end, we examine three approaches to applied AI ethics: the embedded ethics approach, the ethically aligned approach, and the Value Sensitive Design (VSD) approach. We analyze each of these approaches by asking how they understand and conceptualize theory and practice, and we outline their conceptual strengths as well as their shortcomings: the embedded ethics approach is context-oriented but risks being biased by that context; ethically aligned approaches are principles-oriented but lack justification theories to deal with trade-offs between competing principles; and the interdisciplinary Value Sensitive Design approach is based on stakeholder values but needs linkage to political, legal, or social governance aspects. Against this background, we develop a meta-framework for applied AI ethics conceptions with three dimensions. Based on critical theory, we suggest these dimensions as starting points for critically reflecting on the conceptualization of theory and practice. We claim, first, that including the dimension of affects and emotions in the ethical decision-making process stimulates reflection on vulnerabilities, experiences of disregard, and marginalization already within the AI development process. Second, we derive from our analysis that considering the dimension of justifying normative background theories provides standards and criteria as well as guidance for prioritizing or evaluating competing principles in cases of conflict. Third, we argue that reflecting the governance dimension in ethical decision-making is important for revealing power structures as well as for realizing ethical AI and its application, because this dimension seeks to combine social, legal, technical, and political concerns. This meta-framework can thus serve as a reflective tool for understanding, mapping, and assessing the theory-practice conceptualizations within AI ethics approaches, and for addressing and overcoming their blind spots.
Affiliation(s)
- Hannah Bleher
- Chair of Social Ethics and Ethics of Technology, University of Bonn, Rabinstraße 8, 53111, Bonn, Germany
- Matthias Braun
- Chair of Social Ethics and Ethics of Technology, University of Bonn, Rabinstraße 8, 53111, Bonn, Germany
18. Scherer L, Kuss M, Nahm W. Review of Artificial Intelligence-Based Signal Processing in Dialysis: Challenges for Machine-Embedded and Complementary Applications. Adv Kidney Dis Health 2023; 30:40-46. PMID: 36723281; DOI: 10.1053/j.akdh.2022.11.002.
Abstract
Artificial intelligence technology is trending in nearly every medical area, offering the possibility of improving analytics, therapy outcomes, and the user experience during therapy. In dialysis, the application of artificial intelligence as a therapy-individualization tool is led more by start-ups than by established players, and innovation in dialysis seems comparatively stagnant. Factors such as technical requirements and regulatory processes are important and necessary, but they can slow down the implementation of artificial intelligence when data infrastructure is missing and approval processes are undefined. Current research focuses mainly on analyzing health records or wearable technology to add to existing health data; it barely uses signal data from treatment devices in artificial intelligence models. This article therefore discusses the requirements for signal processing through artificial intelligence in health care and compares them with the status quo in dialysis therapy. It offers solutions to the given barriers in order to speed up innovation with sensor data, opening access to existing and untapped sources, and it shows the unique advantage of signal processing in dialysis compared with other health care domains. This research shows that even though combining different data is vital for improving patients' therapy, adding signal-based treatment data from dialysis devices to the picture can benefit the understanding of treatment dynamics, improving and individualizing therapy.
Affiliation(s)
- Lena Scherer
- Karlsruhe Institute of Technology, Karlsruhe, Germany
- Werner Nahm
- Karlsruhe Institute of Technology, Karlsruhe, Germany
19. Garrett MD. Critical Age Theory: Institutional Abuse of Older People in Health Care. Eur J Med Health Sci 2022; 4:24-37. DOI: 10.24018/ejmed.2022.4.6.1540.
Abstract
Theories of elder abuse focus on the characteristics of the victim, the perpetrator, and the context of abuse. Although all three factors play a role, we are biased toward seeing individual misbehavior as the primary, even sole, cause of abuse: we hold individuals responsible. Examining abuses across a spectrum of healthcare services (the use of medications, Assisted Living, Skilled Nursing Facilities/nursing homes, hospices, hospitals, and Medicare Advantage programs) suggests instead that abuse is more likely to stem from institutional culture. This study highlights multiple institutional abuses that consistently result in the harm and death of older adults. The results show that when profit is increased, standards of care are diminished and abuse ensues. Assigning responsibility to the management of healthcare therefore becomes a priority in reducing this level of abuse. However, there are biases that stop us from assigning blame to institutions. Individual healthcare workers adhere to work protocols and rationalize negative outcomes as inevitable or as due to the vulnerability and frailty of older people. New employees are socialized into this culture, which diminishes the needs of the individual patient in favor of the priorities dictated by management protocol. In addition, the public is focused on assigning blame to individuals; once an individual is blamed, people do not look beyond that to understand the context of abuse, a context generated by healthcare facilities maximizing profit and degrading patient care. Regulatory agencies such as the U.S. DHHS, the CDC, State Public Health Agencies, State/City Elder Abuse units, and Ombudsmen Programs all collude, for multiple reasons, in diminishing institutional responsibility.
20. von Ulmenstein U, Tretter M, Ehrlich DB, Lauppert von Peharnik C. Limiting medical certainties? Funding challenges for German and comparable public healthcare systems due to AI prediction and how to address them. Front Artif Intell 2022; 5:913093. PMID: 35978652; PMCID: PMC9376350; DOI: 10.3389/frai.2022.913093.
Abstract
Current technological and medical advances lend substantial momentum to efforts to attain new medical certainties. Artificial intelligence can enable unprecedented precision and capability in forecasting the health conditions of individuals. But, as we lay out, this novel access to medical information threatens to exacerbate adverse selection in the health insurance market. We conduct an interdisciplinary conceptual analysis to study how this risk might be averted, considering legal, ethical, and economic angles. We ask whether banning or limiting AI and its medical use, or limiting medical certainties themselves, would be viable and effective, and find that neither limitation-based approach provides an entirely sufficient resolution. Hence, we argue that this challenge must not be neglected in future discussions of medical applications of AI forecasting and should be addressed on a structural level, and we encourage further research on the topic.
Affiliation(s)
- Max Tretter
- Department of Systematic Theology, Friedrich Alexander University of Erlangen Nuremberg, Erlangen, Bavaria, Germany
- David B. Ehrlich
- Department of Economics and Management, Karlsruhe Institute of Technology (KIT), Karlsruhe, Baden-Württemberg, Germany
21. Segkouli S, Fico G, Vera-Muñoz C, Lecumberri M, Voulgaridis A, Triantafyllidis A, Sala P, Nunziata S, Campanini N, Montanari E, Morton S, Duclos A, Cocchi F, Nava MD, de Lorenzo T, Chalkia E, Loukea M, Colomer JBM, Dafoulas GE, Guillén S, Arredondo Waldmeyer MT, Votis K. Ethical Decision Making in IoT Data-Driven Research: A Case Study of a Large-Scale Pilot. Healthcare (Basel) 2022; 10:957. PMID: 35628094; PMCID: PMC9141539; DOI: 10.3390/healthcare10050957.
Abstract
IoT technologies generate intelligence and connectivity and develop knowledge to be used in decision-making processes. However, research that uses big data through globally interconnected infrastructures, such as the ‘Internet of Things’ (IoT) for Active and Healthy Ageing (AHA), is fraught with several ethical concerns. Large-scale applications of IoT operating in diverse piloting contexts and case studies need to be orchestrated by a robust framework to guide ethical and sustainable decision making with respect to data management in AHA and IoT-based solutions. The main objective of the current article is to present the successful completion of a collaborative, multiscale research effort that addressed the complicated exercise of ethical decision making in IoT smart ecosystems for older adults. Our results reveal that among the strong enablers of the proposed ethical decision support model were its participatory and deliberative procedures, complemented by a set of regulatory and non-regulatory tools to operationalize core ethical values such as transparency, trust, and fairness in real care settings for older adults and their caregivers.
Affiliation(s)
- Sofia Segkouli
- Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece
- Giuseppe Fico
- Life Supporting Technologies, E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain
- Cecilia Vera-Muñoz
- Life Supporting Technologies, E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain
- Antonis Voulgaridis
- Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece
- Andreas Triantafyllidis
- Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece
- Pilar Sala
- Mysphera SL, 46980 Paterna, Spain
- ITACA Institute, Universitat Politècnica València, 46022 Valencia, Spain
- Nadia Campanini
- Azienda Unita’ Sanitaria Locale Di Parma, 43125 Parma, Italy
- Enrico Montanari
- Azienda Unita’ Sanitaria Locale Di Parma, 43125 Parma, Italy
- Alexandre Duclos
- Centre Expert en Technologies et Services pour le Maintien en Autonomie a Domicile des Personnes Agees, 75015 Paris, France
- Francesca Cocchi
- Azienda Unita’ Sanitaria Locale Di Parma, 43125 Parma, Italy
- Eleni Chalkia
- Centre for Research and Technology Hellas, Hellenic Institute of Transport, 57001 Thessaloniki, Greece
- Matina Loukea
- Centre for Research and Technology Hellas, Hellenic Institute of Transport, 57001 Thessaloniki, Greece
- Juan Bautista Montalvá Colomer
- Life Supporting Technologies, E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain
- María Teresa Arredondo Waldmeyer
- Life Supporting Technologies, E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid, 28040 Madrid, Spain
- Konstantinos Votis
- Centre for Research and Technology Hellas, Information Technologies Institute, 57001 Thessaloniki, Greece