1
Maris MT, Koçar A, Willems DL, Pols J, Tan HL, Lindinger GL, Bak MAR. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. BMC Med Ethics 2024;25:42. PMID: 38575931. PMCID: PMC10996273. DOI: 10.1186/s12910-024-01042-y.
Abstract
BACKGROUND: The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, yet the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project, we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD).
AIM: To explore patients' perspectives on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD).
METHODS: Semi-structured, future scenario-based interviews were conducted with patients who had an ICD and/or a heart condition with increased risk of SCD in Germany (n = 9) and the Netherlands (n = 15). We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews.
RESULTS: Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as a second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants express significant concern about a further loss of the 'human touch' in healthcare when AI is introduced in clinical settings, an aspect of care they believe is already inadequately recognized in clinical practice. Participants assign doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient.
CONCLUSION: The 'human touch' that patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness and has clinical relevance in medical decision-making. Because it cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well suited to pave the way forward.
Affiliation(s)
- Menno T Maris
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Ayca Koçar
- Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Dick L Willems
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jeannette Pols
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Department of Anthropology, University of Amsterdam, Amsterdam, The Netherlands
- Hanno L Tan
- Department of Clinical and Experimental Cardiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Netherlands Heart Institute, Utrecht, The Netherlands
- Georg L Lindinger
- Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Marieke A R Bak
- Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
2
Funer F, Wiesing U. Physician's autonomy in the face of AI support: walking the ethical tightrope. Front Med (Lausanne) 2024;11:1324963. PMID: 38606162. PMCID: PMC11007068. DOI: 10.3389/fmed.2024.1324963.
Abstract
The introduction of AI support tools raises questions about the normative orientation of medical practice and the need to rethink its basic concepts. One concept central to this discussion is the physician's autonomy and its appropriateness in the face of high-powered AI applications. In this essay, the physician's autonomy is differentiated on the basis of a conceptual analysis. It is argued that the physician's decision-making autonomy is a purposeful autonomy: it is fundamentally anchored in the medical ethos for the purpose of promoting the patient's health and well-being and protecting him or her from harm. It follows from this purposefulness that the physician's autonomy is not to be protected for its own sake, but only insofar as it serves this end better than alternative means. We argue that today, given the existing limitations of AI support tools, physicians still need decision-making autonomy. For physicians to be able to exercise decision-making autonomy in the face of AI support, we elaborate three conditions: (1) sufficient information about the AI support and its statements, (2) sufficient competencies to integrate AI statements into clinical decision-making, and (3) a context of voluntariness that allows, in justified cases, deviations from AI support. If physicians are to fulfill their moral obligation to promote the health and well-being of the patient, then the use of AI should be designed in such a way that it promotes, or at least maintains, the physician's decision-making autonomy.
Affiliation(s)
- Florian Funer
- Institute for Ethics and History of Medicine, University Hospital and Faculty of Medicine, University of Tübingen, Tübingen, Germany
3
Ferrara M, Bertozzi G, Di Fazio N, Aquila I, Di Fazio A, Maiese A, Volonnino G, Frati P, La Russa R. Risk Management and Patient Safety in the Artificial Intelligence Era: A Systematic Review. Healthcare (Basel) 2024;12:549. PMID: 38470660. PMCID: PMC10931321. DOI: 10.3390/healthcare12050549.
Abstract
BACKGROUND: Healthcare systems are complex organizations in which multiple factors (the physical environment, human factors, technological devices, quality of care) interconnect to form a dense network whose imbalance can compromise patient safety. In this scenario, the need for hospitals to expand reactive and proactive clinical risk management programs is clear, and artificial intelligence (AI) fits well in this context. This systematic review investigates the state of the art regarding the impact of AI on clinical risk management processes. To simplify the analysis of the review outcomes and to enable standardized comparisons with subsequent studies, the findings are grouped according to the applicability of AI to the prevention of the different incident type groups defined by the International Classification for Patient Safety (ICPS).
MATERIALS AND METHODS: On 3 November 2023, a systematic review of the literature was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, using the SCOPUS and Medline (via PubMed) databases. A total of 297 articles were identified; after the selection process, 36 articles were included in the review.
RESULTS AND DISCUSSION: The included studies allowed for the identification of three main "incident type" domains: clinical process, healthcare-associated infection, and medication. Another relevant application of AI in clinical risk management concerns incident reporting.
CONCLUSIONS: AI can be applied transversely in various clinical contexts to enhance patient safety and facilitate the identification of errors. It appears to be a promising tool for improving clinical risk management, although its use requires human supervision and cannot completely replace human skills. To facilitate analysis of the present review's outcomes and enable comparison with future systematic reviews, a pre-existing taxonomy was used for the identification of adverse events. The results, however, highlight the usefulness of AI not only for risk prevention in clinical practice but also for improving an essential risk identification tool, incident reporting. For this reason, the taxonomy of the areas of application of AI to clinical risk processes should include an additional class for risk identification and analysis tools; the ICPS classification was considered convenient for this purpose.
Affiliation(s)
- Michela Ferrara
- Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University of Rome, 00161 Rome, Italy; (M.F.); (N.D.F.); (P.F.)
| | - Giuseppe Bertozzi
- Complex Intercompany Structure of Forensic Medicine, 85100 Potenza, Italy;
| | - Nicola Di Fazio
- Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University of Rome, 00161 Rome, Italy; (M.F.); (N.D.F.); (P.F.)
| | - Isabella Aquila
- Department of Medical and Surgical Sciences, University Magna Graecia of Catanzaro, 88100 Catanzaro, Italy;
| | - Aldo Di Fazio
- Regional Hospital “San Carlo”, 85100 Potenza, Italy;
| | - Aniello Maiese
- Department of Surgical Pathology, Medical, Molecular and Critical Area, Institute of Legal Medicine, University of Pisa, 56126 Pisa, Italy;
| | - Gianpietro Volonnino
- Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University of Rome, 00161 Rome, Italy; (M.F.); (N.D.F.); (P.F.)
| | - Paola Frati
- Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University of Rome, 00161 Rome, Italy; (M.F.); (N.D.F.); (P.F.)
| | - Raffaele La Russa
- Department of Clinical Medicine, Public Health, Life and Environment Science, University of L’Aquila, 67100 L’Aquila, Italy;
| |
Collapse
|
4
Ackerhans S, Huynh T, Kaiser C, Schultz C. Exploring the role of professional identity in the implementation of clinical decision support systems-a narrative review. Implement Sci 2024;19:11. PMID: 38347525. PMCID: PMC10860285. DOI: 10.1186/s13012-024-01339-x.
Abstract
BACKGROUND: Clinical decision support systems (CDSSs) have the potential to improve quality of care, patient safety, and efficiency because of their ability to perform medical tasks in a more data-driven, evidence-based, and semi-autonomous way. However, CDSSs may also affect the professional identity of health professionals. Some professionals might experience these systems as a threat to their professional identity, as CDSSs could partially substitute clinical competencies, autonomy, or control over the care process; others may experience an empowerment of their role in the medical system. The purpose of this study is to uncover the role of professional identity in CDSS implementation and to identify core human, technological, and organizational factors that may determine the effect of CDSSs on professional identity.
METHODS: We conducted a systematic literature review of peer-reviewed empirical studies from two electronic databases (PubMed, Web of Science) that reported on key factors in CDSS implementation and were published between 2010 and 2023. Our explorative, inductive thematic analysis assessed the antecedents of professional identity-related mechanisms from the perspective of different health care professionals (i.e., physicians, residents, nurse practitioners, pharmacists).
RESULTS: One hundred thirty-one qualitative, quantitative, or mixed-method studies from over 60 journals were included in this review. The thematic analysis found three dimensions of professional identity-related mechanisms that influence CDSS implementation success: perceived threat or enhancement of professional control and autonomy, perceived threat or enhancement of professional skills and expertise, and perceived loss or gain of control over patient relationships. At the technological level, the most common issues were the system's ability to fit into existing clinical workflows and organizational structures and to meet user needs. At the organizational level, time pressure and tension, as well as internal communication and involvement of end users, were most frequently reported. At the human level, individual attitudes and emotional responses, as well as familiarity with the system, most often influenced implementation. Our results show that professional identity-related mechanisms are driven by these factors and influence CDSS implementation success. The perception of the change in professional identity is shaped by the user's professional status and expertise and improves over the course of implementation.
CONCLUSION: This review highlights the need for health care managers to evaluate perceived threats to professional identity across all implementation phases when introducing a CDSS, and to consider their varying manifestations among different health care professionals. It also highlights the importance of innovation and change management approaches, such as involving health professionals in the design and implementation process to mitigate threat perceptions. We provide future areas of research for the evaluation of the professional identity construct within health care.
Affiliation(s)
- Sophia Ackerhans
- Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118 Kiel, Germany
- Thomas Huynh
- Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118 Kiel, Germany
- Carsten Kaiser
- Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118 Kiel, Germany
- Carsten Schultz
- Kiel Institute for Responsible Innovation, University of Kiel, Westring 425, 24118 Kiel, Germany
5
Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. J Med Ethics 2023;50:6-11. PMID: 37217277. PMCID: PMC10803986. DOI: 10.1136/jme-2022-108814.
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges, and the preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes in responsibility and decision-making authority when using ML-CDSSs. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed using qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes that the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority, and the need for (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and its structural and epistemic preconditions, which must be met for clinicians to fulfil their responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSSs.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
- Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
6
Flores-Balado Á, Castresana Méndez C, Herrero González A, Mesón Gutierrez R, de Las Casas Cámara G, Vila Cordero B, Arcos J, Pfang B, Martín-Ríos MD. Using artificial intelligence to reduce orthopedic surgical site infection surveillance workload: Algorithm design, validation, and implementation in 4 Spanish hospitals. Am J Infect Control 2023;51:1225-1229. PMID: 37100291. DOI: 10.1016/j.ajic.2023.04.165.
Abstract
BACKGROUND: Surgical site infection (SSI) surveillance is a labor-intensive endeavor. We present the design and validation of an algorithm for SSI detection after hip replacement surgery, and a report of its successful implementation in four public hospitals in Madrid, Spain.
METHODS: We designed a multivariable algorithm, AI-HPRO, using natural language processing (NLP) and extreme gradient boosting to screen for SSI in patients undergoing hip replacement surgery. The development and validation cohorts included data from 19,661 health care episodes from four hospitals in Madrid, Spain.
RESULTS: Positive microbiological cultures, the text variable "infection", and prescription of clindamycin were strong markers of SSI. Statistical analysis of the final model indicated high sensitivity (99.18%) and specificity (91.01%), with an F1-score of 0.32, an AUC of 0.989, an accuracy of 91.27%, and a negative predictive value of 99.98%.
DISCUSSION: Implementation of the AI-HPRO algorithm reduced surveillance time from 975 to 63.5 person-hours and permitted an 88.95% reduction in the total volume of clinical records to be reviewed manually. The model has a higher negative predictive value (99.98%) than algorithms relying on NLP alone (94%) or on NLP and logistic regression (97%).
CONCLUSIONS: This is the first report of an algorithm combining NLP and extreme gradient boosting to permit accurate, real-time orthopedic SSI surveillance.
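The metric profile reported here, near-perfect sensitivity and negative predictive value alongside a low F1-score (0.32), is characteristic of screening for a rare outcome: when true infections are scarce, even a modestly imperfect specificity produces enough false positives to swamp precision, which drags F1 down without affecting sensitivity. A minimal sketch of these confusion-matrix relationships follows; the function name and the counts are illustrative assumptions, not values from the study.

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)              # true SSIs that are flagged
    specificity = tn / (tn + fp)              # non-SSIs correctly cleared
    precision = tp / (tp + fp)                # flagged records that are true SSIs
    npv = tn / (tn + fn)                      # cleared records truly SSI-free
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "npv": npv,
            "accuracy": accuracy, "f1": f1}

# Hypothetical counts with ~1% SSI prevalence: 199 of 200 infections are
# flagged, but 800 false positives dilute precision, so F1 stays low even
# though sensitivity and NPV are near-perfect.
m = screening_metrics(tp=199, fp=800, tn=19000, fn=1)
```

With these counts, sensitivity is 0.995 and NPV exceeds 0.9999, while F1 is about 0.33, mirroring the pattern in the reported results. This is why a low F1 alone does not disqualify a screening tool whose purpose is to safely exclude records from manual review.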
Affiliation(s)
- Álvaro Flores-Balado
- Infection Control Department, Fundación Jiménez Díaz University Hospital, Madrid, Spain
- Beatriz Vila Cordero
- Infection Control Department, Rey Juan Carlos University Hospital, Móstoles, Comunidad de Madrid, Spain
- Javier Arcos
- Fundación Jiménez Díaz University Hospital, Madrid, Spain; UICO (Clinical and Organizational Innovation Unit), Quironsalud 4-H Network, Madrid, Spain
- Bernadette Pfang
- UICO (Clinical and Organizational Innovation Unit), Quironsalud 4-H Network, Madrid, Spain
7
Cresswell K, Rigby M, Magrabi F, Scott P, Brender J, Craven CK, Wong ZSY, Kukhareva P, Ammenwerth E, Georgiou A, Medlock S, De Keizer NF, Nykänen P, Prgomet M, Williams R. The need to strengthen the evaluation of the impact of Artificial Intelligence-based decision support systems on healthcare provision. Health Policy 2023;136:104889. PMID: 37579545. DOI: 10.1016/j.healthpol.2023.104889.
Abstract
Despite renewed interest in Artificial Intelligence-based clinical decision support systems (AI-CDSS), there is still a lack of empirical evidence supporting their effectiveness. This underscores the need for rigorous, continuous evaluation and monitoring of the processes and outcomes associated with the introduction of health information technology. We illustrate how the emergence of AI-CDSS has brought to the fore the critical importance of evaluation principles and action for all health information technology applications, which have hitherto received limited attention. Key aspects include assessing design, implementation and adoption contexts; ensuring that systems support and optimise human performance (which in turn requires understanding clinical and system logics); and ensuring that system design prioritises ethics, equity, effectiveness, and outcomes. Going forward, information technology strategy, implementation and assessment need to actively incorporate these dimensions. International policy makers, regulators and strategic decision makers in implementing organisations therefore need to be cognisant of these aspects and incorporate them when making decisions and prioritising investment. In particular, the emphasis needs to be on stronger, more evidence-based evaluation of system limitations and risks as well as optimisation of outcomes, whilst ensuring learning and contextual review. Otherwise, there is a risk that applications will be sub-optimally embedded in health systems, with unintended consequences and without yielding the intended benefits.
Affiliation(s)
- Kathrin Cresswell
- The University of Edinburgh, Usher Institute, Edinburgh, United Kingdom
- Michael Rigby
- Keele University, School of Social, Political and Global Studies and School of Primary, Community and Social Care, Keele, United Kingdom
- Farah Magrabi
- Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
- Philip Scott
- University of Wales Trinity Saint David, Swansea, United Kingdom
- Jytte Brender
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Catherine K Craven
- University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
- Zoie Shui-Yee Wong
- St. Luke's International University, Graduate School of Public Health, Tokyo, Japan
- Polina Kukhareva
- Department of Biomedical Informatics, University of Utah, United States of America
- Elske Ammenwerth
- UMIT TIROL, Private University for Health Sciences and Health Informatics, Institute of Medical Informatics, Hall in Tirol, Austria
- Andrew Georgiou
- Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
- Stephanie Medlock
- Amsterdam UMC location University of Amsterdam, Department of Medical Informatics, Meibergdreef 9, Amsterdam, the Netherlands; Amsterdam Public Health research institute, Digital Health and Quality of Care, Amsterdam, the Netherlands
- Nicolette F De Keizer
- Amsterdam UMC location University of Amsterdam, Department of Medical Informatics, Meibergdreef 9, Amsterdam, the Netherlands; Amsterdam Public Health research institute, Digital Health and Quality of Care, Amsterdam, the Netherlands
- Pirkko Nykänen
- Tampere University, Faculty for Information Technology and Communication Sciences, Finland
- Mirela Prgomet
- Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Robin Williams
- The University of Edinburgh, Institute for the Study of Science, Technology and Innovation, Edinburgh, United Kingdom
8
Mandair D, Reis-Filho JS, Ashworth A. Biological insights and novel biomarker discovery through deep learning approaches in breast cancer histopathology. NPJ Breast Cancer 2023;9:21. PMID: 37024522. PMCID: PMC10079681. DOI: 10.1038/s41523-023-00518-1.
Abstract
Breast cancer remains a highly prevalent disease, with considerable inter- and intra-tumoral heterogeneity complicating prognostication and treatment decisions. The utilization and depth of genomic, transcriptomic and proteomic data for cancer have exploded in recent times, and the addition of spatial context to this information, through an understanding of the correlated morphologic and spatial patterns of cells in tissue samples, has created an exciting frontier of research: histo-genomics. At the same time, deep learning (DL), a class of machine learning algorithms employing artificial neural networks, has progressed rapidly in the last decade through a confluence of technical developments, including the advent of modern graphics processing units (GPUs), allowing efficient implementation of increasingly complex architectures at scale; advances in the theoretical and practical design of network architectures; and access to larger datasets for training, all leading to sweeping advances in image classification and object detection. In this review, we examine recent developments in the application of DL to breast cancer histology, with particular emphasis on those producing biological insights or novel biomarkers, spanning the extraction of genomic information to the use of stroma to predict cancer recurrence, with the aim of suggesting avenues for further advancing this exciting field.
Affiliation(s)
- Divneet Mandair
- UCSF Helen Diller Family Comprehensive Cancer Center, San Francisco, CA 94158, USA
- Alan Ashworth
- UCSF Helen Diller Family Comprehensive Cancer Center, San Francisco, CA 94158, USA
9
Čartolovni A, Malešević A, Poslon L. Critical analysis of the AI impact on the patient-physician relationship: A multi-stakeholder qualitative study. Digit Health 2023;9:20552076231220833. PMID: 38130798. PMCID: PMC10734361. DOI: 10.1177/20552076231220833.
Abstract
Objective: This qualitative study presents the aspirations, expectations and a critical analysis of the potential of artificial intelligence (AI) to transform the patient-physician relationship, according to multi-stakeholder insight.
Methods: The study was conducted from June to December 2021, using anticipatory ethics and the sociology of expectations as theoretical frameworks. It focused on three groups of stakeholders directly involved in the adoption of AI in medicine (n = 38): physicians (n = 12), patients (n = 15) and healthcare managers (n = 11).
Results: Patients made up 40% of the sample (15/38), physicians 31% (12/38) and health managers 29% (11/38). The findings highlight: (1) the impact of AI on fundamental aspects of the patient-physician relationship and the underlying importance of a synergistic relationship between the physician and AI; (2) the potential for AI to alleviate workload and reduce administrative burden by saving time and putting the patient at the centre of the caring process; and (3) the potential risk to the holistic approach posed by neglecting humanness in healthcare.
Conclusions: This multi-stakeholder qualitative study, focused on the micro-level of healthcare decision-making, sheds new light on the impact of AI on healthcare and the potential transformation of the patient-physician relationship. The results highlight the need for a critically aware approach to implementing AI in healthcare, applying critical thinking and reasoning: clinicians should not rely solely on AI recommendations while neglecting clinical reasoning and physicians' knowledge of best clinical practices. Instead, it is vital that the core values of the existing patient-physician relationship, such as trust and honesty conveyed through open and sincere communication, are preserved.
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- School of Medicine, Catholic University of Croatia, Zagreb, Croatia
- Anamaria Malešević
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia
- Luka Poslon
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Zagreb, Croatia