1
Witkowski K, Okhai R, Neely SR. Public perceptions of artificial intelligence in healthcare: ethical concerns and opportunities for patient-centered care. BMC Med Ethics 2024; 25:74. [PMID: 38909180; PMCID: PMC11193174; DOI: 10.1186/s12910-024-01066-4]
Abstract
BACKGROUND In an effort to improve the quality of medical care, the philosophy of patient-centered care has become integrated into almost every aspect of the medical community. Despite its widespread acceptance among patients and practitioners, there are concerns that rapid advancements in artificial intelligence may threaten elements of patient-centered care, such as personal relationships with care providers and patient-driven choices. This study explores the extent to which patients are confident in and comfortable with the use of these technologies in their own individual care and identifies areas that may align with or threaten elements of patient-centered care. METHODS An exploratory, mixed-method approach was used to analyze survey data from 600 US-based adults in the State of Florida. The survey was administered through a leading market research provider (August 10-21, 2023), and responses were collected to be representative of the state's population based on age, gender, race/ethnicity, and political affiliation. RESULTS Respondents were more comfortable with the use of AI in health-related tasks that were not associated with doctor-patient relationships, such as scheduling patient appointments or follow-ups (84.2%). Fear of losing the 'human touch' associated with doctors was a common theme within qualitative coding, suggesting a potential conflict between the implementation of AI and patient-centered care. In addition, decision self-efficacy was associated with higher levels of comfort with AI, but there were also concerns about losing decision-making control, workforce changes, and cost. A small majority of participants mentioned that AI could be useful for doctors and lead to more equitable care, but only when used within limits. CONCLUSION The application of AI in medical care is rapidly advancing, but oversight, regulation, and guidance addressing critical aspects of patient-centered care are lacking. While there is no evidence at this time that AI will undermine patient-physician relationships, patients are concerned about the application of AI within medical care, specifically as it relates to their interactions with physicians. Medical guidance on incorporating AI while adhering to the principles of patient-centered care is needed to clarify how AI will augment medical care.
2
Di Sarno L, Caroselli A, Tonin G, Graglia B, Pansini V, Causio FA, Gatto A, Chiaretti A. Artificial Intelligence in Pediatric Emergency Medicine: Applications, Challenges, and Future Perspectives. Biomedicines 2024; 12:1220. [PMID: 38927427; PMCID: PMC11200597; DOI: 10.3390/biomedicines12061220]
Abstract
The dawn of artificial intelligence (AI) in healthcare stands as a milestone in medical innovation. Different medical fields are heavily involved, and pediatric emergency medicine is no exception. We conducted a narrative review structured in two parts. The first part explores the theoretical principles of AI, providing the background needed to feel confident with these new state-of-the-art tools. The second part presents an informative analysis of AI models in pediatric emergencies. We examined PubMed and the Cochrane Library from inception up to April 2024. Key applications include triage optimization, predictive models for traumatic brain injury assessment, and computerized sepsis prediction systems. In each of these domains, AI models outperformed standard methods. The main barriers to widespread adoption include technological challenges, but also ethical issues, age-related differences in data interpretation, and the paucity of comprehensive datasets in the pediatric context. Feasible future research directions should address the validation of models through prospective datasets with larger patient sample sizes. Furthermore, our analysis shows that it is essential to tailor AI algorithms to specific medical needs, which requires a close partnership between clinicians and developers. Building a shared knowledge platform is therefore a key step.
Affiliation(s)
- Lorenzo Di Sarno
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
  - The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy
- Anya Caroselli
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Giovanna Tonin
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, 00168 Rome, Italy
- Benedetta Graglia
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Valeria Pansini
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, 00168 Rome, Italy
- Francesco Andrea Causio
  - The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy
  - Section of Hygiene and Public Health, Department of Life Sciences and Public Health, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
- Antonio Gatto
  - The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, 00168 Rome, Italy
- Antonio Chiaretti
  - Department of Pediatrics, Fondazione Policlinico Universitario “A. Gemelli” IRCCS, Università Cattolica del Sacro Cuore, 00168 Rome, Italy
  - The Italian Society of Artificial Intelligence in Medicine (SIIAM), 00165 Rome, Italy
3
Mainz JT. Medical AI: is trust really the issue? J Med Ethics 2024; 50:349-350. [PMID: 37495363; DOI: 10.1136/jme-2023-109414]
Abstract
I discuss an influential argument put forward by Hatherley in the Journal of Medical Ethics. Drawing on influential philosophical accounts of interpersonal trust, Hatherley claims that medical artificial intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician to do what she is entrusted to do. I make three objections to Hatherley's claims: (1) at least one philosophical account of interagent trust implies that medical AI is capable of being trustworthy; (2) even if this account should ultimately be rejected, it does not matter much, because what we mostly care about is that medical AI is reliable; and (3) it is false that trust in itself generates moral obligations on behalf of the trustee.
4
Sparrow R, Hatherley J, Oakley J, Bain C. Should the Use of Adaptive Machine Learning Systems in Medicine be Classified as Research? Am J Bioeth 2024:1-12. [PMID: 38662360; DOI: 10.1080/15265161.2024.2337429]
Abstract
A novel advantage of the use of machine learning (ML) systems in medicine is their potential to continue learning from new data after implementation in clinical practice. To date, considerations of the ethical questions raised by the design and use of adaptive machine learning systems in medicine have, for the most part, been confined to discussion of the so-called "update problem," which concerns how regulators should approach systems whose performance and parameters continue to change even after they have received regulatory approval. In this paper, we draw attention to a prior ethical question: whether the continuous learning that will occur in such systems after their initial deployment should be classified, and regulated, as medical research. We argue that there is a strong prima facie case that the use of continuous learning in medical ML systems should be categorized, and regulated, as research and that individuals whose treatment involves such systems should be treated as research subjects.
5
Shafiabady N, Hadjinicolaou N, Hettikankanamage N, MohammadiSavadkoohi E, Wu RMX, Vakilian J. eXplainable Artificial Intelligence (XAI) for improving organisational regility. PLoS One 2024; 19:e0301429. [PMID: 38656983; PMCID: PMC11042710; DOI: 10.1371/journal.pone.0301429]
Abstract
Since the pandemic started, organisations have been actively seeking ways to improve their organisational agility and resilience (regility), turning to Artificial Intelligence (AI) as a critical enabler of these goals. AI empowers organisations by analysing large data sets quickly and accurately, enabling faster decision-making and building agility and resilience. This strategic use of AI gives businesses a competitive advantage and allows them to adapt to rapidly changing environments. Failure to prioritise agility and responsiveness can result in increased costs, missed opportunities, competition and reputational damage, and ultimately, loss of customers, revenue, profitability, and market share. Prioritisation can be supported by eXplainable Artificial Intelligence (XAI) techniques, which illuminate how AI models make decisions and make them transparent, interpretable, and understandable. Building on previous research on using AI to predict organisational agility, this study focuses on integrating XAI techniques, such as Shapley Additive Explanations (SHAP), into organisational agility and resilience prediction. By identifying the importance of the different features that affect organisational agility prediction, this study aims to demystify the decision-making processes of the prediction model using XAI. This is essential for the ethical deployment of AI, fostering trust and transparency in these systems. Recognising the key features in organisational agility prediction can guide companies in determining which areas to concentrate on to improve their agility and resilience.
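The SHAP technique mentioned in this abstract attributes a model's prediction to individual input features using Shapley values from cooperative game theory. A minimal sketch of exact Shapley attribution for a toy "agility score" model (the feature names and weights are illustrative assumptions, not values from the study; real SHAP implementations approximate these sums efficiently for complex models):

```python
# Exact Shapley values for a toy additive "agility score" model.
# Hypothetical features and weights -- purely illustrative.
from itertools import combinations
from math import factorial

def model(present_features):
    # Toy scoring function: weighted sum of the features that are present.
    weights = {"data_driven": 0.5, "flat_hierarchy": 0.3, "automation": 0.2}
    return sum(weights[f] for f in present_features)

def shapley_value(feature, all_features):
    # Average marginal contribution of `feature` over all orderings,
    # computed via the standard subset-weighted formula.
    n = len(all_features)
    others = [f for f in all_features if f != feature]
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (model(set(subset) | {feature}) - model(set(subset)))
    return total

features = ["data_driven", "flat_hierarchy", "automation"]
attributions = {f: shapley_value(f, features) for f in features}
```

For an additive model like this toy one, each feature's Shapley value equals its weight, and the attributions sum to the full model output; for non-additive models the values capture interaction effects as well.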
Affiliation(s)
- Niusha Shafiabady
  - Faculty of Science and Technology, Charles Darwin University, Haymarket, New South Wales, Australia
- Nick Hadjinicolaou
  - Adelaide Institute of Higher Education, Adelaide, South Australia, Australia
- Robert M. X. Wu
  - Faculty of Engineering and Information Technology, University of Technology Sydney, Broadway, New South Wales, Australia
- James Vakilian
  - Faculty of Science and Technology, Charles Darwin University, Haymarket, New South Wales, Australia
6
Maris MT, Koçar A, Willems DL, Pols J, Tan HL, Lindinger GL, Bak MAR. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. BMC Med Ethics 2024; 25:42. [PMID: 38575931; PMCID: PMC10996273; DOI: 10.1186/s12910-024-01042-y]
Abstract
BACKGROUND The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project, we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). AIM To explore the perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). METHODS Semi-structured, future scenario-based interviews were conducted among patients who had an ICD and/or a heart condition with increased risk of SCD in Germany (n = 9) and the Netherlands (n = 15). We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews. RESULTS Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as a second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants in our study express significant concerns about the further loss of the 'human touch' in healthcare when AI is introduced in clinical settings. They believe that this aspect of care is currently inadequately recognized in clinical practice. Participants attribute to doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient. CONCLUSION The 'human touch' that patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness and has clinical relevance in medical decision-making. Because this cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well-suited to pave the way forward.
Affiliation(s)
- Menno T Maris
  - Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Ayca Koçar
  - Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Dick L Willems
  - Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Jeannette Pols
  - Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
  - Department of Anthropology, University of Amsterdam, Amsterdam, The Netherlands
- Hanno L Tan
  - Department of Clinical and Experimental Cardiology, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
  - Netherlands Heart Institute, Utrecht, The Netherlands
- Georg L Lindinger
  - Institute for Healthcare Management and Health Sciences, University of Bayreuth, Bayreuth, Germany
- Marieke A R Bak
  - Department of Ethics, Law and Humanities, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
  - Institute of History and Ethics in Medicine, TUM School of Medicine, Technical University of Munich, Munich, Germany
7
Dimitri P, Savage MO. Artificial intelligence in paediatric endocrinology: conflict or cooperation. J Pediatr Endocrinol Metab 2024; 37:209-221. [PMID: 38183676; DOI: 10.1515/jpem-2023-0554]
Abstract
Artificial intelligence (AI) in medicine is transforming healthcare by automating system tasks, assisting in diagnostics, predicting patient outcomes and personalising patient care, founded on the ability to analyse vast datasets. In paediatric endocrinology, AI has been developed for diabetes (insulin dose adjustment, detection of hypoglycaemia and retinopathy screening); bone age assessment and thyroid nodule screening; the identification of growth disorders; the diagnosis of precocious puberty; and the use of facial recognition algorithms in conditions such as Cushing syndrome, acromegaly, congenital adrenal hyperplasia and Turner syndrome. AI can also identify those most at risk of childhood obesity, stratifying future interventions to modify lifestyle. AI will facilitate personalised healthcare by integrating data from 'omics' analysis, lifestyle tracking, medical history, laboratory and imaging results, therapy response and treatment adherence from multiple sources. As data acquisition and processing become fundamental, data privacy and the protection of children's health data are crucial. Minimising the algorithmic bias generated by AI analysis of the rare conditions seen in paediatric endocrinology is an important determinant of AI validity in clinical practice. AI cannot create the patient-doctor relationship or assess the wider holistic determinants of care. Children have individual needs and vulnerabilities and are considered in the context of family relationships and dynamics. Importantly, whilst AI provides value by augmenting efficiency and accuracy, it must not be used to replace clinical skills.
Affiliation(s)
- Paul Dimitri
  - Department of Paediatric Endocrinology, Sheffield Children's NHS Foundation Trust, Sheffield, UK
- Martin O Savage
  - Centre for Endocrinology, William Harvey Research Institute, Barts and the London School of Medicine & Dentistry, Queen Mary University of London, London, UK
8
Lawton T, Morgan P, Porter Z, Hickey S, Cunningham A, Hughes N, Iacovides I, Jia Y, Sharma V, Habli I. Clinicians risk becoming 'liability sinks' for artificial intelligence. Future Healthc J 2024; 11:100007. [PMID: 38646041; PMCID: PMC11025047; DOI: 10.1016/j.fhj.2024.100007]
Affiliation(s)
- Tom Lawton
  - Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
  - Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Phillip Morgan
  - York Law School, University of York, Heslington, York YO10 5DD, UK
- Zoe Porter
  - Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Shireen Hickey
  - Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Alice Cunningham
  - Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Nathan Hughes
  - Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Ioanna Iacovides
  - Department of Computer Science, University of York, Heslington, York YO10 5DD, UK
- Yan Jia
  - Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
- Vishal Sharma
  - Improvement Academy, Bradford Institute for Health Research, Bradford Royal Infirmary, Duckworth Lane, Bradford BD9 6RJ, UK
- Ibrahim Habli
  - Assuring Autonomy International Programme, University of York, Heslington, York YO10 5DD, UK
9
Savulescu J, Giubilini A, Vandersluis R, Mishra A. Ethics of artificial intelligence in medicine. Singapore Med J 2024; 65:150-158. [PMID: 38527299; PMCID: PMC7615805; DOI: 10.4103/singaporemedj.smj-2023-279]
Abstract
This article reviews the main ethical issues that arise from the use of artificial intelligence (AI) technologies in medicine. Issues around trust, responsibility, risks of discrimination, privacy, autonomy, and potential benefits and harms are assessed. For better or worse, AI is a promising technology that can revolutionise healthcare delivery. It is up to us to make AI a tool for good by ensuring that ethical oversight accompanies the design, development and implementation of AI technology in clinical practice.
Affiliation(s)
- Julian Savulescu
  - Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Alberto Giubilini
  - Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Robert Vandersluis
  - Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Abhishek Mishra
  - Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
10
Fava GA, Sonino N, Aron DC, Balon R, Berrocal Montiel C, Cao J, Concato J, Eory A, Horwitz RI, Rafanelli C, Schnyder U, Wang H, Wise TN, Wright JH, Zipfel S, Patierno C. Clinical Interviewing: An Essential but Neglected Method of Medicine. Psychother Psychosom 2024; 93:94-99. [PMID: 38382481; DOI: 10.1159/000536490]
Abstract
Clinical interviewing is the basic method to understand how a person feels and what the presenting complaints are, obtain the medical history, evaluate personal attitudes and behavior related to health and disease, give the patient information about diagnosis, prognosis, and treatment, and establish a bond between patient and physician that is crucial for shared decision making and self-management. However, the value of this basic skill is threatened by time pressures and an emphasis on technology. Current health care trends privilege expensive tests and procedures and tag the time devoted to interaction with the patient as lacking cost-effectiveness. Instead, the time spent inquiring about problems and life setting may actually help to avoid further testing, procedures, and referrals. Moreover, the dialogue between patient and physician is an essential instrument for increasing the patient's motivation to engage in healthy behavior. The aim of this paper was to provide an overview of clinical interviewing and its optimal use in relation to style, flow and hypothesis testing, clinical domains, modifications according to settings and goals, and teaching. This review points to the primacy of interviewing in the clinical process: the quality of interviewing determines the quality of the data that are collected and, eventually, of assessment and treatment. Thus, interviewing deserves more attention in educational training and more space in clinical encounters than it currently receives.
Affiliation(s)
- Giovanni A Fava
  - Department of Psychiatry, University at Buffalo, State University of New York, Buffalo, New York, USA
- Nicoletta Sonino
  - Department of Psychiatry, University at Buffalo, State University of New York, Buffalo, New York, USA
  - Department of Statistical Sciences, University of Padova, Padova, Italy
- David C Aron
  - Case Western Reserve University, Cleveland, Ohio, USA
- Richard Balon
  - Departments of Psychiatry and Behavioral Sciences and Anesthesiology, Wayne State University, Detroit, Michigan, USA
- Carmen Berrocal Montiel
  - Department of Surgical, Medical and Molecular Pathology, and Critical Care Medicine, University of Pisa, Pisa, Italy
- Jianxin Cao
  - Changzhou First People's Hospital and Psychosomatic Gastroenterology Institute, Soochow University, Changzhou, China
- John Concato
  - Center for Drug Evaluation and Research, Food and Drug Administration, Silver Spring, Maryland, USA
  - Department of Medicine, Yale University School of Medicine, New Haven, Connecticut, USA
- Ajandek Eory
  - Department of Family Medicine, Semmelweis University, Budapest, Hungary
- Ralph I Horwitz
  - Lewis Katz School of Medicine, Temple University, Philadelphia, Pennsylvania, USA
- Chiara Rafanelli
  - Department of Psychology "Renzo Canestrari", University of Bologna, Bologna, Italy
- Hongxing Wang
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
  - Beijing Psychosomatic Disease Consultation Center, Xuanwu Hospital, Capital Medical University, Beijing, China
- Thomas N Wise
  - Department of Psychiatry, Inova Health Systems, Falls Church, Virginia, USA
  - Department of Psychiatry, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Psychiatry and Behavioral Sciences, George Washington University School of Medicine, Washington, District of Columbia, USA
- Jesse H Wright
  - Department of Psychiatry and Behavioral Sciences, University of Louisville School of Medicine, Louisville, Kentucky, USA
- Stephan Zipfel
  - Department of Psychosomatic Medicine and Psychotherapy, University Medical Hospital Tubingen, Tubingen, Germany
  - German Centre of Mental Health, Tubingen, Germany
- Chiara Patierno
  - Department of Psychology "Renzo Canestrari", University of Bologna, Bologna, Italy
11
Nilsen P. Artificial intelligence in nursing: From speculation to science. Worldviews Evid Based Nurs 2024; 21:4-5. [PMID: 38240405; DOI: 10.1111/wvn.12706]
Affiliation(s)
- Per Nilsen
  - School of Health and Welfare, Halmstad University, Halmstad, Sweden
  - Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
12
Luu VP, Fiorini M, Combes S, Quemeneur E, Bonneville M, Bousquet PJ. Challenges of artificial intelligence in precision oncology: public-private partnerships including national health agencies as an asset to make it happen. Ann Oncol 2024; 35:154-158. [PMID: 37769849; DOI: 10.1016/j.annonc.2023.09.3106]
Affiliation(s)
- V P Luu
  - Epidemiology and Innovation Unit, Artificial Intelligence and Cancers Association, Paris, France
- M Fiorini
  - Artificial Intelligence and Cancers Association, Paris, France
- E Quemeneur
  - France Biotech, Paris, France
  - Transgene S.A., Illkirch-Graffenstaden, France
- M Bonneville
  - Alliance pour la Recherche et l'Innovation des Industries de Santé, Paris, France
  - Institut Mérieux, Lyon, France
- P J Bousquet
  - Health Survey, Data-Science, Assessment Division, Institut National du Cancer, Boulogne Billancourt, France
  - Aix Marseille University, INSERM, IRD, Economics and Social Sciences Applied to Health & Analysis of Medical Information (SESSTIM), Marseille, France
13
Weidener L, Fischer M. Role of Ethics in Developing AI-Based Applications in Medicine: Insights From Expert Interviews and Discussion of Implications. JMIR AI 2024; 3:e51204. [PMID: 38875585; PMCID: PMC11041491; DOI: 10.2196/51204]
Abstract
BACKGROUND The integration of artificial intelligence (AI)-based applications in the medical field has increased significantly, offering potential improvements in patient care and diagnostics. However, alongside these advancements, there is growing concern about ethical considerations such as bias, informed consent, and trust in the development of these technologies. OBJECTIVE This study aims to assess the role of ethics in the development of AI-based applications in medicine, focusing on the potential consequences of neglecting ethical considerations in AI development, particularly their impact on patients and physicians. METHODS Qualitative content analysis was used to analyze the responses from expert interviews. Experts were selected based on their involvement in the research or practical development of AI-based applications in medicine for at least 5 years, leading to the inclusion of 7 experts in the study. RESULTS The analysis revealed 3 main categories and 7 subcategories reflecting a wide range of views on the role of ethics in AI development; this variance underscores the subjectivity and complexity of integrating ethics into the development of AI in medicine. Although some experts view ethics as fundamental, others prioritize performance and efficiency, with some perceiving ethics as a potential obstacle to technological progress. CONCLUSIONS Despite the methodological limitations impacting the generalizability of the results, this study underscores the critical importance of consistent and integrated ethical considerations in AI development for medical applications. It advocates further research into effective strategies for ethical AI development, emphasizing the need for transparent and responsible practices, consideration of diverse data sources, physician training, and the establishment of comprehensive ethical and legal frameworks.
Affiliation(s)
- Lukas Weidener
  - Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
- Michael Fischer
  - Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria
14
Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, Acharya R. Deep learning techniques in PET/CT imaging: A comprehensive review from sinogram to image space. Comput Methods Programs Biomed 2024; 243:107880. [PMID: 37924769; DOI: 10.1016/j.cmpb.2023.107880]
Abstract
Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. This success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of either modality used in isolation. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming part of physicians' daily routines. Deep learning algorithms, akin to a practitioner during training, extract knowledge from images and support the diagnostic process through symptom detection and image enhancement. Available review papers on PET/CT imaging either include additional modalities or survey many types of AI application; a comprehensive investigation focused specifically on deep learning applied to PET/CT images has been lacking. This review aims to fill that gap by characterizing the approaches of papers that employed deep learning for PET/CT imaging. We identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images, and we report the best pre-processing algorithms and the most effective deep learning models for PET/CT while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features and enhance the accuracy and efficiency of diagnosis. However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and for predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
Affiliation(s)
- Maryam Fallahpoor
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Oliver Faust
- School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Prabal Datta Barua
- School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia

15
Khosravi M, Zare Z, Mojtabaeian SM, Izadi R. Artificial Intelligence and Decision-Making in Healthcare: A Thematic Analysis of a Systematic Review of Reviews. Health Serv Res Manag Epidemiol 2024; 11:23333928241234863. [PMID: 38449840 PMCID: PMC10916499 DOI: 10.1177/23333928241234863]
Abstract
Introduction The use of artificial intelligence (AI), which can emulate human intelligence and enhance clinical results, has grown in healthcare decision-making owing to digitalization and the COVID-19 pandemic. The purpose of this study was to determine the scope of applications of AI tools in the decision-making process in healthcare service delivery networks. Materials and methods This study used a qualitative method to conduct a systematic review of existing reviews. English-language review articles published between 2000 and 2024 were searched in the PubMed, Scopus, ProQuest, and Cochrane databases. The CASP (Critical Appraisal Skills Programme) Checklist for Systematic Reviews was used to evaluate the quality of the articles. The final articles were selected based on the eligibility criteria, and data extraction was done independently by 2 authors. Finally, a thematic analysis approach was used to analyze the data extracted from the selected articles. Results Of the 14 219 identified records, 18 review articles were eligible and included in the analysis, covering the findings of 669 other articles. The quality assessment score of all reviewed articles was high. Thematic analysis of the data identified 3 main themes, clinical decision-making, organizational decision-making, and shared decision-making, originating from 8 subthemes. Conclusions This study revealed that AI tools have been applied in various aspects of healthcare decision-making. The use of AI can improve the quality, efficiency, and effectiveness of healthcare services by providing accurate, timely, and personalized information to support decision-making. Further research is needed to explore best practices and standards for implementing AI in healthcare decision-making.
Affiliation(s)
- Mohsen Khosravi
- Department of Health Care Management, School of Management and Information Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Zahra Zare
- Department of Health Care Management, School of Management and Information Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
- Seyyed Morteza Mojtabaeian
- Department of Healthcare Economics, School of Management and Medical Informatics, Shiraz University of Medical Sciences, Shiraz, Iran
- Reyhane Izadi
- Department of Health Care Management, School of Management and Information Sciences, Shiraz University of Medical Sciences, Shiraz, Iran

16
Funer F, Liedtke W, Tinnemeyer S, Klausen AD, Schneider D, Zacharias HU, Langanke M, Salloch S. Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals' preferences and concerns. J Med Ethics 2023; 50:6-11. [PMID: 37217277 PMCID: PMC10803986 DOI: 10.1136/jme-2022-108814]
Abstract
Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals' attitudes to potential changes of responsibility and decision-making authority when using ML-CDSS. Twenty-seven semistructured interviews were conducted with German medical students and nursing trainees. The data were analysed based on qualitative content analysis according to Kuckartz. Interviewees' reflections are presented under three themes the interviewees describe as closely related: (self-)attribution of responsibility, decision-making authority and need of (professional) experience. The results illustrate the conceptual interconnectedness of professional responsibility and its structural and epistemic preconditions to be able to fulfil clinicians' responsibility in a meaningful manner. The study also sheds light on the four relata of responsibility understood as a relational concept. The article closes with concrete suggestions for the ethically sound clinical implementation of ML-CDSS.
Affiliation(s)
- Florian Funer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Institute of Ethics and History of Medicine, Eberhard Karls University Tübingen, Tübingen, Germany
- Wenke Liedtke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sara Tinnemeyer
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany
- Diana Schneider
- Competence Center Emerging Technologies, Fraunhofer Institute for Systems and Innovation Research ISI, Karlsruhe, Germany
- Helena U Zacharias
- Peter L. Reichertz Institute for Medical Informatics of TU Braunschweig and Hannover Medical School, Hannover Medical School, Hannover, Germany
- Martin Langanke
- Department of Social Work, Protestant University of Applied Sciences RWL, Bochum, Germany
- Sabine Salloch
- Institute of Ethics, History and Philosophy of Medicine, Hannover Medical School, Hannover, Germany

17
Drezga-Kleiminger M, Demaree-Cotton J, Koplin J, Savulescu J, Wilkinson D. Should AI allocate livers for transplant? Public attitudes and ethical considerations. BMC Med Ethics 2023; 24:102. [PMID: 38012660 PMCID: PMC10683249 DOI: 10.1186/s12910-023-00983-0]
Abstract
BACKGROUND Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remain unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. METHODS We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey including online responses from 172 UK laypeople, recruited through Prolific Academic. FINDINGS Most participants found AI in liver allocation acceptable (69.2%) and would not be less likely to donate their organs if AI was used in allocation (72.7%). Respondents thought AI was more likely to be consistent and less biased compared with humans, although they were concerned about the "dehumanisation of healthcare" and whether AI could consider important nuances in allocation decisions. Participants valued accuracy, impartiality, and consistency in a decision-maker more than interpretability and empathy. Respondents were split on whether AI should be trained on previous decisions or programmed with specific objectives. Whether allocation decisions were made by a transplant committee or by AI, participants valued consideration of urgency, survival likelihood, life years gained, age, future medication compliance, quality of life, and future and past alcohol use. On the other hand, the majority thought the following factors were not relevant to prioritisation: past crime, future crime, future societal contribution, social disadvantage, and gender. CONCLUSIONS There are good reasons to use AI in liver allocation, and our sample of participants appeared to support its use. If confirmed, this support would give democratic legitimacy to the use of AI in this context and reduce the risk that donation rates could be affected negatively. Our findings on specific ethical concerns also identify potential expectations and reservations laypeople have regarding AI in this area, which can inform how AI in liver allocation could best be implemented.
Affiliation(s)
- Max Drezga-Kleiminger
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Joanna Demaree-Cotton
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Julian Koplin
- Monash Bioethics Centre, Monash University, Melbourne, Australia
- Julian Savulescu
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Murdoch Children's Research Institute, Melbourne, Australia
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Dominic Wilkinson
- Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, OX1 2JD, UK
- Murdoch Children's Research Institute, Melbourne, Australia
- Centre for Biomedical Ethics, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- John Radcliffe Hospital, Oxford, UK

18
Turner JH. Triangle of Trust in Cancer Care? The Physician, the Patient, and Artificial Intelligence Chatbot. Cancer Biother Radiopharm 2023; 38:581-584. [PMID: 37707991 DOI: 10.1089/cbr.2023.0112]
Abstract
Trust, as a philosophic paradigm, is predominantly interpersonal, between human beings, and is differentiated from reliance. Can a person trust a nonhuman, amoral agent, such as a large language model artificial intelligence (AI) chatbot, to manifest the goodwill and willingness normally required for it to be deemed trustworthy? This article explores the relationship between the cancer patient, their physician, and the AI chatbot in a proposed tripartite, consultative, personalized approach to shared care in precision molecular oncology. It examines the nature of trust between human agents and machines, and contemplates AI-enhanced technical precision in state-of-the-art cancer management, complemented by trustworthy, holistic clinical care by a physician for each individual patient. "To what extent can the user 'trust' GPT-4?" (Peter Lee, Microsoft Research, 2023).
Affiliation(s)
- J Harvey Turner
- Department of Nuclear Medicine, The University of Western Australia, Fiona Stanley Fremantle Hospitals Group, Murdoch, Australia

19
Gould DJ, Dowsey MM, Glanville-Hearst M, Spelman T, Bailey JA, Choong PFM, Bunzli S. Patients' Views on AI for Risk Prediction in Shared Decision-Making for Knee Replacement Surgery: Qualitative Interview Study. J Med Internet Res 2023; 25:e43632. [PMID: 37721797 PMCID: PMC10546266 DOI: 10.2196/43632]
Abstract
BACKGROUND The use of artificial intelligence (AI) in decision-making around knee replacement surgery is increasing, and this technology holds promise to improve the prediction of patient outcomes. Ambiguity surrounds the definition of AI, and there are mixed views on its application in clinical settings. OBJECTIVE In this study, we aimed to explore the understanding and attitudes of patients who underwent knee replacement surgery regarding AI in the context of risk prediction for shared clinical decision-making. METHODS This qualitative study involved patients who underwent knee replacement surgery at a tertiary referral center for joint replacement surgery. The participants were selected based on their age and sex. Semistructured interviews explored the participants' understanding of AI and their opinions on its use in shared clinical decision-making. Data collection and reflexive thematic analyses were conducted concurrently. Recruitment continued until thematic saturation was achieved. RESULTS Thematic saturation was achieved with 19 interviews and confirmed with 1 additional interview, resulting in 20 participants being interviewed (female participants: n=11, 55%; male participants: n=9, 45%; median age: 66 years). A total of 11 (55%) participants had a substantial postoperative complication. Three themes captured the participants' understanding of AI and their perceptions of its use in shared clinical decision-making. The theme Expectations captured the participants' views of themselves as individuals with the right to self-determination as they sought therapeutic solutions tailored to their circumstances, needs, and desires, including whether to use AI at all. The theme Empowerment highlighted the potential of AI to enable patients to develop realistic expectations and equip them with personalized risk information to discuss in shared decision-making conversations with the surgeon. 
The theme Partnership captured the importance of symbiosis between AI and clinicians because AI has varied levels of interpretability and understanding of human emotions and empathy. CONCLUSIONS Patients who underwent knee replacement surgery in this study had varied levels of familiarity with AI and diverse conceptualizations of its definitions and capabilities. Educating patients about AI through nontechnical explanations and illustrative scenarios could help inform their decision to use it for risk prediction in the shared decision-making process with their surgeon. These findings could be used in the process of developing a questionnaire to ascertain the views of patients undergoing knee replacement surgery on the acceptability of AI in shared clinical decision-making. Future work could investigate the accuracy of this patient group's understanding of AI, beyond their familiarity with it, and how this influences their acceptance of its use. Surgeons may play a key role in finding a place for AI in the clinical setting as the uptake of this technology in health care continues to grow.
Affiliation(s)
- Daniel J Gould
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Michelle M Dowsey
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Tim Spelman
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- James A Bailey
- School of Computing and Information Systems, University of Melbourne, Melbourne, Australia
- Peter F M Choong
- St Vincent's Hospital, Department of Surgery, University of Melbourne, Melbourne, Australia
- Department of Orthopaedics, St Vincent's Hospital Melbourne, Melbourne, Australia
- Samantha Bunzli
- School of Health Sciences and Social Work, Griffith University, Brisbane, Australia

20
Chan B. Black-box assisted medical decisions: AI power vs. ethical physician care. Med Health Care Philos 2023; 26:285-292. [PMID: 37273041 PMCID: PMC10425517 DOI: 10.1007/s11019-023-10153-z]
Abstract
I raise an ethical problem with physicians using "black box" medical AI algorithms, arguing that their use would compromise proper patient care. Even if AI results are reliable, my contention is that, without being able to explain medical decisions to patients, physicians' use of black-box AIs would erode the effective and respectful care they provide. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when they use AI as a "co-pilot" (analogous to a spellchecker) whose accuracy they can independently confirm. My argument is sharpened, lastly, by attention to Alex John London's objection that physicians already sometimes prescribe treatments, such as lithium drugs, even though neither researchers nor doctors can explain why they work.
Affiliation(s)
- Berman Chan
- School of Philosophy and Sociology, Lanzhou University, Lanzhou, Gansu, China

21
Alvarado R. AI as an Epistemic Technology. Sci Eng Ethics 2023; 29:32. [PMID: 37603120 DOI: 10.1007/s11948-023-00451-3]
Abstract
In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other epistemic technologies) AI can be uniquely positioned as an epistemic technology in that it is primarily designed, developed and deployed to be used in epistemic contexts such as inquiry, it is specifically designed, developed and deployed to manipulate epistemic content such as data, and it is designed, developed and deployed to do so particularly through epistemic operations such as prediction and analysis. As has been shown in recent work in the philosophy and ethics of AI (Alvarado, AI and Ethics, 2022a), understanding AI as an epistemic technology will also have significant implications for important debates regarding our relationship to AI technologies. This paper includes a brief overview of such implications, particularly those pertaining to explainability, opacity, trust and even epistemic harms related to AI technologies.
Affiliation(s)
- Ramón Alvarado
- Philosophy Department, University of Oregon, Eugene, OR, USA

22
McCradden M, Hui K, Buchman DZ. Evidence, ethics and the promise of artificial intelligence in psychiatry. J Med Ethics 2023; 49:573-579. [PMID: 36581457 PMCID: PMC10423547 DOI: 10.1136/jme-2022-108447]
Abstract
Researchers are studying how artificial intelligence (AI) can be used to better detect, prognosticate and subgroup diseases. The idea that AI might advance medicine's understanding of biological categories of psychiatric disorders, as well as provide better treatments, is appealing given the historical challenges with prediction, diagnosis and treatment in psychiatry. Given the power of AI to analyse vast amounts of information, some clinicians may feel obligated to align their clinical judgements with the outputs of the AI system. However, a potential epistemic privileging of AI in clinical judgements may lead to unintended consequences that could negatively affect patient treatment, well-being and rights. The implications are also relevant to precision medicine, digital twin technologies and predictive analytics generally. We propose that a commitment to epistemic humility can help promote judicious clinical decision-making at the interface of big data and AI in psychiatry.
Affiliation(s)
- Melissa McCradden
- Joint Centre for Bioethics, University of Toronto Dalla Lana School of Public Health, Toronto, Ontario, Canada
- Bioethics, The Hospital for Sick Children, Toronto, Ontario, Canada
- Genetics & Genome Biology, Peter Gilgan Centre for Research and Learning, Toronto, Ontario, Canada
- Katrina Hui
- Everyday Ethics Lab, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Daniel Z Buchman
- Joint Centre for Bioethics, University of Toronto Dalla Lana School of Public Health, Toronto, Ontario, Canada
- Everyday Ethics Lab, Centre for Addiction and Mental Health, Toronto, Ontario, Canada

23
Upadhyay R, Knoth P, Pasi G, Viviani M. Explainable online health information truthfulness in Consumer Health Search. Front Artif Intell 2023; 6:1184851. [PMID: 37415938 PMCID: PMC10321772 DOI: 10.3389/frai.2023.1184851]
Abstract
Introduction People today increasingly rely on health information they find online to make decisions that may impact both their physical and mental wellbeing, so there is a growing need for systems that can assess the truthfulness of such health information. Most current solutions in the literature use machine learning or knowledge-based approaches and treat the problem as a binary classification task, discriminating between correct information and misinformation. Such solutions present several problems for user decision-making, among them: (i) the binary classification task provides users with just two predetermined possibilities with respect to the truthfulness of the information, which users are expected to take for granted; and (ii) the processes by which the results were obtained are often opaque, and the results themselves have little or no interpretation. Methods To address these issues, we approach the problem as an ad hoc retrieval task rather than a classification task, with reference in particular to the Consumer Health Search task. To do this, a previously proposed Information Retrieval model, which considers information truthfulness as a dimension of relevance, is used to obtain a ranked list of documents that are both topically relevant and truthful. The novelty of this work is the extension of such a model with a solution for explaining the results obtained, relying on a knowledge base of scientific evidence in the form of medical journal articles. Results and discussion We evaluate the proposed solution both quantitatively, as a standard classification task, and qualitatively, through a user study examining the "explained" ranked list of documents. The results illustrate the solution's effectiveness and usefulness in making the retrieved results more interpretable by Consumer Health Searchers, with respect to both topical relevance and truthfulness.
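The abstract above describes treating truthfulness as a dimension of relevance alongside topicality. A minimal sketch of that idea (my illustration, not the authors' actual model, which uses retrieval scores and an evidence knowledge base) is to rank documents by a weighted combination of the two dimensions; the document names and scores below are invented for the example.

```python
# Hypothetical per-document scores on two relevance dimensions.
docs = {
    "d1": {"topical": 0.9, "truthful": 0.2},  # on-topic but dubious
    "d2": {"topical": 0.7, "truthful": 0.9},  # on-topic and well-supported
    "d3": {"topical": 0.4, "truthful": 0.8},
}

def combined_score(scores: dict, alpha: float = 0.5) -> float:
    """Linear aggregation; alpha balances topicality against truthfulness."""
    return alpha * scores["topical"] + (1 - alpha) * scores["truthful"]

# Rank documents so that relevant-and-truthful beats relevant-but-dubious.
ranked = sorted(docs, key=lambda d: combined_score(docs[d]), reverse=True)
print(ranked)
```

With equal weights, d2 (0.8) outranks d3 (0.6) and d1 (0.55), illustrating how a truthfulness dimension demotes topically strong but poorly supported documents.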
Affiliation(s)
- Rishabh Upadhyay
- Information and Knowledge Representation, Retrieval, and Reasoning (IKR3) Lab, Department of Informatics, Systems, and Communication, University of Milano-Bicocca, Milan, Italy
- Petr Knoth
- Big Scientific Data and Text Analytics Group, Knowledge Media Institute, The Open University, Milton Keynes, United Kingdom
- Gabriella Pasi
- Information and Knowledge Representation, Retrieval, and Reasoning (IKR3) Lab, Department of Informatics, Systems, and Communication, University of Milano-Bicocca, Milan, Italy
- Marco Viviani
- Information and Knowledge Representation, Retrieval, and Reasoning (IKR3) Lab, Department of Informatics, Systems, and Communication, University of Milano-Bicocca, Milan, Italy

24
Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N. The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Med Inform Decis Mak 2023; 23:73. [PMID: 37081503 PMCID: PMC10116477 DOI: 10.1186/s12911-023-02162-y]
Abstract
Artificial intelligence (AI) is often cited as a possible solution to current issues faced by healthcare systems, including freeing up doctors' time and facilitating person-centred doctor-patient relationships. However, given the novelty of artificial intelligence tools, there is very little concrete evidence on their impact on the doctor-patient relationship or on how to ensure that they are implemented in a way which is beneficial for person-centred care. Given the importance of empathy and compassion in the practice of person-centred care, we conducted a literature review to explore how AI impacts these two values. Besides empathy and compassion, shared decision-making and trust relationships emerged as key values in the reviewed papers. We identified two concrete ways to help ensure that AI tools have a positive impact on person-centred doctor-patient relationships: (1) using AI tools in an assistive role and (2) adapting medical education. The study suggests that we need to take intentional steps to ensure that the deployment of AI tools in healthcare has a positive impact on person-centred doctor-patient relationships. We argue that the proposed solutions are contingent upon clarifying the values underlying future healthcare systems.
Affiliation(s)
- Aurelia Sauerbrei
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK
- Angeliki Kerasidou
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK
- Federica Lucivero
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK
- Nina Hallowell
- Ethox Centre, Nuffield Department of Population Health, University of Oxford, Big Data Institute, Old Road Campus, Oxford, OX3 7LF, UK

25
Badal K, Lee CM, Esserman LJ. Guiding principles for the responsible development of artificial intelligence tools for healthcare. Commun Med (Lond) 2023; 3:47. [PMID: 37005467 PMCID: PMC10066953 DOI: 10.1038/s43856-023-00279-9]
Abstract
Several principles have been proposed to improve the use of artificial intelligence (AI) in healthcare, but the need for AI to improve longstanding healthcare challenges has not been sufficiently emphasized. We propose that AI should be designed to alleviate health disparities, report clinically meaningful outcomes, reduce overdiagnosis and overtreatment, have high healthcare value, consider biographical drivers of health, be easily tailored to the local population, promote a learning healthcare system, and facilitate shared decision-making. These principles are illustrated by examples from breast cancer research, and we provide questions that can be used by AI developers when applying each principle to their work.
Affiliation(s)
- Kimberly Badal
- Department of Surgery, Helen Diller Comprehensive Cancer Center, University of California, San Francisco, CA, USA.
- Carmen M Lee
- Department of Emergency Medicine, Highland Hospital, Alameda Health System, Alameda, CA, USA
- Laura J Esserman
- Department of Surgery, Helen Diller Comprehensive Cancer Center, University of California, San Francisco, CA, USA
26
Dimitsaki S, Gavriilidis GI, Dimitriadis VK, Natsiavas P. Benchmarking of Machine Learning classifiers on plasma proteomic for COVID-19 severity prediction through interpretable artificial intelligence. Artif Intell Med 2023; 137:102490. [PMID: 36868685 PMCID: PMC9846931 DOI: 10.1016/j.artmed.2023.102490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Revised: 01/10/2023] [Accepted: 01/11/2023] [Indexed: 01/19/2023]
Abstract
The SARS-CoV-2 pandemic highlighted the need for software tools that could facilitate patient triage regarding potential disease severity or even death. In this article, an ensemble of Machine Learning (ML) algorithms is evaluated in terms of predicting the severity of COVID-19 patients' condition using plasma proteomics and clinical data as input. An overview of AI-based technical developments to support COVID-19 patient management is presented, outlining the landscape of relevant work. Based on this review, an ensemble of ML algorithms that analyses clinical and biological data (i.e., plasma proteomics) of COVID-19 patients is designed and deployed to evaluate the potential use of AI for early COVID-19 patient triage. The proposed pipeline is evaluated using three publicly available datasets for training and testing. Three ML "tasks" are defined, and several algorithms are tested through a hyperparameter tuning method to identify the highest-performance models. As overfitting is one of the typical pitfalls for such approaches (mainly due to the size of the training/validation datasets), a variety of evaluation metrics are used to mitigate this risk. In the evaluation procedure, recall scores ranged from 0.6 to 0.74 and F1-scores from 0.62 to 0.75. The best performance is observed with the Multi-Layer Perceptron (MLP) and Support Vector Machine (SVM) algorithms. Additionally, input data (proteomics and clinical data) were ranked based on corresponding Shapley additive explanation (SHAP) values and evaluated for their prognostic capacity and immuno-biological credence. This "interpretable" approach revealed that our ML models could discern critical COVID-19 cases predominantly based on patients' age and plasma proteins related to B cell dysfunction, hyper-activation of inflammatory pathways like Toll-like receptors, and hypo-activation of developmental and immune pathways like SCF/c-Kit signaling.
Finally, the computational workflow is validated on an independent dataset, corroborating the superiority of MLP along with the implication of the abovementioned predictive biological pathways. Regarding the limitations of the presented ML pipeline, the datasets used in this study contain fewer than 1000 observations and a significant number of input features, hence constituting a high-dimensional low-sample (HDLS) dataset which could be sensitive to overfitting. An advantage of the proposed pipeline is that it combines biological data (plasma proteomics) with clinical-phenotypic data. Thus, in principle, the presented approach could enable patient triage in a timely fashion if used on already trained models. However, larger datasets and further systematic validation are needed to confirm the potential clinical value of this approach. The code is available on GitHub: https://github.com/inab-certh/Predicting-COVID-19-severity-through-interpretable-AI-analysis-of-plasma-proteomics.
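The evaluation loop this abstract describes (hyperparameter-tuned MLP and SVM classifiers scored by recall and F1, followed by input-feature ranking for interpretability) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's plasma-proteomic pipeline: the dataset, the tuning grids, and the use of permutation importance in place of SHAP values are all assumptions for demonstration.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a high-dimensional proteomic + clinical matrix.
X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Two candidate model families, each tuned over a tiny illustrative grid.
candidates = {
    "MLP": (MLPClassifier(max_iter=2000, random_state=0),
            {"clf__hidden_layer_sizes": [(32,), (64, 32)]}),
    "SVM": (SVC(random_state=0), {"clf__C": [0.1, 1.0, 10.0]}),
}
for name, (clf, grid) in candidates.items():
    pipe = Pipeline([("scale", StandardScaler()), ("clf", clf)])
    search = GridSearchCV(pipe, grid, scoring="f1", cv=3).fit(X_tr, y_tr)
    pred = search.predict(X_te)
    print(name, "recall:", round(recall_score(y_te, pred), 2),
          "F1:", round(f1_score(y_te, pred), 2))

# Rank features by how much shuffling each one degrades held-out F1:
# a model-agnostic proxy for the SHAP ranking used in the paper
# (`search` here is simply the last model fitted in the loop).
imp = permutation_importance(search, X_te, y_te, scoring="f1",
                             n_repeats=5, random_state=0)
top = imp.importances_mean.argsort()[::-1][:5]
print("top features:", top.tolist())
```

Replacing the permutation-importance step with the third-party `shap` package's `KernelExplainer` over the fitted pipeline would approximate the attribution method the paper actually reports.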
Affiliation(s)
- Stella Dimitsaki
- Institute of Applied Biosciences, Centre for Research & Technology Hellas, Thermi, Thessaloniki, Greece.
- George I Gavriilidis
- Institute of Applied Biosciences, Centre for Research & Technology Hellas, Thermi, Thessaloniki, Greece
- Vlasios K Dimitriadis
- Institute of Applied Biosciences, Centre for Research & Technology Hellas, Thermi, Thessaloniki, Greece
- Pantelis Natsiavas
- Institute of Applied Biosciences, Centre for Research & Technology Hellas, Thermi, Thessaloniki, Greece
27
Grote T, Berens P. Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice. THE JOURNAL OF MEDICINE AND PHILOSOPHY 2023; 48:84-97. [PMID: 36630292 DOI: 10.1093/jmp/jhac034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023] Open
Abstract
In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. In doing so, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) and highlight their relevance for medical diagnostics. Among the problems we inspect are the theoretical foundations of deep learning (which are not yet adequately understood), the opacity of algorithmic decisions, and the vulnerabilities of machine learning models, as well as concerns regarding the quality of medical data used to train the models. Building on this, we discuss different desiderata for an uncertainty amelioration strategy that ensures that the integration of machine learning into clinical settings proves to be medically beneficial in a meaningful way.
28
Macri R, Roberts SL. The Use of Artificial Intelligence in Clinical Care: A Values-Based Guide for Shared Decision Making. Curr Oncol 2023; 30:2178-2186. [PMID: 36826129 PMCID: PMC9955933 DOI: 10.3390/curroncol30020168] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 01/28/2023] [Accepted: 02/01/2023] [Indexed: 02/12/2023] Open
Abstract
Clinical applications of artificial intelligence (AI) in healthcare, including in the field of oncology, have the potential to advance diagnosis and treatment. The literature suggests that patient values should be considered in decision making when using AI in clinical care; however, there is a lack of practical guidance for clinicians on how to approach these conversations and incorporate patient values into clinical decision making. We provide a practical, values-based guide for clinicians to assist in critical reflection and the incorporation of patient values into shared decision making when deciding to use AI in clinical care. Values that are relevant to patients, identified in the literature, include trust, privacy and confidentiality, non-maleficence, safety, accountability, beneficence, autonomy, transparency, compassion, equity, justice, and fairness. The guide offers questions for clinicians to consider when adopting the potential use of AI in their practice; explores illness understanding between the patient and clinician; encourages open dialogue of patient values; reviews all clinically appropriate options; and supports making a shared decision about which option best meets the patient's values. The guide can be used for diverse clinical applications of AI.
Affiliation(s)
- Rosanna Macri
- Department of Bioethics, Sinai Health, Toronto, ON M5G 1X5, Canada
- Joint Centre for Bioethics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON M5T 1P8, Canada
- Department of Radiation Oncology, Temerty Faculty of Medicine, University of Toronto, Toronto, ON M5T 1P5, Canada
- Shannon L. Roberts
- Project-Specific Bioethics Research Volunteer Student, Hennick Bridgepoint Hospital, Sinai Health, Toronto, ON M4M 2B5, Canada
29
Grote T. Randomised controlled trials in medical AI: ethical considerations. JOURNAL OF MEDICAL ETHICS 2022; 48:899-906. [PMID: 33990429 DOI: 10.1136/medethics-2020-107166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2020] [Revised: 03/30/2021] [Accepted: 04/08/2021] [Indexed: 06/12/2023]
Abstract
In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials, particularly randomised controlled trials (RCTs), is gaining ground. However, an issue that has been overlooked so far in the debate is that, compared with drug RCTs, AI RCTs require methodological adjustments, which entail ethical challenges. This paper sets out to develop a systematic account of the ethics of AI RCTs by focusing on the moral principles of clinical equipoise, informed consent and fairness. In this way, the objective is to animate further debate on the (research) ethics of medical AI.
Affiliation(s)
- Thomas Grote
- Ethics and Philosophy Lab, Cluster of Excellence "Machine Learning: New Perspectives for Science", University of Tübingen, Tübingen D-72076, Germany
30
Crossnohere NL, Childerhose JE, Bose-Brill S. Increasing the Patient-Centeredness of Predictive Analytics Tools. THE PATIENT 2022; 15:615-617. [PMID: 36053486 DOI: 10.1007/s40271-022-00595-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 08/12/2022] [Indexed: 06/15/2023]
Affiliation(s)
- Norah L Crossnohere
- Department of Biomedical Informatics, The Ohio State University, College of Medicine, 1800 Cannon Drive, Columbus, OH, 43210, USA.
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University, College of Medicine, Columbus, OH, USA.
- Janet E Childerhose
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University, College of Medicine, Columbus, OH, USA
- Division of Bioethics, Department of Anatomy and Biomedical Education, The Ohio State University, College of Medicine, Columbus, OH, USA
- Seuli Bose-Brill
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University, College of Medicine, Columbus, OH, USA
31
Di Martino F, Delmastro F. Explainable AI for clinical and remote health applications: a survey on tabular and time series data. Artif Intell Rev 2022; 56:5261-5315. [PMID: 36320613 PMCID: PMC9607788 DOI: 10.1007/s10462-022-10304-3] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Nowadays, Artificial Intelligence (AI) has become a fundamental component of healthcare applications, both clinical and remote, but the best performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques are defined to unveil the reasoning behind the system’s predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. It is worth noting that XAI has not gathered the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time series data, respectively, and XAI is not commonly analysed on these data types, while computer vision and Natural Language Processing (NLP) are the reference applications. To provide an overview of XAI methods that are most suitable for tabular and time series data in the healthcare domain, this paper reviews the literature of the last 5 years, illustrating the type of generated explanations and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centered quality assessment as key features to ensure effective explanations for the end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.
32
Grote T, Keeling G. Enabling Fairness in Healthcare Through Machine Learning. ETHICS AND INFORMATION TECHNOLOGY 2022; 24:39. [PMID: 36060496 PMCID: PMC9428374 DOI: 10.1007/s10676-022-09658-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 06/27/2022] [Indexed: 06/15/2023]
Abstract
The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; that is, algorithms trained on diverse datasets that perform better for traditionally disadvantaged groups. Whilst such algorithmic decisions may be unfair, the fairness of algorithmic decisions is not the appropriate locus of moral evaluation. What matters is the fairness of final decisions, such as diagnoses, resulting from collaboration between clinicians and algorithms. We argue that affirmative algorithms can permissibly be deployed provided the resultant final decisions are fair.
Affiliation(s)
- Thomas Grote
- Ethics and Philosophy Lab; Cluster of Excellence: Machine Learning: New Perspectives for Science, University of Tübingen, Maria von Linden Str. 6, D-72076 Tübingen, Germany
- Geoff Keeling
- Institute for Human-Centered AI and McCoy Family Center for Ethics in Society, Stanford University, 450 Serra Mall, 94305 Stanford, CA USA
33
Goisauf M, Cano Abadía M. Ethics of AI in Radiology: A Review of Ethical and Societal Implications. Front Big Data 2022; 5:850383. [PMID: 35910490 PMCID: PMC9329694 DOI: 10.3389/fdata.2022.850383] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Accepted: 06/13/2022] [Indexed: 11/13/2022] Open
Abstract
Artificial intelligence (AI) is being applied in medicine to improve healthcare and advance health equity. The application of AI-based technologies in radiology is expected to improve diagnostic performance by increasing accuracy and simplifying personalized decision-making. While this technology has the potential to improve health services, many ethical and societal implications need to be carefully considered to avoid harmful consequences for individuals and groups, especially for the most vulnerable populations. Therefore, several questions are raised, including (1) what types of ethical issues are raised by the use of AI in medicine and biomedical research, and (2) how are these issues being tackled in radiology, especially in the case of breast cancer? To answer these questions, a systematic review of the academic literature was conducted. Searches were performed in five electronic databases to identify peer-reviewed articles published since 2017 on the topic of the ethics of AI in radiology. The review results show that the discourse has mainly addressed expectations and challenges associated with medical AI, and in particular bias and black box issues, and that various guiding principles have been suggested to ensure ethical AI. We found that several ethical and societal implications of AI use remain underexplored, and more attention needs to be paid to addressing potential discriminatory effects and injustices. We conclude with a critical reflection on these issues and the identified gaps in the discourse from a philosophical and STS perspective, underlining the need to integrate a social science perspective in AI developments in radiology in the future.
34
Ferrario A. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. JOURNAL OF MEDICAL ETHICS 2022; 48:492-494. [PMID: 33980658 PMCID: PMC9240322 DOI: 10.1136/medethics-2021-107482] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 04/20/2021] [Indexed: 05/04/2023]
Abstract
In their article 'Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI', Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on algorithm reliability and trustworthy outcomes of artificial intelligence (AI) systems by means of epistemic warrants, called reliability indicators. However, they note the need for reliability indicators specific to black box algorithms and that justified knowledge is not sufficient to normatively justify the actions of the physicians using medical AI systems. Therefore, Durán and Jongsma advocate for a more transparent design and implementation of black box algorithms, providing a series of recommendations to mitigate the epistemic and ethical challenges behind their use in medical practice. In this response, I argue that a peculiar form of black box algorithm transparency, called design publicity, may efficiently implement these recommendations. Design publicity encodes epistemic (that is, reliability indicators) and ethical recommendations for black box algorithms by means of four subtypes of transparency. Together, these target the values and goals, their translation into design requirements, and the performance and consistency of the algorithm. I discuss design publicity by applying it to a use case focused on the automated classification of skin lesions from medical images.
Affiliation(s)
- Andrea Ferrario
- Management, Technology and Economics, ETH Zurich, Zürich, Switzerland
35
Funer F. The Deception of Certainty: how Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A deliberative-relational Approach. MEDICINE, HEALTH CARE AND PHILOSOPHY 2022; 25:167-178. [PMID: 35538267 PMCID: PMC9089291 DOI: 10.1007/s11019-022-10076-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 03/03/2022] [Accepted: 03/03/2022] [Indexed: 02/06/2023]
Abstract
Developments in Machine Learning (ML) have attracted attention in a wide range of healthcare fields to improve medical practice and the benefit of patients. In particular, this should be achieved by providing more or less automated decision recommendations to the treating physician. However, some hopes placed in ML for healthcare seem to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism stems primarily from the fact that the physician, as the person responsible for diagnosis, therapy, and care, has no or insufficient insight into how such recommendations are reached. The following paper aims to elucidate the specificity of the deliberative model of the physician-patient relationship that has been achieved over decades. By outlining the (social-)epistemic and inherently normative relationship between physicians and patients, I want to show how this relationship might be altered by non-traceable ML recommendations. With respect to some healthcare decisions, such changes in deliberative practice may create normatively far-reaching challenges. Therefore, in the future, a differentiation of decision-making situations in healthcare with respect to the necessary depth of insight into the process of outcome generation seems essential.
36
Kempt H, Nagel SK. Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. JOURNAL OF MEDICAL ETHICS 2022; 48:222-229. [PMID: 34907006 DOI: 10.1136/medethics-2021-107440] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 11/29/2021] [Indexed: 06/14/2023]
Abstract
In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss how artificial intelligence (AI) could replace the human cognitive labour of providing such second opinions and find that several AI systems reach the levels of accuracy and efficiency that make clarifying their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as a second opinion in clinical processes, weighing the benefits of its efficiency against concerns of responsibility attribution. Fourth, we provide a 'rule of disagreement' that fulfils these conditions while retaining some of the benefits of expanding the use of AI-based decision support systems (AI-DSS) in clinical contexts. This is because the rule of disagreement proposes to use AI as much as possible, while retaining the ability to use human second opinions to resolve disagreements between AI and the physician-in-charge. Fifth, we discuss some counterarguments.
Affiliation(s)
- Hendrik Kempt
- Applied Ethics Group, RWTH Aachen University, Aachen, Germany
- Saskia K Nagel
- Applied Ethics Group, RWTH Aachen University, Aachen, Germany
37
Čartolovni A, Tomičić A, Lazić Mosler E. Ethical, legal, and social considerations of AI-based medical decision-support tools: A scoping review. Int J Med Inform 2022; 161:104738. [PMID: 35299098 DOI: 10.1016/j.ijmedinf.2022.104738] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 02/11/2022] [Accepted: 03/10/2022] [Indexed: 10/18/2022]
Abstract
INTRODUCTION Recent developments in the field of Artificial Intelligence (AI) applied to healthcare promise to solve many of the existing global issues in advancing human health and managing global health challenges. This comprehensive review aims to surface not only the underlying ethical and legal but also the social implications (ELSI) that have been overlooked in recent reviews while deserving equal attention in the development stage, and certainly ahead of implementation in healthcare. It is intended to guide various stakeholders (e.g., designers, engineers, clinicians) in addressing the ELSI of AI at the design stage using the Ethics by Design (EbD) approach. METHODS The authors followed a systematised scoping methodology and searched the following databases: PubMed, Web of Science, Ovid, Scopus, IEEE Xplore, EBSCO Search (Academic Search Premier, CINAHL, PSYCINFO, APA PsycArticles, ERIC) for the ELSI of AI in healthcare through January 2021. Data were charted and synthesised, and the authors conducted a descriptive and thematic analysis of the collected data. RESULTS After reviewing 1108 papers, 94 were included in the final analysis. Our results show a growing interest in the academic community in ELSI in the field of AI. The main issues of concern identified in our analysis fall into four main clusters of impact: AI algorithms, physicians, patients, and healthcare in general. The most prevalent issues are patient safety, algorithmic transparency, lack of proper regulation, liability and accountability, impact on the patient-physician relationship, and governance of AI-empowered healthcare. CONCLUSIONS The results of our review confirm the potential of AI to significantly improve patient care, but the drawbacks to its implementation relate to complex ELSI that have yet to be addressed. Most ELSI refer to the impact on and extension of the reciprocal and fiduciary patient-physician relationship.
With the integration of AI-based decision-making tools, a bilateral patient-physician relationship may shift into a trilateral one.
Affiliation(s)
- Anto Čartolovni
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
- Ana Tomičić
- Digital Healthcare Ethics Laboratory (Digit-HeaL), Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia.
- Elvira Lazić Mosler
- School of Medicine, Catholic University of Croatia, Ilica 242, 10 000 Zagreb, Croatia; General Hospital Dr. Ivo Pedišić, Sisak, Croatia.
38
Biller-Andorno N, Ferrario A, Joebges S, Krones T, Massini F, Barth P, Arampatzis G, Krauthammer M. AI support for ethical decision-making around resuscitation: proceed with care. JOURNAL OF MEDICAL ETHICS 2022; 48:175-183. [PMID: 33687916 DOI: 10.1136/medethics-2020-106786] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 12/15/2020] [Accepted: 01/15/2021] [Indexed: 06/12/2023]
Abstract
Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around cardiopulmonary resuscitation and the determination of a patient's Do Not Attempt to Resuscitate status (also known as code status). The COVID-19 pandemic has made us keenly aware of the difficulties physicians encounter when they have to act quickly in stressful situations without knowing what their patient would have wanted. We discuss the results of an interview study conducted with healthcare professionals in a university hospital aimed at understanding the status quo of resuscitation decision processes while exploring a potential role for AI systems in decision-making around code status. Our data suggest that (1) current practices are fraught with challenges such as insufficient knowledge regarding patient preferences, time pressure and personal bias guiding care considerations and (2) there is considerable openness among clinicians to consider the use of AI-based decision support. We suggest a model for how AI can contribute to improve decision-making around resuscitation and propose a set of ethically relevant preconditions-conceptual, methodological and procedural-that need to be considered in further development and implementation efforts.
Affiliation(s)
- Nikola Biller-Andorno
- Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich, Switzerland
- Collegium Helveticum, Zurich, Switzerland
- Andrea Ferrario
- Department of Management, Technology, and Economics, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland
- Susanne Joebges
- Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich, Switzerland
- Tanja Krones
- Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich, Switzerland
- Clinical Ethics, Universitätsspital Zürich, Zurich, Switzerland
- Federico Massini
- Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich, Switzerland
- Collegium Helveticum, Zurich, Switzerland
- Phyllis Barth
- Institute of Biomedical Ethics and History of Medicine, Universität Zürich, Zurich, Switzerland
- Collegium Helveticum, Zurich, Switzerland
- Georgios Arampatzis
- Collegium Helveticum, Zurich, Switzerland
- Computational Science and Engineering Laboratory, Eidgenössische Technische Hochschule Zürich, Zurich, Switzerland
- Michael Krauthammer
- Department of Quantitative Biomedicine, Chair of Medical Informatics, Universität Zürich, Zurich, Switzerland
39
Verdicchio M, Perin A. When Doctors and AI Interact: on Human Responsibility for Artificial Risks. PHILOSOPHY & TECHNOLOGY 2022; 35:11. [PMID: 35223383 PMCID: PMC8857871 DOI: 10.1007/s13347-022-00506-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Accepted: 01/18/2022] [Indexed: 01/09/2023]
Abstract
A discussion concerning whether to conceive Artificial Intelligence (AI) systems as responsible moral entities, also known as “artificial moral agents” (AMAs), has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With this perspective in mind, we focus on the use of AI-based diagnostic systems and shed light on the complex networks of persons, organizations and artifacts that come to be when AI systems are designed, developed, and used in medicine. We then discuss relational criteria of judgment in support of the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems.
Affiliation(s)
- Mario Verdicchio
- Department of Management Information and Production Engineering, University of Bergamo, Bergamo, Italy
- Berlin Ethics Lab, Technische Universität Berlin, Berlin, Germany
- Andrea Perin
- Facultad de Derecho, Universidad Andrés Bello, Santiago de Chile, Chile

40
Nyrup R, Robinson D. Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Ethics and Information Technology 2022; 24:13. [PMID: 35250370] [PMCID: PMC8885497] [DOI: 10.1007/s10676-022-09632-3]
Abstract
Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called Explanatory Pragmatism, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements, about the purposes for which explanations are sought, that may underpin conflicting claims about explainability. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering from disorders of consciousness were likely to recover consciousness.
Affiliation(s)
- Rune Nyrup
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Department of History and Philosophy of Science, University of Cambridge, Cambridge, UK
- Diana Robinson
- Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
- Department of Computer Science, University of Cambridge, Cambridge, UK
- Microsoft Research, Cambridge, UK

41
Improving the Process of Shared Decision-Making by Integrating Online Structured Information and Self-Assessment Tools. J Pers Med 2022; 12:jpm12020256. [PMID: 35207744] [PMCID: PMC8879344] [DOI: 10.3390/jpm12020256]
Abstract
Integrating face-to-face communication with online access to information and self-assessment tools may improve shared decision-making (SDM) processes. We aimed to assess the effectiveness of implementing an online SDM process whose topics and content were developed through a participatory design approach. We analyzed the triggered and completed SDM cases, with participant responses, at a medical center in Taiwan. Data were retrieved from the hospital's Research Electronic Data Capture (REDCap) database for analysis. Each team developed web-based patient decision aids (PDAs) grounded in empirical evidence and delivered in multiple digital formats, allowing patients to scan QR codes on a leaflet with their mobile phones and then read the PDA content online. From July 2019 to December 2020, 48 web-based SDM topics were implemented across the hospital's 24 clinical departments. The results showed that using the REDCap system improved SDM efficiency and quality. Implementing an online SDM process integrated with face-to-face communication enhanced the practice and effectiveness of SDM, possibly through the flexibility it offers for accessing information, self-assessment, and feedback evaluation.
42
Gama F, Tyskbo D, Nygren J, Barlow J, Reed J, Svedberg P. Implementation Frameworks for Artificial Intelligence Translation Into Health Care Practice: Scoping Review. J Med Internet Res 2022; 24:e32215. [PMID: 35084349] [PMCID: PMC8832266] [DOI: 10.2196/32215]
Abstract
Background Significant efforts have been made to develop artificial intelligence (AI) solutions for health care improvement. Despite the enthusiasm, health care professionals still struggle to implement AI in their daily practice. Objective This paper aims to identify the implementation frameworks used to understand the application of AI in health care practice. Methods A scoping review was conducted using the Cochrane, Evidence Based Medicine Reviews, Embase, MEDLINE, and PsycINFO databases to identify publications that reported frameworks, models, and theories concerning AI implementation in health care. This review focused on studies published in English and investigating AI implementation in health care since 2000. A total of 2541 unique publications were retrieved from the databases and screened on titles and abstracts by 2 independent reviewers. Selected articles were thematically analyzed against the Nilsen taxonomy of implementation frameworks, and the Greenhalgh framework for the nonadoption, abandonment, scale-up, spread, and sustainability (NASSS) of health care technologies. Results In total, 7 articles met all eligibility criteria for inclusion in the review, and 2 articles included formal frameworks that directly addressed AI implementation, whereas the other articles provided limited descriptions of elements influencing implementation. Collectively, the 7 articles identified elements that aligned with all the NASSS domains, but no single article comprehensively considered the factors known to influence technology implementation. New domains were identified, including dependency on data input and existing processes, shared decision-making, the role of human oversight, and ethics of population impact and inequality, suggesting that existing frameworks do not fully consider the unique needs of AI implementation. 
Conclusions This literature review demonstrates that understanding how to implement AI in health care practice is still in its early stages of development. Our findings suggest that further research is needed to provide the knowledge necessary to develop implementation frameworks to guide the future implementation of AI in clinical practice and highlight the opportunity to draw on existing knowledge from the field of implementation science.
Affiliation(s)
- Fábio Gama
- School of Business, Innovation and Sustainability, Halmstad University, Halmstad, Sweden
- School of Administration and Economic Science, Santa Catarina State University, Florianópolis, Brazil
- Daniel Tyskbo
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Jens Nygren
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- James Barlow
- Centre for Health Economics and Policy Innovation, Imperial College Business School, London, United Kingdom
- Julie Reed
- School of Health and Welfare, Halmstad University, Halmstad, Sweden
- Petra Svedberg
- School of Health and Welfare, Halmstad University, Halmstad, Sweden

43
Crossnohere NL, Elsaid M, Paskett J, Bose-Brill S, Bridges JFP. Guidelines for artificial intelligence in medicine: A literature review and content analysis of frameworks. J Med Internet Res 2022; 24:e36823. [PMID: 36006692] [PMCID: PMC9459836] [DOI: 10.2196/36823]
Abstract
Background Artificial intelligence (AI) is rapidly expanding in medicine despite a lack of consensus on its application and evaluation. Objective We sought to identify current frameworks guiding the application and evaluation of AI for predictive analytics in medicine and to describe their content. We also assessed which stages of the AI translational spectrum (ie, AI development, reporting, evaluation, implementation, and surveillance) each framework addresses. Methods We performed a literature review of frameworks regarding the oversight of AI in medicine. The search included key topics such as "artificial intelligence," "machine learning," "guidance as topic," and "translational science," and spanned the period 2014-2022. Documents were included if they provided generalizable guidance on the use or evaluation of AI in medicine. Included frameworks are summarized descriptively and were subjected to content analysis. A novel evaluation matrix was developed and applied to appraise the frameworks' coverage of content areas across translational stages. Results Fourteen frameworks are featured in the review: six provide descriptive guidance and eight provide reporting checklists for medical applications of AI. Content analysis revealed five considerations related to the oversight of AI in medicine across frameworks: transparency, reproducibility, ethics, effectiveness, and engagement. All frameworks discuss transparency, reproducibility, ethics, and effectiveness, while only half discuss engagement. The evaluation matrix revealed that frameworks were most likely to report AI considerations for the translational stage of development and least likely to report considerations for the translational stage of surveillance.
Conclusions Existing frameworks for the application and evaluation of AI in medicine notably offer less input on the role of engagement in oversight and regarding the translational stage of surveillance. Identifying and optimizing strategies for engagement are essential to ensure that AI can meaningfully benefit patients and other end users.
Affiliation(s)
- Norah L Crossnohere
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University College of Medicine, Columbus, OH, United States
- Mohamed Elsaid
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Jonathan Paskett
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States
- Seuli Bose-Brill
- Division of General Internal Medicine, Department of Internal Medicine, The Ohio State University College of Medicine, Columbus, OH, United States
- John F P Bridges
- Department of Biomedical Informatics, The Ohio State University College of Medicine, Columbus, OH, United States

44
Möllmann NR, Mirbabaie M, Stieglitz S. Is it alright to use artificial intelligence in digital health? A systematic literature review on ethical considerations. Health Informatics J 2021; 27:14604582211052391. [PMID: 34935557] [DOI: 10.1177/14604582211052391]
Abstract
The application of artificial intelligence (AI) not only yields advantages for healthcare but also raises several ethical questions. Extant research on the ethical considerations of AI in digital health is sparse, and a holistic overview is lacking. A systematic literature review searching across 853 peer-reviewed journals and conferences yielded 50 relevant articles, categorized under five major ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. The review portrays the ethical landscape of AI in digital health, including a snapshot to guide future development, and the status quo highlights areas where empirical research is still needed. Less explored areas with remaining ethical questions are identified to guide scholars' efforts, through an overview of the ethical principles addressed and the intensity of study, including correlations. Practitioners gain an understanding of the novel questions AI raises, which can eventually lead to properly regulated implementations, and of how society is moving from supportive technologies toward autonomous decision-making systems.
Affiliation(s)
- Nicholas RJ Möllmann
- Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany
- Milad Mirbabaie
- Faculty of Business Administration and Economics, Paderborn University, Paderborn, Germany
- Stefan Stieglitz
- Research Group Digital Communication and Transformation, University of Duisburg-Essen, Duisburg, Germany

45
Gentzel M. Biased Face Recognition Technology Used by Government: A Problem for Liberal Democracy. Philosophy & Technology 2021; 34:1639-1663. [PMID: 34603941] [PMCID: PMC8475322] [DOI: 10.1007/s13347-021-00478-z]
Abstract
This paper presents a novel philosophical analysis of the problem of law enforcement’s use of biased face recognition technology (FRT) in liberal democracies. FRT programs used by law enforcement in identifying crime suspects are substantially more error-prone on facial images depicting darker skin tones and females as compared to facial images depicting Caucasian males. This bias can lead to citizens being wrongfully investigated by police along racial and gender lines. The author develops and defends “A Liberal Argument Against Biased FRT,” which concludes that law enforcement use of biased FRT is inconsistent with the classical liberal requirement that government treat all citizens equally before the law. Two objections to this argument are considered and shown to be unsound. The author concludes by suggesting that equality before the law should be preserved while the problem of machine bias ought to be resolved before FRT and other types of artificial intelligence (AI) are deployed by governments in liberal democracies.
46
de Sá AAR, Carvalho JD, Naves ELM. Reflections on epistemological aspects of artificial intelligence during the COVID-19 pandemic. AI & Society 2021; 38:1-8. [PMID: 34866808] [PMCID: PMC8627296] [DOI: 10.1007/s00146-021-01315-9]
Abstract
Artificial intelligence (AI) plays an important role and has been used by several countries as a health strategy in attempts to understand, control, and find a cure for the disease caused by the Coronavirus. Intelligent systems can help accelerate the development of antivirals for the Coronavirus and predict new variants of the virus. For this reason, much research on COVID-19 has been developed with the aim of contributing new discoveries about the Coronavirus. However, some epistemological aspects of the use of AI during the COVID-19 pandemic deserve discussion and reflection. In this scenario, this article reflects on two epistemological aspects raised by the COVID-19 pandemic: (1) the epistemological aspect resulting from the use of patient data to fill the knowledge bases of intelligent systems; and (2) the epistemological problem arising from health professionals' dependence on the results and diagnoses issued by intelligent systems. In addition, we present some epistemological challenges to be addressed during a pandemic period.
Affiliation(s)
- Angela A. R. de Sá
- Assistive Technology Group, Faculty of Electrical Engineering, Federal University of Uberlândia, Uberlândia, Brazil
- Jairo D. Carvalho
- Technologies Study Group, Faculty of Philosophy, Federal University of Uberlândia, Uberlândia, Brazil
- Eduardo L. M. Naves
- Assistive Technology Group, Faculty of Electrical Engineering, Federal University of Uberlândia, Uberlândia, Brazil

47
Cordeiro JV. Digital Technologies and Data Science as Health Enablers: An Outline of Appealing Promises and Compelling Ethical, Legal, and Social Challenges. Front Med (Lausanne) 2021; 8:647897. [PMID: 34307394] [PMCID: PMC8295525] [DOI: 10.3389/fmed.2021.647897]
Abstract
Digital technologies and data science promise to revolutionize healthcare by transforming the way health and disease are analyzed and managed in the future. Digital health applications in healthcare include telemedicine; electronic health records; wearable, implantable, injectable and ingestible digital medical devices; mobile health apps; and the application of artificial intelligence and machine learning algorithms to medical and public health prognosis and decision-making. As is often the case with technological advancement, progress in digital health raises compelling ethical, legal, and social implications (ELSI). This article aims to succinctly map the relevant ELSI of the digital health field. The issues of patient autonomy; assessment, value attribution, and validation of health innovation; equity and trustworthiness in healthcare; professional roles and skills; and data protection and security are highlighted against the backdrop of the risks of dehumanization of care, the limitations of machine learning-based decision-making and, ultimately, the future contours of human interaction in medicine and public health. The running theme of this article is the underlying tension between the promises of digital health and its many challenges, heightened by the contrast between the rapid pace of scientific progress and the slower responses of law and ethics. Digital applications can prove to be valuable allies for human skills in medicine and public health. Similarly, ethics and the law can be perceived not merely as obstacles but also as promoters of fairness, inclusiveness, creativity, and innovation in health.
Affiliation(s)
- João V Cordeiro
- Public Health Research Centre, NOVA National School of Public Health, Universidade NOVA de Lisboa, Lisboa, Portugal
- Comprehensive Health Research Center, Universidade NOVA de Lisboa, Lisboa, Portugal
- Centro Interdisciplinar de Ciências Sociais, Lisboa, Portugal

48
van Baalen S, Boon M, Verhoef P. From clinical decision support to clinical reasoning support systems. J Eval Clin Pract 2021; 27:520-528. [PMID: 33554432] [PMCID: PMC8248191] [DOI: 10.1111/jep.13541]
Abstract
Despite the great promise that artificial intelligence (AI) holds for health care, the uptake of such technologies into medical practice is slow. In this paper, we focus on the epistemological issues arising from the development and implementation of one class of AI for clinical practice, namely clinical decision support systems (CDSS). We first provide an overview of the epistemic tasks of medical professionals, and then analyse which of these tasks can be supported by CDSS, while also explaining why some of them should remain the territory of human experts. Clinical decision making involves a reasoning process in which clinicians combine different types of information into a coherent and adequate 'picture of the patient' that enables them to draw explainable and justifiable conclusions for which they bear epistemological responsibility. Therefore, we suggest that it is more appropriate to think of CDSS as clinical reasoning support systems (CRSS). Developing CRSS that support clinicians' reasoning process requires that: (a) CRSS are developed on the basis of relevant and well-processed data; and (b) the system facilitates interaction with the clinician. Medical experts must therefore collaborate closely with the AI experts developing the CRSS. In addition, responsible use of a CRSS requires that the output it generates is empirically justified through an empirical link with the individual patient. In practice, this means that the system indicates which factors contributed to an advice, allowing the user (clinician) to evaluate whether those factors are medically plausible and applicable to the patient. Finally, we defend that proper implementation of CRSS allows combining human and artificial intelligence into hybrid intelligence, where both perform clearly delineated and complementary empirical tasks: whereas CRSS can assist with statistical reasoning and finding patterns in complex data, it is the clinicians' task to interpret, integrate, and contextualize.
Affiliation(s)
- Mieke Boon
- Department of Philosophy, University of Twente, Enschede, The Netherlands

49
Begley K, Begley C, Smith V. Shared decision-making and maternity care in the deep learning age: Acknowledging and overcoming inherited defeaters. J Eval Clin Pract 2021; 27:497-503. [PMID: 33188540] [PMCID: PMC9292822] [DOI: 10.1111/jep.13515]
Abstract
In recent years there has been an explosion of interest in artificial intelligence (AI), both in health care and in academic philosophy. This has been driven mainly by the rise of effective machine learning and deep learning algorithms, together with increases in data collection and processing power, which have enabled rapid progress in many areas. However, the use of this technology has brought with it philosophical issues and practical problems, in particular epistemic and ethical ones. In this paper the authors, with backgrounds in philosophy, maternity care practice, and clinical research, draw upon and extend a recent framework for shared decision-making (SDM) that identified a duty of care to the client's knowledge as a necessary condition for SDM. This duty entails the responsibility to acknowledge and overcome epistemic defeaters. The framework is applied to the use of AI in maternity care, in particular the use of machine learning and deep learning technology to attempt to enhance electronic fetal monitoring (EFM). In doing so, various sub-kinds of epistemic defeater, namely transparent, opaque, underdetermined, and inherited defeaters, are taxonomized and discussed. The authors argue that, although effective current or future AI-enhanced EFM may impose an epistemic obligation on clinicians to rely on such systems' predictions or diagnoses as input to SDM, such obligations may be overridden by inherited defeaters, caused by a form of algorithmic bias. The existence of inherited defeaters implies that the duty of care to the client's knowledge extends to any situation in which a clinician (or anyone else) is involved in producing training data for a system that will be used in SDM. Any future AI must be capable of assessing women individually, taking into account a wide range of factors including women's preferences, to provide a holistic range of evidence for clinical decision-making.
Affiliation(s)
- Keith Begley
- Department of Philosophy, Trinity College Dublin, Dublin, Ireland
- Cecily Begley
- School of Nursing and Midwifery, Trinity College Dublin, Dublin, Ireland
- Valerie Smith
- School of Nursing and Midwifery, Trinity College Dublin, Dublin, Ireland

50
Amann J, Blasimme A, Vayena E, Frey D, Madai VI. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 2020; 20:310. [PMID: 33256715] [PMCID: PMC7706019] [DOI: 10.1186/s12911-020-01332-6]
Abstract
BACKGROUND Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. METHODS Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the "Principles of Biomedical Ethics" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. RESULTS Each of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI.
We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health. CONCLUSIONS To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.
Affiliation(s)
- Julia Amann
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Hottingerstrasse 10, 8092 Zurich, Switzerland
- Alessandro Blasimme
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Hottingerstrasse 10, 8092 Zurich, Switzerland
- Effy Vayena
- Health Ethics and Policy Lab, Department of Health Sciences and Technology, ETH Zurich, Hottingerstrasse 10, 8092 Zurich, Switzerland
- Dietmar Frey
- Charité Lab for Artificial Intelligence in Medicine (CLAIM), Charité - Universitätsmedizin Berlin, Berlin, Germany
- Vince I Madai
- Charité Lab for Artificial Intelligence in Medicine (CLAIM), Charité - Universitätsmedizin Berlin, Berlin, Germany
- School of Computing and Digital Technology, Faculty of Computing, Engineering and the Built Environment, Birmingham City University, Birmingham, UK