1. De Micco F, Grassi S, Tomassini L, Di Palma G, Ricchezze G, Scendoni R. Robotics and AI into healthcare from the perspective of European regulation: who is responsible for medical malpractice? Front Med (Lausanne) 2024;11:1428504. PMID: 39309674; PMCID: PMC11412847; DOI: 10.3389/fmed.2024.1428504.
Abstract
The integration of robotics and artificial intelligence into medical practice is transforming patient care. This fusion of advanced technologies with healthcare offers significant benefits, including more precise diagnoses, personalised treatments and improved health data management. However, the medico-legal challenges that accompany this progress must be addressed with great care. The responsibilities of the various actors involved in medical liability cases are not yet clearly defined, especially when artificial intelligence takes part in the decision-making process. Complexity increases when technology intervenes between a person's action and the outcome, making it difficult for the patient to prove harm or negligence. There is also a risk of an unfair distribution of blame between physicians and healthcare institutions. An analysis of European legislation highlights the critical issues surrounding the attribution of legal personality to autonomous robots and the recognition of strict liability for medical doctors and healthcare institutions. Although European legislation has helped to standardise the rules on this issue, some questions remain unresolved. We argue that specific laws are needed to govern medical liability in cases where robotics and artificial intelligence are used in healthcare.
Affiliation(s)
- Francesco De Micco: Research Unit of Bioethics and Humanities, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Rome, Italy; Operative Research Unit of Clinical Affairs, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Simone Grassi: Forensic Medical Sciences, Department of Health Sciences, University of Florence, Florence, Italy
- Luca Tomassini: School of Law, Legal Medicine, Camerino University, Camerino, Italy
- Gianmarco Di Palma: Operative Research Unit of Clinical Affairs, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy; Department of Public Health, Experimental, and Forensic Medicine, University of Pavia, Pavia, Italy
- Giulia Ricchezze: Department of Law, Institute of Legal Medicine, University of Macerata, Macerata, Italy
- Roberto Scendoni: Department of Law, Institute of Legal Medicine, University of Macerata, Macerata, Italy
2. De Micco F, Tambone V, Frati P, Cingolani M, Scendoni R. Disability 4.0: bioethical considerations on the use of embodied artificial intelligence. Front Med (Lausanne) 2024;11:1437280. PMID: 39219800; PMCID: PMC11362069; DOI: 10.3389/fmed.2024.1437280.
Abstract
Robotics and artificial intelligence have marked the beginning of a new era in the care and integration of people with disabilities, helping to promote their independence, autonomy and social participation. In this area, bioethical reflection plays a key role at the anthropological, ethical, legal and socio-political levels. However, opinions and ethical arguments currently diverge widely, there is no consensus on the use of assistive robots, and the focus remains predominantly on the usability of products. The article presents a bioethical analysis that highlights the risk of using embodied artificial intelligence according to a functionalist model. Failure to recognize disability as the result of a complex interplay between health, personal and situational factors could damage both the intrinsic dignity of the person and relations with healthcare workers. Furthermore, the danger of discrimination in access to these new technologies is highlighted, emphasizing the need for an ethical approach that considers the social and moral implications of implementing embodied AI in the field of rehabilitation.
Affiliation(s)
- Francesco De Micco and Vittoradolfo Tambone: Research Unit of Bioethics and Humanities, Department of Medicine and Surgery, University Campus Bio-Medico of Rome, Rome, Italy; Operative Research Unit of Clinical Affairs, Healthcare Bioethics Center, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Paola Frati: Department of Anatomical, Histological, Forensic and Orthopedic Sciences, Sapienza University, Rome, Italy
- Mariano Cingolani and Roberto Scendoni: Department of Law, Institute of Legal Medicine, University of Macerata, Macerata, Italy
3. Harrison TG, Elliott MJ, Tonelli M. Integrating the patient voice: patient-centred and equitable clinical risk prediction for kidney health and disease. Curr Opin Nephrol Hypertens 2024;33:456-463. PMID: 38656234; DOI: 10.1097/mnh.0000000000000993.
Abstract
PURPOSE OF REVIEW: Personalized approaches to care are increasingly common in clinical nephrology. Although risk prediction models are developed to estimate the risk of kidney-disease-related outcomes, they infrequently consider the priorities of the patients they are designed to help.
RECENT FINDINGS: This review discusses certain steps in risk prediction tool development where patients and their priorities can be incorporated. Considering principles of equity throughout the process has been the focus of recent literature.
SUMMARY: Applying a person-centred lens has implications for several aspects of risk prediction research. Incorporating the patient voice may involve partnering with patients as researchers to identify the target outcome for the tool and/or determine priorities for outcomes related to the kidney disease domain of interest. Assessing the list of candidate predictors for associations with inequity is important to ensure the tool will not widen disparity for marginalized groups. Estimating model performance using person-centred measures such as model calibration may be used to compare models and select a tool more useful to inform individual treatment decisions. Finally, there is potential to include patients and families in determining other elements of the prediction framework and implementing the tool once development is complete.
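The calibration check the summary mentions is easy to make concrete. The sketch below is illustrative only and is not from the paper: it uses simulated predicted risks and outcomes as stand-ins for a kidney-outcome validation cohort, and compares observed with predicted risk in quantile bins using scikit-learn, the kind of check that shows whether a tool's individual risk estimates can be taken at face value in shared decision-making.

```python
# Illustrative sketch (not from the paper): assessing calibration of a
# risk prediction tool. y_prob and y_true are simulated stand-ins for a
# model's predicted risks and observed outcomes in a validation cohort.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)
y_prob = rng.uniform(0.0, 1.0, size=1000)   # hypothetical predicted risks
y_true = rng.binomial(1, y_prob)            # outcomes drawn to match those risks

# Compare mean observed vs. mean predicted risk within 10 quantile bins;
# a well-calibrated model gives pairs that lie close to the diagonal.
obs, pred = calibration_curve(y_true, y_prob, n_bins=10, strategy="quantile")
for o, p in zip(obs, pred):
    print(f"predicted risk {p:.2f} -> observed rate {o:.2f}")

# The Brier score summarises overall probabilistic accuracy (lower is better).
print(f"Brier score: {brier_score_loss(y_true, y_prob):.3f}")
```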
Affiliation(s)
- Tyrone G Harrison, Meghan J Elliott and Marcello Tonelli: Department of Medicine; Department of Community Health Sciences; O'Brien Institute for Public Health, Cumming School of Medicine; Libin Cardiovascular Institute, Cumming School of Medicine, University of Calgary, Calgary, Alberta, Canada
4. Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024;183:105342. PMID: 38266426; DOI: 10.1016/j.ijmedinf.2024.105342.
Abstract
BACKGROUND: Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental to acceptance and uptake, but to date little attention has been given to defining or evaluating trust in clinical settings.
OBJECTIVES: A scoping review was conducted to explore how and where the acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective.
METHODS: Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital Library, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing data-driven CDSS relating directly to clinical care, and included trust or a proxy for it as an outcome or in the discussion. The review is reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).
RESULTS: 3291 papers were screened, with 85 primary research studies eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust, via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without evaluating it explicitly; from these, transparency, explainability and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS.
CONCLUSION: There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, and focused research on trustworthiness is limited. Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans: University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
- Gregor Russell: Bradford District Care Trust, New Mill, Victoria Rd, Bradford BD18 3LD, UK
- Kate Absolom: University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK
5. Verma AA, Trbovich P, Mamdani M, Shojania KG. Grand rounds in methodology: key considerations for implementing machine learning solutions in quality improvement initiatives. BMJ Qual Saf 2024;33:121-131. PMID: 38050138; DOI: 10.1136/bmjqs-2022-015713.
Abstract
Machine learning (ML) solutions are increasingly entering healthcare. They are complex sociotechnical systems that include data inputs, ML models, technical infrastructure and human interactions. They hold promise for improving care across a wide range of clinical applications, but if poorly implemented they may disrupt clinical workflows, exacerbate inequities in care and harm patients. Many aspects of ML solutions are similar to other digital technologies, which have well-established approaches to implementation. However, ML applications present distinct implementation challenges: their predictions are often complex and difficult to understand, they can be influenced by biases in the datasets used to develop them, and their effects on human behaviour are poorly understood. This manuscript summarises the current state of knowledge about implementing ML solutions in clinical care and offers practical guidance. We propose three overarching questions for potential users to consider when deploying ML solutions in clinical care: (1) Is a clinical or operational problem likely to be addressed by an ML solution? (2) How can an ML solution be evaluated to determine its readiness for deployment? (3) How can an ML solution be deployed and maintained optimally? The quality improvement community has an essential role to play in ensuring that ML solutions are translated into clinical practice safely, effectively and ethically.
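Question (2), evaluating readiness for deployment, is the most directly computational of the three. The sketch below is a hypothetical illustration, not the authors' method: it trains a stand-in classifier on simulated "development" data, then checks discrimination (AUROC) and probabilistic accuracy (Brier score) on held-out "local" data, gating a silent pilot on assumed thresholds.

```python
# Hypothetical sketch (not the authors' method): a local pre-deployment
# check that validates a candidate model on site-specific held-out data
# before it is allowed to influence clinical workflows.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated data standing in for development-site and local-site cohorts.
X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_dev, X_local, y_dev, y_local = train_test_split(X, y, test_size=0.5, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)  # stand-in for the ML solution
p_local = model.predict_proba(X_local)[:, 1]

auroc = roc_auc_score(y_local, p_local)     # discrimination on local data
brier = brier_score_loss(y_local, p_local)  # probabilistic accuracy on local data
print(f"local AUROC {auroc:.3f}, Brier score {brier:.3f}")

# Deployment gate with assumed thresholds; real thresholds would be agreed
# in advance with clinical and quality improvement stakeholders.
if auroc >= 0.80 and brier <= 0.20:
    print("meets pre-agreed minimums: proceed to a silent (shadow-mode) pilot")
else:
    print("below pre-agreed minimums: recalibrate or retrain before piloting")
```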
Affiliation(s)
- Amol A Verma: Unity Health Toronto, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; Laboratory Medicine and Pathobiology, University of Toronto, Toronto, ON, Canada; Medicine, University of Toronto Faculty of Medicine, Toronto, ON, Canada
- Patricia Trbovich: Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; Centre for Quality Improvement and Patient Safety, Department of Medicine, University of Toronto, Toronto, ON, Canada; North York General Hospital, Toronto, ON, Canada
- Muhammad Mamdani: Unity Health Toronto, Toronto, ON, Canada; Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON, Canada; Medicine, University of Toronto Faculty of Medicine, Toronto, ON, Canada
- Kaveh G Shojania: Medicine, University of Toronto Faculty of Medicine, Toronto, ON, Canada; Sunnybrook Health Sciences Centre, Toronto, ON, Canada