1. Finkelstein J, Gabriel A, Schmer S, Truong TT, Dunn A. Identifying Facilitators and Barriers to Implementation of AI-Assisted Clinical Decision Support in an Electronic Health Record System. J Med Syst 2024; 48:89. [PMID: 39292314; PMCID: PMC11410896; DOI: 10.1007/s10916-024-02104-9]
Abstract
Recent advancements in computing have led to the development of artificial intelligence (AI)-enabled healthcare technologies. AI-assisted clinical decision support (CDS) integrated into electronic health records (EHR) has been demonstrated to have significant potential to improve clinical care. With the rapid proliferation of AI-assisted CDS came the realization that a lack of careful consideration of the socio-technical issues surrounding the implementation and maintenance of these tools can result in unanticipated consequences, missed opportunities, and suboptimal uptake of potentially useful technologies. The 48-h Discharge Prediction Tool (48DPT) is a new AI-assisted EHR CDS tool to facilitate discharge planning. This study aimed to methodically assess the implementation of 48DPT and to identify barriers and facilitators of adoption and maintenance using validated implementation science frameworks. The major dimensions of RE-AIM (Reach, Effectiveness, Adoption, Implementation, Maintenance) and the constructs of the Consolidated Framework for Implementation Research (CFIR) were used to analyze interviews of 24 key stakeholders using 48DPT. This systematic assessment of the 48DPT implementation allowed us to describe facilitators of and barriers to implementation, such as lack of awareness, lack of accuracy and trust, limited accessibility, and limited transparency. Based on our evaluation, we identified the factors crucial to successful implementation of AI-assisted EHR CDS. Future implementation efforts for AI-assisted EHR CDS should engage key clinical stakeholders in AI tool development from the very inception of the project, support transparency and explainability of the AI models, provide ongoing education and onboarding for clinical users, and obtain continuous input from clinical staff on CDS performance.
Affiliation(s)
- Joseph Finkelstein
- Department of Biomedical Informatics, University of Utah, 421 Wakara Way, Rm. 2028, Salt Lake City, UT, 84108, USA
- Aileen Gabriel
- Department of Biomedical Informatics, University of Utah, 421 Wakara Way, Rm. 2028, Salt Lake City, UT, 84108, USA
- Susanna Schmer
- Department of Case Management, Mount Sinai Health System, New York, NY, USA
- Tuyet-Trinh Truong
- Division of Hospital Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Andrew Dunn
- Division of Hospital Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA
2. Natesan D, Eisenstein EL, Thomas SM, Eclov NCW, Dalal NH, Stephens SJ, Malicki M, Shields S, Cobb A, Mowery YM, Niedzwiecki D, Tenenbaum JD, Palta M, Hong JC. Health Care Cost Reductions with Machine Learning-Directed Evaluations during Radiation Therapy - An Economic Analysis of a Randomized Controlled Study. NEJM AI 2024; 1. [PMID: 38586278; PMCID: PMC10997376; DOI: 10.1056/aioa2300118]
Abstract
BACKGROUND Machine learning (ML) may cost-effectively direct health care by identifying the patients most likely to benefit from preventative interventions that avoid negative and expensive outcomes. System for High-Intensity Evaluation During Radiation Therapy (SHIELD-RT; NCT04277650) was a single-institution, randomized controlled study in which electronic health record-based ML accurately identified patients at high risk for acute care (emergency visit or hospitalization) during radiotherapy (RT) and targeted them for supplemental clinical evaluations. This ML-directed intervention decreased acute care utilization. Given the limited prospective data on the ability of ML to direct interventions cost-efficiently, an economic analysis was performed. METHODS A post hoc economic analysis of SHIELD-RT was conducted that included RT courses from January 7, 2019, to June 30, 2019. ML-identified high-risk courses (≥10% risk of acute care during RT) were randomized to standard-of-care weekly clinical evaluations with ad hoc supplemental evaluations at clinician discretion versus mandatory twice-weekly evaluations. The primary outcome was the difference in mean total medical costs during and 15 days after RT. Acute care costs were obtained via institutional cost accounting; physician and intervention costs were estimated via Medicare and Medicaid data. Negative binomial regression was used to estimate cost outcomes after adjustment for patient and disease factors. RESULTS A total of 311 high-risk RT courses among 305 patients were randomized to the standard (n=157) or the intervention (n=154) group. Unadjusted mean intervention group supplemental visit costs were $155 per course (95% confidence interval, $142 to $168). The intervention group had fewer acute care visits per course (standard, 0.47; intervention, 0.31; P=0.04). Total mean adjusted costs were $3110 per course for the standard group and $1494 for the intervention group (difference in means, $1616 [95% confidence interval, $1450 to $1783]; P=0.03). CONCLUSIONS In this economic analysis of a randomized controlled study of health care ML, mandatory supplemental evaluations for ML-identified high-risk patients were associated with both reduced total medical costs and improved clinical outcomes. Further study is needed to determine whether these economic results are generalizable. (Funded in part by The Duke Endowment, the Conquer Cancer Foundation, the Duke Department of Radiation Oncology, and the National Cancer Institute of the National Institutes of Health [R01CA277782]; ClinicalTrials.gov number, NCT04277650.)
Affiliation(s)
- Divya Natesan
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC
- Department of Radiation Oncology, Duke University, Durham, NC
- Samantha M Thomas
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC
- Duke Cancer Institute, Duke University, Durham, NC
- Nicole H Dalal
- Department of Radiation Oncology, Duke University, Durham, NC
- Mary Malicki
- Department of Radiation Oncology, Duke University, Durham, NC
- Stacey Shields
- Department of Radiation Oncology, Duke University, Durham, NC
- Alyssa Cobb
- Department of Radiation Oncology, Duke University, Durham, NC
- Yvonne M Mowery
- Department of Radiation Oncology, Duke University, Durham, NC
- Duke Cancer Institute, Duke University, Durham, NC
- Donna Niedzwiecki
- Department of Biostatistics and Bioinformatics, Duke University, Durham, NC
- Duke Cancer Institute, Duke University, Durham, NC
- Manisha Palta
- Department of Radiation Oncology, Duke University, Durham, NC
- Duke Cancer Institute, Duke University, Durham, NC
- Julian C Hong
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA
- Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, CA
- UCSF-UC Berkeley Joint Program in Computational Precision Health, San Francisco, CA
3. Magrabi F, Lyell D, Coiera E. Automation in Contemporary Clinical Information Systems: a Survey of AI in Healthcare Settings. Yearb Med Inform 2023; 32:115-126. [PMID: 38147855; PMCID: PMC10751141; DOI: 10.1055/s-0043-1768733]
Abstract
AIMS AND OBJECTIVES To examine the nature and use of automation in contemporary clinical information systems by reviewing studies reporting the implementation and evaluation of artificial intelligence (AI) technologies in healthcare settings. METHOD PubMed/MEDLINE, Web of Science, EMBASE, the tables of contents of major informatics journals, and the bibliographies of articles were searched for studies reporting evaluation of AI in clinical settings from January 2021 to December 2022. We documented the clinical application areas and tasks supported, and the level of system autonomy. Reported effects on user experience, decision-making, care delivery, and outcomes were summarised. RESULTS AI technologies are being applied in a wide variety of clinical areas. Most contemporary systems utilise deep learning, use routinely collected data, support diagnosis and triage, are assistive (requiring users to confirm or approve AI-provided information or decisions), and are used by doctors in acute care settings in high-income nations. AI systems are integrated into and used within existing clinical information systems, including electronic medical records. There is limited support for One Health goals. Evaluation is largely based on quantitative methods measuring effects on decision-making. CONCLUSION AI systems are being implemented and evaluated in many clinical areas. There remain many opportunities to understand patterns of routine use and to evaluate effects on decision-making, care delivery, and patient outcomes using mixed methods. Support for One Health, including the integration of data about environmental factors and social determinants, needs further exploration.
Affiliation(s)
- Farah Magrabi
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- David Lyell
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Enrico Coiera
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
4. Hong JC, Patel P, Eclov NCW, Stephens SJ, Mowery YM, Tenenbaum JD, Palta M. Healthcare provider evaluation of machine learning-directed care: reactions to deployment on a randomised controlled study. BMJ Health Care Inform 2023; 30:e100674. [PMID: 36764680; PMCID: PMC9923272; DOI: 10.1136/bmjhci-2022-100674]
Abstract
OBJECTIVES Clinical artificial intelligence and machine learning (ML) face barriers related to implementation and trust, and there have been few prospective opportunities to evaluate these concerns. System for High Intensity EvaLuation During Radiotherapy (NCT03775265) was a randomised controlled study demonstrating that ML accurately directed clinical evaluations to reduce acute care during cancer radiotherapy. We characterised subsequent perceptions of and barriers to implementation. METHODS An anonymous 7-question Likert-type scale survey with optional free text, focused on workflow, agreement with ML, and patient experience, was administered to multidisciplinary staff. RESULTS 59/71 (83%) responded. 81% disagreed/strongly disagreed that their workflow was disrupted. 67% agreed/strongly agreed that patients undergoing intervention were high risk. 75% agreed/strongly agreed that they would implement the ML approach routinely if the study was positive. Free-text feedback focused on patient education and ML predictions. CONCLUSIONS Randomised data and firsthand experience support positive reception of clinical ML. Providers highlighted future priorities, including patient counselling and workflow optimisation.
Affiliation(s)
- Julian C Hong
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Bakar Computational Health Sciences Institute, University of California San Francisco, San Francisco, California, USA
- Joint Program in Computational Precision Health, UCSF-UC Berkeley, San Francisco, California, USA
- Pranalee Patel
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Neville C W Eclov
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Sarah J Stephens
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Yvonne M Mowery
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Department of Head and Neck Surgery & Communication Sciences, Duke University, Durham, North Carolina, USA
- Jessica D Tenenbaum
- Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina, USA
- Manisha Palta
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA