1. Chomutare T, Tejedor M, Svenning TO, Marco-Ruiz L, Tayefi M, Lind K, Godtliebsen F, Moen A, Ismail L, Makhlysheva A, Ngo PD. Artificial Intelligence Implementation in Healthcare: A Theory-Based Scoping Review of Barriers and Facilitators. Int J Environ Res Public Health 2022;19:16359. [PMID: 36498432; PMCID: PMC9738234; DOI: 10.3390/ijerph192316359]
Abstract
There is a large proliferation of complex data-driven artificial intelligence (AI) applications in many aspects of our daily lives, but their implementation in healthcare is still limited. This scoping review takes a theoretical approach to examine the barriers and facilitators based on empirical data from existing implementations. We searched the major databases of relevant scientific publications for articles related to AI in clinical settings, published between 2015 and 2021. Based on the theoretical constructs of the Consolidated Framework for Implementation Research (CFIR), we used a deductive, followed by an inductive, approach to extract facilitators and barriers. After screening 2784 studies, 19 studies were included in this review. Most of the cited facilitators were related to engagement with and management of the implementation process, while the most cited barriers dealt with the intervention's generalizability and interoperability with existing systems, as well as the inner setting's data quality and availability. We noted per-study imbalances in the reporting of the theoretical domains. Our findings suggest a greater need for implementation science expertise in AI implementation projects, to improve both the implementation process and the quality of scientific reporting.
Publication type: Scoping Review
2. Novak LL, Russell RG, Garvey K, Patel M, Thomas Craig KJ, Snowdon J, Miller B. Clinical use of artificial intelligence requires AI-capable organizations. JAMIA Open 2023;6:ooad028. [PMID: 37152469; PMCID: PMC10155810; DOI: 10.1093/jamiaopen/ooad028]
Abstract
Artificial intelligence-based algorithms are being widely implemented in health care, even as evidence emerges of bias in their design, problems with implementation, and potential harm to patients. To achieve the promise of AI-based tools for improving health, health care organizations will need to be AI-capable, with internal and external systems functioning in tandem to ensure the safe, ethical, and effective use of AI-based tools. Ideas are starting to emerge about the organizational routines, competencies, resources, and infrastructures that will be required for safe and effective deployment of AI in health care, but there has been little empirical research. Infrastructures that provide legal and regulatory guidance for managers, clinician competencies for the safe and effective use of AI-based tools, and learner-centric resources such as clear AI documentation and local health ecosystem impact reviews can help drive continuous improvement.
Publication type: Review
3. van der Vegt AH, Scott IA, Dermawan K, Schnetler RJ, Kalke VR, Lane PJ. Deployment of machine learning algorithms to predict sepsis: systematic review and application of the SALIENT clinical AI implementation framework. J Am Med Inform Assoc 2023:7161075. [PMID: 37172264; DOI: 10.1093/jamia/ocad075]
Abstract
OBJECTIVE To retrieve and appraise studies of deployed artificial intelligence (AI)-based sepsis prediction algorithms using systematic methods; identify implementation barriers, enablers, and key decisions; and map these to a novel end-to-end clinical AI implementation framework. MATERIALS AND METHODS Systematically review studies of clinically applied AI-based sepsis prediction algorithms with regard to methodological quality, deployment and evaluation methods, and outcomes. Identify contextual factors that influence implementation and map these factors to the SALIENT implementation framework. RESULTS The review identified 30 articles on algorithms applied in adult hospital settings, with 5 studies reporting significantly decreased mortality post-implementation. Eight groups of studies were identified, each group sharing a common algorithm. We identified 14 barriers, 26 enablers, and 22 decision points, all of which could be mapped to the 5 stages of the SALIENT implementation framework. DISCUSSION Empirical studies of deployed sepsis prediction algorithms demonstrate their potential for improving care and reducing mortality but reveal persisting gaps in existing implementation guidance. In the examined publications, key decision points reflecting real-world implementation experience could be mapped to the SALIENT framework and, as these decision points appear to be AI-task agnostic, the framework may also be applicable to non-sepsis algorithms. The mapping clarified where and when barriers, enablers, and key decisions arise within the end-to-end AI implementation process. CONCLUSIONS A systematic review of real-world implementation studies of sepsis prediction algorithms was used to validate an end-to-end staged implementation framework that accounts for key factors warranting attention in ensuring successful deployment and extends previous AI implementation frameworks.
4. van der Vegt AH, Scott IA, Dermawan K, Schnetler RJ, Kalke VR, Lane PJ. Implementation frameworks for end-to-end clinical AI: derivation of the SALIENT framework. J Am Med Inform Assoc 2023;30:1503-1515. [PMID: 37208863; PMCID: PMC10436156; DOI: 10.1093/jamia/ocad088]
Abstract
OBJECTIVE To derive a comprehensive implementation framework for clinical AI models within hospitals, informed by existing AI frameworks and integrated with reporting standards for clinical AI research. MATERIALS AND METHODS (1) Derive a provisional implementation framework based on the taxonomy of Stead et al and integrated with current reporting standards for AI research: TRIPOD, DECIDE-AI, and CONSORT-AI. (2) Undertake a scoping review of published clinical AI implementation frameworks and identify key themes and stages. (3) Perform a gap analysis and refine the framework by incorporating missing items. RESULTS The provisional AI implementation framework, called SALIENT, was mapped to 5 stages common to both the taxonomy and the reporting standards. A scoping review retrieved 20 studies, from which 247 themes, stages, and subelements were identified. A gap analysis identified 5 new cross-stage themes and 16 new tasks. The final framework comprised 5 stages, 7 elements, and 4 components, including the AI system, data pipeline, human-computer interface, and clinical workflow. DISCUSSION This pragmatic framework resolves gaps in existing stage- and theme-based clinical AI implementation guidance by comprehensively addressing the what (components), when (stages), and how (tasks) of AI implementation, as well as the who (organization) and why (policy domains). By integrating research reporting standards into SALIENT, the framework is grounded in rigorous evaluation methodologies. The framework still requires validation as applicable to real-world studies of deployed AI models. CONCLUSIONS A novel end-to-end framework has been developed for implementing AI within hospital clinical practice that builds on previous AI implementation frameworks and research reporting standards.
Publication type: Review
5. van der Vegt AH, Campbell V, Mitchell I, Malycha J, Simpson J, Flenady T, Flabouris A, Lane PJ, Mehta N, Kalke VR, Decoyna JA, Es’haghi N, Liu CH, Scott IA. Systematic review and longitudinal analysis of implementing artificial intelligence to predict clinical deterioration in adult hospitals: what is known and what remains uncertain. J Am Med Inform Assoc 2024;31:509-524. [PMID: 37964688; PMCID: PMC10797271; DOI: 10.1093/jamia/ocad220]
Abstract
OBJECTIVE To identify factors influencing implementation of machine learning algorithms (MLAs) that predict clinical deterioration in hospitalized adult patients, and relate these to a validated implementation framework. MATERIALS AND METHODS A systematic review of studies of implemented or trialed real-time clinical deterioration prediction MLAs was undertaken, which identified: how MLA implementation was measured; the impact of MLAs on clinical processes and patient outcomes; and the barriers, enablers, and uncertainties within the implementation process. Review findings were then mapped to the SALIENT end-to-end implementation framework to identify the implementation stages at which these factors applied. RESULTS Thirty-seven articles relating to 14 groups of MLAs were identified, each group trialing or implementing a bespoke algorithm. One hundred and seven distinct implementation evaluation metrics were identified. Four groups reported decreased hospital mortality, one significantly. We identified 24 barriers, 40 enablers, and 14 uncertainties and mapped these to the 5 stages of the SALIENT implementation framework. DISCUSSION Algorithm performance decreased between the in silico and trial stages. Inclusion of both silent and pilot trials was associated with decreased mortality, as was the use of logistic regression algorithms with fewer than 39 variables. Mitigation of alert fatigue via alert suppression and threshold configuration was commonly employed across groups. CONCLUSIONS There is evidence that real-world implementation of clinical deterioration prediction MLAs may improve clinical outcomes. Various factors identified as influencing the success or failure of implementation can be mapped to different stages of implementation, thereby providing useful and practical guidance for implementers.
Publication type: Systematic Review
6. Bienefeld N, Keller E, Grote G. Human-AI Teaming in Critical Care: A Comparative Analysis of Data Scientists' and Clinicians' Perspectives on AI Augmentation and Automation. J Med Internet Res 2024;26:e50130. [PMID: 39038285; DOI: 10.2196/50130]
Abstract
BACKGROUND Artificial intelligence (AI) holds immense potential for enhancing clinical and administrative health care tasks. However, slow adoption and implementation challenges highlight the need to consider how humans can effectively collaborate with AI within broader sociotechnical systems in health care. OBJECTIVE Using the example of intensive care units (ICUs), we compare data scientists' and clinicians' assessments of the optimal utilization of human and AI capabilities by determining suitable levels of human-AI teaming for safely and meaningfully augmenting or automating 6 core tasks. The goal is to provide actionable recommendations for policy makers and health care practitioners regarding AI design and implementation. METHODS In this multimethod study, we combine a systematic task analysis across 6 ICUs with an international Delphi survey involving 19 health data scientists from industry and academia and 61 ICU clinicians (25 physicians and 36 nurses) to define and assess optimal levels of human-AI teaming (level 1=no performance benefits; level 2=AI augments human performance; level 3=humans augment AI performance; level 4=AI performs without human input). Stakeholder groups also considered ethical and social implications. RESULTS Both stakeholder groups chose level 2 and 3 human-AI teaming for 4 of the 6 core ICU tasks. For one task (monitoring), level 4 was the preferred design choice. For the task of patient interactions, both data scientists and clinicians agreed that AI should not be used regardless of technological feasibility, owing to the importance of the physician-patient and nurse-patient relationship and to ethical concerns. Human-AI design choices rely on interpretability, predictability, and control over AI systems. If these conditions are not met and AI performs below human-level reliability, a reduction to level 1 or a shift of accountability away from human end users is advised. If AI performs at or beyond human-level reliability and these conditions are not met, shifting to level 4 automation should be considered to ensure safe and efficient human-AI teaming. CONCLUSIONS By considering the sociotechnical system and determining appropriate levels of human-AI teaming, our study showcases the potential for improving the safety and effectiveness of AI usage in ICUs and broader health care settings. Regulatory measures should prioritize interpretability, predictability, and control if clinicians hold full accountability. Ethical and social implications must be carefully evaluated to ensure effective collaboration between humans and AI, particularly considering the most recent advancements in generative AI.
Publication type: Comparative Study
7. Chen S, Lobo BC. Regulatory and Implementation Considerations for Artificial Intelligence. Otolaryngol Clin North Am 2024;57:871-886. [PMID: 38839554; DOI: 10.1016/j.otc.2024.04.007]
Abstract
Successful artificial intelligence (AI) implementation is predicated on the trust of clinicians and patients, and is achieved through a culture of responsible use, focusing on regulations, standards, and education. Otolaryngologists can overcome barriers in AI implementation by promoting data standardization through professional societies, engaging in institutional efforts to integrate AI, and developing otolaryngology-specific AI education for both trainees and practitioners.
Publication type: Review
8. Castonguay A, Lovis C. Introducing the "AI Language Models in Health Care" Section: Actionable Strategies for Targeted and Wide-Scale Deployment. JMIR Med Inform 2023;11:e53785. [PMID: 38127431; PMCID: PMC10767624; DOI: 10.2196/53785]
Abstract
Health care is on the cusp of a significant technological leap, courtesy of advancements in artificial intelligence (AI) language models, but ensuring the ethical design, deployment, and use of these technologies is imperative to truly realize their potential for improving health care delivery and promoting human well-being and safety. These models have demonstrated remarkable prowess in generating humanlike text, evidenced by a growing body of research and real-world applications. This capability paves the way for enhanced patient engagement, clinical decision support, and a plethora of other applications once considered beyond reach. However, the journey from potential to real-world application is laden with challenges, ranging from ensuring reliability and transparency to navigating a complex regulatory landscape. Comprehensive evaluation and rigorous validation are still needed to ensure that these models are reliable, transparent, and ethically sound. This editorial introduces the new section, titled "AI Language Models in Health Care," which seeks to create a platform for academics, practitioners, and innovators to share their insights, research findings, and real-world applications of AI language models in health care. The aim is to foster a community that is not only excited about the possibilities but also critically engaged with the ethical, practical, and regulatory challenges that lie ahead.
Publication type: Editorial
9. Marco-Ruiz L, Hernández MÁT, Ngo PD, Makhlysheva A, Svenning TO, Dyb K, Chomutare T, Llatas CF, Muñoz-Gama J, Tayefi M. A multinational study on artificial intelligence adoption: Clinical implementers' perspectives. Int J Med Inform 2024;184:105377. [PMID: 38377725; DOI: 10.1016/j.ijmedinf.2024.105377]
Abstract
BACKGROUND Despite substantial progress in AI research for healthcare, translating research achievements into AI systems in clinical settings is challenging and, in many cases, unsatisfactory. As a result, many AI investments have stalled at the prototype level, never reaching clinical settings. OBJECTIVE To improve the chances of future AI implementation projects succeeding, we analyzed the experiences of clinical AI system implementers to better understand the challenges and success factors in their implementations. METHODS Thirty-seven implementers of clinical AI from European, North American, and South American countries were interviewed. Semi-structured interviews were transcribed and analyzed qualitatively with the framework method, identifying the success factors and the reasons for challenges, as well as documenting proposals from implementers to improve AI adoption in clinical settings. RESULTS We gathered the implementers' requirements for facilitating AI adoption in the clinical setting. The main findings include: 1) the lesser importance of AI explainability in favor of proper clinical validation studies; 2) the need to actively involve clinical practitioners, and not only clinical researchers, in the inception of AI research projects; 3) the need for better information structures and processes to manage data access and the ethical approval of AI projects; 4) the need for better support for regulatory compliance and avoidance of duplication across data management approval bodies; 5) the need to increase both clinicians' and citizens' literacy regarding the benefits and limitations of AI; and 6) the need for better funding schemes to support the implementation, embedding, and validation of AI in the clinical workflow, beyond pilots. CONCLUSION Participants in the interviews are positive about the future of AI in clinical settings. At the same time, they propose numerous measures to transfer research advances into implementations that will benefit healthcare personnel. Transferring AI research into benefits for healthcare workers and patients requires adjustments in regulations, data access procedures, education, funding schemes, and validation of AI systems.
10. Sriharan A, Sekercioglu N, Mitchell C, Senkaiahliyan S, Hertelendy A, Porter T, Banaszak-Holl J. Leadership for AI Transformation in Health Care Organization: Scoping Review. J Med Internet Res 2024;26:e54556. [PMID: 39009038; PMCID: PMC11358667; DOI: 10.2196/54556]
Abstract
BACKGROUND The leaders of health care organizations are grappling with rising expenses and surging demands for health services. In response, they are increasingly embracing artificial intelligence (AI) technologies to improve patient care delivery, alleviate operational burdens, and efficiently improve health care safety and quality. OBJECTIVE In this paper, we map the current literature and synthesize insights on the role of leadership in driving AI transformation within health care organizations. METHODS We conducted a comprehensive search across several databases, including MEDLINE (via Ovid), PsycINFO (via Ovid), CINAHL (via EBSCO), Business Source Premier (via EBSCO), and Canadian Business & Current Affairs (via ProQuest), spanning articles published from 2015 to June 2023 discussing AI transformation within the health care sector. Specifically, we focused on empirical studies with a particular emphasis on leadership. We used an inductive, thematic analysis approach to qualitatively map the evidence. The findings were reported in accordance with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. RESULTS A comprehensive review of 2813 unique abstracts led to the retrieval of 97 full-text articles, with 22 included for detailed assessment. Our literature mapping reveals that successful AI integration within health care organizations requires leadership engagement across technological, strategic, operational, and organizational domains. Leaders must demonstrate a blend of technical expertise, adaptive strategies, and strong interpersonal skills to navigate the dynamic health care landscape shaped by complex regulatory, technological, and organizational factors. CONCLUSIONS Leading AI transformation in health care requires a multidimensional approach, with leadership across technological, strategic, operational, and organizational domains. Organizations should implement a comprehensive leadership development strategy, including targeted training and cross-functional collaboration, to equip leaders with the skills needed for AI integration. Additionally, when upskilling or recruiting AI talent, priority should be given to individuals with a strong mix of technical expertise, adaptive capacity, and interpersonal acumen, enabling them to navigate the unique complexities of the health care environment.
Publication type: Scoping Review
11. Larson DB, Doo FX, Allen B, Mongan J, Flanders AE, Wald C. Proceedings From the 2022 ACR-RSNA Workshop on Safety, Effectiveness, Reliability, and Transparency in AI. J Am Coll Radiol 2024;21:1119-1129. [PMID: 38354844; DOI: 10.1016/j.jacr.2024.01.024]
Abstract
Despite the surge in artificial intelligence (AI) development for health care, particularly for medical imaging, there has been limited adoption of AI tools into clinical practice. During a 1-day workshop in November 2022, co-organized by the ACR and the RSNA, participants outlined experiences and problems with implementing AI in clinical practice, defined the needs of various stakeholders in the AI ecosystem, and elicited potential solutions and strategies related to the safety, effectiveness, reliability, and transparency of AI algorithms. Participants included radiologists from academic and community radiology practices, informatics leaders responsible for AI implementation, regulatory agency employees, and specialty society representatives. The major themes that emerged fell into two categories: (1) AI product development and (2) implementation of AI-based applications in clinical practice. In particular, participants highlighted key aspects of AI product development, including clear clinical task definitions; well-curated data from diverse geographic, economic, and health care settings; standards and mechanisms to monitor model reliability; and transparency regarding model performance in both controlled and real-world settings. For implementation, participants emphasized the need for strong institutional governance; systematic evaluation, selection, and validation methods conducted by local teams; seamless integration into the clinical workflow; performance monitoring and support by local teams; performance monitoring by external entities; and alignment of incentives through credentialing and reimbursement. Participants predicted that clinical implementation of AI in radiology will remain limited until the safety, effectiveness, reliability, and transparency of such tools are more fully addressed.