1
Wohlgemut JM, Pisirir E, Stoner RS, Perkins ZB, Marsh W, Tai NRM, Kyrimi E. A scoping review, novel taxonomy and catalogue of implementation frameworks for clinical decision support systems. BMC Med Inform Decis Mak 2024; 24:323. PMID: 39487462; PMCID: PMC11531160; DOI: 10.1186/s12911-024-02739-1.
Abstract
BACKGROUND The primary aim of this scoping review was to synthesise key domains and sub-domains described in existing clinical decision support systems (CDSS) implementation frameworks into a novel taxonomy and demonstrate the most- and least-studied areas. Secondary objectives were to evaluate the frequency and manner of use of each framework, and to catalogue frameworks by implementation stage. METHODS A scoping review of PubMed, Scopus, Web of Science, PsycINFO and Embase was conducted on 12/01/2022, limited to English-language publications from 2000 to 2021. Each framework was categorised as addressing one or multiple stages of implementation: design and development, evaluation, acceptance and integration, and adoption and maintenance. Key parts of each framework were grouped into domains and sub-domains. RESULTS Of 3550 titles identified, 58 papers were included. The most-studied implementation stage was acceptance and integration, while the least-studied was design and development. The three main framework uses were: evaluating adoption, understanding attitudes toward implementation, and framework validation. The most frequently used framework was the Consolidated Framework for Implementation Research. CONCLUSIONS Many frameworks have been published to overcome barriers to CDSS implementation and offer guidance towards successful adoption. However, choosing relevant frameworks may be a challenge for co-developers. A taxonomy of domains addressed by CDSS implementation frameworks is provided, as well as a description of their use and a catalogue of frameworks listed by the implementation stages they address. Future work should ensure that best practices for CDSS design are adequately described and that existing frameworks are well validated. An emphasis on collaboration between clinician and non-clinician affected parties may help advance the field.
Affiliation(s)
- Jared M Wohlgemut
- Centre for Trauma Sciences, Blizard Institute, Queen Mary University of London, London, UK
- Royal London Hospital, Barts Health NHS Trust, London, UK
- Erhan Pisirir
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
- Rebecca S Stoner
- Centre for Trauma Sciences, Blizard Institute, Queen Mary University of London, London, UK
- Royal London Hospital, Barts Health NHS Trust, London, UK
- Zane B Perkins
- Centre for Trauma Sciences, Blizard Institute, Queen Mary University of London, London, UK
- Royal London Hospital, Barts Health NHS Trust, London, UK
- William Marsh
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
- Nigel R M Tai
- Centre for Trauma Sciences, Blizard Institute, Queen Mary University of London, London, UK
- Royal London Hospital, Barts Health NHS Trust, London, UK
- Royal Centre for Defence Medicine, Birmingham, UK
- Evangelia Kyrimi
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
2
Conte ML, Boisvert P, Barrison P, Seifi F, Landis-Lewis Z, Flynn A, Friedman CP. Ten simple rules to make computable knowledge shareable and reusable. PLoS Comput Biol 2024; 20:e1012179. PMID: 38900708; PMCID: PMC11189186; DOI: 10.1371/journal.pcbi.1012179.
Abstract
Computable biomedical knowledge (CBK) is: "the result of an analytic and/or deliberative process about human health, or affecting human health, that is explicit, and therefore can be represented and reasoned upon using logic, formal standards, and mathematical approaches." Representing biomedical knowledge in a machine-interpretable, computable form increases its ability to be discovered, accessed, understood, and deployed. Computable knowledge artifacts can greatly advance the potential for implementation, reproducibility, or extension of the knowledge by users, who may include practitioners, researchers, and learners. Enriching computable knowledge artifacts may help facilitate reuse and translation into practice. Following the examples of 10 Simple Rules papers for scientific code, software, and applications, we present 10 Simple Rules intended to make shared computable knowledge artifacts more useful and reusable. These rules are mainly for researchers and their teams who have decided that sharing their computable knowledge is important, who wish to go beyond simply describing results, algorithms, or models via traditional publication pathways, and who want both to make their research findings more accessible and to help others use their computable knowledge. These rules are roughly organized into 3 categories: planning, engineering, and documentation. Finally, while many of the following examples are of computable knowledge in biomedical domains, these rules are generalizable to computable knowledge in any research domain.
Affiliation(s)
- Marisa L. Conte
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
- Peter Boisvert
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
- Philip Barrison
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
- Farid Seifi
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
- Zach Landis-Lewis
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
- Allen Flynn
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
- Charles P. Friedman
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan, United States of America
3
Iserson KV. Informed consent for artificial intelligence in emergency medicine: A practical guide. Am J Emerg Med 2024; 76:225-230. PMID: 38128163; DOI: 10.1016/j.ajem.2023.11.022.
Abstract
As artificial intelligence (AI) expands its presence in healthcare, particularly within emergency medicine (EM), there is growing urgency to explore the ethical and practical considerations surrounding its adoption. AI holds the potential to revolutionize how emergency physicians (EPs) make clinical decisions, but AI's complexity often surpasses EPs' capacity to provide patients with informed consent regarding its use. This article underscores the crucial need to address the ethical pitfalls of AI in EM. Patient autonomy necessitates that EPs engage in conversations with patients about whether to use AI in their evaluation and treatment. As clinical AI integration expands, this discussion should become an integral part of the informed consent process, aligning with ethical and legal requirements. The rapid availability of AI programs, fueled by vast electronic health record (EHR) datasets, has led to increased pressure on hospitals and clinicians to embrace clinical AI without comprehensive system evaluation. However, the evolving landscape of AI technology outpaces our ability to anticipate its impact on medical practice and patient care. The central question arises: Are EPs equipped with the necessary knowledge to offer well-informed consent regarding clinical AI? Collaborative efforts between EPs, bioethicists, AI researchers, and healthcare administrators are essential for the development and implementation of optimal AI practices in EM. To facilitate informed consent about AI, EPs should understand at least seven key areas: (1) how AI systems operate; (2) whether AI systems are understandable and trustworthy; (3) the limitations of AI systems and the errors they make; (4) how disagreements between the EP and AI are resolved; (5) whether the patient's personally identifiable information (PII) and the AI computer systems will be secure; (6) whether the AI system functions reliably (has been validated); and (7) whether the AI program exhibits bias. This article addresses each of these critical issues, aiming to empower EPs with the knowledge required to navigate the intersection of AI and informed consent in EM.
Affiliation(s)
- Kenneth V Iserson
- Professor Emeritus, Department of Emergency Medicine, The University of Arizona, Tucson, AZ, United States of America
4
Liu S, Wei S, Lehmann HP. Applicability Area: A novel utility-based approach for evaluating predictive models, beyond discrimination. AMIA Annu Symp Proc 2024; 2023:494-503. PMID: 38222359; PMCID: PMC10785877.
Abstract
Translating prediction models into practice and supporting clinicians' decision-making demand demonstration of clinical value. Existing approaches to evaluating machine learning models emphasize discriminatory power, which is only a part of the medical decision problem. We propose the Applicability Area (ApAr), a decision-analytic, utility-based approach to evaluating predictive models that communicates the range of prior probabilities and test cutoffs for which the model has positive utility; larger ApArs suggest a broader potential use of the model. We assess ApAr with simulated datasets and with three published medical datasets. ApAr adds value beyond the typical area under the receiver operating characteristic curve (AUROC) metric analysis. As an example, in the diabetes dataset, the top model by ApAr was ranked as the 23rd best model by AUROC. Decision makers looking to adopt and implement models can leverage ApArs to assess whether the local range of priors and utilities falls within the respective ApArs.
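The utility comparison underlying ApAr can be sketched in a few lines: for each prior probability, compare the expected utility of acting on the model against the treat-all and treat-none defaults, and report the prior range where the model wins. The utility values, test characteristics, and function names below are illustrative assumptions for exposition, not the authors' implementation (which also sweeps cutoffs).

```python
# Minimal sketch of the "applicable prior range" idea behind ApAr.
# Utilities are on an arbitrary 0-1 scale: u_tp/u_fp for treated
# patients (diseased/healthy), u_tn/u_fn for untreated ones.

def expected_utilities(p, sens, spec, u_tp, u_fp, u_tn, u_fn):
    """Expected utility of each strategy at prior probability p."""
    treat_all = p * u_tp + (1 - p) * u_fp
    treat_none = p * u_fn + (1 - p) * u_tn
    # "Use the model": treat predicted positives, withhold from negatives.
    use_model = (p * (sens * u_tp + (1 - sens) * u_fn)
                 + (1 - p) * (spec * u_tn + (1 - spec) * u_fp))
    return treat_all, treat_none, use_model

def applicable_priors(sens, spec, u_tp=0.8, u_fp=0.6, u_tn=1.0, u_fn=0.0,
                      steps=1000):
    """Grid-search priors where the model strictly beats both defaults."""
    region = [i / steps for i in range(1, steps)
              if (lambda ta, tn, um: um > max(ta, tn))(
                  *expected_utilities(i / steps, sens, spec,
                                      u_tp, u_fp, u_tn, u_fn))]
    return (min(region), max(region)) if region else None

bounds = applicable_priors(sens=0.9, spec=0.85)
print(f"model useful for priors in about [{bounds[0]:.2f}, {bounds[1]:.2f}]")
```

A model no better than chance (sens = spec = 0.5) yields an empty region under these utilities, while a discriminative model is applicable over a wide band of priors; the width of that band is the intuition the ApAr metric formalizes.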
Affiliation(s)
- Star Liu
- Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Shixiong Wei
- Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Harold P Lehmann
- Johns Hopkins University School of Medicine, Baltimore, MD, United States
5
Friedman CP, Lomotan EA, Richardson JE, Ridgeway JL. Socio-technical infrastructure for a learning health system. Learn Health Syst 2024; 8:e10405. PMID: 38249851; PMCID: PMC10797563; DOI: 10.1002/lrh2.10405.
Affiliation(s)
- Charles P. Friedman
- Department of Learning Health Sciences, University of Michigan, Ann Arbor, Michigan, USA
- Edwin A. Lomotan
- Center for Evidence and Practice Improvement, Agency for Healthcare Research and Quality, Rockville, Maryland, USA
- Jennifer L. Ridgeway
- Division of Health Care Delivery Research, Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, Minnesota, USA
6
Alper BS, Flynn A, Bray BE, Conte ML, Eldredge C, Gold S, Greenes RA, Haug P, Jacoby K, Koru G, McClay J, Sainvil ML, Sottara D, Tuttle M, Visweswaran S, Yurk RA. Categorizing metadata to help mobilize computable biomedical knowledge. Learn Health Syst 2022; 6:e10271. PMID: 35036552; PMCID: PMC8753304; DOI: 10.1002/lrh2.10271.
Abstract
INTRODUCTION Computable biomedical knowledge artifacts (CBKs) are digital objects conveying biomedical knowledge in machine-interpretable structures. As more CBKs are produced and their complexity increases, the value obtained from sharing CBKs grows. Mobilizing CBKs and sharing them widely can only be achieved if the CBKs are findable, accessible, interoperable, reusable, and trustable (FAIR+T). To help mobilize CBKs, we describe our efforts to outline metadata categories to make CBKs FAIR+T. METHODS We examined the literature regarding metadata with the potential to make digital artifacts FAIR+T. We also examined metadata available online today for actual CBKs of 12 different types. With iterative refinement, we came to a consensus on key categories of metadata that, when taken together, can make CBKs FAIR+T. We use subject-predicate-object triples to more clearly differentiate metadata categories. RESULTS We defined 13 categories of CBK metadata most relevant to making CBKs FAIR+T. Eleven of these categories (type, domain, purpose, identification, location, CBK-to-CBK relationships, technical, authorization and rights management, provenance, evidential basis, and evidence from use metadata) are evident today where CBKs are stored online. Two additional categories (preservation and integrity metadata) were not evident in our examples. We provide a research agenda to guide further study and development of these and other metadata categories. CONCLUSION A wide variety of metadata elements in various categories is needed to make CBKs FAIR+T. More work is needed to develop a common framework for CBK metadata that can make CBKs FAIR+T for all stakeholders.
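The paper's 13 metadata categories, expressed as subject-predicate-object triples, lend themselves to a simple data structure. The sketch below is an illustrative assumption, not the authors' schema; the predicate names and the `cbk:` identifier prefix are invented for exposition.

```python
# Illustrative store for CBK metadata triples, keyed by the 13
# categories named in the abstract (11 observed online plus the
# two not yet evident: preservation and integrity).
from collections import defaultdict

CATEGORIES = [
    "type", "domain", "purpose", "identification", "location",
    "CBK-to-CBK relationships", "technical",
    "authorization and rights management", "provenance",
    "evidential basis", "evidence from use",
    "preservation", "integrity",
]

def add_triple(store, category, subject, predicate, obj):
    """Record one subject-predicate-object triple under a known category."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown metadata category: {category}")
    store[category].append((subject, predicate, obj))

store = defaultdict(list)
add_triple(store, "type", "cbk:risk-model-42", "hasType", "predictive model")
add_triple(store, "provenance", "cbk:risk-model-42", "createdBy", "Example Lab")
print(sorted(store))
```

Grouping triples by category this way makes it straightforward to audit which of the FAIR+T-relevant categories a given CBK artifact's metadata actually covers.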
Affiliation(s)
- Allen Flynn
- Medical School, University of Michigan, Ann Arbor, Michigan, USA
- Bruce E. Bray
- Biomedical Informatics and Cardiovascular Medicine, School of Medicine, University of Utah, Salt Lake City, Utah, USA
- Marisa L. Conte
- Taubman Health Sciences Library, University of Michigan, Ann Arbor, Michigan, USA
- Sigfried Gold
- College of Information Studies, University of Maryland, College Park, Maryland, USA
- Peter Haug
- Intermountain Healthcare, University of Utah, Salt Lake City, Utah, USA
- Gunes Koru
- Department of Information Systems, University of Maryland, Baltimore, Maryland, USA
- James McClay
- Emergency Medicine, University of Nebraska Medical Center, Omaha, Nebraska, USA
- Shyam Visweswaran
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
7
Barboi C, Tzavelis A, Muhammad LN. Comparison of Severity of Illness Scores and Artificial Intelligence Models Predictive of Intensive Care Unit Mortality: Meta-analysis and Review of the Literature. JMIR Med Inform 2021; 10:e35293. PMID: 35639445; PMCID: PMC9198821; DOI: 10.2196/35293.
Affiliation(s)
- Cristina Barboi
- Indiana University Purdue University, Regenstrief Institute, Indianapolis, IN, United States
- Andreas Tzavelis
- Medical Scientist Training Program, Feinberg School of Medicine, Chicago, IL, United States
- Department of Biomedical Engineering, Northwestern University, Chicago, IL, United States
- Lutfiyya NaQiyba Muhammad
- Department of Preventive Medicine and Biostatistics, Northwestern University, Evanston, IL, United States
8
Friedman CP, Flynn AJ. Computable knowledge: An imperative for Learning Health Systems. Learn Health Syst 2019; 3:e10203. PMID: 31641690; PMCID: PMC6802532; DOI: 10.1002/lrh2.10203.
Affiliation(s)
- Charles P. Friedman
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan
- Allen J. Flynn
- Department of Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan
9
Magrabi F, Ammenwerth E, McNair JB, De Keizer NF, Hyppönen H, Nykänen P, Rigby M, Scott PJ, Vehko T, Wong ZSY, Georgiou A. Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearb Med Inform 2019; 28:128-134. PMID: 31022752; PMCID: PMC6697499; DOI: 10.1055/s-0039-1677903.
Abstract
OBJECTIVES This paper draws attention to: i) key considerations for evaluating artificial intelligence (AI) enabled clinical decision support; and ii) challenges and practical implications of AI design, development, selection, use, and ongoing surveillance. METHOD A narrative review of existing research and evaluation approaches along with expert perspectives drawn from the International Medical Informatics Association (IMIA) Working Group on Technology Assessment and Quality Development in Health Informatics and the European Federation for Medical Informatics (EFMI) Working Group for Assessment of Health Information Systems. RESULTS There is a rich history and tradition of evaluating AI in healthcare. While evaluators can learn from past efforts, and build on best practice evaluation frameworks and methodologies, questions remain about how to evaluate the safety and effectiveness of AI that dynamically harness vast amounts of genomic, biomarker, phenotype, electronic record, and care delivery data from across health systems. This paper first provides a historical perspective about the evaluation of AI in healthcare. It then examines key challenges of evaluating AI-enabled clinical decision support during design, development, selection, use, and ongoing surveillance. Practical aspects of evaluating AI in healthcare, including approaches to evaluation and indicators to monitor AI are also discussed. CONCLUSION Commitment to rigorous initial and ongoing evaluation will be critical to ensuring the safe and effective integration of AI in complex sociotechnical settings. Specific enhancements that are required for the new generation of AI-enabled clinical decision support will emerge through practical application.
Affiliation(s)
- Farah Magrabi
- Macquarie University, Australian Institute of Health Innovation, Sydney, Australia
- Elske Ammenwerth
- UMIT, University for Health Sciences, Medical Informatics and Technology, Institute of Medical Informatics, Hall in Tyrol, Austria
- Jytte Brender McNair
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Nicolet F De Keizer
- Amsterdam UMC, University of Amsterdam, Department of Medical Informatics, Amsterdam Public Health Research Institute, The Netherlands
- Hannele Hyppönen
- National Institute for Health and Welfare, Information Department, Helsinki, Finland
- Pirkko Nykänen
- Tampere University, Faculty for Information Technology and Communication Sciences, Tampere, Finland
- Michael Rigby
- Keele University, School of Social Science and Public Policy, Keele, United Kingdom
- Philip J Scott
- University of Portsmouth, Centre for Healthcare Modelling and Informatics, Portsmouth, United Kingdom
- Tuulikki Vehko
- National Institute for Health and Welfare, Information Department, Helsinki, Finland
- Andrew Georgiou
- Macquarie University, Australian Institute of Health Innovation, Sydney, Australia