1. Jackman L, Kamran R. Transforming Patient-Reported Outcome Measurement With Digital Health Technology. J Eval Clin Pract 2025;31:e70107. PMID: 40296697; PMCID: PMC12038383; DOI: 10.1111/jep.70107.
Abstract
Healthcare is shifting from a provider-centric to a patient-centric model, emphasizing the integration of patient-reported outcome measures (PROMs) into routine practice. PROMs enhance shared decision-making and provide valuable insights into patient well-being, yet their widespread implementation is hindered by logistical challenges, time constraints, and infrastructure limitations. Digital health solutions offer a promising approach to overcoming these barriers by streamlining PROM administration, improving accessibility, and optimizing clinical integration. This article explores the transition from paper-based to digital PROM administration, the advantages of computerized adaptive testing (CAT), and the broader considerations necessary to ensure effective implementation. By leveraging digital tools and informatics strategies, healthcare systems can facilitate the meaningful adoption of PROMs to improve patient-centred care. This article can be used to advance PROM implementation across various clinical settings.
Affiliation(s)
- Liam Jackman
- Temerty Faculty of Medicine, University of Toronto, Toronto, Canada
- Rakhshan Kamran
- Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
2. Donovan T, Abell B, McPhail SM, Carter HE. Development of an instrument (Cost-IS) to estimate costs of implementation strategies for digital health solutions: a modified e-Delphi study. Implement Sci 2025;20:13. PMID: 40055802; PMCID: PMC11889902; DOI: 10.1186/s13012-025-01423-w.
Abstract
BACKGROUND It is important to determine the relative value of health innovations when allocating limited healthcare resources. Implementation strategies require and consume healthcare resources yet are often excluded from published economic evaluations. This paper reports on the development of a pragmatic implementation costing instrument to assist with the planning, delivery, and evaluation of digital health implementation strategies. METHODS A modified e-Delphi process was adopted to develop an implementation costing instrument. Purposive sampling was used to recruit a panel of experts in implementation science, health economic evaluations and/or digital health from the academic, government, clinical or health service sectors. In each round, participants were sent an electronic questionnaire and a prototype of the implementation costing instrument. The prototype in the initial round was informed by a literature review and qualitative interview findings. The prototype was updated iteratively between rounds in response to the panel's feedback. In subsequent rounds, participants also received the anonymous results of items that did not reach consensus in the previous round. Termination occurred once consensus was reached on integral questions (those pertaining specifically to the instrument design) or when three rounds were completed, to prevent sample fatigue. Consensus was defined as at least 75% of experts in agreement for any item. RESULTS Consensus was reached on the core components and design of the instrument from a panel of twelve experts in implementation science, health economic evaluations and/or digital health. Areas where consensus was not reached included users' level of implementation science knowledge, specificity of the tool to digital health and accessibility via digital formats. CONCLUSIONS Cost-IS is a pragmatic data collection instrument designed to estimate the costs of implementation strategies for digital health solutions. Further piloting of Cost-IS is required to establish its feasibility and generalisability.
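As an illustration of the consensus rule described in this abstract (at least 75% of experts in agreement on an item), the following minimal Python sketch tallies per-item agreement for a single Delphi round; the item names, votes, and threshold handling are hypothetical and are not drawn from the Cost-IS study.

```python
# Minimal sketch of a Delphi consensus check, assuming each expert votes
# "agree" or "disagree" on each candidate item of the costing instrument.
CONSENSUS_THRESHOLD = 0.75  # at least 75% of experts in agreement

def reaches_consensus(votes):
    """Return True if the share of 'agree' votes meets the consensus threshold."""
    agree = sum(1 for v in votes if v == "agree")
    return agree / len(votes) >= CONSENSUS_THRESHOLD

# Hypothetical round of twelve expert votes on two candidate items
panel_votes = {
    "include_staff_time_tracking": ["agree"] * 10 + ["disagree"] * 2,    # 83% -> consensus
    "restrict_tool_to_digital_health": ["agree"] * 7 + ["disagree"] * 5,  # 58% -> carry forward
}

for item, votes in panel_votes.items():
    print(item, "consensus" if reaches_consensus(votes) else "no consensus")
```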
Affiliation(s)
- Thomasina Donovan
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia.
- Bridget Abell
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Steven M McPhail
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Digital Health and Informatics, Metro South Health, Brisbane, QLD, Australia
- Hannah E Carter
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
3. Vueghs C, Shakeri H, Renton T, Van der Cruyssen F. Development and Evaluation of a GPT4-Based Orofacial Pain Clinical Decision Support System. Diagnostics (Basel) 2024;14:2835. PMID: 39767196; PMCID: PMC11674870; DOI: 10.3390/diagnostics14242835.
Abstract
Background: Orofacial pain (OFP) encompasses a complex array of conditions affecting the face, mouth, and jaws, often leading to significant diagnostic challenges and high rates of misdiagnosis. Artificial intelligence, particularly large language models like GPT4 (OpenAI, San Francisco, CA, USA), offers potential as a diagnostic aid in healthcare settings. Objective: To evaluate the diagnostic accuracy of GPT4 in OFP cases as a clinical decision support system (CDSS) and compare its performance against treating clinicians, expert evaluators, medical students, and general practitioners. Methods: A total of 100 anonymized patient case descriptions involving diverse OFP conditions were collected. GPT4 was prompted to generate primary and differential diagnoses for each case using the International Classification of Orofacial Pain (ICOP) criteria. Diagnoses were compared to gold-standard diagnoses established by treating clinicians, and a scoring system was used to assess accuracy at three hierarchical ICOP levels. A subset of 24 cases was also evaluated by two clinical experts, two final-year medical students, and two general practitioners for comparative analysis. Diagnostic performance and interrater reliability were calculated. Results: GPT4 achieved the highest accuracy level (ICOP level 3) in 38% of cases, with an overall diagnostic performance score of 157 out of 300 points (52%). The model provided accurate differential diagnoses in 80% of cases (400 out of 500 points). In the subset of 24 cases, the model's performance was comparable to non-expert human evaluators but was surpassed by clinical experts, who correctly diagnosed 54% of cases at level 3. GPT4 demonstrated high accuracy in specific categories, correctly diagnosing 81% of trigeminal neuralgia cases at level 3. Interrater reliability between GPT4 and human evaluators was low (κ = 0.219, p < 0.001), indicating variability in diagnostic agreement. Conclusions: GPT4 shows promise as a CDSS for OFP by improving diagnostic accuracy and offering structured differential diagnoses. While not yet outperforming expert clinicians, GPT4 can augment diagnostic workflows, particularly in primary care or educational settings. Effective integration into clinical practice requires adherence to rigorous guidelines, thorough validation, and ongoing professional oversight to ensure patient safety and diagnostic reliability.
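For readers unfamiliar with the interrater reliability statistic reported above (κ = 0.219), the sketch below shows one way Cohen's kappa can be computed for two raters assigning categorical diagnoses to the same cases; the example labels are hypothetical and are not taken from the study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels to the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical diagnostic labels from GPT4 and a clinical expert for six cases
gpt4 =   ["TN", "TMD", "TN",  "BMS", "TMD", "TN"]
expert = ["TN", "TMD", "BMS", "BMS", "TN",  "TN"]
print(round(cohens_kappa(gpt4, expert), 3))
```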
Affiliation(s)
- Charlotte Vueghs
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Hamid Shakeri
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- Tara Renton
- Department of Oral Surgery, King’s College London Dental Institute, London SE5 9RW, UK
- Frederic Van der Cruyssen
- Department of Oral and Maxillofacial Surgery, University Hospitals Leuven, 3000 Leuven, Belgium
- OMFS-IMPATH Research Group, KU Leuven, 3000 Leuven, Belgium
4. Donovan T, Carter HE, McPhail SM, Abell B. Challenges and recommendations for collecting and quantifying implementation costs in practice: a qualitative interview study. Implement Sci Commun 2024;5:114. PMID: 39394175; PMCID: PMC11468373; DOI: 10.1186/s43058-024-00648-y.
Abstract
BACKGROUND The cost of implementation is typically not accounted for in published economic evaluations, which determine the relative value for money of health innovations and are important for allocating scarce resources. Despite key papers outlining relevant implementation costs, they continue to be underreported in the literature and often not considered in practice. This study sought to understand and outline current practices for capturing the costs associated with implementation efforts, with examples from the digital health setting. METHODS A qualitative study of semi-structured interviews with purposefully sampled experts in implementation science, health economics and/or digital health was conducted. The interview guide was informed by a literature review and was pilot tested. Interviews were digitally recorded and transcribed. A hybrid inductive/deductive framework analysis was conducted using thematic analysis to elicit key concepts related to the research question. RESULTS Interviews were conducted with sixteen participants with specialist expertise in implementation science (n = 8), health economics (n = 6), and/or digital health (n = 8). Five participants were experienced in more than one field. Four key themes were elicited from the data: difficulty identifying and collecting implementation cost data; variation in approaches for collecting implementation cost data; the value of implementation costs; and collaboration enables implementation costing. Broadly, while interviewees recognised implementation costs as important, only some costs were considered in practice, likely due to the perceived ill-defined boundaries and inconsistencies in terminology. A variety of methods were used to collect and estimate implementation costs; the most frequent approach was staff time tracking. Multidisciplinary collaboration facilitated this process, but the burden of collecting the necessary data was also highlighted. CONCLUSIONS In current practice, standardised methods are not commonly used for data collection or estimation of implementation costs. Improved data collection through standardised practices may support greater transparency and confidence in implementation cost estimates. Although participants had industry exposure, most were also academic researchers and findings may not be representative of non-academic industry settings.
Affiliation(s)
- Thomasina Donovan
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, Qld, Australia.
- Hannah E Carter
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, Qld, Australia
- Steven M McPhail
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, Qld, Australia
- Digital Health and Informatics, Metro South Health, Brisbane, Qld, Australia
- Bridget Abell
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, Qld, Australia
5. Jhala K, Lynch EA, Eappen S, Curley P, Desai SP, Brink J, Khorasani R, Kapoor N. Financial Impact of a Radiology Safety Net Program for Resolution of Clinically Necessary Follow-up Imaging Recommendations. J Am Coll Radiol 2024;21:1258-1268. PMID: 38147905; DOI: 10.1016/j.jacr.2023.12.016.
Abstract
OBJECTIVE Health care safety net (SN) programs can potentially improve patient safety and decrease risk associated with missed or delayed follow-up care, although they require financial resources. This study aimed to assess whether the revenue generated from completion of clinically necessary recommendations for additional imaging (RAI) made possible by an IT-enabled SN program could fund the required additional labor resources. METHODS Clinically necessary RAI generated October 21, 2019, to September 24, 2021, were tracked to resolution as of April 13, 2023. A new radiology SN team worked with existing schedulers and care coordinators, performing chart review and patient and provider outreach to ensure RAI resolution. We applied relevant Current Procedural Terminology, version 4 codes of the completed imaging examinations to estimate total revenue. Coprimary outcomes included revenue generated by total performed examinations and estimated revenue attributed to SN involvement. We used Student's t test to compare the secondary outcome, RAI time interval, for higher versus lower revenue-generating modalities. RESULTS In all, 24% (3,243) of eligible follow-up recommendations (13,670) required SN involvement. Total estimated revenue generated by performed recommended examinations was $6,116,871, with $980,628 attributed to SN. Net SN-generated revenue per 1.0 full-time equivalent was an estimated $349,768. The greatest proportion of performed examinations comprised cross-sectional modalities (CT, MRI, PET/CT), which generated more revenue than non-cross-sectional modalities (x-ray, ultrasound, mammography) and had shorter recommendation time frames (153 versus 180 days, P < .001). DISCUSSION The revenue generated from completion of RAI facilitated by an IT-enabled quality and safety program supplemented by an SN team can fund the required additional labor resources to improve patient safety. Realizing early revenue may require 5 to 6 months postimplementation.
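As a minimal illustration of the secondary-outcome comparison described above (recommendation time frames for higher versus lower revenue-generating modalities, assessed with Student's t test), the sketch below runs a two-sample t test on hypothetical interval data; the values are placeholders and do not reproduce the study's patient-level data.

```python
from scipy import stats

# Hypothetical follow-up completion intervals (days) for two modality groups.
cross_sectional = [140, 150, 160, 155, 148, 165]      # e.g., CT, MRI, PET/CT
non_cross_sectional = [170, 185, 178, 190, 175, 182]  # e.g., x-ray, ultrasound, mammography

# Two-sample Student's t test, as named in the abstract.
t_stat, p_value = stats.ttest_ind(cross_sectional, non_cross_sectional)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```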
Affiliation(s)
- Khushboo Jhala
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Elyse A Lynch
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Sunil Eappen
- Senior Vice President of Medical Affairs, Chief Medical Officer, Department of Anesthesiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Patrick Curley
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; Executive Director, Quality and Safety, Enterprise Radiology, Mass General Brigham
- Sonali P Desai
- Chief Quality Officer, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- James Brink
- Chair, Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; Chief, Enterprise Radiology Service, Mass General Brigham
- Ramin Khorasani
- Vice Chair, Department of Radiology, Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; Director, Center for Evidence-Based Imaging, Brigham and Women's Hospital
- Neena Kapoor
- Associate Chair, Patient Experience and Clinically Significant Results, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts.
6. Lampe D, Grosser J, Grothe D, Aufenberg B, Gensorowsky D, Witte J, Greiner W. How intervention studies measure the effectiveness of medication safety-related clinical decision support systems in primary and long-term care: a systematic review. BMC Med Inform Decis Mak 2024;24:188. PMID: 38965569; PMCID: PMC11225126; DOI: 10.1186/s12911-024-02596-y.
Abstract
BACKGROUND Medication errors and associated adverse drug events (ADE) are a major cause of morbidity and mortality worldwide. In recent years, the prevention of medication errors has become a high priority in healthcare systems. In order to improve medication safety, computerized Clinical Decision Support Systems (CDSS) are increasingly being integrated into the medication process. Accordingly, a growing number of studies have investigated the medication safety-related effectiveness of CDSS. However, the outcome measures used are heterogeneous, leading to unclear evidence. The primary aim of this study is to summarize and categorize the outcomes used in interventional studies evaluating the effects of CDSS on medication safety in primary and long-term care. METHODS We systematically searched PubMed, Embase, CINAHL, and Cochrane Library for interventional studies evaluating the effects of CDSS targeting medication safety and patient-related outcomes. We extracted methodological characteristics, outcomes and empirical findings from the included studies. Outcomes were assigned to three main categories: process-related, harm-related, and cost-related. Risk of bias was assessed using the Evidence Project risk of bias tool. RESULTS Thirty-two studies met the inclusion criteria. Almost all studies (n = 31) used process-related outcomes, followed by harm-related outcomes (n = 11). Only three studies used cost-related outcomes. Most studies used outcomes from only one category and no study used outcomes from all three categories. The definition and operationalization of outcomes varied widely between the included studies, even within outcome categories. Overall, evidence on CDSS effectiveness was mixed. A significant intervention effect was demonstrated by nine of fifteen studies with process-related primary outcomes (60%) but only one out of five studies with harm-related primary outcomes (20%). The included studies faced a number of methodological problems that limit the comparability and generalizability of their results. CONCLUSIONS Evidence on the effectiveness of CDSS is currently inconclusive due in part to inconsistent outcome definitions and methodological problems in the literature. Additional high-quality studies are therefore needed to provide a comprehensive account of CDSS effectiveness. These studies should follow established methodological guidelines and recommendations and use a comprehensive set of harm-, process- and cost-related outcomes with agreed-upon and consistent definitions. PROSPERO REGISTRATION CRD42023464746.
Affiliation(s)
- David Lampe
- Department of Health Economics and Health Care Management, School of Public Health, Bielefeld University, Universitätsstraße 25, Bielefeld, 33615, Germany.
- John Grosser
- Department of Health Economics and Health Care Management, School of Public Health, Bielefeld University, Universitätsstraße 25, Bielefeld, 33615, Germany
- Dennis Grothe
- Department of Health Economics and Health Care Management, School of Public Health, Bielefeld University, Universitätsstraße 25, Bielefeld, 33615, Germany
- Birthe Aufenberg
- Department of Health Economics and Health Care Management, School of Public Health, Bielefeld University, Universitätsstraße 25, Bielefeld, 33615, Germany
- Wolfgang Greiner
- Department of Health Economics and Health Care Management, School of Public Health, Bielefeld University, Universitätsstraße 25, Bielefeld, 33615, Germany
7. Ozturk N, Yakak I, Ağ MB, Aksoy N. Is ChatGPT reliable and accurate in answering pharmacotherapy-related inquiries in both Turkish and English? Curr Pharm Teach Learn 2024;16:102101. PMID: 38702261; DOI: 10.1016/j.cptl.2024.04.017.
Abstract
INTRODUCTION Artificial intelligence (AI), particularly ChatGPT, is becoming more and more prevalent in the healthcare field for tasks such as disease diagnosis and medical record analysis. The objective of this study is to evaluate the proficiency and accuracy of ChatGPT in different domains of clinical pharmacy cases and queries. METHODS The study compared ChatGPT's responses to pharmacotherapy cases and questions pertaining to 10 different chronic conditions, obtained from McGraw Hill's NAPLEX® Review Questions, 4th edition, with the answers provided by the book's authors. The proportion of correct responses was collected and analyzed using the Statistical Package for the Social Sciences (SPSS) version 29. RESULTS When tested in English, ChatGPT had substantially higher mean scores than when tested in Turkish. The average accuracy score for English and Turkish was 0.41 ± 0.49 and 0.32 ± 0.46, respectively (p = 0.18). Responses to queries beginning with "Which of the following is correct?" were considerably more precise than those beginning with "Mark all the incorrect answers" (0.66 ± 0.47 versus 0.16 ± 0.36, p = 0.01 in English; 0.50 ± 0.50 versus 0.14 ± 0.34, p < 0.05 in Turkish). CONCLUSION ChatGPT displayed a moderate level of accuracy when responding to English inquiries, but only a slight level of accuracy when responding to Turkish inquiries, contingent upon the question format. Improving the accuracy of ChatGPT in languages other than English requires the incorporation of several components. The integration of the English version of ChatGPT into clinical practice has the potential to improve the effectiveness, precision, and standard of patient care provision by supplementing personal expertise and professional judgment. However, it is crucial to utilize technology as an adjunct and not a replacement for human decision-making and critical thinking.
Affiliation(s)
- Nur Ozturk
- Altinbas University, School of Pharmacy, Department of Clinical Pharmacy, Istanbul, Turkey; Istanbul Medipol University, Graduate School of Health Sciences, Clinical Pharmacy PhD Program, Istanbul, Turkey.
- Irem Yakak
- Istanbul Medipol University, Graduate School of Health Sciences, Clinical Pharmacy PhD Program, Istanbul, Turkey.
- Melih Buğra Ağ
- Istanbul Medipol University, Graduate School of Health Sciences, Clinical Pharmacy PhD Program, Istanbul, Turkey; Istanbul Medipol University, School of Pharmacy, Department of Clinical Pharmacy, Istanbul, Turkey.
- Nilay Aksoy
- Altinbas University, School of Pharmacy, Department of Clinical Pharmacy, Istanbul, Turkey.
8. Garcia MVF, Rikabi S, Krishtopaytis E, Henderson KM, Al-Jaghbeer MJ, Fadell F. Effect of Rapid Response Order Panels on Resident and Nurse Satisfaction. ATS Sch 2024;5:311-321. PMID: 39055330; PMCID: PMC11270232; DOI: 10.34197/ats-scholar.2023-0111in.
Abstract
Background Hospitals are required to have rapid response (RR) systems in place to respond to acute changes in a patient's condition. In high-stress situations like RR, medical residents face decision-making challenges due to time constraints and perceived pressure. Instituting order panels (OPs) can facilitate clinical decision making and improve residents' and nurses' satisfaction and patient safety. Objective This quality improvement (QI) project aimed to create and institute standardized OPs for common RR clinical scenarios to improve satisfaction of internal medicine residents and nurses with the RR process. Methods This was a single tertiary care center QI project that developed OPs for 10 common RR scenarios. Resident and nursing satisfaction with RR was assessed before and after OP implementation via survey and qualitative data collection. Results Residents and nurses expressed high levels of satisfaction across various aspects of the RR process before and after OP implementation in both quantitative and qualitative analysis. Increased satisfaction was observed among residents regarding time spent placing orders (94%; P = 0.02) and time spent correcting wrong orders (87%; P = 0.03) after OP implementation. The nurses' survey revealed no statistically significant differences in satisfaction before and after the implementation of OPs regarding communication, collaboration, efficiency, and organization of the team. Conclusion The introduction of standardized OPs for RRs resulted in increased satisfaction among internal medicine residents in terms of order placement and correcting wrong orders. Nurse satisfaction based on survey responses remained neutral. Qualitative data from both groups demonstrated a positive impact on communication, efficiency, and teamwork.
Affiliation(s)
- Sarah Rikabi
- Cleveland Clinic Foundation, Fairview Hospital, Cleveland, Ohio
- Mohammed J. Al-Jaghbeer
- Cleveland Clinic Foundation, Fairview Hospital, Cleveland, Ohio
- Respiratory Institute, Cleveland Clinic Foundation, Cleveland, Ohio
- Francois Fadell
- Cleveland Clinic Foundation, Fairview Hospital, Cleveland, Ohio
- Respiratory Institute, Cleveland Clinic Foundation, Cleveland, Ohio
9. Herrmann S, Giesel-Gerstmeier J, Demuth A, Fenske D. We Ask and Listen: A Group-Wide Retrospective Survey on Satisfaction with Digital Medication Software. J Multidiscip Healthc 2024;17:923-936. PMID: 38449841; PMCID: PMC10916516; DOI: 10.2147/jmdh.s446896.
Abstract
Purpose Computerized physician order entry (CPOE) and clinical decision support systems (CDSS) have been used internationally since the 1980s. These systems reduce costs, enhance drug therapy safety, and improve quality of care. A few years ago, a growing effort began to digitize the healthcare sector in Germany. Implementing systems such as CPOE-CDSS requires training for effective adoption and, more importantly, acceptance by the users. Potential improvements for the software and implementation process can be derived from the users' perspective. The implementation process is globally relevant and applicable across professions due to the constant advancement of digitalization. The study assessed the implementation of medication software and overall satisfaction. Methods In an anonymous voluntary online survey, physicians and nursing staff were asked about their satisfaction with the new CPOE-CDSS. The survey comprised single-choice queries on a Likert scale, categorized into general information, digital medication administration, drug safety, and software introduction. Multiple-choice questions were also included. Data analysis was performed using Microsoft Office Excel 2016 and GraphPad PRISM 9.5.0. Results Nurses' and physicians' satisfaction with the new software increased with usage hours. The software's performance and loading times clearly had a negative impact, leading to low satisfaction of only 20% among physicians and 17% among nurses. Fifty-three percent of nurses found the program's training period unsuitable for their daily use, while 57% of physicians approved of the training's scope for their professional group. Both professions agreed that drug-related problems are easier to detect using the CPOE-CDSS (76% of nurses and 75% of physicians). The study provides unbiased feedback on software implementation. Conclusion In conclusion, digitizing healthcare requires managing change, effective training, and addressing software functionality concerns to ensure improved medication safety and streamlined processes. Interfaces, performance optimization, and training remain crucial for software acceptance and effectiveness.
Affiliation(s)
- Saskia Herrmann
- Hospital Pharmacy, Helios Kliniken Gmbh, Berlin, Berlin, Germany
- Department of Pharmaceutical/Medicinal Chemistry, Institute of Pharmacy, Friedrich Schiller University Jena, Jena, Thuringia, Germany
- Annika Demuth
- Hospital Pharmacy, Helios Kliniken Gmbh, Berlin, Berlin, Germany
- Dominic Fenske
- Hospital Pharmacy, Helios Kliniken Gmbh, Berlin, Berlin, Germany
10. Roosan D, Padua P, Khan R, Khan H, Verzosa C, Wu Y. Effectiveness of ChatGPT in clinical pharmacy and the role of artificial intelligence in medication therapy management. J Am Pharm Assoc (2003) 2024;64:422-428.e8. PMID: 38049066; DOI: 10.1016/j.japh.2023.11.023.
Abstract
BACKGROUND The use of artificial intelligence (AI) to optimize medication therapy management (MTM) in identifying drug interactions may potentially improve MTM efficiency. ChatGPT, an AI language model, may be applied to identify medication interventions by integrating patient and drug databases. ChatGPT has been shown to be effective in other areas of clinical medicine, from diagnosis to patient management. However, ChatGPT's ability to manage MTM-related activities is little known. OBJECTIVES To evaluate the effectiveness of ChatGPT in MTM services in simple, complex, and very complex cases to understand AI contributions in MTM. METHODS Two clinical pharmacists rated and validated the difficulty of patient cases as simple, complex, or very complex. ChatGPT's response to the cases was assessed based on 3 criteria: the ability to identify drug interactions, precision in recommending alternatives, and appropriateness in devising management plans. Two clinical pharmacists validated the accuracy of ChatGPT's responses and compared them to actual answers for each complexity level. RESULTS ChatGPT 4.0 accurately solved 39 out of 39 (100%) patient cases. ChatGPT successfully identified drug interactions, provided therapy recommendations and formulated general management plans, but it did not recommend specific dosages. Results suggest it can assist pharmacists in formulating MTM plans to improve overall efficiency. CONCLUSION The application of ChatGPT in MTM has the potential to enhance patient safety and involvement, lower healthcare costs, and assist healthcare providers in medication management and identifying drug interactions. Future pharmacists can utilize AI models such as ChatGPT to improve patient care. The future of the pharmacy profession will depend on how the field responds to the changing need for patient care optimized by AI and automation.
11. Shung DL, Lin JK, Laine L. Achieving Value by Risk Stratification With Machine Learning Model or Clinical Risk Score in Acute Upper Gastrointestinal Bleeding: A Cost Minimization Analysis. Am J Gastroenterol 2024;119:371-373. PMID: 37753930; PMCID: PMC10872988; DOI: 10.14309/ajg.0000000000002520.
Abstract
INTRODUCTION We estimate the economic impact of applying risk assessment tools to identify very low-risk patients with upper gastrointestinal bleeding who can be safely discharged from the emergency department using a cost minimization analysis. METHODS We compare triage strategies (Glasgow-Blatchford score = 0/0-1 or validated machine learning model) with usual care using a Markov chain model from a US health care payer perspective. RESULTS Over 5 years, the Glasgow-Blatchford score triage strategy produced national cumulative savings over usual care of more than $2.7 billion and the machine learning strategy of more than $3.4 billion. DISCUSSION Implementing risk assessment models for upper gastrointestinal bleeding reduces costs, thereby increasing value.
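As a rough sketch of the kind of Markov chain cost minimization described above, the example below evolves a small cohort model and accumulates expected costs under two triage strategies; all states, transition probabilities, and per-cycle costs are invented for illustration and do not reproduce the study's model or inputs.

```python
# Minimal Markov cohort cost sketch comparing two triage strategies for very-low-risk
# upper GI bleeding. States: "home" (outpatient management), "hospital" (admitted),
# "done" (absorbing). All numbers are hypothetical.
TRANSITIONS = {
    "home":     {"home": 0.0, "hospital": 0.05, "done": 0.95},  # small readmission risk
    "hospital": {"home": 0.0, "hospital": 0.30, "done": 0.70},  # may stay another cycle
    "done":     {"home": 0.0, "hospital": 0.00, "done": 1.00},
}
CYCLE_COST = {"home": 400.0, "hospital": 2500.0, "done": 0.0}

def expected_cost(initial, cycles=10):
    """Accumulate expected per-patient cost of a cohort over a fixed horizon."""
    dist = dict(initial)
    total = 0.0
    for _ in range(cycles):
        total += sum(dist[s] * CYCLE_COST[s] for s in dist)
        dist = {s2: sum(dist[s1] * TRANSITIONS[s1][s2] for s1 in dist) for s2 in TRANSITIONS}
    return total

# The strategies differ only in how many patients are discharged home at triage.
usual_care = expected_cost({"home": 0.10, "hospital": 0.90, "done": 0.0})
risk_model = expected_cost({"home": 0.35, "hospital": 0.65, "done": 0.0})
print(f"Expected saving per patient: ${usual_care - risk_model:,.0f}")
```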
Affiliation(s)
- Dennis L. Shung
- Section of Digestive Diseases, Department of Medicine, Yale School of Medicine, New Haven, CT, USA
- John K. Lin
- University of Texas M.D. Anderson Cancer Center, Houston, TX, USA
- Loren Laine
- Section of Digestive Diseases, Department of Medicine, Yale School of Medicine, New Haven, CT, USA
- VA Connecticut Healthcare System, West Haven, Connecticut, USA
12. Hewage SA, Noviyani R, Brain D, Sharma P, Parsonage W, McPhail SM, Barnett A, Kularatna S. Cost-effectiveness of left atrial appendage closure for stroke prevention in atrial fibrillation: a systematic review appraising the methodological quality. Cost Eff Resour Alloc 2023;21:76. PMID: 37872572; PMCID: PMC10591401; DOI: 10.1186/s12962-023-00486-0.
Abstract
BACKGROUND The increasing global prevalence of atrial fibrillation (AF) has led to a growing demand for stroke prevention strategies, resulting in higher healthcare costs. High-quality economic evaluations of stroke prevention strategies can play a crucial role in maximising efficient allocation of resources. In this systematic review, we assessed the methodological quality of such economic evaluations. METHODS We searched electronic databases of PubMed, EMBASE, CINAHL, Cochrane Central Register of Controlled Trials, and Econ Lit to identify model-based economic evaluations comparing the left atrial appendage closure procedure (LAAC) and oral anticoagulants published in English since 2000. Data on study characteristics, model-based details, and analyses were collected. The methodological quality was evaluated using the modified Economic Evaluations Bias (ECOBIAS) checklist. For each of the 22 biases listed in this checklist, studies were categorised into one of four groups: low risk, partial risk, high risk due to inadequate reporting, or high risk. To gauge the overall quality of each study, we computed a composite score by assigning +2, 0, -1, and -2 to each risk category, respectively. RESULTS In our analysis of 12 studies, the majority adopted a healthcare provider or payer perspective and employed Markov models with the number of health states varying from 6 to 16. Cost-effectiveness results varied across studies. LAAC displayed a probability exceeding 50% of being the cost-effective option in six out of nine evaluations compared to warfarin, six out of eight evaluations compared to dabigatran, three out of five evaluations against apixaban, and two out of three studies compared to rivaroxaban. The methodological quality scores for individual studies ranged from 10 to -12 out of a possible 24. Most high-risk ratings were due to inadequate reporting, which was prevalent across various biases, including those related to data identification, baseline data, treatment effects, and data incorporation. Cost measurement omission bias and inefficient comparator bias were also common. CONCLUSIONS While most studies concluded LAAC to be the cost-effective strategy for stroke prevention in AF, shortcomings in methodological quality raise concerns about reliability and validity of results. Future evaluations, free of these shortcomings, can yield stronger policy evidence.
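To make the scoring rule above concrete (+2, 0, -1, or -2 assigned per bias according to its risk category), a minimal Python sketch of the composite score calculation is shown below; the example ratings are hypothetical and not taken from any of the reviewed studies.

```python
# Points assigned to each ECOBIAS risk category, as described in the review.
POINTS = {
    "low": 2,
    "partial": 0,
    "high_inadequate_reporting": -1,
    "high": -2,
}

def composite_score(ratings):
    """Sum the per-bias points to obtain a study's overall methodological quality score."""
    return sum(POINTS[r] for r in ratings)

# Hypothetical ratings for one study across a handful of the 22 checklist items.
example_ratings = ["low", "low", "partial", "high_inadequate_reporting", "high", "low"]
print(composite_score(example_ratings))  # 2 + 2 + 0 - 1 - 2 + 2 = 3
```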
Affiliation(s)
- Sumudu A Hewage
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia.
- Rini Noviyani
- Department of Pharmacy, Udayana University, Bali, Indonesia
- David Brain
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia
- Pakhi Sharma
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia
- William Parsonage
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia
- Cardiology department, Royal Brisbane and Women's Hospital, Herston, QLD, Australia
- Steven M McPhail
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia
- Digital Health and Informatics Directorate, Metro South Health, Brisbane, QLD, Australia
- Adrian Barnett
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia
- Sanjeewa Kularatna
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, Queensland University of Technology, No.61, Musk Avenue, Kelvin Grove, QLD, 4059, Australia
13. Abell B, Naicker S, Rodwell D, Donovan T, Tariq A, Baysari M, Blythe R, Parsons R, McPhail SM. Identifying barriers and facilitators to successful implementation of computerized clinical decision support systems in hospitals: a NASSS framework-informed scoping review. Implement Sci 2023;18:32. PMID: 37495997; PMCID: PMC10373265; DOI: 10.1186/s13012-023-01287-y.
Abstract
BACKGROUND Successful implementation and utilization of Computerized Clinical Decision Support Systems (CDSS) in hospitals is complex and challenging. Implementation science, and in particular the Nonadoption, Abandonment, Scale-up, Spread and Sustainability (NASSS) framework, may offer a systematic approach for identifying and addressing these challenges. This review aimed to identify, categorize, and describe barriers and facilitators to CDSS implementation in hospital settings and map them to the NASSS framework. Exploring the applicability of the NASSS framework to CDSS implementation was a secondary aim. METHODS Electronic database searches were conducted (21 July 2020; updated 5 April 2022) in Ovid MEDLINE, Embase, Scopus, PsycINFO, and CINAHL. Original research studies reporting on measured or perceived barriers and/or facilitators to implementation and adoption of CDSS in hospital settings, or attitudes of healthcare professionals towards CDSS, were included. Articles with a primary focus on CDSS development were excluded. No language or date restrictions were applied. We used qualitative content analysis to identify determinants and organize them into higher-order themes, which were then reflexively mapped to the NASSS framework. RESULTS Forty-four publications were included. These comprised a range of study designs, geographic locations, participants, technology types, CDSS functions, and clinical contexts of implementation. A total of 227 individual barriers and 130 individual facilitators were identified across the included studies. The most commonly reported influences on implementation were fit of CDSS with workflows (19 studies), the usefulness of the CDSS output in practice (17 studies), CDSS technical dependencies and design (16 studies), trust of users in the CDSS input data and evidence base (15 studies), and the contextual fit of the CDSS with the user's role or clinical setting (14 studies). Most determinants could be appropriately categorized into domains of the NASSS framework, with barriers and facilitators in the "Technology," "Organization," and "Adopters" domains most frequently reported. No determinants were assigned to the "Embedding and Adaptation Over Time" domain. CONCLUSIONS This review identified the most common determinants which could be targeted for modification to either remove barriers or facilitate the adoption and use of CDSS within hospitals. Greater adoption of implementation theory should be encouraged to support CDSS implementation.
Affiliation(s)
- Bridget Abell
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Sundresan Naicker
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia.
- David Rodwell
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Thomasina Donovan
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Amina Tariq
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Melissa Baysari
- Biomedical Informatics and Digital Health, School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Camperdown, Australia
- Robin Blythe
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Rex Parsons
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
- Steven M McPhail
- Australian Centre for Health Services Innovation and Centre for Healthcare Transformation, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia