1
Liu S, McCoy AB, Wright AP, Nelson SD, Huang SS, Ahmad HB, Carro SE, Franklin J, Brogan J, Wright A. Why do users override alerts? Utilizing large language model to summarize comments and optimize clinical decision support. J Am Med Inform Assoc 2024;31:1388-1396. PMID: 38452289; PMCID: PMC11105133; DOI: 10.1093/jamia/ocae041.
Abstract
OBJECTIVES To evaluate the capability of generative artificial intelligence (AI) to summarize alert comments and to determine whether AI-generated summaries could be used to improve clinical decision support (CDS) alerts. MATERIALS AND METHODS We extracted user comments on alerts generated from September 1, 2022 to September 1, 2023 at Vanderbilt University Medical Center. For a subset of 8 alerts, comment summaries were generated independently by 2 physicians and then separately by GPT-4. We surveyed 5 CDS experts to rate the human-generated and AI-generated summaries on a scale from 1 (strongly disagree) to 5 (strongly agree) across 4 metrics: clarity, completeness, accuracy, and usefulness. RESULTS Five CDS experts participated in the survey. A total of 16 human-generated summaries and 8 AI-generated summaries were assessed. Among the top 8 rated summaries, 5 were generated by GPT-4. AI-generated summaries demonstrated high levels of clarity, accuracy, and usefulness, similar to the human-generated summaries. Moreover, AI-generated summaries exhibited significantly higher completeness and usefulness than the human-generated summaries (AI: 3.4 ± 1.2, human: 2.7 ± 1.2, P = .001). CONCLUSION End-user comments provide clinicians' immediate feedback on CDS alerts and can serve as a direct and valuable data resource for improving CDS delivery. Traditionally, these comments may not be considered in the CDS review process because of their unstructured nature, large volume, and redundant or irrelevant content. Our study demonstrates that GPT-4 is capable of distilling these comments into summaries characterized by high clarity, accuracy, and completeness. AI-generated summaries are equivalent to, and potentially better than, human-generated summaries and could provide CDS experts with a novel means of reviewing user comments to rapidly optimize CDS alerts both online and offline.
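The comment-summarization workflow this abstract describes can be sketched roughly as below. The `build_summary_prompt` helper, the prompt wording, and the case-insensitive deduplication rule are illustrative assumptions; the study's actual GPT-4 prompt is not given in the abstract.

```python
def build_summary_prompt(alert_name, comments, max_comments=200):
    """Assemble an LLM summarization prompt from raw override comments.

    Hypothetical sketch: deduplicates and drops empty comments (the
    abstract notes redundant/irrelevant content is common), then asks
    the model to group reasons and surface actionable suggestions.
    """
    seen, cleaned = set(), []
    for c in comments:
        c = c.strip()
        if c and c.lower() not in seen:
            seen.add(c.lower())
            cleaned.append(c)
    body = "\n".join(f"- {c}" for c in cleaned[:max_comments])
    return (
        f"Summarize the main reasons clinicians gave for overriding the "
        f"'{alert_name}' alert. Group similar comments and note any "
        f"actionable suggestions for improving the alert.\n\n"
        f"Comments:\n{body}"
    )
```

The returned string would then be sent to a chat-completion endpoint; that call is omitted here since the deployment details are not described in the abstract.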
Affiliation(s)
- Siru Liu
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
- Allison B McCoy
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Aileen P Wright
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Scott D Nelson
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Sean S Huang
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Hasan B Ahmad
- Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA 98195, United States
- Sabrina E Carro
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Jacob Franklin
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- James Brogan
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Adam Wright
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37212, United States
2
Graafsma J, Murphy RM, van de Garde EMW, Karapinar-Çarkit F, Derijks HJ, Hoge RHL, Klopotowska JE, van den Bemt PMLA. The use of artificial intelligence to optimize medication alerts generated by clinical decision support systems: a scoping review. J Am Med Inform Assoc 2024;31:1411-1422. PMID: 38641410; PMCID: PMC11105146; DOI: 10.1093/jamia/ocae076.
Abstract
OBJECTIVE Current clinical decision support systems (CDSSs) generate medication alerts of limited clinical value, causing alert fatigue. Artificial intelligence (AI)-based methods may help optimize medication alerts. We therefore conducted a scoping review of the current state of AI use for optimizing medication alerts in the hospital setting, aiming to identify the applied AI methods together with their performance measures and main outcome measures. MATERIALS AND METHODS We searched the Medline, Embase, and Cochrane Library databases on May 25, 2023 for studies of any quantitative design that investigated AI-based methods for optimizing medication alerts generated by CDSSs in a hospital setting. The screening process was supported by ASReview software. RESULTS Of 5625 citations screened for eligibility, 10 studies were included. Three studies (30%) reported both statistical performance and clinical outcomes. The most frequently reported performance measure was positive predictive value, ranging from 9% to 100%. Regarding main outcome measures, alerts optimized with AI-based methods decreased alert burden, increased identification of inappropriate or atypical prescriptions, and enabled prediction of user responses. In only 2 studies were the AI-based alerts implemented in hospital practice, and none of the studies conducted external validation. DISCUSSION AND CONCLUSION AI-based methods can be used to optimize medication alerts in a hospital setting. However, reporting on model development and validation should be improved, and external validation and implementation in hospital practice should be encouraged.
Affiliation(s)
- Jetske Graafsma
- Department of Clinical Pharmacy and Pharmacology, University Medical Center Groningen, University of Groningen, Groningen, 9713GZ, The Netherlands
- Rachel M Murphy
- Department of Medical Informatics Amsterdam UMC, University of Amsterdam, Amsterdam, 1000GG, The Netherlands
- Amsterdam Public Health Institute, Digital Health and Quality of Care, Amsterdam, 1105AZ, The Netherlands
- Ewoudt M W van de Garde
- Department of Pharmacy, St Antonius Hospital, Utrecht, 3430AM, The Netherlands
- Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht University, Utrecht, 3584CS, The Netherlands
- Fatma Karapinar-Çarkit
- Department of Clinical Pharmacy and Toxicology, Maastricht University Medical Center, Maastricht, 6229HX, The Netherlands
- Department of Clinical Pharmacy, CARIM, Cardiovascular Research Institute Maastricht, Maastricht University, Maastricht, 6229ER, The Netherlands
- Hieronymus J Derijks
- Department of Pharmacy, Jeroen Bosch Hospital, Den Bosch, 5200ME, The Netherlands
- Rien H L Hoge
- Department of Pharmacy, Wilhelmina Hospital, Assen, 9401RK, The Netherlands
- Joanna E Klopotowska
- Department of Medical Informatics Amsterdam UMC, University of Amsterdam, Amsterdam, 1000GG, The Netherlands
- Amsterdam Public Health Institute, Digital Health and Quality of Care, Amsterdam, 1105AZ, The Netherlands
- Patricia M L A van den Bemt
- Department of Clinical Pharmacy and Pharmacology, University Medical Center Groningen, University of Groningen, Groningen, 9713GZ, The Netherlands
3
Li W, Shang N, Zhang Z, Li Y, Li X, Zheng X. Development and validation of a machine learning model to improve precision prediction for irrational prescriptions in orthopedic perioperative patients. Expert Opin Drug Saf 2024:1-11. PMID: 38698685; DOI: 10.1080/14740338.2024.2348569.
Abstract
OBJECTIVE To develop a machine learning model capable of precisely predicting irrational medication prescriptions in orthopedic perioperative patients. METHODS A dataset comprising 3047 instances of suspected irrational medication prescriptions was collected from 1318 orthopedic perioperative patients between April 2019 and March 2022. Four machine learning models were employed to forecast irrational prescriptions, and the performance of each model was assessed. A variable importance analysis was then conducted on the best-performing model, and the efficacy of integrating this model into the existing prescription audit process was evaluated. RESULTS Of the models evaluated, the random forest (RF) model yielded the highest AUC (92%), whereas the naive Bayes (NB) model yielded the lowest (68%). The RF model also showed the most robust combined performance, with a PPV of 82.4% and an NPV of 86.6%. The artificial neural network (ANN) and XGBoost models performed comparably, with the ANN achieving the higher PPV (95.9%) and XGBoost the higher NPV (98.2%). The RF model identified the following five factors as most influential in predicting irrational prescriptions: drug type, surgery type, number of comorbidities, date of surgery after hospitalization, and associated hospital and drug costs. CONCLUSION The RF model demonstrated a high level of proficiency in predicting irrational prescriptions among orthopedic perioperative patients, outperforming the other models by a considerable margin. It effectively improved the efficiency of pharmacist interventions against irrational prescriptions.
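A prediction pipeline of the kind this abstract reports, including the PPV/NPV metrics it emphasizes, can be sketched as follows. The synthetic features are stand-ins for the study's variables (drug type, surgery type, comorbidity count, costs); the real dataset and model hyperparameters are not public.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for the study's predictors; label loosely depends
# on two of them plus noise, mimicking a learnable signal.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# PPV and NPV from the confusion matrix, as reported in the abstract.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
ppv = tp / (tp + fp)  # positive predictive value
npv = tn / (tn + fn)  # negative predictive value
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

`clf.feature_importances_` would then provide the variable importance ranking the study used to identify its five most influential factors.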
Affiliation(s)
- Weipeng Li
- School of Pharmacy, Shanxi Medical University, Taiyuan, Shanxi Province, P.R. China
- Nan Shang
- Department of Pharmacy, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, P.R. China
- Zhiqi Zhang
- Department of Pharmacy, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, P.R. China
- Yun Li
- Department of Pharmacy, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, P.R. China
- Xianlin Li
- School of Pharmacy, Shanxi Medical University, Taiyuan, Shanxi Province, P.R. China
- Xiaojun Zheng
- Department of Pharmacy, The First Hospital of Shanxi Medical University, Taiyuan, Shanxi Province, P.R. China
4
Liu S, McCoy AB, Peterson JF, Lasko TA, Sittig DF, Nelson SD, Andrews J, Patterson L, Cobb CM, Mulherin D, Morton CT, Wright A. Leveraging explainable artificial intelligence to optimize clinical decision support. J Am Med Inform Assoc 2024;31:968-974. PMID: 38383050; PMCID: PMC10990514; DOI: 10.1093/jamia/ocae019.
Abstract
OBJECTIVE To develop and evaluate a data-driven process for generating suggestions to improve alert criteria using explainable artificial intelligence (XAI) approaches. METHODS We extracted data on alerts generated from January 1, 2019 to December 31, 2020, at Vanderbilt University Medical Center. We developed machine learning models to predict user responses to alerts and applied XAI techniques to generate global and local explanations. We evaluated the generated suggestions by comparing them with the alerts' historical change logs and through stakeholder interviews. Suggestions that matched (or partially matched) changes already made to an alert, or that were considered clinically correct, were classified as helpful. RESULTS The final dataset included 2,991,823 firings with 2689 features. Among the 5 machine learning models, the LightGBM model achieved the highest area under the ROC curve: 0.919 [0.918, 0.920]. We identified 96 helpful suggestions; a total of 278,807 firings (9.3%) could have been eliminated. Some suggestions also revealed workflow and education issues. CONCLUSION We developed a data-driven process to generate suggestions for improving alert criteria using XAI techniques. Our approach can identify clinical decision support (CDS) improvements that might be overlooked or delayed in manual reviews. It also unveils a secondary purpose for XAI: improving quality by discovering scenarios where CDS alerts are not accepted because of workflow, education, or staffing issues.
Affiliation(s)
- Siru Liu
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
- Allison B McCoy
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Josh F Peterson
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Thomas A Lasko
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, United States
- Dean F Sittig
- School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX 77030, United States
- Scott D Nelson
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Jennifer Andrews
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Lorraine Patterson
- HealthIT, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Cheryl M Cobb
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- David Mulherin
- HealthIT, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Colleen T Morton
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Adam Wright
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, TN 37203, United States
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN 37203, United States
5
Black KC, Snyder NA, Zhou M, Zhu Z, Uptegraft C, Chintalapani A, Orwoll B. An Electronic Health Record Alert for Inpatient Coronavirus Disease 2019 Vaccinations Increases Vaccination Ordering and Uncovers Workflow Inefficiencies. Appl Clin Inform 2024;15:192-198. PMID: 38253337; PMCID: PMC10917607; DOI: 10.1055/a-2250-6305.
Abstract
BACKGROUND Despite mortality benefits, only 19.9% of U.S. adults are fully vaccinated against coronavirus disease 2019 (COVID-19). The inpatient setting is an opportune environment to update vaccinations, and inpatient electronic health record (EHR) alerts have been shown to increase vaccination rates. OBJECTIVE To evaluate whether an EHR alert could increase COVID-19 vaccinations in eligible hospitalized adults by prompting providers to order the vaccine. METHODS This was a quasi-experimental pre-post interventional study at an academic and a community hospital in the western United States between January 1, 2021 and October 31, 2021. Inclusion criteria were unvaccinated hospitalized adults. A soft-stop, interruptive EHR alert prompted providers to order COVID-19 vaccines for patients with an expected discharge date within 48 hours and interest in vaccination. The outcome measured was the proportion of all eligible patients for whom vaccines were ordered and administered before and after alert implementation. RESULTS Vaccine ordering rates increased from 4.0% to 13.0% at the academic hospital (odds ratio [OR]: 4.01, 95% confidence interval [CI]: 3.39-4.74, p < 0.001) and from 7.4% to 11.6% at the community hospital (OR: 1.62, 95% CI: 1.23-2.13, p < 0.001) after alert implementation. Administration increased post-alert from 3.6% to 12.7% at the academic hospital (OR: 3.21, 95% CI: 2.70-3.82, p < 0.001) but was unchanged at the community hospital (6.7% to 6.7%; OR: 0.99, 95% CI: 0.73-1.37, p = 0.994). Further analysis revealed infrequent vaccine availability at the community hospital. CONCLUSION Vaccine ordering rates improved at both sites after alert implementation. Vaccine administration rates, however, improved only at the academic hospital, likely due in part to vaccine dispensation inefficiency at the community hospital. This study demonstrates the potential impact of complex workflow patterns on the success of a new EHR alert and provides a rationale for pairing alert implementation with qualitative workflow analysis.
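For readers checking the effect sizes above, an unadjusted odds ratio for a pre/post proportion pair can be computed as below. Note this raw calculation will not exactly reproduce the reported ORs (e.g. 4.01), which come from the study's statistical model; the function is a generic sketch.

```python
def odds_ratio(p_post: float, p_pre: float) -> float:
    """Unadjusted odds ratio comparing a post-intervention proportion
    with a pre-intervention proportion (both in (0, 1))."""
    return (p_post / (1 - p_post)) / (p_pre / (1 - p_pre))
```

For example, `odds_ratio(0.130, 0.040)` gives the raw pre/post OR for the academic hospital's ordering rates.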
Affiliation(s)
- Mengyu Zhou
- Department of Medicine, Oregon Health and Science University, Portland, Oregon
- Zhen Zhu
- Department of Medicine, Oregon Health and Science University, Portland, Oregon
- Colby Uptegraft
- Health Informatics Directorate, Defense Health Agency, Falls Church, Virginia
- Ani Chintalapani
- Department of Medicine, Oregon Health and Science University, Portland, Oregon
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- Benjamin Orwoll
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, Oregon
- Department of Pediatrics, Oregon Health and Science University, Portland, Oregon
6
Colicchio TK, Cimino JJ. Beyond the override: Using evidence of previous drug tolerance to suppress drug allergy alerts; a retrospective study of opioid alerts. J Biomed Inform 2023;147:104508. PMID: 37748541; DOI: 10.1016/j.jbi.2023.104508.
Abstract
OBJECTIVE Despite the extensive literature exploring alert fatigue, most studies have focused on describing the phenomenon rather than fixing it. The authors aimed to identify data useful for averting clinically irrelevant alerts, to inform future research on clinical decision support (CDS) design. METHODS We conducted a retrospective observational study of opioid drug allergy alert (DAA) overrides for the calendar year 2019 at a large academic medical center, to identify data elements useful for finding irrelevant alerts to be averted. RESULTS Overall, 227,815 DAAs fired in 2019, with an override rate of 91% (n = 208,196). Opioids represented nearly two-thirds of these overrides (n = 129,063; 62%) and were the drug class with the highest override rate (96%). On average, 29 opioid DAAs were overridden per patient. While most opioid alerts (97.1%) fired for a possible match (the drug class of the allergen matches the drug class of the prescribed drug), alerts were overridden significantly less frequently for a definite match (exact match between allergen and prescribed drug) than for a possible match (88% vs 95.9%, p < 0.001). When comparing the triggering drug with previously administered drugs, override rates were equally high for definite match (95.9%), no match (95.5%), and possible match (95.1%). Likewise, when comparing with home medications, override rates were excessively high for possible match (96.3%), no match (96%), and definite match (94.4%). CONCLUSION We estimate that 74.5% of opioid DAAs (46.4% of all DAAs) at our institution could be averted relatively safely, since they either have a definite match with previous inpatient administrations suggesting drug tolerance or fire as a possible match with low risk of cross-sensitivity. Future research should focus on identifying other relevant data elements, ideally with automated methods and emerging standards, to empower CDS systems to suppress false-positive alerts while avoiding safety hazards.
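The per-category override-rate breakdown reported in this abstract is a straightforward aggregation over a firing log. The toy `DataFrame` below is illustrative only; the study's 227,815-alert dataset is not public, and the column names are assumptions.

```python
import pandas as pd

# Toy firing log: one row per alert firing, with the allergen-vs-drug
# match category and whether the clinician overrode the alert.
log = pd.DataFrame({
    "match": ["possible", "possible", "possible", "definite", "definite", "no"],
    "overridden": [1, 1, 0, 1, 0, 1],
})

# Override rate (%) per match category, as in the study's comparison of
# definite / possible / no match.
override_rate = log.groupby("match")["overridden"].mean().mul(100).round(1)
```

Extending the log with columns for previous inpatient administrations and home medications would support the same `groupby` comparison against those baselines.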
Affiliation(s)
- Tiago K Colicchio
- Informatics Institute, University of Alabama at Birmingham, AL, USA
- James J Cimino
- Informatics Institute, University of Alabama at Birmingham, AL, USA
7
Chien SC, Yang HC, Chen CY, Chien CH, Hsu CK, Chien PH, Li YCJ. Using alert dwell time to filter universal clinical alerts: A machine learning approach. Comput Methods Programs Biomed 2023;240:107696. PMID: 37480643; DOI: 10.1016/j.cmpb.2023.107696.
Abstract
BACKGROUND Alerts in computerized physician order entry (CPOE) systems can improve patient safety. However, alerts in rule-based systems cannot be customized to individual patient or user characteristics, a limitation that leads to irrelevant alerts and subsequent alert fatigue. OBJECTIVE We used machine learning approaches with alert dwell time to filter out irrelevant alerts for physicians based on contextual factors. METHODS We utilized five machine learning algorithms and a total of 1,120 features grouped into six categories: alert, demographic, environment, diagnosis, prescription, and laboratory results. The model outcome was whether the alert dwell time fell within a specified time window; a sensitivity analysis determined the optimal range. RESULTS We used 813,026 records (19 categories) from the hospital's outpatient clinic data from 2020 to 2021. The sensitivity analysis showed that a time window of 0.3-4.0 s performed best, with an area under the receiver operating characteristic curve (AUROC) of 0.73 and an area under the precision-recall curve (AUPRC) of 0.97. The model built with the alert and demographic feature groups showed the best performance, with an AUROC of 0.73. The most significant individual feature groups were alert and demographic, with AUROCs of 0.66 and 0.62, respectively. CONCLUSION Alert features and user and patient demographics are more crucial than clinical features when constructing universal context-aware alerts. Using alert dwell time in combination with a time window is an effective way to determine an alert's trigger status. These findings can provide useful insights for researchers working on specific and universal context-aware alerts.
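The dwell-time windowing at the heart of this study reduces to a simple predicate over each alert's dwell time, using the 0.3-4.0 s bounds from the sensitivity analysis. How in-window dwell maps onto "relevant" vs "irrelevant" labels is our reading of the abstract, not a detail it spells out.

```python
def within_dwell_window(dwell_seconds: float,
                        lo: float = 0.3, hi: float = 4.0) -> bool:
    """Return True when an alert's dwell time falls inside the window
    the study's sensitivity analysis found optimal (0.3-4.0 s).

    Very short dwells suggest the alert was dismissed without reading;
    very long dwells are rare outliers. The in-window flag serves as the
    training label for the context-aware filtering model.
    """
    return lo <= dwell_seconds <= hi
```

In the study's setup, this binary label (derived from logged dwell times) is what the five machine learning algorithms were trained to predict from the alert, demographic, and clinical feature groups.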
Affiliation(s)
- Shuo-Chen Chien
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Artificial Intelligence Research and Development Center, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan
- Hsuan-Chia Yang
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Research Center of Big Data and Meta-analysis, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Chun-You Chen
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Artificial Intelligence Research and Development Center, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Department of Radiation Oncology, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan
- Chia-Hui Chien
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Office of Public Affairs, Taipei Medical University, Taipei 110, Taiwan
- Chun-Kung Hsu
- Office of Information Technology, Taipei Medical University, Taipei 110, Taiwan
- Po-Han Chien
- Department of Finance, National Taiwan University, Taipei 110, Taiwan
- Yu-Chuan Jack Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; International Center for Health Information and Technology, College of Medical Science and Technology, Taipei Medical University, Taipei 110, Taiwan; Department of Dermatology, Wan Fang Hospital, Taipei Medical University, Taipei 110, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei 110, Taiwan
8
Gandhi TK, Classen D, Sinsky CA, Rhew DC, Vande Garde N, Roberts A, Federico F. How can artificial intelligence decrease cognitive and work burden for front line practitioners? JAMIA Open 2023;6:ooad079. PMID: 37655124; PMCID: PMC10466077; DOI: 10.1093/jamiaopen/ooad079.
Abstract
Artificial intelligence (AI) has tremendous potential to reduce the cognitive and work burden of clinicians across a range of clinical activities, which could lead to less burnout and better clinical care. The recent explosion of generative AI illustrates this potential. Developers and organizations deploying AI have a responsibility to ensure that AI is designed and implemented with end-user input, that it has mechanisms to identify and potentially reduce bias, and that its impact on cognitive and work burden is measured, monitored, and improved. This article focuses specifically on the role AI can play in reducing cognitive and work burden, outlines the critical issues associated with the use of AI, and serves as a call to action for vendors and users to work together to develop functionality that addresses these challenges.
Affiliation(s)
- Tejal K Gandhi
- Press Ganey Associates LLC, Boston, MA 02109, United States
- David Classen
- Division of Epidemiology, University of Utah School of Medicine, Salt Lake City, UT 84132, United States
- Christine A Sinsky
- Professional Satisfaction & Practice Sustainability, American Medical Association, Chicago, IL 60611, United States
- David C Rhew
- Worldwide Commercial, Microsoft, San Francisco, CA 94103, United States
- Andrew Roberts
- Data Science, Oracle Health, Kansas City, MO 64138, United States
- Frank Federico
- Institute for Healthcare Improvement, Boston, MA 02109, United States
9
Liu S, Wright AP, Patterson BL, Wanderer JP, Turer RW, Nelson SD, McCoy AB, Sittig DF, Wright A. Using AI-generated suggestions from ChatGPT to optimize clinical decision support. J Am Med Inform Assoc 2023:7136722. PMID: 37087108; DOI: 10.1093/jamia/ocad072.
Abstract
OBJECTIVE To determine if ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess noninferiority compared to human-generated suggestions. METHODS We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) tool for question answering that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions as well as human-generated suggestions for improving the same CDS alerts, and to rate the suggestions for usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. RESULTS Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The suggestions generated by AI were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness and low acceptance, bias, inversion, and redundancy. CONCLUSION AI-generated suggestions could be an important complementary part of optimizing CDS alerts, can identify potential improvements to alert logic and support their implementation, and may even be able to assist experts in formulating their own suggestions for CDS improvement. ChatGPT shows great potential for using large language models and reinforcement learning from human feedback to improve CDS alert logic and potentially other medical areas involving complex clinical logic, a key step in the development of an advanced learning health system.
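The top-20 comparison in this abstract is a mean-rating ranking across reviewers. The sketch below shows that aggregation on toy data; the suggestion identifiers and ratings are invented, since the study's actual survey responses are not public.

```python
from statistics import mean

# Toy Likert ratings (1-5) from 5 reviewers for 3 suggestions, tagged
# by source (AI vs human) in the hypothetical identifier.
ratings = {
    "ai_1":    [5, 4, 4, 5, 3],
    "human_1": [3, 3, 2, 4, 3],
    "ai_2":    [2, 3, 3, 2, 2],
}

# Rank suggestions by mean reviewer rating; the study's "top 20" is the
# head of a list like this, after which sources can be counted.
ranked = sorted(ratings, key=lambda s: mean(ratings[s]), reverse=True)
```

Counting how many of the top-ranked identifiers start with `ai_` reproduces the kind of "9 of the top 20 were AI-generated" tally the abstract reports.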
Affiliation(s)
- Siru Liu
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Aileen P Wright
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Barron L Patterson
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Jonathan P Wanderer
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Robert W Turer
- Department of Emergency Medicine, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Clinical Informatics Center, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Scott D Nelson
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Allison B McCoy
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Dean F Sittig
- School of Biomedical Informatics, University of Texas Health Science Center, Houston, Texas, USA
- Adam Wright
- Department of Biomedical Informatics, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| |
10
Liu S, Wright AP, Patterson BL, Wanderer JP, Turer RW, Nelson SD, McCoy AB, Sittig DF, Wright A. Assessing the Value of ChatGPT for Clinical Decision Support Optimization. medRxiv: The Preprint Server for Health Sciences 2023:2023.02.21.23286254. [PMID: 36865144 PMCID: PMC9980251 DOI: 10.1101/2023.02.21.23286254] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
11
Short- and Medium-Term Power Demand Forecasting with Multiple Factors Based on Multi-Model Fusion. MATHEMATICS 2022. [DOI: 10.3390/math10122148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
With the continuous development of the economy and society, power demand forecasting has become an important task for the power industry. Accurate power demand forecasting can promote the operation and development of the power supply industry. However, because power consumption is affected by many factors, power demand data are difficult to predict accurately. With the accumulation of data in the power industry, machine learning has shown great potential in power demand forecasting. In this study, gradient boosting decision tree (GBDT), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) are integrated by stacking to build an XLG-LR fusion model to predict power demand. First, 13 months of electricity and meteorological data were preprocessed. Second, the hyperparameters of each model were tuned and optimized. Third, based on the optimal hyperparameter configuration, a prediction model was built using the training set (70% of the data). Finally, the test set (30% of the data) was used to evaluate the performance of each model. Mean absolute error (MAE), root mean square error (RMSE), mean absolute percentage error (MAPE), and the goodness-of-fit coefficient (R^2) were used to analyze each model over different forecast horizons, including seasonal, weekly, and monthly forecasts. The proposed fusion model was also compared with neural network models such as GRU, LSTM, and TCN. The results showed that the XLG-LR model achieved the best prediction results across horizons while requiring the least computation time of the compared models. This method can provide a more reliable reference for the operation and dispatch of power enterprises and for future power construction and planning.
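The stacked-ensemble design in the abstract above — several gradient-boosting learners whose predictions are combined by a linear meta-learner, with a 70/30 train/test split and MAE/RMSE/R^2 evaluation — can be sketched as follows. To keep the sketch dependency-free, scikit-learn's GradientBoostingRegressor stands in for GBDT, XGBoost, and LightGBM; the data are synthetic and the hyperparameters are placeholders, not the study's configuration:

```python
# Illustrative sketch of a stacked "boosting + linear meta-learner" model.
# Synthetic data and placeholder hyperparameters, not the study's setup.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                    # stand-in load/weather features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=500)

# Three differently configured boosting models stand in for GBDT/XGBoost/LightGBM
base = [
    ("gbdt_a", GradientBoostingRegressor(n_estimators=100, max_depth=3)),
    ("gbdt_b", GradientBoostingRegressor(n_estimators=200, max_depth=2)),
    ("gbdt_c", GradientBoostingRegressor(n_estimators=50, learning_rate=0.2)),
]
# Linear-regression meta-learner combines the base predictions (the "LR" part)
model = StackingRegressor(estimators=base, final_estimator=LinearRegression())

# 70/30 train/test split, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

mae = mean_absolute_error(y_te, pred)
rmse = mean_squared_error(y_te, pred) ** 0.5
r2 = r2_score(y_te, pred)
```

In practice the base learners would be the actual `xgboost` and `lightgbm` estimators, and MAPE would be computed as well (with care for near-zero demand values).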