Hurley NC, Gupta RK, Schroeder KM, Hess AS. Danger, Danger, Gaston Labat! Does zero-shot artificial intelligence correlate with anticoagulation guidelines recommendations for neuraxial anesthesia? Reg Anesth Pain Med 2024;49:661-667. [PMID: 38253610; DOI: 10.1136/rapm-2023-104868]
[Received: 07/17/2023] [Accepted: 09/18/2023]
Abstract
INTRODUCTION
Artificial intelligence and large language models (LLMs) have emerged as potentially disruptive technologies in healthcare. In this study, GPT-3.5, an accessible LLM, was assessed for its accuracy and reliability in performing guideline-based evaluation of neuraxial bleeding risk in hypothetical patients on anticoagulation medication. The study also explored the impact of structured prompt guidance on the LLM's performance.
METHODS
A dataset of 10 hypothetical patient stems and 26 anticoagulation profiles (260 unique combinations) was developed based on American Society of Regional Anesthesia and Pain Medicine guidelines. Five prompts were created for the LLM, ranging from minimal guidance to explicit instructions. The model's responses were compared with a "truth table" derived from the guidelines. Performance metrics included accuracy and area under the receiver operating characteristic curve (AUC).
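The evaluation design described above can be sketched in a few lines of Python. The actual patient stems, anticoagulation profiles, truth table, and model scores are not given in the abstract, so all names and values below are illustrative placeholders; the AUC is computed with the standard Mann-Whitney rank formulation rather than any specific library the authors may have used.

```python
from itertools import product

# Hypothetical placeholders: the paper's real stems and profiles are not
# reproduced in the abstract.
stems = [f"patient_{i}" for i in range(10)]      # 10 hypothetical patient stems
profiles = [f"drug_{j}" for j in range(26)]      # 26 anticoagulation profiles
combos = list(product(stems, profiles))          # 260 unique combinations
assert len(combos) == 260

def auc(truth, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case receives a higher score than a randomly chosen
    negative case (ties count as half)."""
    pos = [s for t, s in zip(truth, scores) if t == 1]
    neg = [s for t, s in zip(truth, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a guideline truth table marks the first two combinations as
# high bleeding risk (1), the last two as acceptable (0); the model's risk
# scores rank all positives above all negatives here, giving a perfect AUC.
print(auc([1, 1, 0, 0], [0.9, 0.4, 0.3, 0.2]))  # → 1.0
```

In practice each of the 260 combinations would be graded against the guideline truth table, with accuracy summarizing exact agreement and AUC summarizing how well the model's risk scores rank high-risk cases above low-risk ones.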
RESULTS
Baseline performance of GPT-3.5 was slightly above chance. With detailed prompts and explicit guidelines, performance improved significantly (AUC 0.70, 95% CI 0.64 to 0.77). Performance varied among medication classes.
DISCUSSION
LLMs show potential for assisting in clinical decision making, but their performance depends on accurate, relevant prompts: the tested model could assess neuraxial bleeding risk only when given precise, guideline-based instructions. Integration of LLMs into clinical workflows should be approached cautiously, with attention to safety, privacy, and the models' limitations. Future research should focus on optimizing LLM performance, addressing complex clinical scenarios, and better characterizing LLM capabilities and limitations in healthcare.