Valderrama CE, Niven DJ, Stelfox HT, Lee J. Predicting abnormal laboratory blood test results in the intensive care unit using novel features based on information theory and historical conditional probability: Observational Study (Preprint). JMIR Med Inform 2021;10:e35250. [PMID: 35657648; PMCID: PMC9206206; DOI: 10.2196/35250]
[Received: 11/28/2021] [Revised: 03/24/2022] [Accepted: 04/21/2022] [Indexed: 11/13/2022]
Abstract
Background
Redundancy in laboratory blood tests is common in intensive care units (ICUs), affecting patients' health and increasing health care expenses. Medical communities have recommended ordering laboratory tests more judiciously. Wise selection can rely on modern data-driven approaches, which have been shown to help identify low-yield laboratory blood tests in ICUs. However, although conditional entropy and conditional probability distributions have shown potential for measuring the uncertainty of yielding an abnormal test, no previous studies have adapted these techniques as features in machine learning models for predicting abnormal laboratory test results.
Objective
This study aimed to address the limitations of previous reports by adapting conditional entropy and conditional probability to extract features for predicting abnormal laboratory blood test results.
Methods
We used an ICU data set collected across Alberta, Canada, comprising 55,689 ICU admissions from 48,672 patients. We investigated conditional entropy and conditional probability features by comparing the performances of 2 machine learning approaches for predicting normal and abnormal results for 18 blood laboratory tests. Approach 1 used patients' vitals, age, sex, and admission diagnosis as features. Approach 2 used the same features plus the new conditional entropy–based and conditional probability–based features. Both approaches used 4 machine learning models (fuzzy model, logistic regression, random forest, and gradient boosting trees) and 10 metrics (sensitivity, specificity, accuracy, precision, negative predictive value [NPV], F1 score, area under the curve [AUC], precision-recall AUC, mean G, and index balanced accuracy) to assess performance.
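The abstract does not give the exact feature definitions, but the conditional entropy idea can be sketched as the standard information-theoretic quantity H(Y|X): the residual uncertainty in the next test result Y given the previous result X. A minimal illustration, assuming binary normal/abnormal labels and an empirical estimate from observed (previous, next) pairs (the function name and toy data are illustrative, not from the paper):

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """Empirical H(Y|X) in bits for (previous_result, next_result) pairs,
    where each result is a label such as 'normal' or 'abnormal'."""
    joint = Counter(pairs)            # counts of (x, y) pairs
    marg_x = Counter(x for x, _ in pairs)  # counts of the conditioning value x
    n = len(pairs)
    h = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n                  # joint probability p(x, y)
        p_y_given_x = c / marg_x[x]   # conditional probability p(y | x)
        h -= p_xy * math.log2(p_y_given_x)
    return h

# Toy sequence of consecutive test-result pairs for one laboratory test
pairs = [("normal", "normal"), ("normal", "abnormal"),
         ("abnormal", "abnormal"), ("normal", "normal")]
print(round(conditional_entropy(pairs), 4))
```

A low H(Y|X) means the previous result largely determines the next one, which is exactly the situation in which repeating the test is low yield.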
Results
Approach 1 achieved an average AUC of 0.86 for all 18 laboratory tests across the 4 models (sensitivity 78%, specificity 84%, precision 82%, NPV 75%, F1 score 79%, and mean G 81%), whereas approach 2 achieved an average AUC of 0.89 (sensitivity 84%, specificity 84%, precision 83%, NPV 81%, F1 score 83%, and mean G 84%). The inclusion of the new features yielded significant differences for most of the metrics in favor of approach 2. Sensitivity significantly improved for between 8 and 15 laboratory tests, depending on the classifier (minimum P<.001, maximum P=.04). Mean G and index balanced accuracy, which are balanced performance metrics, also improved significantly across the classifiers, for 6 to 10 and 6 to 11 laboratory tests, respectively. The most relevant feature was the pretest probability feature: the probability that a test result is normal given that a certain number of consecutive prior tests were already normal.
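The pretest probability feature described above can be estimated empirically as a conditional frequency: among all time points preceded by k consecutive normal results, the fraction whose next result was also normal. This is a minimal sketch under that reading; the function name, window handling, and toy sequence are assumptions, not the authors' implementation:

```python
def pretest_probability(results, k):
    """Empirical probability that the next result is normal, given that the
    k immediately preceding results were all normal.
    `results` is a chronological list of 'normal'/'abnormal' labels."""
    hits = total = 0
    for i in range(k, len(results)):
        # Condition: the k results just before position i are all normal
        if all(r == "normal" for r in results[i - k:i]):
            total += 1
            hits += results[i] == "normal"
    return hits / total if total else None  # None if the condition never occurs

# Toy result history for one laboratory test
seq = ["normal", "normal", "abnormal", "normal", "normal", "normal"]
print(pretest_probability(seq, 2))
```

In practice such a probability would be estimated over many patients' histories rather than a single sequence; a high value after several consecutive normals flags a test that is unlikely to yield new information.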
Conclusions
The findings suggest that conditional entropy–based features and the pretest probability feature improve the capacity to discriminate between normal and abnormal laboratory test results. Predicting the next laboratory test result is an intermediate step toward developing guidelines for reducing overtesting in the ICU.