1. Cheng Y, Tang Q, Li X, Ma L, Yuan J, Hou X. Meta-lasso: new insight on infection prediction after minimally invasive surgery. Med Biol Eng Comput 2024; 62:1703-1715. [PMID: 38347344] [DOI: 10.1007/s11517-024-03027-w]
Abstract
Surgical site infection (SSI) after minimally invasive lung cancer surgery is an important factor influencing direct and indirect economic costs, patient prognosis, and the 5-year survival rate of early-stage lung cancer patients. In predictive healthcare, machine learning algorithms have been instrumental in anticipating various surgical outcomes, including SSI. However, accurately predicting infection after minimally invasive surgery remains a clinical challenge because of the multitude of physiological and surgical factors associated with it. Furthermore, clinical patient data are not only high-dimensional but often exhibit a long-tail class distribution, which is difficult for traditional machine learning algorithms to handle. Based on this insight, we propose a novel approach called meta-lasso for infection prediction following minimally invasive surgery. Our approach leverages the sparse learning algorithm lasso regression to select informative features and introduces a meta-learning framework to mitigate bias towards the dominant class. We conducted a retrospective cohort study of patients who had undergone minimally invasive surgery for lung cancer at Shanghai Chest Hospital between 2018 and 2020. The evaluation encompassed key performance metrics: sensitivity, specificity, precision (PPV), negative predictive value (NPV), and accuracy. Our approach surpassed logistic regression, random forest, the naive Bayes classifier, gradient boosting decision tree, ANN, and lasso regression, with sensitivity of 0.798, specificity of 0.779, precision of 0.789, NPV of 0.798, and accuracy of 0.788, and greatly improved classification performance on the minority class.
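The abstract pairs lasso-based feature selection with a meta-learning step against class imbalance. The meta-learning component is specific to the paper, but the lasso selection idea can be sketched with scikit-learn on synthetic data (the feature counts, penalty strength `C=0.1`, and `class_weight` rebalancing below are illustrative assumptions, not the authors' settings):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a high-dimensional, imbalanced clinical dataset
# (NOT the study's cohort): 50 features, only 5 informative, ~10% positives.
X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           weights=[0.9, 0.1], random_state=0)

# An L1 (lasso-style) penalty drives uninformative coefficients to exactly
# zero, so the fitted model doubles as an embedded feature selector;
# class_weight="balanced" is one simple counter to dominant-class bias.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                         class_weight="balanced", random_state=0)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(f"kept {selected.size} of {X.shape[1]} features")
```

The surviving (nonzero-coefficient) features would then feed whatever downstream classifier or reweighting scheme is in use.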
Affiliation(s)
- Yuejia Cheng
- Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, 241 West Huaihai Road, 200030, Shanghai, China
- Qinhua Tang
- Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, 241 West Huaihai Road, 200030, Shanghai, China
- Xiang Li
- School of Computer Science, Shanghai University, 99 Shangda Road, 200044, Shanghai, China
- Liyan Ma
- School of Computer Science, Shanghai University, 99 Shangda Road, 200044, Shanghai, China
- Junyi Yuan
- Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, 241 West Huaihai Road, 200030, Shanghai, China
- Xumin Hou
- Shanghai Chest Hospital, Shanghai Jiao Tong University School of Medicine, 241 West Huaihai Road, 200030, Shanghai, China
2. Verberk JDM, van der Werff SD, Weegar R, Henriksson A, Richir MC, Buchli C, van Mourik MSM, Nauclér P. The augmented value of using clinical notes in semi-automated surveillance of deep surgical site infections after colorectal surgery. Antimicrob Resist Infect Control 2023; 12:117. [PMID: 37884948] [PMCID: PMC10604406] [DOI: 10.1186/s13756-023-01316-x]
Abstract
BACKGROUND In patients who underwent colorectal surgery, an existing semi-automated surveillance algorithm based on structured data achieves high sensitivity in detecting deep surgical site infections (SSIs) but generates a significant number of false positives. Adding unstructured clinical narratives to the algorithm may decrease the number of patients requiring manual chart review. The aim of this study was to investigate the performance of this semi-automated surveillance algorithm augmented with a natural language processing (NLP) component to improve positive predictive value (PPV) and thus workload reduction (WR). METHODS Retrospective, observational cohort study of patients who underwent colorectal surgery from January 1, 2015, through September 30, 2020. NLP was used to detect keyword counts in clinical notes. Several NLP algorithms were developed with different count input types and classifiers and added as a component to the original semi-automated algorithm. Traditional manual surveillance was compared with the NLP-augmented surveillance algorithms, and sensitivity, specificity, PPV, and WR were calculated. RESULTS Of the NLP-augmented models, the decision tree models with discretized counts or binary counts had the best performance (sensitivity 95.1% (95% CI 83.5-99.4%), WR 60.9%) and improved PPV and WR by only 2.6% and 3.6%, respectively, compared with the original algorithm. CONCLUSIONS The addition of an NLP component to the existing algorithm had a modest effect on WR (decrease of 1.4-12.5%), at the cost of sensitivity. Future implementation will involve a trade-off between optimal case-finding techniques and practical considerations such as acceptability and availability of resources.
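The "binary counts plus decision tree" pipeline the abstract describes can be sketched in a few lines; the keywords, notes, labels, and tree settings below are hypothetical stand-ins, since the study's actual term list and corpus are not given here:

```python
import re
from sklearn.tree import DecisionTreeClassifier

# Hypothetical SSI-related keywords; the study's real term list is not public here.
KEYWORDS = ["abscess", "pus", "wound", "drainage", "fever"]

def binary_counts(note):
    """1 if the keyword occurs anywhere in the note, else 0
    (the 'binary counts' input type from the abstract)."""
    tokens = set(re.findall(r"[a-z]+", note.lower()))
    return [int(k in tokens) for k in KEYWORDS]

notes = ["Wound shows pus and drainage at the incision site.",
         "Routine follow-up, patient recovering well.",
         "Persistent fever, deep abscess seen on CT."]
labels = [1, 0, 1]  # toy deep-SSI labels

features = [binary_counts(n) for n in notes]
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(features, labels)
```

Plain keyword matching ignores negation ("no wound infection" still fires the wound feature), which is one reason such an NLP component can raise PPV only modestly while costing sensitivity.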
Affiliation(s)
- Janneke D M Verberk
- Department of Medical Microbiology and Infection Prevention, University Medical Centre Utrecht, Utrecht, the Netherlands
- Julius Centre for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, the Netherlands
- Department of Epidemiology and Surveillance, Centre for Infectious Diseases Control, National Institute for Public Health and the Environment, Bilthoven, the Netherlands
- Suzanne D van der Werff
- Department of Medicine Solna, Division of Infectious Diseases, Karolinska Institutet, Stockholm, Sweden
- Department of Infectious Diseases, Karolinska University Hospital, Stockholm, Sweden
- Rebecka Weegar
- Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
- Aron Henriksson
- Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
- Milan C Richir
- Department of Surgery, Cancer Centre, University Medical Centre Utrecht, Utrecht, the Netherlands
- Christian Buchli
- Department of Molecular Medicine and Surgery, Karolinska Institutet, Stockholm, Sweden
- Department of Pelvic Cancer, GI Oncology and Colorectal Surgery Unit, Karolinska University Hospital, Stockholm, Sweden
- Maaike S M van Mourik
- Department of Medical Microbiology and Infection Prevention, University Medical Centre Utrecht, Utrecht, the Netherlands
- Pontus Nauclér
- Department of Medicine Solna, Division of Infectious Diseases, Karolinska Institutet, Stockholm, Sweden
- Department of Infectious Diseases, Karolinska University Hospital, Stockholm, Sweden
3. Calderwood MS, Anderson DJ, Bratzler DW, Dellinger EP, Garcia-Houchins S, Maragakis LL, Nyquist AC, Perkins KM, Preas MA, Saiman L, Schaffzin JK, Schweizer M, Yokoe DS, Kaye KS. Strategies to prevent surgical site infections in acute-care hospitals: 2022 update. Infect Control Hosp Epidemiol 2023; 44:695-720. [PMID: 37137483] [PMCID: PMC10867741] [DOI: 10.1017/ice.2023.67]
Abstract
The intent of this document is to highlight practical recommendations in a concise format designed to assist acute-care hospitals in implementing and prioritizing their surgical-site infection (SSI) prevention efforts. This document updates the Strategies to Prevent Surgical Site Infections in Acute Care Hospitals published in 2014. This expert guidance document is sponsored by the Society for Healthcare Epidemiology of America (SHEA). It is the product of a collaborative effort led by SHEA, the Infectious Diseases Society of America (IDSA), the Association for Professionals in Infection Control and Epidemiology (APIC), the American Hospital Association (AHA), and The Joint Commission, with major contributions from representatives of a number of organizations and societies with content expertise.
Affiliation(s)
- Deverick J. Anderson
- Duke Center for Antimicrobial Stewardship and Infection Prevention, Duke University School of Medicine, Durham, North Carolina, United States
- Dale W. Bratzler
- University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma, United States
- Lisa L. Maragakis
- Johns Hopkins School of Medicine, Baltimore, Maryland, United States
- Ann-Christine Nyquist
- Children’s Hospital Colorado, University of Colorado School of Medicine, Aurora, Colorado, United States
- Kiran M. Perkins
- Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, Georgia, United States
- Michael Anne Preas
- University of Maryland Medical System, Baltimore, Maryland, United States
- Lisa Saiman
- Columbia University Irving Medical Center and NewYork–Presbyterian Hospital, New York, New York, United States
- Joshua K. Schaffzin
- Children’s Hospital of Eastern Ontario, University of Ottawa, Ottawa, Ontario, Canada
- Marin Schweizer
- Center for Access and Delivery Research and Evaluation, Iowa City VA Health Care System, University of Iowa, Iowa City, Iowa, United States
- Deborah S. Yokoe
- University of California-San Francisco, San Francisco, California, United States
- Keith S. Kaye
- Rutgers Robert Wood Johnson Medical School, New Brunswick, New Jersey, United States
4. Use of machine learning in pediatric surgical clinical prediction tools: A systematic review. J Pediatr Surg 2023; 58:908-916. [PMID: 36804103] [DOI: 10.1016/j.jpedsurg.2023.01.020]
Abstract
PURPOSE Clinical prediction tools (CPTs) are decision-making instruments that use patient data to predict specific clinical outcomes, risk-stratify patients, or suggest personalized diagnostic or therapeutic options. Recent advancements in artificial intelligence have resulted in a proliferation of CPTs created using machine learning (ML), yet the clinical applicability of ML-based CPTs and their validation in clinical settings remain unclear. This systematic review aims to compare the validity and clinical efficacy of ML-based and traditional CPTs in pediatric surgery. METHODS Nine databases were searched from 2000 until July 9, 2021 to retrieve articles reporting on CPTs and ML for pediatric surgical conditions. PRISMA standards were followed, and screening was performed by two independent reviewers in Rayyan, with a third reviewer resolving conflicts. Risk of bias was assessed using PROBAST. RESULTS Of 8300 studies, 48 met the inclusion criteria. The most represented surgical specialties were pediatric general surgery (14), neurosurgery (13), and cardiac surgery (12). Prognostic CPTs (26) were the most represented type, followed by diagnostic (10), interventional (9), and risk-stratifying (2) tools; one study included a CPT for diagnostic, interventional, and prognostic purposes. 81% of studies compared their CPT to ML-based CPTs, statistical CPTs, or the unaided clinician, but lacked external validation and/or evidence of clinical implementation. CONCLUSIONS While most studies claim significant potential improvements from incorporating ML-based CPTs in pediatric surgical decision-making, both external validation and clinical application remain limited. Further studies must focus on validating existing instruments or developing validated tools and on incorporating them into the clinical workflow. TYPE OF STUDY Systematic review. LEVEL OF EVIDENCE Level III.
5. Performance of machine learning algorithms for surgical site infection case detection and prediction: A systematic review and meta-analysis. Ann Med Surg (Lond) 2022; 84:104956. [PMID: 36582918] [PMCID: PMC9793260] [DOI: 10.1016/j.amsu.2022.104956]
Abstract
Background Medical researchers and clinicians have shown much interest in developing machine learning (ML) algorithms to detect or predict surgical site infections (SSIs). However, little is known about the overall performance of ML algorithms in predicting SSIs and how to improve their robustness. We conducted a systematic review and meta-analysis to summarize the performance of ML algorithms in SSI case detection and prediction and to describe the impact of using unstructured and textual data in the development of ML algorithms. Methods MEDLINE, EMBASE, CINAHL, CENTRAL, and Web of Science were searched from inception to March 25, 2021. Study characteristics and algorithm development information were extracted. Performance statistics (e.g., sensitivity, area under the receiver operating characteristic curve [AUC]) were pooled using a random-effects model. Stratified analysis was applied to different study characteristic levels. The Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Diagnostic Test Accuracy Studies (PRISMA-DTA) was followed. Results Of 945 articles identified, 108 algorithms from 32 articles were included in this review. The overall pooled estimate of the SSI incidence rate was 3.67% (95% CI: 3.58-3.76). Algorithms using a mix of structured and textual data (pooled sensitivity 0.83, 95% CI: 0.78-0.87; specificity 0.92, 95% CI: 0.86-0.95; AUC 0.92, 95% CI: 0.89-0.94) outperformed algorithms based solely on structured data (sensitivity 0.56, 95% CI: 0.43-0.69; specificity 0.95, 95% CI: 0.91-0.97; AUC 0.90, 95% CI: 0.87-0.92). Conclusions ML algorithms developed with structured and textual data provided optimal performance. External validation of ML algorithms is needed to translate current knowledge into clinical practice.
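The pooled figures above come from a random-effects model. The mechanics can be sketched with DerSimonian-Laird weighting; this is a minimal stand-alone version pooling on the raw scale for brevity (proportions such as sensitivity are usually pooled on a logit or arcsine scale), and the input numbers are hypothetical:

```python
import math

def pool_random_effects(estimates, variances):
    """DerSimonian-Laird random-effects pooling: returns (pooled, 95% CI)."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical study sensitivities with their variances
est, ci = pool_random_effects([0.83, 0.78, 0.87], [0.002, 0.003, 0.004])
print(round(est, 3), [round(x, 3) for x in ci])
```

The between-study variance tau² widens the interval relative to a fixed-effect pool whenever the studies disagree more than their within-study variances explain.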
6. Wu C, Zhou T, Tian Y, Wu J, Li J, Liu Z. A method for the early prediction of chronic diseases based on short sequential medical data. Artif Intell Med 2022; 127:102262. [DOI: 10.1016/j.artmed.2022.102262]
7. Coombes CE, Coombes KR, Fareed N. A novel model to label delirium in an intensive care unit from clinician actions. BMC Med Inform Decis Mak 2021; 21:97. [PMID: 33750375] [PMCID: PMC7941123] [DOI: 10.1186/s12911-021-01461-6]
Abstract
BACKGROUND In the intensive care unit (ICU), delirium is a common, acute confusional state associated with high risk of short- and long-term morbidity and mortality. Machine learning (ML) holds promise for addressing research priorities and improving delirium outcomes. However, due to clinical and billing conventions, delirium is often inconsistently or incompletely labeled in electronic health record (EHR) datasets. Here, we identify clinical actions, abstracted from clinical guidelines, in EHR data that indicate risk of delirium among ICU patients. We develop a novel prediction model to label patients with delirium based on a large dataset and assess model performance. METHODS EHR data on 48,451 admissions from 2001 to 2012, available through the Medical Information Mart for Intensive Care III (MIMIC-III) database, were used to identify features for our prediction models. Five binary ML classification models (logistic regression; classification and regression trees; random forests; naive Bayes; and support vector machines) were fit and ranked by area under the curve (AUC) scores. We compared our best model with two models previously proposed in the literature for goodness of fit, precision, and through biological validation. RESULTS Our best-performing model with threshold reclassification for predicting delirium was a multiple logistic regression using the 31 clinical actions (AUC 0.83). Our model outperformed the other proposed models in biological validation on clinically meaningful, delirium-associated outcomes. CONCLUSIONS Hurdles in identifying accurate labels in large-scale datasets limit clinical applications of ML in delirium. We developed a novel labeling model for delirium in the ICU using a large, public dataset. By using guideline-directed clinical actions independent of risk factors, treatments, and outcomes as model predictors, our classifier could serve as a delirium label for future clinically targeted models.
Affiliation(s)
- Caitlin E Coombes
- College of Medicine, The Ohio State University, Columbus, OH, 43210, USA
- Kevin R Coombes
- Department of Biomedical Informatics, The Ohio State University College of Medicine, 460 Medical Center Dr., 512 Institute of Behavioral Medicine Research, Columbus, OH, 43210, USA
- Naleef Fareed
- Department of Biomedical Informatics, The Ohio State University College of Medicine, 460 Medical Center Dr., 512 Institute of Behavioral Medicine Research, Columbus, OH, 43210, USA
- Center for the Advancement of Team Science, Analytics, and Systems Thinking, College of Medicine, The Ohio State University, Columbus, OH, 43210, USA
8. Berrondo C, Bettinger B, Katz CB, Bauer J, Shnorhavorian M, Zerr DM. Validation of an Electronic Surveillance Algorithm to Identify Patients With Post-Operative Surgical Site Infections Using National Surgical Quality Improvement Program Pediatric Data. J Pediatric Infect Dis Soc 2020; 9:680-685. [PMID: 31886513] [DOI: 10.1093/jpids/piz095]
Abstract
BACKGROUND Surgical site infections (SSIs) are common, but data related to these infections may be difficult to capture. We developed an electronic surveillance algorithm to identify patients with SSIs. Our objective was to validate our algorithm by comparing it with our institutional National Surgical Quality Improvement Program Pediatric (NSQIP Peds) data. METHODS We applied our algorithm to our institutional NSQIP Peds 2015-2017 cohort. The algorithm consisted of the presence of a diagnosis code for post-operative infection or the presence of 4 criteria: a diagnosis code for infection, antibiotic administration, a positive culture, and readmission/surgery related to infection. We compared the algorithm's SSI rate to the NSQIP Peds-identified SSIs. Algorithm performance was assessed using sensitivity, specificity, negative predictive value (NPV), positive predictive value (PPV), and Cohen's kappa. The charts of discordant patients were reviewed to understand limitations of the algorithm. RESULTS Of 3879 patients included, 2.5% had SSIs by the NSQIP Peds definition and 1.9% had SSIs by our algorithm. Our algorithm achieved a sensitivity of 44%, specificity of 99%, NPV of 99%, PPV of 59%, and Cohen's kappa of 0.5. Of the 54 false negatives, 37% were diagnosed/treated as outpatients, 31% had tracheitis, and 17% developed SSIs during their post-operative admission. Of the 30 false positives, 33% had an infection at index surgery and 33% had SSIs related to other surgeries/procedures. CONCLUSIONS Our algorithm achieved high specificity and NPV compared with NSQIP Peds-reported SSIs and may be useful for identifying SSIs in patient populations that are not actively monitored for SSIs.
Affiliation(s)
- Claudia Berrondo
- Division of Pediatric Urology, Seattle Children's Hospital, and Department of Urology, University of Washington, Seattle, Washington, USA
- Division of Pediatric Urology, Children's Hospital and Medical Center, and Department of Surgery (Urologic Surgery), University of Nebraska, Omaha, Nebraska, USA
- Brendan Bettinger
- Department of Quality and Safety Support, Seattle Children's Hospital, Seattle, Washington, USA
- Cindy B Katz
- Department of Surgical Management, Seattle Children's Hospital, Seattle, Washington, USA
- Jennifer Bauer
- Division of Pediatric Orthopedic Surgery, Seattle Children's Hospital, and Department of Orthopedic Surgery, University of Washington, Seattle, Washington, USA
- Margarett Shnorhavorian
- Division of Pediatric Urology, Seattle Children's Hospital, and Department of Urology, University of Washington, Seattle, Washington, USA
- Danielle M Zerr
- Division of Pediatric Infectious Diseases, Seattle Children's Hospital, and Department of Pediatrics, University of Washington, Seattle, Washington, USA
9. Chen W, Lu Z, You L, Zhou L, Xu J, Chen K. Artificial Intelligence-Based Multimodal Risk Assessment Model for Surgical Site Infection (AMRAMS): Development and Validation Study. JMIR Med Inform 2020; 8:e18186. [PMID: 32538798] [PMCID: PMC7325005] [DOI: 10.2196/18186]
Abstract
Background Surgical site infection (SSI) is one of the most common types of health care–associated infections. It increases mortality, prolongs hospital length of stay, and raises health care costs. Many institutions have developed risk assessment models for SSI to help surgeons preoperatively identify high-risk patients and guide clinical intervention. However, most of these models had low accuracies. Objective We aimed to provide a solution in the form of an Artificial intelligence–based Multimodal Risk Assessment Model for Surgical site infection (AMRAMS) for inpatients undergoing operations, using routinely collected clinical data. We internally and externally validated the discriminations of the models, which combined various machine learning and natural language processing techniques, and compared them with the National Nosocomial Infections Surveillance (NNIS) risk index. Methods We retrieved inpatient records between January 1, 2014, and June 30, 2019, from the electronic medical record (EMR) system of Rui Jin Hospital, Luwan Branch, Shanghai, China. We used data from before July 1, 2018, as the development set for internal validation and the remaining data as the test set for external validation. We included patient demographics, preoperative lab results, and free-text preoperative notes as our features. We used word-embedding techniques to encode text information, and we trained the LASSO (least absolute shrinkage and selection operator) model, random forest model, gradient boosting decision tree (GBDT) model, convolutional neural network (CNN) model, and self-attention network model using the combined data. Surgeons manually scored the NNIS risk index values. Results For internal bootstrapping validation, CNN yielded the highest mean area under the receiver operating characteristic curve (AUROC) of 0.889 (95% CI 0.886-0.892), and the paired-sample t test revealed statistically significant advantages as compared with other models (P<.001). The self-attention network yielded the second-highest mean AUROC of 0.882 (95% CI 0.878-0.886), but the AUROC was only numerically higher than the AUROC of the third-best model, GBDT with text embeddings (mean AUROC 0.881, 95% CI 0.878-0.884, P=.47). The AUROCs of LASSO, random forest, and GBDT models using text embeddings were statistically higher than the AUROCs of models not using text embeddings (P<.001). For external validation, the self-attention network yielded the highest AUROC of 0.879. CNN was the second-best model (AUROC 0.878), and GBDT with text embeddings was the third-best model (AUROC 0.872). The NNIS risk index scored by surgeons had an AUROC of 0.651. Conclusions Our AMRAMS based on EMR data and deep learning methods—CNN and self-attention network—had significant advantages in terms of accuracy compared with other conventional machine learning methods and the NNIS risk index. Moreover, the semantic embeddings of preoperative notes improved the model performance further. Our models could replace the NNIS risk index to provide personalized guidance for the preoperative intervention of SSIs. Through this case, we offered an easy-to-implement solution for building multimodal RAMs for other similar scenarios.
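A minimal sketch of the multimodal feature construction the abstract describes for the LASSO/random forest/GBDT baselines: word embeddings of a preoperative note reduced to a fixed-length vector and concatenated with structured features. The vocabulary, embedding values, pooling choice, and structured fields below are assumptions for illustration (the CNN and self-attention models would consume the embedding sequence directly rather than a pooled vector):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy embedding table standing in for trained word vectors; vocabulary,
# dimensionality, and values here are illustrative, not the study's.
vocab = {"wound": 0, "infection": 1, "clean": 2, "healing": 3}
emb = rng.normal(size=(len(vocab), 8))  # 8-dimensional word embeddings

def note_vector(note):
    """Mean-pool embeddings of in-vocabulary words: a fixed-size text feature."""
    idx = [vocab[w] for w in note.lower().split() if w in vocab]
    return emb[idx].mean(axis=0) if idx else np.zeros(emb.shape[1])

structured = np.array([67.0, 1.0, 12.3])  # e.g. age, sex flag, a preoperative lab
text_feat = note_vector("Wound healing clean")
row = np.concatenate([structured, text_feat])  # one multimodal input row
print(row.shape)
```

Each such row can then be fed to any tabular learner, which is how text information reaches models like LASSO or GBDT that cannot read sequences.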
Affiliation(s)
- Weijia Chen
- Department of Anesthesiology, Rui Jin Hospital, Luwan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhijun Lu
- Department of Anesthesiology, Rui Jin Hospital, Luwan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lijue You
- Department of Informatics, Rui Jin Hospital, Luwan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lingling Zhou
- Department of Infection Prevention and Control, Rui Jin Hospital, Luwan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jie Xu
- VitalStrategic Research Institute, Shanghai, China
- Synyi Research, Shanghai, China
- Ken Chen
- Department of Anesthesiology, Rui Jin Hospital, Luwan Branch, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Synyi Research, Shanghai, China
- Precision Diagnosis and Image Guided Therapy, Philips Research China, Shanghai, China
10. Shi J, Liu S, Pruitt LCC, Luppens CL, Ferraro JP, Gundlapalli AV, Chapman WW, Bucher BT. Using Natural Language Processing to Improve EHR Structured Data-based Surgical Site Infection Surveillance. AMIA Annu Symp Proc 2020; 2019:794-803. [PMID: 32308875] [PMCID: PMC7153106]
Abstract
Surgical site infection (SSI) surveillance in healthcare systems is labor intensive and plagued by underreporting, as current methodology relies heavily on manual chart review. The rapid adoption of electronic health records (EHRs) has the potential to allow the secondary use of EHR data for quality surveillance programs. This study investigates the effectiveness of integrating natural language processing (NLP) outputs with structured EHR data to build machine learning models for SSI identification using real-world clinical data. We examined a set of models using structured data with and without NLP document-level, mention-level, and keyword features. The top-performing model was a random forest classifier enhanced with NLP document-level features, achieving 0.58 sensitivity, 0.97 specificity, 0.54 PPV, 0.98 NPV, and 0.52 F0.5 score. We further interrogated the feature contributions, analyzed the errors, and discussed future directions.
Affiliation(s)
- Jianlin Shi
- School of Medicine, University of Utah, Salt Lake City, Utah, US
- Siru Liu
- School of Medicine, University of Utah, Salt Lake City, Utah, US
- Liese C C Pruitt
- School of Medicine, University of Utah, Salt Lake City, Utah, US
- Jeffrey P Ferraro
- School of Medicine, University of Utah, Salt Lake City, Utah, US
- Intermountain Healthcare, Salt Lake City, Utah, US
- Adi V Gundlapalli
- School of Medicine, University of Utah, Salt Lake City, Utah, US
- VA Salt Lake City Healthcare System, IDEAS Center 2.0, Salt Lake City, Utah, US
- Wendy W Chapman
- School of Medicine, University of Utah, Salt Lake City, Utah, US
- Brian T Bucher
- School of Medicine, University of Utah, Salt Lake City, Utah, US
11. Karavite DJ, Miller MW, Ramos MJ, Rettig SL, Ross RK, Xiao R, Muthu N, Localio AR, Gerber JS, Coffin SE, Grundmeier RW. User Testing an Information Foraging Tool for Ambulatory Surgical Site Infection Surveillance. Appl Clin Inform 2018; 9:791-802. [PMID: 30357777] [DOI: 10.1055/s-0038-1675179]
Abstract
BACKGROUND Surveillance for surgical site infections (SSIs) after ambulatory surgery in children requires a detailed manual chart review to assess criteria defined by the National Health and Safety Network (NHSN). Electronic health records (EHRs) impose an inefficient search process in which infection preventionists must manually review every postsurgical encounter (< 30 days). Using text mining and business intelligence software, we developed an information foraging application, the SSI Workbench, to visually present which postsurgical encounters included SSI-related terms and synonyms, antibiotic orders, and culture orders. OBJECTIVE This article compares the Workbench and the EHR on four dimensions: (1) effectiveness, (2) efficiency, (3) workload, and (4) usability. METHODS Comparative usability test of the Workbench and EHR. Objective test metrics were time per case, encounters reviewed per case, time per encounter, and retrieval of information meeting NHSN definitions. Subjective measures were cognitive load, using the National Aeronautics and Space Administration (NASA) Task Load Index (NASA TLX), and a questionnaire on system usability and utility. RESULTS Eight infection preventionists participated in the test. There was no difference in effectiveness, as subjects retrieved information from all cases, using both systems, to meet the NHSN criteria. There was no difference in efficiency in time per case between the Workbench and the EHR (8.58 vs. 7.39 minutes, p = 0.36). However, with the Workbench subjects opened fewer encounters per case (3.0 vs. 7.5, p = 0.002), spent more time per encounter (2.23 vs. 0.92 minutes, p = 0.002), rated the Workbench lower in cognitive load (NASA TLX, 24 vs. 33, p = 0.02), and rated it significantly higher on measures of usability. CONCLUSION Compared with the EHR, the Workbench was more usable and reduced cognitive load. In overall efficiency, the Workbench did not save time, but it demonstrated a shift from between-encounter foraging to within-encounter foraging and was rated as significantly more efficient. Our results suggest that infection surveillance can be better supported by systems applying information foraging theory.
Affiliation(s)
- Dean J Karavite
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Matthew W Miller
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Mark J Ramos
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Susan L Rettig
- Department of Infection Prevention and Control, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Rachael K Ross
- Division of Infectious Disease, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Rui Xiao
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States
- Naveen Muthu
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
- A Russell Localio
- Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, United States
- Jeffrey S Gerber
- Division of Infectious Disease, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
- Susan E Coffin
- Division of Infectious Disease, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States
- Robert W Grundmeier
- Department of Biomedical and Health Informatics, Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, United States
- Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania, United States