1. Visu P, Sathiya V, Ajitha P, Surendran R. Enhanced swin transformer based tuberculosis classification with segmentation using chest X-ray. Journal of X-Ray Science and Technology 2025:8953996241300018. PMID: 39973770. DOI: 10.1177/08953996241300018.
Abstract
BACKGROUND: Tuberculosis causes significant morbidity and mortality worldwide, so early detection is crucial for proper treatment and for controlling the spread of the disease. Chest X-ray imaging is one of the most widely used diagnostic tools for detecting tuberculosis, but manual reading is time-consuming and prone to errors. Deep learning models now provide automated classification of medical images with promising outcomes. OBJECTIVE: This research therefore introduces a deep learning based segmentation and classification model. Initially, Adaptive Gaussian Filtering based pre-processing and data augmentation are performed to remove artefacts and avoid biased outcomes. Then, Attention UNet (A_UNet) based segmentation is proposed to segment the relevant region of the chest X-ray. METHODS: Using the segmented output, an Enhanced Swin Transformer (EnSTrans) based tuberculosis classification model is designed with a Residual Pyramid Network based multi-layer perceptron (MLP) layer to enhance classification accuracy. RESULTS: The Enhanced Lotus Effect Optimization (EnLeO) algorithm is employed to optimize the loss function of the EnSTrans model. CONCLUSIONS: The proposed method achieved accuracy, recall, precision, F-score, and specificity of 99.0576%, 98.9459%, 99.145%, 98.96%, and 99.152%, respectively.
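For readers who want to prototype the segment-then-classify design this abstract describes, the sketch below outlines the idea with a pretrained Swin backbone from timm; the fixed-sigma Gaussian smoothing, the lung-mask input, and all layer sizes are illustrative assumptions, not the authors' released EnSTrans/A_UNet code.

```python
# Minimal sketch of a pre-process -> segment -> classify pipeline; illustrative only.
import torch
import torch.nn as nn
import timm
from scipy.ndimage import gaussian_filter

def preprocess(xray: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Stand-in for the paper's adaptive Gaussian filtering (fixed sigma here)."""
    return torch.from_numpy(gaussian_filter(xray.numpy(), sigma=sigma))

class SwinTBClassifier(nn.Module):
    """Swin backbone plus an MLP head for TB vs. normal classification."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Generic Swin-T backbone; the paper's EnSTrans modifies this further.
        self.backbone = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=False, num_classes=0)
        self.head = nn.Sequential(
            nn.Linear(self.backbone.num_features, 256),
            nn.GELU(),
            nn.Linear(256, num_classes))

    def forward(self, x: torch.Tensor, lung_mask: torch.Tensor) -> torch.Tensor:
        # Mask out everything outside the segmented lung region (from A_UNet).
        return self.head(self.backbone(x * lung_mask))
```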
Affiliation(s)
- P Visu
- Department of Artificial Intelligence and Data Science, Velammal Engineering College, Chennai, India
- V Sathiya
- Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, India
- P Ajitha
- Department of Computer Science and Engineering, School of Computing, Sathyabama Institute of Science and Technology, Chennai, India
- R Surendran
- Department of Computer Science and Engineering, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
2. Islam MS, Al Farid F, Shamrat FMJM, Islam MN, Rashid M, Bari BS, Abdullah J, Nazrul Islam M, Akhtaruzzaman M, Nomani Kabir M, Mansor S, Abdul Karim H. Challenges issues and future recommendations of deep learning techniques for SARS-CoV-2 detection utilising X-ray and CT images: a comprehensive review. PeerJ Comput Sci 2024; 10:e2517. PMID: 39896401. PMCID: PMC11784792. DOI: 10.7717/peerj-cs.2517.
Abstract
The global spread of SARS-CoV-2 has prompted a crucial need for accurate medical diagnosis, particularly in the respiratory system. Current diagnostic methods heavily rely on imaging techniques like CT scans and X-rays, but identifying SARS-CoV-2 in these images proves to be challenging and time-consuming. In this context, artificial intelligence (AI) models, specifically deep learning (DL) networks, emerge as a promising solution in medical image analysis. This article provides a meticulous and comprehensive review of imaging-based SARS-CoV-2 diagnosis using deep learning techniques up to May 2024. This article starts with an overview of imaging-based SARS-CoV-2 diagnosis, covering the basic steps of deep learning-based SARS-CoV-2 diagnosis, SARS-CoV-2 data sources, data pre-processing methods, the taxonomy of deep learning techniques, findings, research gaps and performance evaluation. We also focus on addressing current privacy issues, limitations, and challenges in the realm of SARS-CoV-2 diagnosis. According to the taxonomy, each deep learning model is discussed, encompassing its core functionality and a critical assessment of its suitability for imaging-based SARS-CoV-2 detection. A comparative analysis is included by summarizing all relevant studies to provide an overall visualization. Considering the challenges of identifying the best deep-learning model for imaging-based SARS-CoV-2 detection, the article conducts an experiment with twelve contemporary deep-learning techniques. The experimental result shows that the MobileNetV3 model outperforms other deep learning models with an accuracy of 98.11%. Finally, the article elaborates on the current challenges in deep learning-based SARS-CoV-2 diagnosis and explores potential future directions and methodological recommendations for research and advancement.
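As a companion to the review's benchmarking experiment, here is a minimal transfer-learning sketch for a MobileNetV3 classifier of the kind the authors compared; the weights source, head replacement, and hyperparameters are generic assumptions, not the paper's experimental configuration.

```python
# Illustrative MobileNetV3 fine-tuning setup for binary chest-image classification.
import torch
import torch.nn as nn
from torchvision import models

def build_mobilenetv3(num_classes: int = 2) -> nn.Module:
    model = models.mobilenet_v3_large(
        weights=models.MobileNet_V3_Large_Weights.DEFAULT)  # ImageNet pretraining
    # Replace the final classifier layer for the chest-image task.
    in_features = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model

model = build_mobilenetv3()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```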
Affiliation(s)
- Md Shofiqul Islam
- Computer Science and Engineering (CSE), Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Waurn Ponds, Victoria, Australia
- Fahmid Al Farid
- Faculty of Engineering, Multimedia University, Cyberjaya, Selangor, Malaysia
- Md Nahidul Islam
- Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang Al-Sultan Abdullah (UMPSA), Pekan, Pahang, Malaysia
- Mamunur Rashid
- Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang Al-Sultan Abdullah (UMPSA), Pekan, Pahang, Malaysia
- Electrical and Computer Engineering, Tennessee Tech University, Cookeville, TN, United States
- Bifta Sama Bari
- Faculty of Electrical and Electronics Engineering Technology, Universiti Malaysia Pahang Al-Sultan Abdullah (UMPSA), Pekan, Pahang, Malaysia
- Electrical and Computer Engineering, Tennessee Tech University, Cookeville, TN, United States
- Junaidi Abdullah
- Faculty of Computing and Informatics, Multimedia University, Cyberjaya, Selangor, Malaysia
- Muhammad Nazrul Islam
- Computer Science and Engineering (CSE), Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
- Md Akhtaruzzaman
- Computer Science and Engineering (CSE), Military Institute of Science and Technology (MIST), Dhaka, Bangladesh
- Muhammad Nomani Kabir
- Department of Computer Science & Engineering, United International University (UIU), Dhaka, Bangladesh
- Sarina Mansor
- Faculty of Engineering, Multimedia University, Cyberjaya, Selangor, Malaysia
- Hezerul Abdul Karim
- Faculty of Engineering, Multimedia University, Cyberjaya, Selangor, Malaysia
3. Sufian MA, Hamzi W, Sharifi T, Zaman S, Alsadder L, Lee E, Hakim A, Hamzi B. AI-Driven Thoracic X-ray Diagnostics: Transformative Transfer Learning for Clinical Validation in Pulmonary Radiography. J Pers Med 2024; 14:856. PMID: 39202047. PMCID: PMC11355475. DOI: 10.3390/jpm14080856.
Abstract
Our research evaluates advanced artificial intelligence (AI) methodologies to enhance diagnostic accuracy in pulmonary radiography. Utilizing DenseNet121 and ResNet50, we analyzed 108,948 chest X-ray images from 32,717 patients; DenseNet121 achieved an area under the curve (AUC) of 94% in identifying pneumothorax and oedema. The model's performance surpassed that of expert radiologists, though further improvements are necessary for diagnosing complex conditions such as emphysema, effusion, and hernia. Clinical validation integrating Latent Dirichlet Allocation (LDA) and Named Entity Recognition (NER) demonstrated the potential of natural language processing (NLP) in clinical workflows. The NER system achieved a precision of 92% and a recall of 88%. Sentiment analysis using DistilBERT provided a nuanced understanding of clinical notes, which is essential for refining diagnostic decisions. XGBoost and SHapley Additive exPlanations (SHAP) enhanced feature extraction and model interpretability. Local Interpretable Model-agnostic Explanations (LIME) and occlusion sensitivity analysis further enriched transparency, enabling healthcare providers to trust AI predictions. These AI techniques reduced processing times by 60% and annotation errors by 75%, setting a new benchmark for efficiency in thoracic diagnostics. The research explored the transformative potential of AI in medical imaging, advancing traditional diagnostics and accelerating medical evaluations in clinical settings.
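As an illustration of the XGBoost + SHAP interpretability step mentioned above, the snippet below fits a gradient-boosted classifier on placeholder feature vectors and extracts per-feature SHAP attributions; the feature matrix and labels are synthetic stand-ins, not the study's data.

```python
# Toy XGBoost + SHAP example: train on stand-in features, explain predictions.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))           # stand-in for extracted image features
y = rng.integers(0, 2, size=500)         # stand-in binary labels (e.g., oedema)

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X, y)

explainer = shap.TreeExplainer(model)    # per-feature attributions for each case
shap_values = explainer.shap_values(X[:10])
print(shap_values.shape)                 # one attribution per feature for each of the 10 cases
```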
Affiliation(s)
- Md Abu Sufian
- IVR Low-Carbon Research Institute, Chang’an University, Xi’an 710018, China;
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Wahiba Hamzi
- Laboratoire de Biotechnologie Santé et Environnement, Department of Biology, University of Blida, Blida 09000, Algeria
- Tazkera Sharifi
- Data Science Architect-Lead Technologist, Booz Allen Hamilton, Texas City, TX 78226, USA
- Sadia Zaman
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Lujain Alsadder
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Esther Lee
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Amir Hakim
- Department of Physiology, Queen Mary University, London E1 4NS, UK
- Boumediene Hamzi
- Department of Computing and Mathematical Sciences, California Institute of Technology (Caltech), Pasadena, CA 91125, USA
- The Alan Turing Institute, London NW1 2DB, UK
- Department of Mathematics, Imperial College London, London SW7 2AZ, UK
- Department of Mathematics, Gulf University for Science and Technology (GUST), Mubarak Al-Abdullah 32093, Kuwait
4. Samarla SK, Maragathavalli P. Ensemble fusion model for improved lung abnormality classification: Leveraging pre-trained models. MethodsX 2024; 12:102640. PMID: 38524306. PMCID: PMC10957444. DOI: 10.1016/j.mex.2024.102640.
Abstract
Lung abnormalities pose significant health concerns, underscoring the need for swift and accurate diagnoses to facilitate timely medical intervention. This study introduces a novel methodology for the sub-classification of lung abnormalities within chest X-rays captured via smartphones. An accurate and timely diagnosis of lung abnormalities is essential for the successful implementation of appropriate therapy. In this paper, we propose a novel approach using a convolutional neural network (CNN) with three maximum pooling layers and early fusion for sub-classifying lung abnormalities from chest X-rays. Based on the kind of abnormality, the CheXpert dataset is divided into 13 sub-classes, each of which is trained using a different sub-model. An early fusion procedure is then used to integrate the outputs of the sub-models.
- 3M-CNN (Method 1): We employed a convolutional neural network with three max pooling layers and an early fusion strategy to train dedicated sub-models for each of the 13 distinct sub-classes of lung abnormalities using the CheXpert dataset.
- Ensemble Model (Method 2): Our ensemble model integrated the outputs of the trained sub-models, providing a powerful approach for the sub-classification of lung abnormalities.
- Exceptional Accuracy: Our 3M-CNN and fused model achieved an accuracy of 98.79%, surpassing established methodologies, which is beneficial in resource-constrained environments embracing smartphone-based imaging.
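A toy sketch of the sub-model and early-fusion arrangement described above follows; the channel counts, binary heads, and fusion-by-concatenation step are illustrative guesses rather than the published 3M-CNN architecture.

```python
# Sketch of a three max-pooling CNN sub-model and an early-fusion combiner.
import torch
import torch.nn as nn

class SubModelCNN(nn.Module):
    """One binary sub-classifier: conv blocks with three max-pooling stages."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class EarlyFusionEnsemble(nn.Module):
    """Fuses the per-abnormality sub-model outputs into one prediction vector."""
    def __init__(self, num_subclasses: int = 13):
        super().__init__()
        self.sub_models = nn.ModuleList(
            [SubModelCNN() for _ in range(num_subclasses)])

    def forward(self, x):
        # Concatenate each sub-model's logit; downstream fusion/ensembling
        # combines these scores into the final sub-class decision.
        return torch.cat([m(x) for m in self.sub_models], dim=1)
```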
Affiliation(s)
- Suresh Kumar Samarla
- Information Technology, Puducherry Technological University, Puducherry, India
- CSE Department, SRKR Engineering College, Andhra Pradesh, India
- Maragathavalli P
- Information Technology, Puducherry Technological University, Puducherry, India
5. Thakur GK, Thakur A, Kulkarni S, Khan N, Khan S. Deep Learning Approaches for Medical Image Analysis and Diagnosis. Cureus 2024; 16:e59507. PMID: 38826977. PMCID: PMC11144045. DOI: 10.7759/cureus.59507.
Abstract
In addition to enhancing diagnostic accuracy, deep learning techniques offer the potential to streamline workflows, reduce interpretation time, and ultimately improve patient outcomes. The scalability and adaptability of deep learning algorithms enable their deployment across diverse clinical settings, ranging from radiology departments to point-of-care facilities. Furthermore, ongoing research efforts focus on addressing the challenges of data heterogeneity, model interpretability, and regulatory compliance, paving the way for seamless integration of deep learning solutions into routine clinical practice. As the field continues to evolve, collaborations between clinicians, data scientists, and industry stakeholders will be paramount in harnessing the full potential of deep learning for advancing medical image analysis and diagnosis. Furthermore, the integration of deep learning algorithms with other technologies, including natural language processing and computer vision, may foster multimodal medical data analysis and clinical decision support systems to improve patient care. The future of deep learning in medical image analysis and diagnosis is promising. With each success and advancement, this technology is getting closer to being leveraged for medical purposes. Beyond medical image analysis, patient care pathways like multimodal imaging, imaging genomics, and intelligent operating rooms or intensive care units can benefit from deep learning models.
Affiliation(s)
- Gopal Kumar Thakur
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Abhishek Thakur
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Shridhar Kulkarni
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Naseebia Khan
- Department of Data Sciences, Harrisburg University of Science and Technology, Harrisburg, USA
- Shahnawaz Khan
- Department of Computer Application, Bundelkhand University, Jhansi, IND
6. Vafaeezadeh M, Behnam H, Gifani P. Ultrasound Image Analysis with Vision Transformers-Review. Diagnostics (Basel) 2024; 14:542. PMID: 38473014. DOI: 10.3390/diagnostics14050542.
Abstract
Ultrasound (US) has become a widely used imaging modality in clinical practice, characterized by its rapidly evolving technology, advantages, and unique challenges, such as low imaging quality and high variability. There is a need to develop advanced automatic US image analysis methods to enhance its diagnostic accuracy and objectivity. Vision transformers, a recent innovation in machine learning, have demonstrated significant potential in various research fields, including general image analysis and computer vision, due to their capacity to process large datasets and learn complex patterns. Their suitability for automatic US image analysis tasks, such as classification, detection, and segmentation, has been recognized. This review provides an introduction to vision transformers and discusses their applications in specific US image analysis tasks, while also addressing the open challenges and potential future trends in their application in medical US image analysis. Vision transformers have shown promise in enhancing the accuracy and efficiency of ultrasound image analysis and are expected to play an increasingly important role in the diagnosis and treatment of medical conditions using ultrasound imaging as technology progresses.
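To make the patch-token idea behind vision transformers concrete, the toy snippet below embeds an ultrasound-sized frame into patch tokens and runs a small transformer encoder over them; all dimensions are arbitrary examples and not tied to any model surveyed in the review.

```python
# Patch embedding + self-attention: the core vision-transformer step.
import torch
import torch.nn as nn

patch_embed = nn.Conv2d(in_channels=1, out_channels=192, kernel_size=16, stride=16)

ultrasound = torch.randn(1, 1, 224, 224)        # one grayscale US frame
tokens = patch_embed(ultrasound)                # (1, 192, 14, 14)
tokens = tokens.flatten(2).transpose(1, 2)      # (1, 196, 192): 196 patch tokens
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=192, nhead=4, batch_first=True),
    num_layers=2)
encoded = encoder(tokens)                       # self-attention over patches
print(encoded.shape)                            # torch.Size([1, 196, 192])
```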
Affiliation(s)
- Majid Vafaeezadeh
- Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology, Tehran 1311416846, Iran
- Hamid Behnam
- Biomedical Engineering Department, School of Electrical Engineering, Iran University of Science and Technology, Tehran 1311416846, Iran
- Parisa Gifani
- Medical Sciences and Technologies Department, Science and Research Branch, Islamic Azad University, Tehran 1477893855, Iran
7. Arslan M, Haider A, Khurshid M, Abu Bakar SSU, Jani R, Masood F, Tahir T, Mitchell K, Panchagnula S, Mandair S. From Pixels to Pathology: Employing Computer Vision to Decode Chest Diseases in Medical Images. Cureus 2023; 15:e45587. PMID: 37868395. PMCID: PMC10587792. DOI: 10.7759/cureus.45587.
Abstract
Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems like picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial junction for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning. These technologies offer advanced tools for radiology's transformation. The radiology community has advanced computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep learning convolutional neural networks (CNNs), for medical image pattern recognition. However, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefits, despite development dating back to the 1990s. This comprehensive review focuses on detecting chest-related diseases through techniques like chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines the utilization of computer-aided programs by researchers for disease detection, addressing key areas: the role of computer-aided programs in disease detection advancement, recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification, research gaps for more effective development, and the incorporation of machine learning programs into diagnostic tools.
Affiliation(s)
- Muhammad Arslan
- Department of Emergency Medicine, Royal Infirmary of Edinburgh, National Health Service (NHS) Lothian, Edinburgh, GBR
- Ali Haider
- Department of Allied Health Sciences, The University of Lahore, Gujrat Campus, Gujrat, PAK
- Mohsin Khurshid
- Department of Microbiology, Government College University Faisalabad, Faisalabad, PAK
- Rutva Jani
- Department of Internal Medicine, C. U. Shah Medical College and Hospital, Gujarat, IND
- Fatima Masood
- Department of Internal Medicine, Gulf Medical University, Ajman, ARE
- Tuba Tahir
- Department of Business Administration, Iqra University, Karachi, PAK
- Kyle Mitchell
- Department of Internal Medicine, University of Science, Arts and Technology, Olveston, MSR
- Smruthi Panchagnula
- Department of Internal Medicine, Ganni Subbalakshmi Lakshmi (GSL) Medical College, Hyderabad, IND
- Satpreet Mandair
- Department of Internal Medicine, Medical University of the Americas, Charlestown, KNA
8. Malik H, Anees T, Al-Shamaylehs AS, Alharthi SZ, Khalil W, Akhunzada A. Deep Learning-Based Classification of Chest Diseases Using X-rays, CT Scans, and Cough Sound Images. Diagnostics (Basel) 2023; 13:2772. PMID: 37685310. PMCID: PMC10486427. DOI: 10.3390/diagnostics13172772.
Abstract
Chest disease refers to a variety of lung disorders, including lung cancer (LC), COVID-19, pneumonia (PNEU), tuberculosis (TB), and numerous other respiratory disorders. The symptoms (i.e., fever, cough, sore throat, etc.) of these chest diseases are similar, which might mislead radiologists and health experts when classifying chest diseases. Chest X-rays (CXR), cough sounds, and computed tomography (CT) scans are utilized by researchers and doctors to identify chest diseases such as LC, COVID-19, PNEU, and TB. The objective of the work is to identify nine different types of chest diseases, including COVID-19, edema (EDE), LC, PNEU, pneumothorax (PNEUTH), normal, atelectasis (ATE), and consolidation lung (COL). Therefore, we designed a novel deep learning (DL)-based chest disease detection network (DCDD_Net) that uses a CXR, CT scans, and cough sound images for the identification of nine different types of chest diseases. The scalogram method is used to convert the cough sounds into an image. Before training the proposed DCDD_Net model, the borderline (BL) SMOTE is applied to balance the CXR, CT scans, and cough sound images of nine chest diseases. The proposed DCDD_Net model is trained and evaluated on 20 publicly available benchmark chest disease datasets of CXR, CT scan, and cough sound images. The classification performance of the DCDD_Net is compared with four baseline models, i.e., InceptionResNet-V2, EfficientNet-B0, DenseNet-201, and Xception, as well as state-of-the-art (SOTA) classifiers. The DCDD_Net achieved an accuracy of 96.67%, a precision of 96.82%, a recall of 95.76%, an F1-score of 95.61%, and an area under the curve (AUC) of 99.43%. The results reveal that DCDD_Net outperformed the other four baseline models in terms of many performance evaluation metrics. Thus, the proposed DCDD_Net model can provide significant assistance to radiologists and medical experts. Additionally, the proposed model was also shown to be resilient by statistical evaluations of the datasets using McNemar and ANOVA tests.
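For the class-balancing step the abstract highlights, the short sketch below applies Borderline-SMOTE to placeholder feature vectors before training; the synthetic arrays stand in for the paper's benchmark datasets and are not its actual pre-processing code.

```python
# Borderline-SMOTE balancing of an imbalanced (flattened) image feature set.
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 64 * 64))                   # flattened image features
y = np.array([0] * 270 + [1] * 30)                    # heavily imbalanced classes

X_balanced, y_balanced = BorderlineSMOTE(random_state=0).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_balanced))  # e.g. [270 30] -> [270 270]
```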
Affiliation(s)
- Hassaan Malik
- School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
- Tayyaba Anees
- School of Systems and Technology, University of Management and Technology, Lahore 54770, Pakistan
- Ahmad Sami Al-Shamaylehs
- Department of Networks and Cybersecurity, Faculty of Information Technology, Al-Ahliyya Amman University, Amman 19328, Jordan
- Salman Z. Alharthi
- Department of Information System, College of Computers and Information Systems, Al-Lith Campus, Umm AL-Qura University, P.O. Box 7745, AL-Lith 21955, Saudi Arabia
- Wajeeha Khalil
- Department of Computer Science and Information Technology, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Adnan Akhunzada
- College of Computing & IT, University of Doha for Science and Technology, Doha P.O. Box 24449, Qatar
9. Arora M, Davis CM, Gowda NR, Foster DG, Mondal A, Coopersmith CM, Kamaleswaran R. Uncertainty-Aware Convolutional Neural Network for Identifying Bilateral Opacities on Chest X-rays: A Tool to Aid Diagnosis of Acute Respiratory Distress Syndrome. Bioengineering (Basel) 2023; 10:946. PMID: 37627831. PMCID: PMC10451804. DOI: 10.3390/bioengineering10080946.
Abstract
Acute Respiratory Distress Syndrome (ARDS) is a severe lung injury with high mortality, primarily characterized by bilateral pulmonary opacities on chest radiographs and hypoxemia. In this work, we trained a convolutional neural network (CNN) model that can reliably identify bilateral opacities on routine chest X-ray images of critically ill patients. We propose this model as a tool to generate predictive alerts for possible ARDS cases, enabling early diagnosis. Our team created a unique dataset of 7800 single-view chest X-ray images, labeled by three blinded clinicians for the presence of bilateral or unilateral pulmonary opacities or as 'equivocal'. We used a novel training technique that enables the CNN to explicitly predict the 'equivocal' class using an uncertainty-aware label smoothing loss. We achieved an Area under the Receiver Operating Characteristic Curve (AUROC) of 0.82 (95% CI: 0.80, 0.85), a precision of 0.75 (95% CI: 0.73, 0.78), and a sensitivity of 0.76 (95% CI: 0.73, 0.78) on the internal test set, and an AUROC of 0.84 (95% CI: 0.81, 0.86), a precision of 0.73 (95% CI: 0.63, 0.69), and a sensitivity of 0.73 (95% CI: 0.70, 0.75) on an external validation set. Further, our results show that this approach improves the model calibration and diagnostic odds ratio of the hypothesized alert tool, making it ideal for clinical decision support systems.
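One plausible reading of an "uncertainty-aware label smoothing loss" is sketched below: equivocal labels receive stronger smoothing than confident bilateral/unilateral labels before the cross-entropy is computed. The class ordering and smoothing values are illustrative assumptions, not the authors' published formulation.

```python
# Per-sample label smoothing: 'equivocal' cases get softer targets.
import torch
import torch.nn.functional as F

CLASSES = ("unilateral", "bilateral", "equivocal")

def uncertainty_aware_loss(logits, targets, eps_confident=0.05, eps_equivocal=0.30):
    """Cross-entropy against per-sample smoothed target distributions."""
    num_classes = logits.size(1)
    eps = torch.where(
        targets == CLASSES.index("equivocal"),
        torch.full_like(targets, eps_equivocal, dtype=torch.float),
        torch.full_like(targets, eps_confident, dtype=torch.float))
    one_hot = F.one_hot(targets, num_classes).float()
    smoothed = one_hot * (1.0 - eps.unsqueeze(1)) + eps.unsqueeze(1) / num_classes
    return -(smoothed * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

logits = torch.randn(4, 3)
targets = torch.tensor([0, 1, 2, 2])   # the last two examples are 'equivocal'
print(uncertainty_aware_loss(logits, targets))
```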
Affiliation(s)
- Mehak Arora
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA
- Carolyn M. Davis
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
- Niraj R. Gowda
- Division of Pulmonary, Critical Care, Allergy and Sleep Medicine, Emory University School of Medicine, Atlanta, GA 30332, USA
- Dennis G. Foster
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA
- Angana Mondal
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA
- Craig M. Coopersmith
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
- Rishikesan Kamaleswaran
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA