1
Liang Z, Xue Z, Rajaraman S, Antani S. Automated quantification of SARS-CoV-2 pneumonia with large vision model knowledge adaptation. New Microbes New Infect 2024; 62:101457. PMID: 39253407; PMCID: PMC11381763; DOI: 10.1016/j.nmni.2024.101457.
Abstract
Background: Large vision models (LVMs) pretrained on large datasets have demonstrated an enormous capacity to understand visual patterns and capture semantic information from images. We propose a novel knowledge-domain-adaptation method that uses a pretrained LVM to build a low-cost artificial intelligence (AI) model for quantifying the severity of SARS-CoV-2 pneumonia from frontal chest X-ray (CXR) images. Methods: Our method uses the pretrained LVM as the primary feature extractor and self-supervised contrastive learning for domain adaptation. An encoder with a 2048-dimensional feature-vector output was first trained by self-supervised learning for knowledge domain adaptation; a multi-layer perceptron (MLP) was then trained for the final severity prediction. A dataset of 2599 CXR images was used for model training and evaluation. Results: The model based on a pretrained vision transformer (ViT) and self-supervised learning achieved the best cross-validation performance, with a mean squared error (MSE) of 23.83 (95% CI 22.67-25.00) and a mean absolute error (MAE) of 3.64 (95% CI 3.54-3.73). Its predictions correlate with expert scores with an R² of 0.81 (95% CI 0.79-0.82) and a Spearman ρ of 0.80 (95% CI 0.77-0.81), comparable to current state-of-the-art (SOTA) methods trained on much larger CXR datasets. Conclusion: The proposed method achieves SOTA performance in quantifying the severity of SARS-CoV-2 pneumonia at a significantly lower cost, and can be extended to the detection or quantification of other infectious diseases to expedite the application of AI in medical research.
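The regression metrics this abstract reports (MSE, MAE, Spearman ρ) can be reproduced from predicted and expert-annotated severity scores; a minimal pure-Python sketch (function names are illustrative, not from the paper's code; the rank helper ignores ties, which would need average ranks):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error between expert and predicted severity scores."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def _ranks(xs):
    # Rank positions (no tie handling; ties would need average ranks).
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r
    return ranks

def spearman(y_true, y_pred):
    """Spearman rho = Pearson correlation of the rank vectors."""
    ra, rb = _ranks(y_true), _ranks(y_pred)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ra, rb))
    va = sum((a - ma) ** 2 for a in ra)
    vb = sum((b - mb) ** 2 for b in rb)
    return cov / math.sqrt(va * vb)
```

For example, with `y_true = [1, 2, 3, 4]` and `y_pred = [1.1, 1.9, 3.2, 3.8]`, `mse` gives ≈0.025, `mae` gives ≈0.15, and `spearman` gives 1.0 because the ordering is preserved.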
Affiliation(s)
- Zhaohui Liang: Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue: Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sivaramakrishnan Rajaraman: Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani: Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
2
Yang T, Zhang L, Sun S, Yao X, Wang L, Ge Y. Identifying severe community-acquired pneumonia using radiomics and clinical data: a machine learning approach. Sci Rep 2024; 14:21884. PMID: 39300101; DOI: 10.1038/s41598-024-72310-5.
Abstract
Evaluating community-acquired pneumonia (CAP) is crucial for determining appropriate treatment. In this study, we built a machine learning model using radiomic and clinical features to rapidly and accurately identify severe community-acquired pneumonia (SCAP). A total of 174 CAP patients were included, of whom 64 were classified as SCAP. Radiomic features were extracted from chest CT scans and screened to remove irrelevant features; clinical indicators were screened in the same way to form the clinical feature set. Eight common machine learning models were then trained for the SCAP identification task, and interpretability analyses were conducted on the models. We ultimately selected 15 radiomic features (such as LeastAxisLength, Maximum2DDiameterColumn, and ZonePercentage) and two clinical features: lymphocyte count (p = 0.041) and albumin (p = 0.044). Using radiomic features alone as model inputs yielded the highest test-set AUC of 0.85; clinical features alone yielded an AUC of 0.82. With both feature sets combined as inputs, AdaBoost achieved the best performance, with an AUC of 0.89. Our study demonstrates that combining radiomics and clinical data with machine learning methods can identify SCAP patients more accurately.
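The AUC figures compared in this abstract can be computed without any library via the rank (Mann-Whitney) formulation of AUC; a small sketch (the example data are hypothetical):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive outscores a
    randomly chosen negative (ties count half): the Mann-Whitney formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Combining feature sets, as described above, amounts to concatenating the
# vectors before training: combined = radiomic_features + clinical_features.
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, since three of the four positive/negative pairs are correctly ordered.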
Affiliation(s)
- Tianning Yang: College of Science, North China University of Science and Technology, Tangshan, Hebei, China
- Ling Zhang: Department of Respiratory Medicine, Affiliated Hospital, North China University of Science and Technology, Tangshan, Hebei, China
- Siyi Sun: Department of Respiratory Medicine, Affiliated Hospital, North China University of Science and Technology, Tangshan, Hebei, China
- Xuexin Yao: Department of Respiratory Medicine, Affiliated Hospital, North China University of Science and Technology, Tangshan, Hebei, China
- Lichuan Wang: College of Science, North China University of Science and Technology, Tangshan, Hebei, China
- Yanlei Ge: Department of Respiratory Medicine, Affiliated Hospital, North China University of Science and Technology, Tangshan, Hebei, China
3
Yu K, Ghosh S, Liu Z, Deible C, Poynton CB, Batmanghelich K. Anatomy-specific progression classification in chest radiographs via weakly supervised learning. Radiol Artif Intell 2024; 6:e230277. PMID: 39046325; PMCID: PMC11427915; DOI: 10.1148/ryai.230277.
Abstract
Purpose: To develop a machine learning approach for classifying disease progression in chest radiographs using weak labels automatically derived from radiology reports. Materials and Methods: In this retrospective study, a twin neural network was developed to classify anatomy-specific disease progression into four categories: improved, unchanged, worsened, and new. A two-step weakly supervised learning approach was employed: the model was pretrained on 243,008 frontal chest radiographs from 63,877 patients (mean age, 51.7 years ± 17.0 [SD]; 34,813 [55%] female) in the MIMIC-CXR database and fine-tuned on the subset with progression labels derived from consecutive studies. Model performance was evaluated for six pathologic observations on test datasets of unseen patients from the MIMIC-CXR database. Area under the receiver operating characteristic curve (AUC) analysis was used to evaluate classification performance. The algorithm can also generate bounding-box predictions to localize areas of new progression; recall, precision, and mean average precision were used to evaluate this localization. One-tailed paired t tests were used to assess statistical significance. Results: The model outperformed most baselines in progression classification, achieving macro AUC scores of 0.72 ± 0.004 for atelectasis, 0.75 ± 0.007 for consolidation, 0.76 ± 0.017 for edema, 0.81 ± 0.006 for effusion, 0.70 ± 0.032 for pneumonia, and 0.69 ± 0.01 for pneumothorax. For new-observation localization, the model achieved mean average precision scores of 0.25 ± 0.03 for atelectasis, 0.34 ± 0.03 for consolidation, 0.33 ± 0.03 for edema, and 0.31 ± 0.03 for pneumothorax. Conclusion: Disease progression classification models were developed on a large chest radiograph dataset and can be used to monitor interval changes and detect new pathologic conditions on chest radiographs.
Keywords: Prognosis, Unsupervised Learning, Transfer Learning, Convolutional Neural Network (CNN), Emergency Radiology, Named Entity Recognition. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Alves and Venkadesh in this issue.
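The macro AUC scores reported above are, in the usual definition, the unweighted mean of per-class one-vs-rest AUCs across the four progression categories; a minimal sketch under that assumption (the encoding of classes and scores is hypothetical):

```python
def one_vs_rest_auc(labels, scores, cls):
    """AUC treating class `cls` as positive and all other classes as negative."""
    pos = [s for y, s in zip(labels, scores) if y == cls]
    neg = [s for y, s in zip(labels, scores) if y != cls]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(labels, prob_rows):
    """Unweighted mean of one-vs-rest AUCs; prob_rows[i][c] is the model's
    score for sample i belonging to class c (e.g. the four progression
    categories improved/unchanged/worsened/new encoded as 0-3)."""
    classes = sorted(set(labels))
    return sum(
        one_vs_rest_auc(labels, [row[c] for row in prob_rows], c)
        for c in classes
    ) / len(classes)
```

With perfectly confident one-hot scores for four samples of four distinct classes, `macro_auc` returns 1.0.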
Affiliation(s)
- Ke Yu and Zhexiong Liu: School of Computing and Information, University of Pittsburgh, Pittsburgh, Pa
- Shantanu Ghosh and Kayhan Batmanghelich: Department of Electrical and Computer Engineering, Boston University, 8 St. Mary's St, Office 421, Boston, MA 02215
- Christopher Deible: Department of Radiology, University of Pittsburgh, Pittsburgh, Pa
- Clare B. Poynton: Chobanian & Avedisian School of Medicine, Boston University, Boston, Mass
4
Kantipudi K, Gu J, Bui V, Yu H, Jaeger S, Yaniv Z. Automated pulmonary tuberculosis severity assessment on chest X-rays. J Imaging Inform Med 2024. PMID: 38587769; DOI: 10.1007/s10278-024-01052-7.
Abstract
According to the World Health Organization's 2022 Global Tuberculosis (TB) report, an estimated 10.6 million people fell ill with TB and 1.6 million died from the disease in 2021. In addition, 2021 saw a reversal of a decades-long decline in TB infections and deaths, with an estimated 4.5% increase in the number of people who fell ill with TB compared to 2020 and an estimated yearly increase of 450,000 cases of drug-resistant TB. Estimating the severity of pulmonary TB from frontal chest X-rays (CXRs) can enable better resource allocation in resource-constrained settings and monitoring of treatment response, allowing prompt treatment modifications if disease severity does not decrease over time. The Timika score is a clinically used TB severity score based on a CXR reading. This work proposes and evaluates three deep learning-based approaches for predicting the Timika score with varying levels of explainability. The first approach uses two deep learning-based models, one to explicitly detect lesion regions using YOLOv5n and another to predict the presence of cavitation using DenseNet121, whose outputs are then used in the score calculation. The second approach uses a DenseNet121-based regression model to directly predict the affected lung percentage and a DenseNet121-based classification model to predict cavitation presence. The third approach directly predicts the Timika score with a DenseNet121-based regression model. The best performance is achieved by the second approach, with a mean absolute error of 13-14% and a Pearson correlation of 0.7-0.84 across three held-out datasets used to evaluate generalization.
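As commonly defined in the literature (Ralph et al.), the Timika score is the percentage of visible lung affected plus 40 points if cavitation is present, giving a 0-140 range; assuming that definition, the second approach's two model outputs would feed into a calculation like this sketch:

```python
def timika_score(percent_affected: float, has_cavitation: bool) -> float:
    """Timika CXR severity score: affected-lung percentage (0-100)
    plus a 40-point addition when cavitation is present (range 0-140)."""
    if not 0.0 <= percent_affected <= 100.0:
        raise ValueError("percent_affected must be in [0, 100]")
    return percent_affected + (40.0 if has_cavitation else 0.0)
```

For example, 30% affected lung with cavitation scores 70, while 55% affected lung without cavitation scores 55.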
Affiliation(s)
- Karthik Kantipudi: Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Jingwen Gu: Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
- Vy Bui: Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Hang Yu: Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Stefan Jaeger: Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD 20894, USA
- Ziv Yaniv: Office of Cyber Infrastructure and Computational Biology, National Institute of Allergy and Infectious Diseases, Bethesda, MD 20892, USA
5
Rahayu DRP, Rusli M, Bramantono B, Widyoningroem A. Association between chest X-ray score and clinical outcome in COVID-19 patients: a study on the modified radiographic assessment of lung edema score (mRALE) in Indonesia. Narra J 2024; 4:e691. PMID: 38798849; PMCID: PMC11125424; DOI: 10.52225/narra.v4i1.691.
Abstract
Radiological examinations such as chest X-rays (CXRs) play a crucial role in early diagnosis and in determining disease severity in coronavirus disease 2019 (COVID-19). Various CXR scoring systems have been developed to quantitatively assess lung abnormalities in COVID-19 patients, including the modified radiographic assessment of lung edema (mRALE). The aim of this study was to determine the relationship between mRALE scores and clinical outcome (mortality), and to identify the correlation between mRALE score and the severity of hypoxia (PaO2/FiO2 ratio). A retrospective cohort study was conducted among hospitalized COVID-19 patients at Dr. Soetomo General Academic Hospital, Surabaya, Indonesia, from February to April 2022. All CXRs at initial admission were scored using the mRALE system, and clinical outcomes at the end of hospitalization were recorded. Of 178 COVID-19 patients, 62.9% survived after completing treatment. Non-surviving patients had significantly higher quick sequential organ failure assessment (qSOFA) scores (p<0.001), lower PaO2/FiO2 ratios (p=0.004), and higher blood urea nitrogen (p<0.001), serum creatinine (p<0.008), and serum glutamic oxaloacetic transaminase (p=0.001) levels. There was a significant relationship between mRALE score and clinical outcome (survived vs deceased) (p=0.024; contingency coefficient 0.184), and an mRALE score of ≥2.5 was a risk factor for mortality among COVID-19 patients (relative risk 1.624). There was a significant negative correlation between mRALE score and PaO2/FiO2 ratio on the Spearman correlation test (r=-0.346; p<0.001). These findings highlight that the initial mRALE score may serve as an independent predictor of mortality among hospitalized COVID-19 patients, demonstrating its potential prognostic role in COVID-19 management.
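The reported relative risk of 1.624 for mRALE ≥ 2.5 follows the standard 2×2-table risk-ratio definition; a sketch of the computation (the example counts are hypothetical, not the study's data):

```python
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Risk ratio: mortality incidence in the exposed group (e.g. mRALE >= 2.5)
    divided by incidence in the unexposed group (mRALE < 2.5)."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed
```

For example, 20 deaths among 40 high-score patients versus 10 deaths among 40 low-score patients gives a relative risk of 2.0.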
Affiliation(s)
- Dwi RP. Rahayu: Department of Internal Medicine, Faculty of Medicine, Universitas Airlangga, Surabaya, Indonesia; Department of Internal Medicine, Dr. Soetomo General Academic Hospital, Surabaya, Indonesia
- Musofa Rusli: Division of Tropical Medicine and Infectious Disease, Department of Internal Medicine, Faculty of Medicine, Universitas Airlangga, Surabaya, Indonesia; Division of Tropical Medicine and Infectious Disease, Dr. Soetomo General Academic Hospital, Surabaya, Indonesia
- Bramantono Bramantono: Division of Tropical Medicine and Infectious Disease, Department of Internal Medicine, Faculty of Medicine, Universitas Airlangga, Surabaya, Indonesia; Division of Tropical Medicine and Infectious Disease, Dr. Soetomo General Academic Hospital, Surabaya, Indonesia
- Anita Widyoningroem: Department of Radiology, Faculty of Medicine, Universitas Airlangga, Surabaya, Indonesia; Department of Radiology, Dr. Soetomo General Academic Hospital, Surabaya, Indonesia
6
Egemen D, Perkins RB, Cheung LC, Befano B, Rodriguez AC, Desai K, Lemay A, Ahmed SR, Antani S, Jeronimo J, Wentzensen N, Kalpathy-Cramer J, De Sanjose S, Schiffman M. Artificial intelligence-based image analysis in clinical testing: lessons from cervical cancer screening. J Natl Cancer Inst 2024; 116:26-33. PMID: 37758250; PMCID: PMC10777665; DOI: 10.1093/jnci/djad202.
Abstract
Novel screening and diagnostic tests based on artificial intelligence (AI) image recognition algorithms are proliferating. Some initial reports claim outstanding accuracy that is followed by a disappointing lack of confirmation, including our own early work on cervical screening. This is a presentation of lessons learned, organized as a conceptual step-by-step approach to bridge the gap between the creation of an AI algorithm and clinical efficacy. The first fundamental principle is specifying rigorously what the algorithm is designed to identify and what the test is intended to measure (eg, screening, diagnostic, or prognostic). The second is designing the AI algorithm to minimize the most clinically important errors. For example, many equivocal cervical images cannot yet be labeled because the borderline between cases and controls is blurred. To avoid a misclassified case-control dichotomy, we have isolated the equivocal cases and formally included an intermediate, indeterminate class (severity order of classes: case > indeterminate > control). The third principle is evaluating AI algorithms like any other test, using clinical epidemiologic criteria. Repeatability of the algorithm at the borderline, for indeterminate images, has proven extremely informative, and distinguishing between internal and external validation is essential. Linking the AI algorithm results to clinical risk estimation is the fourth principle: absolute risk (not relative) is the critical metric for translating a test result into clinical use. Finally, generating risk-based guidelines for clinical use that match local resources and priorities is the last principle in our approach. We are particularly interested in applications to lower-resource settings to address health disparities. We note that similar principles apply to other domains of AI-based image analysis for medical diagnostic testing.
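The fourth principle, linking the algorithm result to absolute (not relative) risk, amounts in its simplest form to estimating the outcome proportion within each test-result stratum; a minimal sketch (the stratum labels and counts are hypothetical illustrations, not the authors' data):

```python
def absolute_risk_by_stratum(counts):
    """counts maps each test-result stratum (e.g. the severity-ordered
    classes case > indeterminate > control) to (n_cases, n_controls);
    returns the absolute risk of disease within each stratum."""
    return {stratum: cases / (cases + controls)
            for stratum, (cases, controls) in counts.items()}
```

For example, strata with (30, 10), (10, 30), and (2, 98) case/control counts yield absolute risks of 0.75, 0.25, and 0.02, which is the kind of stratum-level risk that risk-based guidelines would then act on.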
Affiliation(s)
- Didem Egemen: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Rebecca B Perkins: Department of Obstetrics and Gynecology, Boston Medical Center/Boston University School of Medicine, Boston, MA, USA
- Li C Cheung: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Brian Befano: Information Management Services Inc, Calverton, MD, USA; Department of Epidemiology, School of Public Health, University of Washington, Seattle, WA, USA
- Ana Cecilia Rodriguez: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Kanan Desai: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Andreanne Lemay: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Syed Rakin Ahmed: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA; Harvard Graduate Program in Biophysics, Harvard Medical School, Harvard University, Cambridge, MA, USA; Massachusetts Institute of Technology, Cambridge, MA, USA; Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA
- Sameer Antani: National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Jose Jeronimo: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Nicolas Wentzensen: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Jayashree Kalpathy-Cramer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Silvia De Sanjose: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA; ISGlobal, Barcelona, Spain
- Mark Schiffman: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
7
Henao JAG, Depotter A, Bower DV, Bajercius H, Todorova PT, Saint-James H, de Mortanges AP, Barroso MC, He J, Yang J, You C, Staib LH, Gange C, Ledda RE, Caminiti C, Silva M, Cortopassi IO, Dela Cruz CS, Hautz W, Bonel HM, Sverzellati N, Duncan JS, Reyes M, Poellinger A. A multiclass radiomics method-based WHO severity scale for improving COVID-19 patient assessment and disease characterization from CT scans. Invest Radiol 2023; 58:882-893. PMID: 37493348; PMCID: PMC10662611; DOI: 10.1097/rli.0000000000001005.
Abstract
Objectives: The aim of this study was to evaluate the severity of COVID-19 patients' disease by comparing a multiclass lung lesion model to a single-class lung lesion model and to radiologists' assessments of chest computed tomography scans. Materials and Methods: The proposed method, AssessNet-19, was developed in two stages in this retrospective study. Four COVID-19-induced tissue lesions were manually segmented to train a 2D U-Net for multiclass segmentation, followed by extensive extraction of radiomic features from the lung lesions. LASSO regression was used to reduce the feature set, and the XGBoost algorithm was trained to classify disease severity based on the World Health Organization Clinical Progression Scale. The model was evaluated using two multicenter cohorts: a development cohort of 145 COVID-19-positive patients from three centers, used to train and test the severity prediction model with manually segmented lung lesions, and an evaluation set of 90 COVID-19-positive patients from two centers, used to evaluate AssessNet-19 in a fully automated fashion. Results: AssessNet-19 achieved an F1-score of 0.76 ± 0.02 for severity classification in the evaluation set, superior to both the three expert thoracic radiologists (F1 = 0.63 ± 0.02) and the single-class lesion segmentation model (F1 = 0.64 ± 0.02). Its automated multiclass lesion segmentation obtained mean Dice scores of 0.70 for ground-glass opacity, 0.68 for consolidation, 0.65 for pleural effusion, and 0.30 for band-like structures compared with ground truth, and it achieved high agreement with radiologists for quantifying disease extent, with Cohen κ of 0.94, 0.92, and 0.95. Conclusions: A novel artificial-intelligence multiclass radiomics model covering four lung lesion types assesses disease severity on the World Health Organization Clinical Progression Scale more accurately than a single-class model and radiologists' assessments.
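The per-lesion Dice scores quoted above use the standard overlap definition; a sketch over flattened 0/1 masks (the masks here are tiny hypothetical examples):

```python
def dice_score(mask_a, mask_b):
    """Dice = 2|A intersect B| / (|A| + |B|) over flattened 0/1 masks,
    e.g. predicted vs. manually segmented lesion pixels. Two empty masks
    are treated as perfect agreement."""
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * intersection / total if total else 1.0
```

For example, masks `[1, 1, 0, 0]` and `[1, 0, 0, 0]` overlap in one pixel out of three labeled pixels, giving a Dice of 2/3.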
8
Kelly BS, Mathur P, Plesniar J, Lawlor A, Killeen RP. Using deep learning-derived image features in radiologic time series to make personalised predictions: proof of concept in colonic transit data. Eur Radiol 2023; 33:8376-8386. PMID: 37284869; PMCID: PMC10244854; DOI: 10.1007/s00330-023-09769-9.
Abstract
Objectives: Siamese neural networks (SNNs) were used to classify the presence of radiopaque beads as part of a colonic transit time study (CTS). The SNN output was then used as a feature in a time-series model to predict progression through a CTS. Methods: This retrospective study included all patients undergoing a CTS in a single institution from 2010 to 2020. Data were partitioned in an 80/20 train/test split. Deep learning models based on an SNN architecture were trained and tested to classify images according to the presence, absence, and number of radiopaque beads and to output the Euclidean distance between the feature representations of the input images. Time-series models were used to predict the total duration of the study. Results: In total, 568 images of 229 patients (143 [62%] female, mean age 57) were included. For classifying the presence of beads, the best performing model (Siamese DenseNet trained with a contrastive loss with unfrozen weights) achieved an accuracy, precision, and recall of 0.988, 0.986, and 1. A Gaussian process regressor (GPR) trained on the outputs of the SNN outperformed both a GPR using only the number of beads and basic statistical exponential curve fitting, with an MAE of 0.9 days compared to 2.3 and 6.3 days (p < 0.05), respectively. Conclusions: SNNs perform well at identifying radiopaque beads in CTS. For time-series prediction, our methods were superior to statistical models at identifying progression through the time series, enabling more accurate personalised predictions. Clinical relevance statement: Our radiologic time-series model has potential clinical application in use cases where change assessment is critical (e.g. nodule surveillance, cancer treatment response, and screening programmes) by quantifying change and using it to make more personalised predictions. Key points:
• Time-series methods have improved, but their application to radiology lags behind computer vision.
• Colonic transit studies are a simple radiologic time series measuring function through serial radiographs.
• We successfully employed a Siamese neural network (SNN) to compare radiographs from different points in time, then used the SNN output as a feature in a Gaussian process regression model to predict progression through the time series.
• This novel use of features derived from a neural network on medical imaging data has potential clinical application in more complex use cases where change assessment is critical, such as oncologic imaging, monitoring for treatment response, and screening programmes.
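The SNN output used as a feature here, the Euclidean distance between the two embeddings, and a Hadsell-style contrastive loss of the kind used to train such twin networks can be sketched as follows (the margin value is illustrative, not from the paper):

```python
import math

def euclidean_distance(u, v):
    """Distance between the two embedding vectors the twin network produces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(u, v, same_pair, margin=1.0):
    """Pull matched pairs together (d^2) and push mismatched pairs
    at least `margin` apart (hinge on the distance)."""
    d = euclidean_distance(u, v)
    return d * d if same_pair else max(0.0, margin - d) ** 2
```

For example, embeddings `[0, 0]` and `[3, 4]` are 5.0 apart; as a mismatched pair that already exceeds the margin, they contribute zero loss.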
Affiliation(s)
- Brendan S Kelly: Department of Radiology, St Vincent's University Hospital, Dublin, Ireland; Insight Centre for Data Analytics, UCD, Dublin, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
- Jan Plesniar: School of Medicine, University College Dublin, Dublin, Ireland
- Ronan P Killeen: Department of Radiology, St Vincent's University Hospital, Dublin, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
9
Shenouda M, Flerlage I, Kaveti A, Giger ML, Armato SG. Assessment of a deep learning model for COVID-19 classification on chest radiographs: a comparison across image acquisition techniques and clinical factors. J Med Imaging (Bellingham) 2023; 10:064504. PMID: 38162317; PMCID: PMC10753846; DOI: 10.1117/1.jmi.10.6.064504.
Abstract
Purpose: To assess the performance of a pretrained deep learning model in classifying COVID-positive versus COVID-negative patients from chest radiographs (CXRs) while considering various image acquisition parameters, clinical factors, and patient demographics. Methods: Standard and soft-tissue CXRs of 9860 patients comprised the "original dataset" (training and test sets), used to train a DenseNet-121 model to classify COVID-19 using three classification algorithms: standard images, soft-tissue images, and a combination of both via feature fusion. A larger, more current test set of 5893 patients (the "current test set") was used to assess the performance of the pretrained model; it spanned a larger range of dates and incorporated different virus variants and immunization statuses. Model performance on the original and current test sets was compared using the area under the receiver operating characteristic curve (ROC AUC) [95% CI]. Results: The model achieved AUC values of 0.67 [0.65, 0.70] for cropped standard images, 0.65 [0.63, 0.67] for cropped soft-tissue images, and 0.67 [0.65, 0.69] for both types of cropped images, all significantly lower than its performance on the original test set. Investigations into matching acquisition dates between the test sets (i.e., controlling for virus variants), immunization status, disease severity, and age and sex distributions did not fully explain the discrepancy. Conclusions: Several relevant factors were considered to determine whether differences existed between the test sets, including time period of image acquisition, vaccination status, and disease severity. The lower performance on the current test set may reflect model overfitting and a lack of generalizability.
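The bracketed 95% CIs attached to each AUC above can be obtained in several ways; one common approach is a percentile bootstrap over resampled patients, sketched here (the resampling scheme is an assumption, not necessarily the authors' method):

```python
import random

def roc_auc(labels, scores):
    """Mann-Whitney formulation of AUC."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_ci(labels, scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap (1 - alpha) confidence interval for AUC."""
    rng = random.Random(seed)
    n = len(labels)
    aucs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        ss = [scores[i] for i in idx]
        if len(set(ys)) < 2:        # resample must contain both classes
            continue
        aucs.append(roc_auc(ys, ss))
    aucs.sort()
    lo = aucs[int(alpha / 2 * len(aucs))]
    hi = aucs[int((1 - alpha / 2) * len(aucs)) - 1]
    return lo, hi
```

With perfectly separated scores, every valid resample has AUC 1.0, so the interval collapses to (1.0, 1.0).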
Affiliation(s)
- Mena Shenouda: The University of Chicago, Committee on Medical Physics, Department of Radiology, Chicago, Illinois, United States
- Aditi Kaveti: Stony Brook University, Stony Brook, New York, United States
- Maryellen L. Giger: The University of Chicago, Committee on Medical Physics, Department of Radiology, Chicago, Illinois, United States
- Samuel G. Armato: The University of Chicago, Committee on Medical Physics, Department of Radiology, Chicago, Illinois, United States
10
Liang Z, Xue Z, Rajaraman S, Feng Y, Antani S. Automatic quantification of COVID-19 pulmonary edema by self-supervised contrastive learning. In: Medical Image Learning with Limited and Noisy Data: Second International Workshop, MILLanD 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings. 2023; 14307:128-137. PMID: 38415180; PMCID: PMC10896252; DOI: 10.1007/978-3-031-44917-8_12.
Abstract
We proposed a self-supervised machine learning method to automatically rate the severity of pulmonary edema in frontal chest X-ray radiographs (CXRs), which can be related to COVID-19 viral pneumonia. For this we used the modified radiographic assessment of lung edema (mRALE) scoring system. The new model was first optimized with the simple Siamese network (SimSiam) architecture, where a ResNet-50 pretrained on the ImageNet database was used as the backbone. The encoder projected a 2048-dimensional embedding as representation features to a downstream fully connected deep neural network for mRALE score prediction. A 5-fold cross-validation with 2,599 frontal CXRs was used to examine the new model's performance in comparison with a non-pretrained SimSiam encoder and a ResNet-50 trained from scratch. The mean absolute error (MAE) of the new model is 5.05 (95% CI 5.03-5.08), the mean squared error (MSE) is 66.67 (95% CI 66.29-67.06), and the Spearman's correlation coefficient (Spearman ρ) with the expert-annotated scores is 0.77 (95% CI 0.75-0.79). All the performance metrics of the new model are superior to those of the two comparators (P<0.01), while the MSE and Spearman ρ of the two comparators show no statistical difference (P>0.05). The model also achieved a prediction probability concordance of 0.811 and a quadratic weighted kappa of 0.739 with the medical expert annotations in external validation. We conclude that self-supervised contrastive learning is an effective strategy for automated mRALE scoring. It provides a new approach to improve machine learning performance and minimize expert involvement in quantitative medical image pattern learning.
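The SimSiam objective underlying this approach can be sketched in NumPy (a simplified, non-trainable illustration; in the actual method the projections and predictions come from a ResNet-50 encoder plus an MLP predictor, and the stop-gradient is handled by the autodiff framework):

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity between predictor output p and
    projection z; z is treated as a constant (stop-gradient)."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def simsiam_loss(p1, p2, z1, z2):
    """Symmetrized SimSiam loss over two augmented views of one image:
    each view's prediction is matched to the other view's projection."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)

# Perfectly aligned views reach the minimum loss of -1.
v = np.ones(2048)
loss_min = simsiam_loss(v, v, v, v)
```

Minimizing this loss pulls the two views' representations together without requiring any labels, which is what lets the encoder adapt to CXRs before the supervised scoring head is trained.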
Affiliation(s)
- Zhaohui Liang
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Yang Feng
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
11
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023; 47:301-320. [PMID: 38625639 DOI: 10.1080/03091902.2024.2321846] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 02/18/2024] [Indexed: 04/17/2024]
Abstract
Researchers and scientists can use computational models to turn linked data into useful information, aiding in disease diagnosis, examination, and viral containment, thanks to recent artificial intelligence and machine learning breakthroughs. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. In this regard, we examine a large number of critical studies conducted by various academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions for different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can have an impact in this context, including diagnosis (using chest X-ray imaging and CT imaging), severity assessment, tracking, treatment, and the drug industry. Furthermore, we analyse the limitations, restrictions, and hazards of these approaches.
Affiliation(s)
- Naser Zaeri
- Faculty of Computer Studies, Arab Open University, Kuwait
12
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. [PMID: 37608852 PMCID: PMC10440543 DOI: 10.1117/1.jmi.10.4.044504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 07/12/2023] [Accepted: 08/01/2023] [Indexed: 08/24/2023] Open
Abstract
Purpose Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performances were evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance, using predictions based on the AI prognostic marker derived from CXR images. Conclusions This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
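The sequential transfer learning scheme (stages 1-3 above) amounts to carrying one model's weights through a chain of fine-tuning tasks. A minimal, framework-agnostic sketch, with dummy stand-ins for the model and training routine (neither is from the paper):

```python
def sequential_transfer(model, stages, train_fn):
    """Fine-tune the same model on a series of increasingly specific
    tasks, carrying the learned weights forward at each stage."""
    for task_name, dataset in stages:
        model = train_fn(model, task_name, dataset)  # returns updated model
    return model

# Dummy stand-ins: the "model" is a list recording what it was trained on.
def dummy_train(model, task, data):
    return model + [task]

stages = [
    ("broad CXR pathologies", "dataset A"),
    ("pneumonia detection", "dataset B"),
    ("ICU-need prediction", "in-house data"),
]
final = sequential_transfer([], stages, dummy_train)
```

In practice each `train_fn` call would be a fine-tuning run of the DenseNet121 (or any backbone) on the stage's dataset; the ordering from broad to specific tasks is the point of the technique.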
Affiliation(s)
- Hui Li
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Qiyuan Hu
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Heather M. Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Jordan D. Fuhrman
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L. Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
13
Yoo SJ, Kim H, Witanto JN, Inui S, Yoon JH, Lee KD, Choi YW, Goo JM, Yoon SH. Generative adversarial network for automatic quantification of Coronavirus disease 2019 pneumonia on chest radiographs. Eur J Radiol 2023; 164:110858. [PMID: 37209462 DOI: 10.1016/j.ejrad.2023.110858] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 04/10/2023] [Accepted: 04/29/2023] [Indexed: 05/22/2023]
Abstract
PURPOSE To develop a generative adversarial network (GAN) to quantify COVID-19 pneumonia on chest radiographs automatically. MATERIALS AND METHODS This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans in 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from the whole, segmented lung, and pneumonia pixels of each CT scan. Two GANs were sequentially trained to generate lung images from radiographs and to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and quantitative CT-driven pneumonia extent (four datasets, n = 54-375), and analyzed the measurement difference between the GAN- and CT-driven extents. Three datasets (n = 243-1481), in which unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78% of patients, respectively, were used to examine the predictive power of GAN-driven pneumonia extent. RESULTS GAN-driven radiographic pneumonia extent was correlated with the severity score (0.611) and the CT-driven extent (0.640). The 95% limits of agreement between GAN- and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively. CONCLUSION The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.
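Once the GANs have produced lung and pneumonia images, the pneumonia extent itself is a simple ratio of areas. A sketch with toy binary masks (the 4x4 arrays are made up for illustration):

```python
import numpy as np

def pneumonia_extent(pneumonia_mask, lung_mask):
    """Pneumonia extent in percent: pneumonia pixels inside the lung
    divided by total lung pixels, ranging from 0% to 100%."""
    lung_area = lung_mask.sum()
    if lung_area == 0:
        return 0.0
    overlap = np.logical_and(pneumonia_mask, lung_mask).sum()
    return 100.0 * float(overlap) / float(lung_area)

lung = np.ones((4, 4), dtype=bool)       # whole toy field is lung
pneumonia = np.zeros((4, 4), dtype=bool)
pneumonia[:2, :2] = True                 # 4 of 16 lung pixels affected
extent = pneumonia_extent(pneumonia, lung)
```

In the study the masks come from GAN-generated images rather than explicit segmentations, but the quantity reported is this percentage.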
Affiliation(s)
- Seung-Jin Yoo
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Hyungjin Kim
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
- Shohei Inui
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Radiology, Japan Self-Defense Forces Central Hospital, Tokyo, Japan
- Jeong-Hwa Yoon
- Institute of Health Policy and Management, Medical Research Center, Seoul National University, Seoul, South Korea
- Ki-Deok Lee
- Division of Infectious Diseases, Department of Internal Medicine, Myongji Hospital, Goyang, Korea
- Yo Won Choi
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; MEDICALIP Co. Ltd., Seoul, Korea
14
Rahman T, Chowdhury MEH, Khandakar A, Mahbub ZB, Hossain MSA, Alhatou A, Abdalla E, Muthiyal S, Islam KF, Kashem SBA, Khan MS, Zughaier SM, Hossain M. BIO-CXRNET: a robust multimodal stacking machine learning technique for mortality risk prediction of COVID-19 patients using chest X-ray images and clinical data. Neural Comput Appl 2023; 35:1-23. [PMID: 37362565 PMCID: PMC10157130 DOI: 10.1007/s00521-023-08606-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Accepted: 04/11/2023] [Indexed: 06/28/2023]
Abstract
Nowadays, quick and accurate diagnosis of COVID-19 is a pressing need. This study presents a multimodal system to meet this need. The presented system employs a machine learning module that learns the required knowledge from datasets collected from 930 COVID-19 patients hospitalized in Italy during the first wave of COVID-19 (March-June 2020). The dataset consists of twenty-five biomarkers from electronic health records and chest X-ray (CXR) images. It is found that the system can diagnose low- or high-risk patients with an accuracy, sensitivity, and F1-score of 89.03%, 90.44%, and 89.03%, respectively. The system exhibits 6% higher accuracy than systems that employ either CXR images or biomarker data alone. In addition, the system can calculate the mortality risk of high-risk patients using a multivariate logistic regression-based nomogram scoring technique. Interested physicians can use the presented system to predict the early mortality risk of COVID-19 patients using the web link: Covid-severity-grading-AI. In this case, a physician needs to input the following information: CXR image file, lactate dehydrogenase (LDH), oxygen saturation (O2%), white blood cell count, C-reactive protein, and age. In this way, this study contributes to the management of COVID-19 patients by predicting early mortality risk. Supplementary Information The online version contains supplementary material available at 10.1007/s00521-023-08606-w.
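A stacking setup of this kind can be sketched as follows (the biomarker values and feature order are hypothetical; the paper's actual second-level learner is not reproduced here):

```python
import numpy as np

def meta_features(cxr_risk_prob, biomarkers):
    """Stacking: the image model's predicted risk probability is
    concatenated with clinical biomarkers (e.g. LDH, SpO2, WBC count,
    CRP, age) to form the input of a second-level classifier."""
    return np.concatenate([[cxr_risk_prob], biomarkers])

# Hypothetical patient: CXR model risk 0.82 plus five biomarkers.
x_meta = meta_features(0.82, np.array([310.0, 94.0, 7.2, 55.0, 67.0]))
```

The second-level model (here unspecified) then learns when to trust the image-based risk and when the biomarkers should dominate, which is where the reported accuracy gain over single-modality systems comes from.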
Affiliation(s)
- Tawsifur Rahman
- Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar
- Amith Khandakar
- Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar
- Zaid Bin Mahbub
- Department of Physics and Mathematics, North South University, Dhaka, 1229 Bangladesh
- Abraham Alhatou
- Department of Biology, University of South Carolina (USC), Columbia, SC 29208 USA
- Eynas Abdalla
- Anesthesia Department, Hamad General Hospital, P.O. Box 3050, Doha, Qatar
- Sreekumar Muthiyal
- Department of Radiology, Hamad General Hospital, P.O. Box 3050, Doha, Qatar
- Saad Bin Abul Kashem
- Department of Computer Science, AFG College with the University of Aberdeen, Doha, Qatar
- Muhammad Salman Khan
- Department of Electrical Engineering, Qatar University, P.O. Box 2713, Doha, Qatar
- Susu M. Zughaier
- Department of Basic Medical Sciences, College of Medicine, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar
- Maqsud Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka, 1229 Bangladesh
15
Dabbagh R, Jamal A, Bhuiyan Masud JH, Titi MA, Amer YS, Khayat A, Alhazmi TS, Hneiny L, Baothman FA, Alkubeyyer M, Khan SA, Temsah MH. Harnessing Machine Learning in Early COVID-19 Detection and Prognosis: A Comprehensive Systematic Review. Cureus 2023; 15:e38373. [PMID: 37265897 PMCID: PMC10230599 DOI: 10.7759/cureus.38373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/30/2023] [Indexed: 06/03/2023] Open
Abstract
During the early phase of the COVID-19 pandemic, reverse transcriptase-polymerase chain reaction (RT-PCR) testing faced limitations, prompting the exploration of machine learning (ML) alternatives for diagnosis and prognosis. Providing a comprehensive appraisal of such decision support systems and their use in COVID-19 management can aid the medical community in making informed decisions during the risk assessment of their patients, especially in low-resource settings. Therefore, the objective of this study was to systematically review the studies that predicted the diagnosis of COVID-19 or the severity of the disease using ML. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we conducted a literature search of MEDLINE (OVID), Scopus, EMBASE, and IEEE Xplore from January 1 to June 30, 2020. The outcomes were COVID-19 diagnosis or prognostic measures such as death, need for mechanical ventilation, admission, and acute respiratory distress syndrome. We included peer-reviewed observational studies, clinical trials, research letters, case series, and reports. We extracted data about each study's country, setting, sample size, data source, dataset, diagnostic or prognostic outcomes, prediction measures, type of ML model, and measures of diagnostic accuracy. Bias was assessed using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). This study was registered in the International Prospective Register of Systematic Reviews (PROSPERO), number CRD42020197109. Sixty-six records were included for data extraction. Forty-three (64%) studies used secondary data. The largest share of studies (30%) came from Chinese authors. Most of the literature (79%) relied on chest imaging for prediction, while the remainder used various laboratory indicators, including hematological, biochemical, and immunological markers. Thirteen studies explored predicting COVID-19 severity, while the rest predicted diagnosis. Seventy percent of the articles used deep learning models, while 30% used traditional ML algorithms. Most studies reported high sensitivity, specificity, and accuracy for the ML models (exceeding 90%). The overall concern about risk of bias was "unclear" in 56% of the studies, mainly due to concerns about selection bias. ML may help identify COVID-19 patients in the early phase of the pandemic, particularly in the context of chest imaging. Although these studies suggest that ML models can exhibit high accuracy, the novelty of these models and the biases in dataset selection make their use as a replacement for clinicians' cognitive decision-making questionable. Continued research is needed to enhance the robustness and reliability of ML systems in COVID-19 diagnosis and prognosis.
Affiliation(s)
- Rufaidah Dabbagh
- Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Amr Jamal
- Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Research Chair for Evidence-Based Health Care and Knowledge Translation, Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Maher A Titi
- Quality Management Department, King Saud University Medical City, Riyadh, SAU
- Research Chair for Evidence-Based Health Care and Knowledge Translation, Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Yasser S Amer
- Pediatrics, Quality Management Department, King Saud University Medical City, Riyadh, SAU
- Research Chair for Evidence-Based Health Care and Knowledge Translation, Family and Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Afnan Khayat
- Health Information Management Department, Prince Sultan Military College of Health Sciences, Al Dhahran, SAU
- Taha S Alhazmi
- Family & Community Medicine Department, College of Medicine, King Saud University, Riyadh, SAU
- Layal Hneiny
- Medicine, Wegner Health Sciences Library, University of South Dakota, Vermillion, USA
- Fatmah A Baothman
- Department of Information Systems, King Abdulaziz University, Jeddah, SAU
- Samina A Khan
- School of Computer Sciences, Universiti Sains Malaysia, Penang, MYS
- Mohamad-Hani Temsah
- Pediatric Intensive Care Unit, Department of Pediatrics, King Saud University, Riyadh, SAU
16
Tariq A, Tang S, Sakhi H, Celi LA, Newsome JM, Rubin DL, Trivedi H, Gichoya JW, Banerjee I. Fusion of imaging and non-imaging data for disease trajectory prediction for coronavirus disease 2019 patients. J Med Imaging (Bellingham) 2023; 10:034004. [PMID: 37388280 PMCID: PMC10306115 DOI: 10.1117/1.jmi.10.3.034004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 06/07/2023] [Accepted: 06/13/2023] [Indexed: 07/01/2023] Open
Abstract
Purpose Our study investigates whether graph-based fusion of imaging data with non-imaging electronic health record (EHR) data can improve the prediction of disease trajectories for patients with coronavirus disease 2019 (COVID-19) beyond the prediction performance of imaging or non-imaging EHR data alone. Approach We present a fusion framework for fine-grained clinical outcome prediction [discharge, intensive care unit (ICU) admission, or death] that fuses imaging and non-imaging information using a similarity-based graph structure. Node features are represented by image embeddings, and edges are encoded with clinical or demographic similarity. Results Experiments on data collected from the Emory Healthcare Network indicate that our fusion modeling scheme performs consistently better than predictive models developed using only imaging or only non-imaging features, with areas under the receiver operating characteristic curve of 0.76, 0.90, and 0.75 for discharge from hospital, mortality, and ICU admission, respectively. External validation was performed on data collected from the Mayo Clinic. Our scheme highlights known biases in the model prediction, such as bias against patients with a history of alcohol abuse and bias based on insurance status. Conclusions Our study signifies the importance of fusing multiple data modalities for accurate prediction of clinical trajectories. The proposed graph structure can model relationships between patients based on non-imaging EHR data, and graph convolutional networks can fuse this relationship information with imaging data to predict future disease trajectories more effectively than models employing only imaging or only non-imaging data. Our graph-based fusion modeling framework can be easily extended to other prediction tasks to efficiently combine imaging data with non-imaging clinical data.
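The graph construction described here (nodes carry image embeddings, edges encode clinical similarity) can be sketched with a simple distance-thresholded adjacency matrix (a toy illustration; the threshold, similarity measure, and three-patient "clinical" vectors are invented for the example):

```python
import numpy as np

def similarity_adjacency(clinical, threshold):
    """Patient-similarity graph: connect two patients when their
    clinical/demographic vectors are within a Euclidean distance
    threshold. A graph convolutional network would then propagate
    image embeddings (the node features) over this adjacency."""
    diff = clinical[:, None, :] - clinical[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    adj = (dist <= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)  # drop self-loops
    return adj

# Three hypothetical patients (age, sex); the first two are similar.
clinical = np.array([[65.0, 1.0], [66.0, 1.0], [30.0, 0.0]])
adj = similarity_adjacency(clinical, threshold=5.0)
```

The resulting adjacency matrix connects only the two similar patients, so a GCN layer would mix their image embeddings while leaving the dissimilar patient's representation largely independent.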
Affiliation(s)
- Amara Tariq
- Mayo Clinic, Department of Administration, Phoenix, Arizona, United States
- Siyi Tang
- Stanford University, Department of Electrical Engineering, Stanford, California, United States
- Hifza Sakhi
- Philadelphia College of Osteopathic Medicine - Georgia Campus, Suwanee, Georgia, United States
- Leo Anthony Celi
- Massachusetts Institute of Technology, Boston, Massachusetts, United States
- Janice M. Newsome
- Emory University, School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Daniel L. Rubin
- Stanford University, Department of Biomedical Data Science, Stanford, California, United States
- Stanford University, Department of Radiology, Stanford, California, United States
- Hari Trivedi
- Emory University, School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Judy Wawira Gichoya
- Emory University, School of Medicine, Department of Radiology and Imaging Sciences, Atlanta, Georgia, United States
- Imon Banerjee
- Mayo Clinic, Department of Radiology, Phoenix, Arizona, United States
- Arizona State University, Ira A. Fulton School of Engineering, Department of Computer Engineering, Tempe, Arizona, United States
17
Gasulla Ó, Ledesma-Carbayo MJ, Borrell LN, Fortuny-Profitós J, Mazaira-Font FA, Barbero Allende JM, Alonso-Menchén D, García-Bennett J, Del Río-Carrrero B, Jofré-Grimaldo H, Seguí A, Monserrat J, Teixidó-Román M, Torrent A, Ortega MÁ, Álvarez-Mon M, Asúnsolo A. Enhancing physicians' radiology diagnostics of COVID-19's effects on lung health by leveraging artificial intelligence. Front Bioeng Biotechnol 2023; 11:1010679. [PMID: 37152658 PMCID: PMC10157246 DOI: 10.3389/fbioe.2023.1010679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Accepted: 03/14/2023] [Indexed: 05/09/2023] Open
Abstract
Introduction: This study aimed to develop an individualized artificial intelligence model to help radiologists assess the severity of COVID-19's effects on patients' lung health. Methods: Data were collected from the medical records of 1103 patients diagnosed with COVID-19 by RT-qPCR between March and June 2020 in the Hospital Madrid-Group (HM-Group, Spain). Using convolutional neural networks, we determined the effects of COVID-19 in terms of lung area, opacities, and pulmonary air density. We then combined these variables with age and sex in a regression model to assess the severity of these conditions with respect to fatality risk (death or ICU admission). Results: Our model can predict a severe effect with an AUC of 0.736. Finally, we compared the performance of the model with the diagnoses of six physicians and tested for improvements in physicians' performance when using the prediction algorithm. Discussion: We find that the algorithm outperforms physicians (39.5% less error) and that physicians can significantly benefit from the information provided by the algorithm, reducing their error by almost 30%.
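The second-stage model combines the CNN-derived lung variables with age and sex in a regression. A minimal logistic-regression-style sketch (the weights, bias, and input values are hypothetical placeholders, not the fitted coefficients from the study):

```python
import numpy as np

def fatality_risk(lung_vars, age, sex, weights, bias):
    """Hypothetical logistic combination of CNN-derived lung variables
    (area, opacity, air density) with age and sex into a risk score."""
    x = np.concatenate([lung_vars, [age, sex]])
    z = float(np.dot(weights, x) + bias)
    return 1.0 / (1.0 + np.exp(-z))

# With all-zero weights the model is maximally uninformative (risk 0.5).
risk = fatality_risk(np.array([0.6, 0.3, 0.4]), 70.0, 1.0,
                     weights=np.zeros(5), bias=0.0)
```

Fitting the weights on outcome labels (death or ICU) is what turns this template into the severity model the abstract evaluates against physicians.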
Affiliation(s)
- Óscar Gasulla
- Hospital Universitari de Bellvitge-Universitat de Barcelona, L'Hospitalet de Llobregat, Spain
- Department of Surgery, Medical and Social Sciences, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Maria J. Ledesma-Carbayo
- Biomedical Image Technologies, Universidad Politécnica de Madrid & CIBER BBN, ISCIII, Madrid, Spain
- Luisa N. Borrell
- Department of Surgery, Medical and Social Sciences, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Department of Epidemiology and Biostatistics, Graduate School of Public Health and Health Policy, University of New York, New York, NY, United States
- Ferran A. Mazaira-Font
- Departament d'Econometria, Estadística i Economia Aplicada-Universitat de Barcelona, Barcelona, Spain
- Jose María Barbero Allende
- Department of Medicine and Medical Specialities, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- David Alonso-Menchén
- Department of Medicine and Medical Specialities, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Josep García-Bennett
- Hospital Universitari de Bellvitge-Universitat de Barcelona, L'Hospitalet de Llobregat, Spain
- Belen Del Río-Carrrero
- Hospital Universitari de Bellvitge-Universitat de Barcelona, L'Hospitalet de Llobregat, Spain
- Hector Jofré-Grimaldo
- Hospital Universitari de Bellvitge-Universitat de Barcelona, L'Hospitalet de Llobregat, Spain
- Aleix Seguí
- Campus Nord, Universitat Politècnica de Catalunya, Barcelona, Spain
- Jorge Monserrat
- Department of Medicine and Medical Specialities, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Ramón y Cajal Institute of Sanitary Research (IRYCIS), Madrid, Spain
- Miguel Teixidó-Román
- Departament d'Econometria, Estadística i Economia Aplicada-Universitat de Barcelona, Barcelona, Spain
- Adrià Torrent
- Departament d'Econometria, Estadística i Economia Aplicada-Universitat de Barcelona, Barcelona, Spain
- Miguel Ángel Ortega
- Department of Medicine and Medical Specialities, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Ramón y Cajal Institute of Sanitary Research (IRYCIS), Madrid, Spain
- Melchor Álvarez-Mon
- Department of Medicine and Medical Specialities, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Ramón y Cajal Institute of Sanitary Research (IRYCIS), Madrid, Spain
- Service of Internal Medicine and Immune System Diseases-Rheumatology, University Hospital Príncipe de Asturias (CIBEREHD), Alcalá de Henares, Spain
- Angel Asúnsolo
- Department of Surgery, Medical and Social Sciences, Faculty of Medicine and Health Sciences, University of Alcalá, Alcalá de Henares, Spain
- Department of Epidemiology and Biostatistics, Graduate School of Public Health and Health Policy, University of New York, New York, NY, United States
- Ramón y Cajal Institute of Sanitary Research (IRYCIS), Madrid, Spain
18
Shen B, Hou W, Jiang Z, Li H, Singer AJ, Hoshmand-Kochi M, Abbasi A, Glass S, Thode HC, Levsky J, Lipton M, Duong TQ. Longitudinal Chest X-ray Scores and their Relations with Clinical Variables and Outcomes in COVID-19 Patients. Diagnostics (Basel) 2023; 13:diagnostics13061107. [PMID: 36980414 PMCID: PMC10047384 DOI: 10.3390/diagnostics13061107] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2023] [Revised: 03/08/2023] [Accepted: 03/13/2023] [Indexed: 03/17/2023] Open
Abstract
Background: This study evaluated the temporal characteristics of lung chest X-ray (CXR) scores in COVID-19 patients during hospitalization and how they relate to other clinical variables and outcomes (alive or dead). Methods: This is a retrospective study of COVID-19 patients. CXR scores of disease severity were analyzed for: (i) survivors (N = 224) versus non-survivors (N = 28) in the general floor group, and (ii) survivors (N = 92) versus non-survivors (N = 56) in the invasive mechanical ventilation (IMV) group. Unpaired t-tests were used to compare survivors and non-survivors and to compare time points. Comparisons across multiple time points used repeated-measures ANOVA with correction for multiple comparisons. Results: For general-floor patients, non-survivor CXR scores were significantly worse at admission than those of survivors (p < 0.05), and non-survivor CXR scores deteriorated by the time of outcome (p < 0.05) whereas survivor CXR scores did not (p > 0.05). For IMV patients, survivor and non-survivor CXR scores were similar at intubation (p > 0.05), and both improved at outcome (p < 0.05), with survivor scores showing greater improvement (p < 0.05). Hospitalization and IMV duration did not differ between groups (p > 0.05). CXR scores were significantly correlated with lactate dehydrogenase, respiratory rate, D-dimer, C-reactive protein, procalcitonin, ferritin, SpO2, and lymphocyte count (p < 0.05). Conclusions: Longitudinal CXR scores have the potential to provide prognosis, guide treatment, and monitor disease progression.
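The survivor-versus-non-survivor comparison reduces to an unpaired (Welch's) t-test on the CXR scores. A pure-NumPy sketch with invented scores (in practice one would call `scipy.stats.ttest_ind(..., equal_var=False)` to also obtain the p-value):

```python
import numpy as np

def welch_t(a, b):
    """Welch's unpaired t statistic, as used to compare CXR severity
    scores between two independent groups."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

# Hypothetical admission CXR scores for the two groups.
survivors = np.array([3.0, 4.0, 5.0, 4.0, 3.0, 5.0, 4.0])
non_survivors = np.array([9.0, 10.0, 11.0, 10.0, 9.0, 11.0, 10.0])
t_stat = welch_t(survivors, non_survivors)
```

A large-magnitude negative t statistic here corresponds to the paper's finding that non-survivors already had worse scores at admission.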
Affiliation(s)
- Beiyi Shen
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Wei Hou
- Department of Family Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Zhao Jiang
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Haifang Li
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Adam J. Singer
- Department of Emergency Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Mahsa Hoshmand-Kochi
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Almas Abbasi
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Samantha Glass
- Department of Radiology, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Henry C. Thode
- Department of Emergency Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Jeffrey Levsky
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY 10461, USA
- Michael Lipton
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY 10461, USA
- Tim Q. Duong
- Department of Radiology, Albert Einstein College of Medicine and Montefiore Medical Center, Bronx, NY 10461, USA
- Correspondence: ; Tel.: +718-920-6268
19
Berg A, Vandersmissen E, Wimmer M, Major D, Neubauer T, Lenis D, Cant J, Snoeckx A, Bühler K. Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods. Comput Biol Med 2023; 154:106543. [PMID: 36682179] [DOI: 10.1016/j.compbiomed.2023.106543]
Abstract
To facilitate both the detection and the interpretation of findings in chest X-rays, comparison with a previous image of the same patient is very valuable to radiologists. Today, the most common deep learning approach to automated chest X-ray inspection disregards the patient history and classifies only single images as normal or abnormal. Several methods for assisting in this comparison task through image registration have nevertheless been proposed. However, as we illustrate, they tend to miss specific types of pathological change, such as cardiomegaly and effusion. Because of their assumptions about fixed anatomical structures, or the way they measure registration quality, they produce unnaturally deformed warp fields that impair visualization of the differences between the moving and fixed images. We aim to overcome these limitations through a new paradigm based on individual rib pair segmentation for anatomy-penalized registration. Our method proves to be a natural way to limit the folding percentage of the warp field to 1/6 of the state of the art while increasing the overlap of ribs by more than 25%, yielding difference images that show pathological changes overlooked by other methods. We develop an anatomically penalized convolutional multi-stage solution on the National Institutes of Health (NIH) dataset, starting from fewer than 25 fully and 50 partly labeled training images, employing sequential instance memory segmentation with hole dropout, weak labeling, coarse-to-fine refinement, and Gaussian mixture model histogram matching. We statistically evaluate the benefits of our method and highlight the limits of the metrics currently used for registration of chest X-rays.
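The "folding percentage" of a warp field is commonly measured as the fraction of pixels where the Jacobian determinant of the deformation x → x + u(x) is non-positive. A minimal NumPy sketch, assuming a 2D displacement field; the function name and toy warp are illustrative, not the paper's implementation:

```python
import numpy as np

def folding_percentage(warp):
    """Percentage of pixels where a 2D warp field folds, i.e. where the
    Jacobian determinant of x -> x + u(x) is non-positive.
    `warp` has shape (H, W, 2): displacement (u_y, u_x) per pixel."""
    du_y = np.gradient(warp[..., 0])  # returns (d/dy, d/dx) of u_y
    du_x = np.gradient(warp[..., 1])
    # Determinant of the Jacobian of identity + displacement
    jac_det = (1 + du_y[0]) * (1 + du_x[1]) - du_y[1] * du_x[0]
    return float(np.mean(jac_det <= 0) * 100)

# A gentle sinusoidal warp should not fold anywhere
h, w = 64, 64
yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
smooth = np.stack([0.5 * np.sin(xx / 10), 0.5 * np.cos(yy / 10)], axis=-1)
print(folding_percentage(smooth))  # → 0.0
```

A warp that reflects the image (e.g. u_x = -2x) has a negative determinant everywhere and yields 100% folding.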
Affiliation(s)
- Astrid Berg
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Eva Vandersmissen
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
- Maria Wimmer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- David Major
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Theresa Neubauer
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Dimitrios Lenis
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
- Jeroen Cant
- Agfa NV, Radiology Solutions R&D, Septestraat 27, 2640 Mortsel, Belgium.
- Annemiek Snoeckx
- Department of Radiology, Antwerp University Hospital, Drie Eikenstraat 655, 2650 Edegem, Belgium; Faculty of Medicine and Health Sciences, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk, Belgium.
- Katja Bühler
- VRVis Zentrum für Virtual Reality und Visualisierung Forschungs-GmbH, Donau-City-Straße 11, Vienna, 1220, Austria.
20
A Prospective Observational Study on Short and Long-Term Outcomes of COVID-19 Patients with Acute Hypoxic Respiratory Failure Treated with High-Flow Nasal Cannula. J Clin Med 2023; 12:jcm12041249. [PMID: 36835785] [PMCID: PMC9965220] [DOI: 10.3390/jcm12041249]
Abstract
(1) The use of a high-flow nasal cannula (HFNC) combined with frequent respiratory monitoring in patients with acute hypoxic respiratory failure due to COVID-19 has been shown to reduce intubation and mechanical ventilation. (2) This prospective, single-center, observational study included consecutive adult patients with COVID-19 pneumonia treated with a high-flow nasal cannula. Hemodynamic parameters, respiratory rate, fraction of inspired oxygen (FiO2), oxygen saturation (SpO2), and the ratio of SpO2/FiO2 to respiratory rate (ROX index) were recorded prior to treatment initiation and every 2 h for 24 h. A 6-month follow-up questionnaire was also administered. (3) Over the study period, 153 of 187 patients were eligible for HFNC. Of these patients, 80% required intubation, and 37% of the intubated patients died in hospital. Male sex (OR = 4.65; 95% CI [1.28; 20.6], p = 0.03) and higher BMI (OR = 2.63; 95% CI [1.14; 6.76], p = 0.03) were associated with an increased risk of new limitations at 6 months after hospital discharge. (4) 20% of patients who received HFNC did not require intubation and were discharged alive from the hospital. Male sex and higher BMI were associated with poor long-term functional outcomes.
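For reference, the ROX index monitored above is conventionally defined as (SpO2/FiO2) divided by the respiratory rate; in the literature, values around or below ~4.88 after several hours of HFNC have been associated with treatment failure. A minimal sketch with illustrative numbers (not the study's data):

```python
def rox_index(spo2_percent, fio2_fraction, resp_rate):
    """ROX index: (SpO2 / FiO2) / respiratory rate.
    Higher values indicate lower risk of HFNC failure."""
    return (spo2_percent / fio2_fraction) / resp_rate

# Example: SpO2 92% on FiO2 0.60 with a respiratory rate of 28 breaths/min
print(round(rox_index(92, 0.60, 28), 2))  # → 5.48
```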
21
Affiliation(s)
- Theresa C McLoud, Brent P Little
- From the Department of Radiology, Harvard Medical School, Massachusetts General Hospital, 55 Fruit St, MZ-FND 216, Boston, MA 02114-2696 (T.C.M.); and Department of Radiology, Mayo Clinic College of Medicine and Science, Mayo Clinic Florida, Jacksonville, Fla (B.P.L.)
22
Jeong YJ, Wi YM, Park H, Lee JE, Kim SH, Lee KS. Current and Emerging Knowledge in COVID-19. Radiology 2023; 306:e222462. [PMID: 36625747] [PMCID: PMC9846833] [DOI: 10.1148/radiol.222462]
Abstract
COVID-19 has emerged as a pandemic leading to a global public health crisis of unprecedented morbidity. A comprehensive insight into the imaging of COVID-19 has enabled early diagnosis, stratification of disease severity, and identification of potential sequelae. The evolution of COVID-19 can be divided into early infectious, pulmonary, and hyperinflammatory phases, which differ in their clinical features, imaging features, and management. In the early stage, peripheral ground-glass opacities are the predominant CT findings, and therapy directly targeting SARS-CoV-2 is effective. In the later stage, organizing pneumonia or diffuse alveolar damage patterns are the predominant CT findings, and anti-inflammatory therapies are more beneficial. The risk of severe disease or hospitalization is lower in breakthrough or Omicron variant infection than in nonimmunized or Delta variant infection. The protection rates of the fourth dose of mRNA vaccination were 34% against overall infection and 67% against hospitalization for severe illness. After acute COVID-19 pneumonia, most residual CT abnormalities gradually decreased in extent, but they may remain as linear, multifocal reticular, or cystic lesions. Advanced insights into the pathophysiologic and imaging features of COVID-19, along with vaccine benefits, have improved patient care, but emerging knowledge of post-COVID-19 condition, or long COVID, also presents radiology with new challenges.
Affiliation(s)
- Yeon Joo Jeong, Yu Mi Wi, Hyunjin Park, Jong Eun Lee, Si-Ho Kim, Kyung Soo Lee
- From the Department of Radiology, Research Institute for Convergence of Biomedical Science and Technology, Pusan National University Yangsan Hospital, Pusan National University School of Medicine, Yangsan, Korea (Y.J.J.); Division of Infectious Diseases, Department of Internal Medicine (Y.M.W., S.H.K.) and Department of Radiology (K.S.L.), Samsung Changwon Hospital, Sungkyunkwan University School of Medicine (SKKU-SOM), Changwon 51353, Korea; Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, Korea (H.P.); Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Korea (H.P.); and Department of Radiology, Chonnam National University Hospital, Gwangju, Korea (J.E.L.)
23
Lee JH, Koh J, Jeon YK, Goo JM, Yoon SH. An Integrated Radiologic-Pathologic Understanding of COVID-19 Pneumonia. Radiology 2023; 306:e222600. [PMID: 36648343] [PMCID: PMC9868683] [DOI: 10.1148/radiol.222600]
Abstract
This article reviews the radiologic and pathologic findings of the epithelial and endothelial injuries in COVID-19 pneumonia to help radiologists understand the fundamental nature of the disease. The radiologic and pathologic manifestations of COVID-19 pneumonia result from epithelial and endothelial injuries based on viral toxicity and immunopathologic effects. The pathologic features of mild and reversible COVID-19 pneumonia involve nonspecific pneumonia or an organizing pneumonia pattern, while the pathologic features of potentially fatal and irreversible COVID-19 pneumonia are characterized by diffuse alveolar damage followed by fibrosis or acute fibrinous organizing pneumonia. These pathologic responses of epithelial injuries observed in COVID-19 pneumonia are not specific to SARS-CoV-2 but rather constitute universal responses to viral pneumonia. Endothelial injury in COVID-19 pneumonia is a prominent feature compared with other types of viral pneumonia and encompasses various vascular abnormalities at different levels, including pulmonary thromboembolism, vascular engorgement, peripheral vascular reduction, a vascular tree-in-bud pattern, and lung perfusion abnormality. Chest CT with different imaging techniques (eg, CT quantification, dual-energy CT perfusion) can fully capture the various manifestations of epithelial and endothelial injuries. CT can thus aid in establishing prognosis and identifying patients at risk for deterioration.
Affiliation(s)
- Jong Hyuk Lee, Jaemoon Koh, Yoon Kyung Jeon, Jin Mo Goo, Soon Ho Yoon
- From the Departments of Radiology (J.H.L., J.M.G., S.H.Y.) and Pathology (J.K., Y.K.J.), Seoul National University Hospital, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul 03080, Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Korea (J.M.G.); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Korea (J.M.G.); and Cancer Research Institute, Seoul National University, Seoul, Korea (J.M.G.)
24
Lakhani P, Mongan J, Singhal C, Zhou Q, Andriole KP, Auffermann WF, Prasanna PM, Pham TX, Peterson M, Bergquist PJ, Cook TS, Ferraciolli SF, Corradi GCA, Takahashi MS, Workman CS, Parekh M, Kamel SI, Galant J, Mas-Sanchez A, Benítez EC, Sánchez-Valverde M, Jaques L, Panadero M, Vidal M, Culiañez-Casas M, Angulo-Gonzalez D, Langer SG, de la Iglesia-Vayá M, Shih G. The 2021 SIIM-FISABIO-RSNA Machine Learning COVID-19 Challenge: Annotation and Standard Exam Classification of COVID-19 Chest Radiographs. J Digit Imaging 2023; 36:365-372. [PMID: 36171520] [PMCID: PMC9518934] [DOI: 10.1007/s10278-022-00706-8]
Abstract
We describe the curation, annotation methodology, and characteristics of the dataset used in an artificial intelligence challenge for detection and localization of COVID-19 on chest radiographs. The chest radiographs were annotated by an international group of radiologists into four mutually exclusive categories ("typical," "indeterminate," or "atypical appearance" for COVID-19, or "negative for pneumonia"), adapted from previously published guidelines, and bounding boxes were placed on airspace opacities. The dataset and its annotations are available to researchers for academic and noncommercial use.
Affiliation(s)
- Paras Lakhani, M Parekh, S I Kamel
- Department of Radiology, Thomas Jefferson University, Sidney Kimmel Jefferson Medical College, 111 S 11th St, Philadelphia, PA, 19107, USA
- J Mongan
- University of California San Francisco, San Francisco, CA, USA
- K P Andriole
- Mass General Brigham and Harvard Medical School, Boston, MA, USA
- P M Prasanna, T X Pham
- University of Utah Health, Salt Lake City, UT, USA
- P J Bergquist
- Medstar Georgetown University Hospital, Washington DC, USA
- T S Cook
- University of Pennsylvania, Philadelphia, PA, USA
- C S Workman
- Vanderbilt University Medical Center, Nashville TN, USA
- J Galant, A Mas-Sanchez, E C Benítez, M Sánchez-Valverde, L Jaques, M Panadero, M Vidal, M Culiañez-Casas
- Hospital Universitario San Juan de Alicante, San Juan de Alicante, Alicante, Spain
- María de la Iglesia-Vayá
- The Foundation for the Promotion of Health and Biomedical Research of Valencia Region, Valencia, Spain
- G Shih
- Weill Cornell Medicine, New York, NY, USA
25
Vardhan A, Makhnevich A, Omprakash P, Hirschorn D, Barish M, Cohen SL, Zanos TP. A radiographic, deep transfer learning framework, adapted to estimate lung opacities from chest x-rays. Bioelectron Med 2023; 9:1. [PMID: 36597113] [PMCID: PMC9809517] [DOI: 10.1186/s42234-022-00103-0]
Abstract
Chest radiographs (CXRs) are the most widely available radiographic imaging modality used to detect respiratory diseases that result in lung opacities. CXR reports often use non-standardized language that results in subjective, qualitative, and non-reproducible opacity estimates. Our goal was to develop a robust deep transfer learning framework and adapt it to estimate the degree of lung opacity from CXRs. Following CXR data selection based on exclusion criteria, segmentation schemes were used for ROI (region of interest) extraction, and all combinations of segmentation, data balancing, and classification methods were tested to pick the top-performing models. Multifold cross-validation was used to select the best of these models, based on appropriate performance metrics as well as a novel Macro-Averaged Heatmap Concordance Score (MA HCS). The performance of the best model was compared against that of expert physician annotators, and heatmaps were produced. Finally, a sensitivity analysis of model performance across patient populations of interest was performed. The proposed framework was adapted to the specific use case of estimating the degree of CXR lung opacity using ordinal multiclass classification. We used 38,365 prospectively annotated CXRs from 17,418 patients, acquired between March 24, 2020, and May 22, 2020. We tested three neural network architectures (ResNet-50, VGG-16, and CheXNet), three segmentation schemes (no segmentation, lung segmentation, and lateral segmentation based on spine detection), and three data balancing strategies (undersampling, double-stage sampling, and synthetic minority oversampling), using 38,079 CXR images for training and 286 images as the out-of-the-box dataset that underwent expert radiologist adjudication.
Based on the results of these experiments, the ResNet-50 model with undersampling and no ROI segmentation is recommended for lung opacity classification, based on optimal values of the mean absolute error (MAE) and the Heatmap Concordance Score (HCS). The agreement between the opacity scores predicted by this model and each of the two sets of radiologist scores (OR, Original Reader; OOBTR, Out-Of-Box Reader) was superior to the inter-radiologist opacity score agreement.
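The MAE used above, applied to ordinal opacity grades, penalizes a prediction that is two classes off twice as much as one that is one class off. A minimal sketch; the grades below are illustrative, not the study's data:

```python
def mean_absolute_error(true_scores, predicted_scores):
    """MAE between paired ordinal scores (e.g. opacity grades)."""
    assert len(true_scores) == len(predicted_scores)
    return sum(abs(t - p) for t, p in zip(true_scores, predicted_scores)) / len(true_scores)

# Hypothetical radiologist vs. model opacity grades
radiologist = [0, 1, 2, 3, 2, 1]
model       = [0, 1, 3, 3, 1, 1]
print(mean_absolute_error(radiologist, model))  # two single-class errors over six cases: 1/3
```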
Affiliation(s)
- Avantika Vardhan
- Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA
- Alex Makhnevich
- Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health, Hempstead, NY 11549, USA
- Pravan Omprakash
- Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA
- David Hirschorn
- Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Department of Information Services, Northwell Health, New Hyde Park, NY 11042, USA
- Matthew Barish
- Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Department of Information Services, Northwell Health, New Hyde Park, NY 11042, USA
- Stuart L. Cohen
- Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health, Hempstead, NY 11549, USA
- Theodoros P. Zanos
- Institute of Health System Science, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Institute of Bioelectronic Medicine, Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY 11030, USA; Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Northwell Health, Hempstead, NY 11549, USA
26
Deep Learning Models to Predict Fatal Pneumonia Using Chest X-Ray Images. Can Respir J 2022; 2022:8026580. [DOI: 10.1155/2022/8026580]
Abstract
Background and Aims. Chest X-ray (CXR) is indispensable to the assessment of severity, diagnosis, and management of pneumonia. Deep learning is an artificial intelligence (AI) technology that has been applied to the interpretation of medical images. This study investigated the feasibility of classifying fatal pneumonia from CXR images using deep learning models built on publicly available platforms. Methods. CXR images of patients with pneumonia at diagnosis were labeled as fatal or nonfatal based on medical records. We used CXR images from 1031 patients with nonfatal pneumonia and 243 patients with fatal pneumonia for training and internal evaluation of the deep learning models. All labeled CXR images were randomly allocated to the training, validation, and test datasets. Data augmentation techniques were not used in this study. We created two deep learning models using two publicly available platforms. Results. The first model showed an area under the precision-recall curve of 0.929, with a sensitivity of 50.0% and a specificity of 92.4% for classifying fatal pneumonia. Model performance was evaluated using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and F1 score. On the external validation test dataset of 100 CXR images, the sensitivity, specificity, accuracy, and F1 score were 68.0%, 86.0%, 77.0%, and 74.7%, respectively. On the original dataset, the second model showed a sensitivity, specificity, and accuracy of 39.6%, 92.8%, and 82.7%, respectively, while external validation showed values of 38.0%, 92.0%, and 65.0%, with an F1 score of 52.1%. These results were comparable to those obtained by respiratory physicians and residents. Conclusions. The deep learning models yielded good accuracy in classifying fatal pneumonia. With further improvement of performance, AI could assist physicians in the severity assessment of patients with pneumonia.
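All of the metrics listed above derive from the four confusion-matrix counts. A hedged sketch: the counts below are hypothetical (they assume a 50/50 class split in the 100-image external test set, which the abstract does not state) but are chosen to reproduce the reported 68.0% sensitivity, 86.0% specificity, 77.0% accuracy, and 74.7% F1:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value (precision)
    npv = tn / (tn + fn)                  # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * ppv * sens / (ppv + sens)    # harmonic mean of precision and recall
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1}

# Hypothetical counts for 50 fatal and 50 nonfatal cases
m = classification_metrics(tp=34, fp=7, fn=16, tn=43)
print({k: round(v, 3) for k, v in m.items()})
```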
27
Huang SC, Chaudhari AS, Langlotz CP, Shah N, Yeung S, Lungren MP. Developing medical imaging AI for emerging infectious diseases. Nat Commun 2022; 13:7060. [PMID: 36400764] [PMCID: PMC9672573] [DOI: 10.1038/s41467-022-34234-4]
Abstract
Very few COVID-19 machine learning (ML) models have proven fit for deployment in real-world settings. In this Comment, Huang et al. discuss the main steps required to develop clinically useful models in the context of an emerging infectious disease.
Affiliation(s)
- Shih-Cheng Huang
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Akshay S Chaudhari
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
- Curtis P Langlotz
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
- Nigam Shah
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Serena Yeung
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Clinical Excellence Research Center, Stanford University School of Medicine, Stanford, CA, USA
- Matthew P Lungren
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
- Center for Artificial Intelligence in Medicine & Imaging, Stanford University, Stanford, CA, USA
- Department of Radiology, Stanford University, Stanford, CA, USA
28
An efficient lung disease classification from X-ray images using hybrid Mask-RCNN and BiDLSTM. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104340]
29
Wu W, Bhatraju PK, Cobb N, Sathe NA, Duan KI, Seitz KP, Thau MR, Sung CC, Hippe DS, Reddy G, Pipavath S. Radiographic Findings and Association With Clinical Severity and Outcomes in Critically Ill Patients With COVID-19. Curr Probl Diagn Radiol 2022; 51:884-891. [PMID: 35610068] [PMCID: PMC9023378] [DOI: 10.1067/j.cpradiol.2022.04.002]
Abstract
PURPOSE To describe the evolution and severity of radiographic findings and assess their association with disease severity and outcomes in critically ill COVID-19 patients. MATERIALS AND METHODS This retrospective study included 62 COVID-19 patients admitted to the intensive care unit (ICU). Clinical data were obtained from electronic medical records. A total of 270 chest radiographs were reviewed and qualitatively scored (CXR score) on a severity scale of 0-30. Radiographic findings were correlated with clinical severity and outcome. RESULTS The CXR score increased from a median initial score of 10 at hospital presentation to a median peak score of 18 within a median of 4 days after hospitalization, then slowly decreased to a median last score of 15 at a median of 12 days after hospitalization. The initial and peak CXR scores were independently associated with invasive mechanical ventilation (MV) after adjusting for age, gender, body mass index, smoking, and comorbidities (initial, odds ratio [OR]: 2.11 per 5-point increase, confidence interval [CI] 1.35-3.32, P = 0.001; peak, OR: 2.50 per 5-point increase, CI 1.48-4.22, P = 0.001). Peak CXR scores were also independently associated with vasopressor use (OR: 2.28 per 5-point increase, CI 1.30-3.98, P = 0.004). Peak CXR scores strongly correlated with the duration of invasive MV (Rho = 0.62, P < 0.001), while the initial CXR score (Rho = 0.26) and the peak CXR score (Rho = 0.27) correlated weakly with the sequential organ failure assessment score. No statistically significant associations were found between radiographic findings and mortality. CONCLUSIONS The evolution of radiographic features indicates rapid disease progression and correlates with the requirement for invasive MV or vasopressors, but not with mortality, suggesting potential nonpulmonary pathways to death in COVID-19.
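An odds ratio reported "per 5-point increase" comes from rescaling the logistic-regression coefficient: OR for a k-point increase equals exp(k·β). A hedged sketch; the per-point coefficient below is back-derived from the reported peak-score OR of 2.50, not taken from the paper's fitted model:

```python
import math

def or_per_k_points(beta_per_point, k=5):
    """Odds ratio for a k-point increase in a logistic-regression predictor."""
    return math.exp(k * beta_per_point)

# Back-derive the per-point coefficient from the reported OR of 2.50 per 5 points
beta = math.log(2.50) / 5
print(round(or_per_k_points(beta, k=5), 2))  # recovers 2.5
print(round(or_per_k_points(beta, k=1), 3))  # OR per single CXR-score point
```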
Collapse
Affiliation(s)
- Wei Wu
- University of Washington School of Medicine, Department of Radiology, Seattle, WA.
| | - Pavan K Bhatraju
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Natalie Cobb
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Neha A Sathe
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Kevin I Duan
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Kevin P Seitz
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Matthew R Thau
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Clifford C Sung
- University of Washington School of Medicine, Department of Internal Medicine, Division of Pulmonary, Critical Care and Sleep Medicine, Seattle, WA
| | - Daniel S Hippe
- Clinical Research Division, Fred Hutchinson Cancer Research Center, Seattle, WA
| | - Gautham Reddy
- University of Washington School of Medicine, Department of Radiology, Seattle, WA
| | - Sudhakar Pipavath
- University of Washington School of Medicine, Department of Radiology, Seattle, WA
| |
Collapse
|
30
|
Tariq A, Tang S, Sakhi H, Celi LA, Newsome JM, Rubin DL, Trivedi H, Gichoya JW, Patel B, Banerjee I. Graph-based Fusion Modeling and Explanation for Disease Trajectory Prediction. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2022:2022.10.25.22281469. [PMID: 36324799 PMCID: PMC9628192 DOI: 10.1101/2022.10.25.22281469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
We propose a relational graph that incorporates clinical similarity between patients while building personalized clinical event predictors, with a focus on hospitalized COVID-19 patients. Our graph-formation process fuses heterogeneous data: chest X-rays serve as node features, and non-imaging EHR data drive edge formation. While a node represents a snapshot in time for a single patient, the weighted edge structure encodes complex clinical patterns among patients. Whereas age and gender have been used in the past for patient graph formation, our method incorporates complex clinical history while avoiding manual feature selection. The model learns from the patient's own data as well as from patterns among clinically similar patients. Our visualization study investigates the effects of a node's 'neighborhood' on its predictiveness and showcases the model's tendency to focus on edge-connected patients with highly suggestive clinical features in common with the node. The proposed model generalizes well by allowing the edge-formation process to adapt to an external cohort.
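As an illustration of the kind of edge formation the abstract describes, here is a minimal sketch that connects patient nodes whose non-imaging EHR feature vectors are similar. The cosine-similarity choice, the threshold, and all names are our assumptions, not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def build_edges(ehr_vectors, threshold=0.8):
    """Connect patient nodes whose EHR vectors are similar enough.

    Returns weighted edges (i, j, weight); node features (e.g. CXR
    embeddings) would live separately on the nodes.
    """
    edges = []
    n = len(ehr_vectors)
    for i in range(n):
        for j in range(i + 1, n):
            w = cosine(ehr_vectors[i], ehr_vectors[j])
            if w >= threshold:
                edges.append((i, j, w))
    return edges

# Three toy patients; only the first two have similar clinical profiles.
patients = [[1, 0, 3], [1, 0, 2.9], [0, 5, 0]]
print(build_edges(patients))
```

Adapting to an external cohort then amounts to re-running `build_edges` on the new cohort's EHR vectors rather than retraining hand-picked rules.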
Collapse
Affiliation(s)
| | - Siyi Tang
- Department of Electrical Engineering, Stanford University
| | - Hifza Sakhi
- Philadelphia College of Osteopathic Medicine - Georgia Campus
| | | | - Janice M Newsome
- Department of Radiology and Imaging Sciences, Emory University, GA
| | | | - Hari Trivedi
- Department of Radiology and Imaging Sciences, Emory University, GA
| | | | | | | |
Collapse
|
31
|
Karthik R, Menaka R, Hariharan M, Kathiresan GS. AI for COVID-19 Detection from Radiographs: Incisive Analysis of State of the Art Techniques, Key Challenges and Future Directions. Ing Rech Biomed 2022; 43:486-510. [PMID: 34336141 PMCID: PMC8312058 DOI: 10.1016/j.irbm.2021.07.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 06/14/2021] [Accepted: 07/19/2021] [Indexed: 12/24/2022]
Abstract
Background and objective In recent years, Artificial Intelligence has had an evident impact on the way research addresses challenges in different domains. It has proven to be a valuable asset, especially in the medical field, enabling time-efficient and reliable solutions. This research aims to spotlight the impact of deep learning and machine learning models on the detection of COVID-19 from medical images, by reviewing the state-of-the-art approaches proposed in recent work in this field. Methods The main focus of this study is recent developments in classification and segmentation approaches for image-based COVID-19 detection. The study reviews 140 research papers published in different academic research databases. These papers were screened and filtered based on specified criteria to acquire insights pertinent to image-based COVID-19 detection. Results The methods discussed in this review cover different imaging modalities, predominantly X-rays and CT scans, used for both classification and segmentation tasks. The review categorizes and discusses the deep learning and machine learning architectures employed for these tasks, organized by the imaging modality utilized, and points to other architectures that could be proposed for better COVID-19 detection results. A detailed overview of emerging trends and breakthroughs in Artificial Intelligence-based COVID-19 detection is also provided. Conclusion This work concludes by stipulating the technical and non-technical challenges faced by researchers and illustrates the advantages of image-based COVID-19 detection with Artificial Intelligence techniques.
Collapse
Affiliation(s)
- R Karthik
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
| | - R Menaka
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai, India
| | - M Hariharan
- School of Computing Sciences and Engineering, Vellore Institute of Technology, Chennai, India
| | - G S Kathiresan
- School of Electronics Engineering, Vellore Institute of Technology, Chennai, India
| |
Collapse
|
32
|
Li MD, Arun NT, Aggarwal M, Gupta S, Singh P, Little BP, Mendoza DP, Corradi GC, Takahashi MS, Ferraciolli SF, Succi MD, Lang M, Bizzo BC, Dayan I, Kitamura FC, Kalpathy-Cramer J. Multi-population generalizability of a deep learning-based chest radiograph severity score for COVID-19. Medicine (Baltimore) 2022; 101:e29587. [PMID: 35866818 PMCID: PMC9302282 DOI: 10.1097/md.0000000000029587] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 04/21/2022] [Accepted: 04/28/2022] [Indexed: 01/04/2023] Open
Abstract
To tune and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from 4 test sets, including 3 from the United States (patients hospitalized at an academic medical center (N = 154), patients hospitalized at a community hospital (N = 113), and outpatients (N = 108)) and 1 from Brazil (patients at an academic medical center emergency department (N = 303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson R). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. Tuning the deep learning model with outpatient data showed high model performance in 2 United States hospitalized patient datasets (R = 0.88 and R = 0.90, compared to baseline R = 0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (R = 0.86 and R = 0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. A deep learning model that extracts a COVID-19 severity score on CXRs showed generalizable performance across multiple populations from 2 continents, including outpatients and hospitalized patients.
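Pearson R, the performance measure used above, can be computed directly from paired model and radiologist scores. A self-contained sketch with made-up scores (not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation between model scores and reference scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical PXS scores vs. radiologist reference scores:
model = [1.0, 2.5, 3.1, 4.8, 6.0]
reference = [1, 3, 3, 5, 6]
print(round(pearson_r(model, reference), 3))  # → 0.993
```

Values near the study's R = 0.85-0.90 indicate strong, but not perfect, linear agreement with the reference standard.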
Collapse
Affiliation(s)
- Matthew D. Li
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Nishanth T. Arun
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Mehak Aggarwal
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Sharut Gupta
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Praveer Singh
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Brent P. Little
- Division of Thoracic Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Dexter P. Mendoza
- Division of Thoracic Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | | | | | | | - Marc D. Succi
- Division of Emergency Radiology, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Min Lang
- Division of Thoracic Imaging and Intervention, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Bernardo C. Bizzo
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- MGH and BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
| | - Ittai Dayan
- MGH and BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
| | - Felipe C. Kitamura
- Diagnósticos da América SA (DASA), São Paulo, Brazil
- Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil
| | - Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- MGH and BWH Center for Clinical Data Science, Mass General Brigham, Boston, MA, USA
| |
Collapse
|
33
|
Chamberlin JH, Aquino G, Nance S, Wortham A, Leaphart N, Paladugu N, Brady S, Baird H, Fiegel M, Fitzpatrick L, Kocher M, Ghesu F, Mansoor A, Hoelzer P, Zimmermann M, James WE, Dennis DJ, Houston BA, Kabakus IM, Baruah D, Schoepf UJ, Burt JR. Automated diagnosis and prognosis of COVID-19 pneumonia from initial ER chest X-rays using deep learning. BMC Infect Dis 2022; 22:637. [PMID: 35864468 PMCID: PMC9301895 DOI: 10.1186/s12879-022-07617-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Accepted: 07/14/2022] [Indexed: 11/10/2022] Open
Abstract
Background Airspace disease as seen on chest X-rays is an important point of triage for patients initially presenting to the emergency department with suspected COVID-19 infection. The purpose of this study is to evaluate a previously trained interpretable deep learning algorithm for the diagnosis and prognosis of COVID-19 pneumonia from chest X-rays obtained in the ED. Methods This retrospective study included 2456 (50% RT-PCR positive for COVID-19) adult patients who received both a chest X-ray and a SARS-CoV-2 RT-PCR test from January 2020 to March 2021 in the emergency department at a single U.S. institution. A total of 2000 patients were included as an additional training cohort, and 456 patients formed the randomized internal holdout testing cohort for a previously trained Siemens AI-Radiology Companion deep learning convolutional neural network algorithm. Three cardiothoracic fellowship-trained radiologists systematically evaluated each chest X-ray and generated an airspace disease area-based severity score, which was compared against the same score produced by artificial intelligence. The interobserver agreement, diagnostic accuracy, and predictive capability for inpatient outcomes were assessed. The principal statistical tests used in this study include both univariate and multivariate logistic regression. Results Overall ICC was 0.820 (95% CI 0.790–0.840). The diagnostic AUC for SARS-CoV-2 RT-PCR positivity was 0.890 (95% CI 0.861–0.920) for the neural network and 0.936 (95% CI 0.918–0.960) for radiologists. The airspace opacities score by AI alone predicted ICU admission (AUC = 0.870) and mortality (AUC = 0.829) in all patients. Adding age and BMI to a multivariate logistic model improved mortality prediction (AUC = 0.906). Conclusion The deep learning algorithm provides an accurate and interpretable assessment of disease burden in COVID-19 pneumonia on chest radiographs. The reported severity scores correlate with expert assessment and accurately predict important clinical outcomes. The algorithm contributes additional prognostic information not currently incorporated into patient management.
Supplementary Information The online version contains supplementary material available at 10.1186/s12879-022-07617-7.
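The AUC values reported above have a direct rank interpretation (the Mann-Whitney formulation): AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, with ties counted as 1/2. A minimal sketch with hypothetical severity scores:

```python
def auc(scores, labels):
    """AUC as P(score_pos > score_neg), ties counted as 1/2."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical AI severity scores and ICU-admission labels:
severity = [2, 9, 5, 8, 1, 4]
icu_admitted = [0, 1, 0, 1, 0, 1]
print(auc(severity, icu_admitted))  # fraction of pos/neg pairs ranked correctly
```

An AUC of 0.870 for ICU admission therefore means the AI score ranks an admitted patient above a non-admitted one about 87% of the time.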
Collapse
Affiliation(s)
- Jordan H Chamberlin
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Gilberto Aquino
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Sophia Nance
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Andrew Wortham
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Nathan Leaphart
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Namrata Paladugu
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Sean Brady
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Henry Baird
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Matthew Fiegel
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Logan Fitzpatrick
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Madison Kocher
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | | | | | | | | | - W Ennis James
- Department of Internal Medicine, Division of Pulmonary, Critical Care, Allergy & Sleep Medicine, Medical University of South Carolina, Charleston, SC, USA
| | - D Jameson Dennis
- Department of Internal Medicine, Division of Pulmonary, Critical Care, Allergy & Sleep Medicine, Medical University of South Carolina, Charleston, SC, USA
| | - Brian A Houston
- Department of Internal Medicine, Division of Cardiology, Medical University of South Carolina, Charleston, SC, USA
| | - Ismail M Kabakus
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Dhiraj Baruah
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - U Joseph Schoepf
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
| | - Jeremy R Burt
- Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA; MUSC-ART, Cardiothoracic Imaging, 25 Courtenay Drive, MSC 226, 2nd Floor, Rm 2256, Charleston, SC, 29425, USA.
| |
Collapse
|
34
|
Akbar MN, Wang X, Erdogmus D, Dalal S. PENet: Continuous-Valued Pulmonary Edema Severity Prediction On Chest X-ray Using Siamese Convolutional Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:1834-1838. [PMID: 36086469 DOI: 10.1109/embc48229.2022.9871153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
For physicians to make rapid clinical decisions for patients with congestive heart failure, assessment of pulmonary edema severity on chest radiographs is vital. Although deep learning has shown promise in detecting the presence, absence, or discrete severity grades of such edema, predicting continuous-valued severity remains a challenge. Here, we propose PENet, a Siamese convolutional neural network that assesses the continuous spectrum of lung edema severity from chest radiographs. We present different modes of implementing this network and demonstrate that our best model outperforms earlier work (mean AUC of 0.91 versus 0.87), while using only 1/16th the input image dimension and 1/69th the training data size, thus also saving expensive computation.
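One way to read the Siamese idea (our interpretation for illustration, not the authors' code): a continuous severity score can be produced as the distance between an image's embedding and a pool of "normal" anchor embeddings, since the twin branches share weights and are trained so that embedding distance tracks severity. A toy sketch with 2-D embeddings:

```python
import math
from statistics import median

def euclidean(u, v):
    """Distance between two embeddings from the shared-weight branches."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def severity_score(embedding, normal_anchors):
    """Continuous severity as the median embedding distance to a pool of
    'normal' anchor images (toy 2-D embeddings; real ones are learned)."""
    return median(euclidean(embedding, a) for a in normal_anchors)

anchors = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1]]
mild, severe = [1.0, 0.0], [4.0, 3.0]
assert severity_score(severe, anchors) > severity_score(mild, anchors)
print(severity_score(mild, anchors))
```

The ordering (more abnormal images score higher) is what makes the output usable as a continuous severity measure rather than a discrete grade.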
Collapse
|
35
|
New Optimized Deep Learning Application for COVID-19 Detection in Chest X-ray Images. Symmetry (Basel) 2022. [DOI: 10.3390/sym14051003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Due to false negative results of the real-time Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) test, complementary practices such as computed tomography (CT) and X-ray in combination with RT-PCR are discussed to achieve a more accurate diagnosis of COVID-19 in clinical practice. Since radiology involves visual understanding as well as decision making under limited conditions such as uncertainty, urgency, patient burden, and hospital facilities, mistakes are inevitable. Therefore, there is an immediate need for further investigation and for new, accurate detection and identification methods that provide automatic quantitative evaluation of COVID-19. In this paper, we propose a new computer-aided diagnosis application for COVID-19 detection using deep learning techniques. A new technique, which receives symmetric X-ray data as input, is presented by combining Convolutional Neural Networks (CNN) with the Ant Lion Optimization Algorithm (ALO) and a Multiclass Naïve Bayes Classifier (NB). Moreover, several other classifiers, such as Softmax, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Decision Tree (DT), are combined with CNN. The promising results of these classifiers are evaluated and presented using accuracy, precision, and F1-score metrics. The NB classifier with the Ant Lion Optimization Algorithm and CNN produced the best results, with 98.31% accuracy, 100% precision, and a 98.25% F1-score, and with the lowest execution time.
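As background for the Naïve Bayes component, here is a minimal Gaussian NB on a single hypothetical CNN-derived feature. This is illustrative only; the paper's pipeline uses full CNN feature vectors with ALO-selected parameters, and all names and numbers below are ours:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(features, labels):
    """Per-class mean, variance, and prior for a 1-D feature."""
    by_class = defaultdict(list)
    for x, y in zip(features, labels):
        by_class[y].append(x)
    params, n = {}, len(features)
    for y, xs in by_class.items():
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) or 1e-9
        params[y] = (mu, var, len(xs) / n)
    return params

def predict(params, x):
    """argmax over classes of log p(y) + log N(x; mu_y, var_y)."""
    def log_post(y):
        mu, var, prior = params[y]
        return math.log(prior) - 0.5 * math.log(2 * math.pi * var) \
               - (x - mu) ** 2 / (2 * var)
    return max(params, key=log_post)

# Toy feature values for two classes:
params = fit_gaussian_nb([0.1, 0.2, 0.9, 1.1],
                         ["normal", "covid", "covid", "covid"][:1] * 0 or
                         ["normal", "normal", "covid", "covid"])
print(predict(params, 0.95))  # → covid
```

In the paper's setup, the CNN supplies the features and ALO tunes the hyperparameters; the NB step itself is this inexpensive closed-form fit, which is why it also achieved the lowest execution time.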
Collapse
|
36
|
O'Shea A, Li MD, Mercaldo ND, Balthazar P, Som A, Yeung T, Succi MD, Little BP, Kalpathy-Cramer J, Lee SI. Intubation and mortality prediction in hospitalized COVID-19 patients using a combination of convolutional neural network-based scoring of chest radiographs and clinical data. BJR Open 2022; 4:20210062. [PMID: 36105420 PMCID: PMC9459864 DOI: 10.1259/bjro.20210062] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Revised: 03/06/2022] [Accepted: 03/09/2022] [Indexed: 12/04/2022] Open
Abstract
Objective To predict short-term outcomes in hospitalized COVID-19 patients using a model incorporating clinical variables with automated convolutional neural network (CNN) chest radiograph analysis. Methods A retrospective single-center study was performed on patients consecutively admitted with COVID-19 between March 14 and April 21, 2020. Demographic, clinical, and laboratory data were collected, and automated CNN scoring of the admission chest radiograph was performed. The two outcomes of disease progression were intubation or death within 7 days and death within 14 days following admission. Multiple imputation was performed for missing predictor variables and, for each imputed data set, a penalized logistic regression model was constructed to identify predictors and their functional relationship to each outcome. Cross-validated areas under the receiver operating characteristic curve (AUC) were estimated to quantify the discriminative ability of each model. Results 801 patients (median age 59; interquartile range 46-73 years; 469 men) were evaluated. 36 patients were deceased and 207 were intubated at 7 days, and 65 were deceased at 14 days. Cross-validated AUC values for the predictive models were 0.82 (95% CI, 0.79-0.86) for death or intubation within 7 days and 0.82 (0.78-0.87) for death within 14 days. The automated CNN chest radiograph score was an important variable in predicting both outcomes. Conclusion Automated CNN chest radiograph analysis, in combination with clinical variables, predicts short-term intubation and death in patients hospitalized for COVID-19 infection. Chest radiograph scoring of more severe disease was associated with a greater probability of an adverse short-term outcome. Advances in knowledge Model-based predictions of intubation and death in COVID-19 can be performed with high discriminative performance using admission clinical data and convolutional neural network-based scoring of chest radiograph severity.
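The multiple-imputation workflow above fits one model per imputed data set; the standard way to combine those fits is Rubin's rules, pooling the estimates and inflating the variance for between-imputation spread. A sketch with hypothetical coefficient estimates (not the study's numbers):

```python
def pool_rubin(estimates, variances):
    """Rubin's rules: pool one coefficient fit on m imputed data sets.

    Returns the pooled estimate and its total variance
    (within-imputation mean + (1 + 1/m) * between-imputation variance).
    """
    m = len(estimates)
    q_bar = sum(estimates) / m
    u_bar = sum(variances) / m                              # within
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between
    return q_bar, u_bar + (1 + 1 / m) * b

# Hypothetical CXR-score coefficient from m = 3 imputed data sets:
est, total_var = pool_rubin([0.52, 0.55, 0.49], [0.01, 0.012, 0.011])
print(round(est, 3), round(total_var, 4))
```

The inflation term is what keeps uncertainty honest: if the imputations disagree, the pooled variance grows accordingly.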
Collapse
Affiliation(s)
- Aileen O'Shea
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Matthew D Li
- Department of Radiology and Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, United States
| | - Nathaniel D Mercaldo
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
| | - Patricia Balthazar
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Avik Som
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | | | - Marc D Succi
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Brent P Little
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, MGH and BWH Center for Clinical Data Science, Department of Radiology, Harvard Medical School, Boston, MA, United States
| | - Susanna I Lee
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| |
Collapse
|
37
|
Patel NJ, D'Silva KM, Li MD, Hsu TY, DiIorio M, Fu X, Cook C, Prisco L, Martin L, Vanni KM, Zaccardelli A, Zhang Y, Kalpathy‐Cramer J, Sparks JA, Wallace ZS. Assessing the Severity of COVID-19 Lung Injury in Rheumatic Diseases Versus the General Population Using Deep Learning-Derived Chest Radiograph Scores. Arthritis Care Res (Hoboken) 2022; 75:657-666. [PMID: 35313091 PMCID: PMC9081965 DOI: 10.1002/acr.24883] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Revised: 03/01/2022] [Accepted: 03/15/2022] [Indexed: 12/05/2022]
Abstract
OBJECTIVE COVID-19 patients with rheumatic disease have a higher risk of mechanical ventilation than the general population. The present study was undertaken to assess lung involvement using a validated deep learning algorithm that extracts a quantitative measure of radiographic lung disease severity. METHODS We performed a comparative cohort study of rheumatic disease patients with COVID-19 and ≥1 chest radiograph within ±2 weeks of COVID-19 diagnosis and matched comparators. We used unadjusted and adjusted (for age, Charlson comorbidity index, and interstitial lung disease) quantile regression to compare the maximum pulmonary x-ray severity (PXS) score at the 10th to 90th percentiles between groups. We evaluated the association of severe PXS score (>9) with mechanical ventilation and death using Cox regression. RESULTS We identified 70 patients with rheumatic disease and 463 general population comparators. Maximum PXS scores were similar in the rheumatic disease patients and comparators at the 10th to 60th percentiles but significantly higher among rheumatic disease patients at the 70th to 90th percentiles (90th percentile score of 10.2 versus 9.2; adjusted P = 0.03). Rheumatic disease patients were more likely to have a PXS score of >9 (20% versus 11%; P = 0.02), indicating severe pulmonary disease. Rheumatic disease patients with PXS scores >9 versus ≤9 had higher risk of mechanical ventilation (hazard ratio [HR] 24.1 [95% confidence interval (95% CI) 6.7, 86.9]) and death (HR 8.2 [95% CI 0.7, 90.4]). CONCLUSION Rheumatic disease patients with COVID-19 had more severe radiographic lung involvement than comparators. Higher PXS scores were associated with mechanical ventilation and will be important for future studies leveraging big data to assess COVID-19 outcomes in rheumatic disease patients.
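The percentile comparisons and the PXS > 9 threshold above reduce to simple order statistics. A sketch with hypothetical PXS scores (the data and function names are ours, not the study's):

```python
def percentile(values, p):
    """Linear-interpolation percentile (0 <= p <= 100)."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def share_above(values, cutoff):
    """Fraction of patients whose maximum PXS score exceeds the cutoff."""
    return sum(v > cutoff for v in values) / len(values)

# Hypothetical maximum PXS scores for a small rheumatic-disease group:
rheum = [3, 5, 7, 8, 9.5, 10, 10.5]
print(percentile(rheum, 90), share_above(rheum, 9))
```

Comparing such upper percentiles between groups (rather than means) is what lets the study detect that the distributions diverge only in their severe tail.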
Collapse
Affiliation(s)
- Naomi J. Patel
- Division of Rheumatology, Allergy, and Immunology, Massachusetts General Hospital, Boston, MA, USA
| | - Kristin M. D'Silva
- Division of Rheumatology, Allergy, and Immunology, Massachusetts General Hospital, Boston, MA, USA; Clinical Epidemiology Program, Mongan Institute, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
| | - Matthew D. Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
| | - Tiffany Y‐T. Hsu
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Michael DiIorio
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Xiaoqing Fu
- Division of Rheumatology, Allergy, and Immunology, Massachusetts General Hospital, Boston, MA, USA; Clinical Epidemiology Program, Mongan Institute, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
| | - Claire Cook
- Division of Rheumatology, Allergy, and Immunology, Massachusetts General Hospital, Boston, MA, USA; Clinical Epidemiology Program, Mongan Institute, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
| | - Lauren Prisco
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Lily Martin
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Kathleen M.M. Vanni
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Alessandra Zaccardelli
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Yuqing Zhang
- Division of Rheumatology, Allergy, and Immunology, Massachusetts General Hospital, Boston, MA, USA; Clinical Epidemiology Program, Mongan Institute, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
| | | | - Jeffrey A. Sparks
- Division of Rheumatology, Inflammation, and Immunity, Brigham and Women's Hospital, Boston, MA, USA
| | - Zachary S. Wallace
- Division of Rheumatology, Allergy, and Immunology, Massachusetts General Hospital, Boston, MA, USA; Clinical Epidemiology Program, Mongan Institute, Department of Medicine, Massachusetts General Hospital, Boston, MA, USA
| |
Collapse
|
38
|
Bai J, Jin A, Wang T, Yang C, Nabavi S. Feature fusion siamese network for breast cancer detection comparing current and prior mammograms. Med Phys 2022; 49:3654-3669. [PMID: 35271746 DOI: 10.1002/mp.15598] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 02/08/2022] [Accepted: 03/01/2022] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Automatic detection of very small and non-mass abnormalities from mammogram images has remained challenging. In clinical practice, radiologists commonly not only screen the mammogram images obtained during the examination but also compare them with previous mammogram images to make a clinical decision. To design an AI system that mimics radiologists for better cancer detection, in this work we proposed an end-to-end enhanced Siamese convolutional neural network to detect breast cancer using previous-year and current-year mammogram images. METHODS The proposed Siamese-based network uses high-resolution mammogram images and fuses features of pairs of previous-year and current-year mammogram images to predict cancer probabilities. The proposed approach builds on the concept of one-shot learning, learning the abnormal differences between current and prior images instead of abnormal objects, and as a result can perform better with small sample sizes. We developed two variants of the proposed network. In the first model, to fuse the features of current and previous images, we designed an enhanced distance learning network that considers not only the overall distance but also the pixel-wise distances between the features. In the other model, we concatenated the features of current and previous images to fuse them. RESULTS We compared the performance of the proposed models with those of baseline models that use current images only (ResNet and VGG) and models that use current and prior images (LSTM and vanilla Siamese), in terms of accuracy, sensitivity, precision, F1 score, and AUC. Results show that the proposed models outperform the baseline models, and the proposed model with the distance learning network performs best (accuracy: 0.92, sensitivity: 0.93, precision: 0.91, specificity: 0.91, F1: 0.92, and AUC: 0.95). CONCLUSIONS Integrating prior mammogram images improves automatic cancer classification, especially for very small and non-mass abnormalities. For classification models that integrate current and prior mammogram images, using an enhanced and effective distance learning network can advance model performance.
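A minimal sketch of the fusion idea described above: combine per-element ("pixel-wise") feature differences between the two exams with the overall distance before handing the result to a classification head. Names and shapes are illustrative assumptions, not the paper's architecture:

```python
import math

def fuse_features(current, prior):
    """Fuse current- and prior-exam feature vectors for change detection:
    per-element absolute differences (the 'pixel-wise' distances) with
    the overall Euclidean distance appended as one extra value."""
    diffs = [abs(c - p) for c, p in zip(current, prior)]
    overall = math.sqrt(sum(d * d for d in diffs))
    return diffs + [overall]

# Toy feature vectors: only the first component changed between exams.
current_feat = [0.9, 0.1, 0.4]
prior_feat = [0.2, 0.1, 0.4]
print(fuse_features(current_feat, prior_feat))
```

Keeping the per-element differences (not just the scalar distance) is what lets a downstream classifier localize *where* the two exams diverge, which matters for small, non-mass changes.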
Affiliation(s)
- Jun Bai, Annie Jin, Tianyu Wang, Clifford Yang, Sheida Nabavi
- All authors: Department of Computer Science and Engineering, University of Connecticut, 371 Fairfield Way, Storrs, CT 06269, USA; University of Connecticut School of Medicine, 263 Farmington Ave., Farmington, CT 06030, USA; Department of Radiology, UConn Health, 263 Farmington Ave., Farmington, CT 06030, USA
39
Hurt B, Rubel MA, Masutani EM, Jacobs K, Hahn L, Horowitz M, Kligerman S, Hsiao A. Radiologist-supervised Transfer Learning: Improving Radiographic Localization of Pneumonia and Prognostication of Patients With COVID-19. J Thorac Imaging 2022; 37:90-99. [PMID: 34710891 PMCID: PMC8863580 DOI: 10.1097/rti.0000000000000618] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE To assess the potential of a transfer learning strategy leveraging radiologist supervision to enhance convolutional neural network (CNN)-based localization of pneumonia on radiographs, and to further assess the prognostic value of CNN severity quantification in patients evaluated for COVID-19 pneumonia, for whom severity on the presenting radiograph is a known predictor of mortality and intubation. MATERIALS AND METHODS We obtained an initial CNN previously trained to localize pneumonia, along with the 25,684 radiographs used for its training. We additionally curated 1466 radiographs from patients who had a computed tomography (CT) performed on the same day. Regional likelihoods of pneumonia were then annotated by cardiothoracic radiologists, referencing these CTs. Combining these data, the preexisting CNN was fine-tuned using transfer learning. Whole-image and regional performance of the updated CNN was assessed using receiver operating characteristic area under the curve and the Dice coefficient. Finally, the value of the CNN measurements was assessed with survival analysis in 203 patients with COVID-19 and compared against the modified radiographic assessment of lung edema (mRALE) score. RESULTS Pneumonia detection area under the curve improved on both internal (0.756 to 0.841) and external (0.864 to 0.876) validation data. Dice overlap also improved, particularly in the lung bases (R: 0.121 to 0.433; L: 0.111 to 0.486). There was strong correlation between radiologist mRALE score and CNN fractional area of involvement (ρ = 0.85). Survival analysis showed similar, strong prognostic ability of the CNN and mRALE for mortality, likelihood of intubation, and duration of hospitalization among patients with COVID-19. CONCLUSIONS Radiologist-supervised transfer learning can enhance the ability of CNNs to localize and quantify the severity of disease. Closed-loop systems incorporating radiologists may be beneficial for the continued improvement of artificial intelligence algorithms.
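The "fractional area of involvement" that correlated with mRALE (ρ = 0.85) is, at heart, the share of lung pixels the CNN flags as pneumonia. A minimal sketch, assuming a per-pixel probability map and a binary lung mask (both hypothetical inputs, not the authors' code):

```python
import numpy as np

def fractional_area_of_involvement(prob_map, lung_mask, threshold=0.5):
    """Fraction of lung-mask pixels whose pneumonia likelihood exceeds a
    threshold -- one simple way to reduce a CNN's regional output to a
    scalar severity score."""
    lung = lung_mask.astype(bool)
    involved = (prob_map >= threshold) & lung
    return involved.sum() / lung.sum()

# Toy example: an 8-pixel "lung" in which 2 pixels exceed the threshold.
prob = np.zeros((4, 4))
prob[0, 0], prob[0, 1] = 0.9, 0.7
mask = np.zeros((4, 4))
mask[0:2, :] = 1
fai = fractional_area_of_involvement(prob, mask)  # 2 of 8 lung pixels
```

A scalar of this kind is what the survival analysis above compares against the radiologist-assigned mRALE score.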
Affiliation(s)
- Brian Hurt, Meagan A Rubel, Evan M Masutani, Kathleen Jacobs, Lewis Hahn, Michael Horowitz, Seth Kligerman, Albert Hsiao
- All authors: Department of Radiology, University of California San Diego School of Medicine
- Evan M Masutani also: Department of Bioengineering, University of California, San Diego, San Diego, CA
40
Yamada D, Ohde S, Imai R, Ikejima K, Matsusako M, Kurihara Y. Visual classification of three computed tomography lung patterns to predict prognosis of COVID-19: a retrospective study. BMC Pulm Med 2022; 22:1. [PMID: 34980061 PMCID: PMC8721943 DOI: 10.1186/s12890-021-01813-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Accepted: 12/22/2021] [Indexed: 01/10/2023] Open
Abstract
BACKGROUND Quantitative evaluation of radiographic images has been developed and suggested for the diagnosis of coronavirus disease 2019 (COVID-19). However, there are limited opportunities to use these image-based diagnostic indices in clinical practice. Our aim in this study was to evaluate the utility of a novel visually-based classification of pulmonary findings from computed tomography (CT) images of COVID-19 patients, with the following three patterns defined: peripheral, multifocal, and diffuse findings of pneumonia. We also evaluated the prognostic value of this classification to predict the severity of COVID-19. METHODS This was a single-center retrospective cohort study of patients hospitalized with COVID-19 between January 1st and September 30th, 2020, who presented with suspicious findings on CT lung images at admission (n = 69). We compared the association between the three predefined patterns (peripheral, multifocal, and diffuse), admission to the intensive care unit, tracheal intubation, and death. We tested quantitative CT analysis as an outcome predictor for COVID-19. Quantitative CT analysis was performed using a semi-automated method (Thoracic Volume Computer-Assisted Reading software, GE Healthcare, United States). Lung volumes were partitioned into Hounsfield unit (HU) intervals. Compromised lung (%CL) volume was the sum of the poorly aerated and non-aerated volumes (-500 to 100 HU). We collected patient clinical data, including demographic and clinical variables at the time of admission. RESULTS Patients with a diffuse pattern were intubated more frequently and for a longer duration than patients with a peripheral or multifocal pattern.
The following clinical variables were significantly different between the diffuse-pattern group and the peripheral and multifocal groups: body temperature (p = 0.04), lymphocyte count (p = 0.01), neutrophil count (p = 0.02), C-reactive protein (p < 0.01), lactate dehydrogenase (p < 0.01), Krebs von den Lungen-6 antigen (p < 0.01), D-dimer (p < 0.01), and steroid (p = 0.01) and favipiravir (p = 0.03) administration. CONCLUSIONS Our simple visual assessment of CT images can predict the severity of illness, a resulting decrease in respiratory function, and the need for supplemental respiratory ventilation among patients with COVID-19.
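The compromised-lung metric is simple to reproduce: count the lung voxels whose attenuation falls in the poorly/non-aerated range the study uses (-500 to 100 HU). A sketch under those assumptions (function name and toy voxel values are illustrative, not taken from the study):

```python
import numpy as np

def percent_compromised_lung(lung_hu, lo=-500, hi=100):
    """Percent of segmented-lung voxels in the poorly aerated plus
    non-aerated attenuation range (lo, hi], in Hounsfield units."""
    hu = np.asarray(lung_hu, dtype=float)
    compromised = (hu > lo) & (hu <= hi)
    return 100.0 * compromised.sum() / hu.size

# Toy segmented-lung voxels: four fall inside the (-500, 100] HU range.
voxels = [-900, -850, -700, -600, -450, -300, -100, 50, 200, 400]
pcl = percent_compromised_lung(voxels)
```

In practice the HU values would come from a lung segmentation of the CT volume; here a short list stands in for them.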
Affiliation(s)
- Daisuke Yamada, Kengo Ikejima, Masaki Matsusako, Yasuyuki Kurihara: Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-cho, Chuo-ku, Tokyo, 104-8560, Japan
- Sachiko Ohde: Graduate School of Public Health, St. Luke's International University, 9-1 Akashi-cho, Chuo-ku, Tokyo, 104-8560, Japan
- Ryosuke Imai: Department of Pulmonary Medicine, Thoracic Center, St. Luke's International Hospital, 9-1 Akashi-cho, Chuo-ku, Tokyo, 104-8560, Japan
41
Radiographic Assessment of Lung Edema (RALE) score and its association with clinical outcomes in acute respiratory distress syndrome in Colombia. Acta Colombiana de Cuidado Intensivo 2022. [PMCID: PMC8746788 DOI: 10.1016/j.acci.2021.12.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
42
Park S, Kim G, Oh Y, Seo JB, Lee SM, Kim JH, Moon S, Lim JK, Ye JC. Multi-task vision transformer using low-level chest X-ray feature corpus for COVID-19 diagnosis and severity quantification. Med Image Anal 2022; 75:102299. [PMID: 34814058 PMCID: PMC8566090 DOI: 10.1016/j.media.2021.102299] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 10/29/2021] [Accepted: 11/02/2021] [Indexed: 12/11/2022]
Abstract
Developing a robust algorithm to diagnose and quantify the severity of the novel coronavirus disease 2019 (COVID-19) using chest X-ray (CXR) images requires a large number of well-curated COVID-19 datasets, which are difficult to collect under the global COVID-19 pandemic. On the other hand, CXR data with other findings are abundant. This situation is well suited to the Vision Transformer (ViT) architecture, where large amounts of unlabeled data can be exploited through structural modeling by the self-attention mechanism. However, the use of existing ViT may not be optimal, as the feature embedding by direct patch flattening or a ResNet backbone in the standard ViT is not intended for CXR. To address this problem, here we propose a novel multi-task ViT that leverages a low-level CXR feature corpus obtained from a backbone network that extracts common CXR findings. Specifically, the backbone network is first trained with large public datasets to detect common abnormal findings such as consolidation, opacity, and edema. Then, the embedded features from the backbone network are used as corpora for a versatile Transformer model for both the diagnosis and the severity quantification of COVID-19. We evaluate our model on various external test datasets from entirely different institutions to assess its generalization capability. The experimental results confirm that our model can achieve state-of-the-art performance in both diagnosis and severity quantification tasks, with outstanding generalization capability, which is a sine qua non of widespread deployment.
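The core mechanism that lets the Transformer pool the backbone's low-level feature corpus is self-attention over feature tokens. A minimal single-head version in NumPy, purely illustrative; the paper's model is far larger and multi-task:

```python
import numpy as np

def self_attention(tokens, Wq, Wk, Wv):
    """Single-head self-attention: every token attends to every other,
    so evidence of a finding anywhere in the feature corpus can
    influence the final diagnosis or severity prediction."""
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V

# Three identical 4-dim feature tokens with identity projections:
# attention is uniform, so the output equals the input tokens.
tokens = np.ones((3, 4))
out = self_attention(tokens, np.eye(4), np.eye(4), np.eye(4))
```

In the paper's setting the tokens would be the backbone's embedded CXR features rather than raw image patches, which is the point of the low-level feature corpus.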
Affiliation(s)
- Sangjoon Park, Gwanghyun Kim, Yujin Oh, Jong Chul Ye: Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Joon Beom Seo, Sang Min Lee: Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Jin Hwan Kim: College of Medicine, Chungnam National University, Daejeon, South Korea
- Sungjun Moon: College of Medicine, Yeungnam University, Daegu, South Korea
- Jae-Kwang Lim: School of Medicine, Kyungpook National University, Daegu, South Korea
43
Xu GX, Liu C, Liu J, Ding Z, Shi F, Guo M, Zhao W, Li X, Wei Y, Gao Y, Ren CX, Shen D. Cross-Site Severity Assessment of COVID-19 From CT Images via Domain Adaptation. IEEE Trans Med Imaging 2022; 41:88-102. [PMID: 34383647 PMCID: PMC8905616 DOI: 10.1109/tmi.2021.3104474] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 07/26/2021] [Accepted: 08/08/2021] [Indexed: 06/13/2023]
Abstract
Early and accurate severity assessment of coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images can greatly aid the prediction of intensive care unit events and clinical decisions on treatment planning. To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites. This task faces several challenges, including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and the presence of heterogeneous features. In this paper, we propose a novel domain adaptation (DA) method with two components to address these problems. The first component is a stochastic class-balanced boosting sampling strategy that overcomes the imbalanced learning problem and improves classification performance on poorly predicted classes. The second component is a representation learning method that guarantees three properties: 1) domain transferability via a prototype triplet loss, 2) discriminability via a conditional maximum mean discrepancy loss, and 3) completeness via a multi-view reconstruction loss. In particular, we propose a domain translator and align the heterogeneous data to the estimated class prototypes (i.e., class centers) in a hyper-sphere manifold. Experiments on cross-site severity assessment of COVID-19 from CT images show that the proposed method can effectively tackle the imbalanced learning problem and outperform recent DA approaches.
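The first component, stochastic class-balanced sampling, can be illustrated independently of the paper's full pipeline. A minimal sketch with a hypothetical function name; the actual "boosting" weighting of samples by prediction error is omitted here:

```python
import random

def class_balanced_batch(labels, batch_size, rng=None):
    """Sample a batch with equal counts per class by drawing (with
    replacement) from each class's index pool, so a rare 'severe' class
    is seen as often as the common 'mild' class."""
    rng = rng or random.Random(0)
    pools = {}
    for idx, y in enumerate(labels):
        pools.setdefault(y, []).append(idx)
    per_class = batch_size // len(pools)
    batch = []
    for idxs in pools.values():
        batch.extend(rng.choices(idxs, k=per_class))
    return batch

# 90 mild (label 0) vs 10 severe (label 1): the batch is still 50/50.
labels = [0] * 90 + [1] * 10
batch = class_balanced_batch(labels, batch_size=10)
```

A boosting variant would additionally weight each index's draw probability by how poorly the current model predicts it, which is the direction the paper's strategy takes.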
Affiliation(s)
- Geng-Xin Xu, Chuan-Xian Ren: School of Mathematics, Sun Yat-sen University, Guangzhou 510275, China
- Chen Liu, Man Guo, Xiaoming Li: Department of Radiology, Southwest Hospital, Third Military Medical University (Army Medical University), Chongqing 400038, China
- Jun Liu, Wei Zhao: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Jun Liu also: Department of Radiology Quality Control Center, Changsha, Hunan 410011, China
- Zhongxiang Ding: Department of Radiology, Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou 310027, China
- Feng Shi, Ying Wei, Yaozong Gao, Dinggang Shen: Department of Research and Development, Shanghai United Imaging Intelligence Co. Ltd., Shanghai 200232, China
- Chuan-Xian Ren also: Pazhou Lab, Guangzhou 510330, China; Key Laboratory of Machine Intelligence and Advanced Computing (Sun Yat-sen University), Ministry of Education, Guangzhou 510275, China
- Dinggang Shen also: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
44
Arun N, Gaw N, Singh P, Chang K, Aggarwal M, Chen B, Hoebel K, Gupta S, Patel J, Gidwani M, Adebayo J, Li MD, Kalpathy-Cramer J. Assessing the Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging. Radiol Artif Intell 2021; 3:e200267. [PMID: 34870212 PMCID: PMC8637231 DOI: 10.1148/ryai.2021200267] [Citation(s) in RCA: 65] [Impact Index Per Article: 21.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2020] [Revised: 09/13/2021] [Accepted: 09/20/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To evaluate the trustworthiness of saliency maps for abnormality localization in medical imaging. MATERIALS AND METHODS Using two large publicly available radiology datasets (the Society for Imaging Informatics in Medicine-American College of Radiology Pneumothorax Segmentation dataset and the Radiological Society of North America Pneumonia Detection Challenge dataset), the performance of eight commonly used saliency map techniques was quantified in regard to (a) localization utility (segmentation and detection), (b) sensitivity to model weight randomization, (c) repeatability, and (d) reproducibility. Their performance was compared against baseline methods and localization network architectures, using area under the precision-recall curve (AUPRC) and the structural similarity index measure (SSIM) as metrics. RESULTS All eight saliency map techniques failed at least one of the criteria and were inferior in performance compared with localization networks. For pneumothorax segmentation, the AUPRC ranged from 0.024 to 0.224, while a U-Net achieved a significantly superior AUPRC of 0.404 (P < .005). For pneumonia detection, the AUPRC ranged from 0.160 to 0.519, while a RetinaNet achieved a significantly superior AUPRC of 0.596 (P < .005). Five and two of the eight saliency methods failed the model randomization test on the segmentation and detection datasets, respectively, suggesting that these methods are not sensitive to changes in model parameters. The repeatability and reproducibility of the majority of the saliency methods were worse than those of localization networks for both the segmentation and detection datasets.
CONCLUSION The use of saliency maps in the high-risk domain of medical imaging warrants additional scrutiny, and we recommend that detection or segmentation models be used if localization is the desired output of the network. Keywords: Technology Assessment, Technical Aspects, Feature Detection, Convolutional Neural Network (CNN). Supplemental material is available for this article. © RSNA, 2021.
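The localization-utility test is concrete: rank pixels by saliency and score them against the ground-truth mask with the area under the precision-recall curve. A small average-precision estimate in NumPy, offered as an illustration of the metric rather than the study's evaluation code:

```python
import numpy as np

def average_precision(saliency, mask):
    """AUPRC estimate: sort pixels by descending saliency and average
    the precision observed at each true-positive pixel."""
    order = np.argsort(-saliency.ravel())
    hits = mask.ravel()[order].astype(float)
    cum_hits = np.cumsum(hits)
    precision = cum_hits / (np.arange(hits.size) + 1)
    return float((precision * hits).sum() / hits.sum())

# A saliency map that ranks the single abnormal pixel first scores 1.0;
# burying it at the bottom of the ranking drives the score down.
mask = np.array([[1, 0], [0, 0]])
good = np.array([[0.9, 0.1], [0.2, 0.3]])
bad = np.array([[0.1, 0.9], [0.8, 0.7]])
ap_good = average_precision(good, mask)
ap_bad = average_precision(bad, mask)
```

Comparing such scores for saliency maps against those of a trained U-Net or RetinaNet is what exposes the gap the study reports.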
Affiliation(s)
- Praveer Singh, Ken Chang, Mehak Aggarwal, Bryan Chen, Katharina Hoebel, Sharut Gupta, Jay Patel, Mishka Gidwani, Julius Adebayo, Matthew D. Li, Jayashree Kalpathy-Cramer
- From the Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Boston, MA 02129 (N.A., P.S., K.C., M.A., B.C., K.H., S.G., J.P., M.G., M.D.L., J.K.C.); Department of Computer Science, Shiv Nadar University, Greater Noida, India (N.A.); Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, Wright-Patterson AFB, Dayton, Ohio (N.G.); and Massachusetts Institute of Technology, Cambridge, Mass (K.C., B.C., K.H., J.P., J.A.)
45
Lee S, Summers RM. Clinical Artificial Intelligence Applications in Radiology: Chest and Abdomen. Radiol Clin North Am 2021; 59:987-1002. [PMID: 34689882 DOI: 10.1016/j.rcl.2021.07.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
Organ segmentation, chest radiograph classification, and lung and liver nodule detection are some of the most popular artificial intelligence (AI) tasks in chest and abdominal radiology because of the wide availability of public datasets. AI algorithms have achieved performance comparable to that of humans, in less time, for several organ segmentation tasks and some lesion detection and classification tasks. This article reviews currently published work on AI applied to chest and abdominal radiology, including organ segmentation, lesion detection, classification, and prognosis prediction.
Affiliation(s)
- Sungwon Lee, Ronald M Summers
- Both authors: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10, Room 1C224D, 10 Center Drive, Bethesda, MD 20892-1182, USA
46
Wang T, Chen Z, Shang Q, Ma C, Chen X, Xiao E. A Promising and Challenging Approach: Radiologists' Perspective on Deep Learning and Artificial Intelligence for Fighting COVID-19. Diagnostics (Basel) 2021; 11:diagnostics11101924. [PMID: 34679622 PMCID: PMC8534829 DOI: 10.3390/diagnostics11101924] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 10/10/2021] [Accepted: 10/14/2021] [Indexed: 12/23/2022] Open
Abstract
Chest X-ray (CXR) and computed tomography (CT) are the main medical imaging modalities deployed against the increased worldwide spread of the 2019 coronavirus disease (COVID-19) epidemic. Machine learning (ML) and artificial intelligence (AI) technologies, which can fully extract and utilize the hidden information in massive medical imaging data, have been used in COVID-19 research for disease diagnosis and classification, treatment decision-making, efficacy evaluation, and prognosis prediction. This review article describes the extensive research on medical image-based ML and AI methods for preventing and controlling COVID-19, and summarizes their characteristics, differences, and significance in terms of application direction, image collection, and algorithm improvement, from the perspective of radiologists. The limitations and challenges faced by these systems and technologies, such as generalization and robustness, are discussed to indicate future research directions.
Affiliation(s)
- Tianming Wang, Zhu Chen, Quanliang Shang, Cong Ma, Xiangyu Chen, Enhua Xiao: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha 410011, China
- Tianming Wang also: Department of Radiology, Xiangya Hospital, Central South University, Changsha 410008, China
- Enhua Xiao also: Molecular Imaging Research Center, Central South University, Changsha 410008, China (corresponding author)
47
Radiology Implementation Considerations for Artificial Intelligence (AI) Applied to COVID-19, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2021; 219:15-23. [PMID: 34612681 DOI: 10.2214/ajr.21.26717] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Hundreds of imaging-based artificial intelligence (AI) models have been developed in response to the COVID-19 pandemic. AI systems that incorporate imaging have shown promise in primary detection, severity grading, and prognostication of outcomes in COVID-19, and have enabled integration of imaging with a broad range of additional clinical and epidemiologic data. However, systematic reviews of AI models applied to COVID-19 medical imaging have highlighted problems in the field, including methodologic issues and problems in real-world deployment. Clinical use of such models should be informed by both the promise and potential pitfalls of implementation. How does a practicing radiologist make sense of this complex topic, and what factors should be considered in the implementation of AI tools for imaging of COVID-19? This critical review aims to help the radiologist understand the nuances that impact the clinical deployment of AI for imaging of COVID-19. We review imaging use cases for AI models in COVID-19 (e.g., diagnosis, severity assessment, and prognostication) and explore considerations for AI model development and testing, deployment infrastructure, clinical user interfaces, quality control, and institutional review board and regulatory approvals, with a practical focus on what a radiologist should consider when implementing an AI tool for COVID-19.
48
Little BP. Disease Severity Scoring for COVID-19: A Welcome (Semi)Quantitative Role for Chest Radiography. Radiology 2021; 302:470-472. [PMID: 34519581 PMCID: PMC8451247 DOI: 10.1148/radiol.2021212212] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Affiliation(s)
- Brent P Little
- Department of Radiology, Division of Cardiothoracic Imaging, Mayo Clinic Florida, 4500 San Pablo Road, Jacksonville, FL 32224
49
Rehouma R, Buchert M, Chen YP. Machine learning for medical imaging-based COVID-19 detection and diagnosis. INT J INTELL SYST 2021; 36:5085-5115. [PMID: 38607786 PMCID: PMC8242401 DOI: 10.1002/int.22504] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 04/15/2021] [Accepted: 05/11/2021] [Indexed: 12/15/2022]
Abstract
The novel coronavirus disease 2019 (COVID-19) is considered to be a significant health challenge worldwide because of its rapid human-to-human transmission, leading to a rise in the number of infected people and deaths. The detection of COVID-19 at the earliest stage is therefore of paramount importance for controlling the pandemic spread and reducing the mortality rate. The real-time reverse transcription-polymerase chain reaction, the primary method of diagnosis for coronavirus infection, has a relatively high false negative rate while detecting early stage disease. Meanwhile, the manifestations of COVID-19, as seen through medical imaging methods such as computed tomography (CT), radiograph (X-ray), and ultrasound imaging, show individual characteristics that differ from those of healthy cases or other types of pneumonia. Machine learning (ML) applications for COVID-19 diagnosis, detection, and the assessment of disease severity based on medical imaging have gained considerable attention. Herein, we review the recent progress of ML in COVID-19 detection with a particular focus on ML models using CT and X-ray images published in high-ranking journals, including a discussion of the predominant features of medical imaging in patients with COVID-19. Deep Learning algorithms, particularly convolutional neural networks, have been utilized widely for image segmentation and classification to identify patients with COVID-19 and many ML modules have achieved remarkable predictive results using datasets with limited sample sizes.
Affiliation(s)
- Rokaya Rehouma
- School of Cancer Medicine, La Trobe University, Melbourne, Victoria, Australia
- Michael Buchert
- School of Cancer Medicine, La Trobe University, Melbourne, Victoria, Australia
- Tumour Microenvironment and Cancer Signaling Group, Olivia Newton‐John Cancer Research Institute, Melbourne, Victoria, Australia
- Yi‐Ping Phoebe Chen
- Department of Computer Science and Information Technology, La Trobe University, Melbourne, Victoria, Australia
50
Gibson LE, Di Fenza R, Lang M, Capriles MI, Li MD, Kalpathy-Cramer J, Little BP, Arora P, Mueller AL, Ichinose F, Bittner EA, Berra L, Chang MG. Right Ventricular Strain Is Common in Intubated COVID-19 Patients and Does Not Reflect Severity of Respiratory Illness. J Intensive Care Med 2021; 36:900-909. [PMID: 33783269 PMCID: PMC8267080 DOI: 10.1177/08850666211006335] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 03/02/2021] [Accepted: 03/11/2021] [Indexed: 12/28/2022]
Abstract
BACKGROUND Right ventricular (RV) dysfunction is common and associated with worse outcomes in patients with coronavirus disease 2019 (COVID-19). In non-COVID-19 acute respiratory distress syndrome, RV dysfunction develops due to pulmonary hypoxic vasoconstriction, inflammation, and alveolar overdistension or atelectasis. Although similar pathogenic mechanisms may induce RV dysfunction in COVID-19, other COVID-19-specific pathology, such as pulmonary endothelialitis, thrombosis, or myocarditis, may also affect RV function. We quantified RV dysfunction by echocardiographic strain analysis and investigated its correlation with disease severity, ventilatory parameters, biomarkers, and imaging findings in critically ill COVID-19 patients. METHODS We determined RV free wall longitudinal strain (FWLS) in 32 patients receiving mechanical ventilation for COVID-19-associated respiratory failure. Demographics, comorbid conditions, ventilatory parameters, medications, and laboratory findings were extracted from the medical record. Chest imaging was assessed to determine the severity of lung disease and the presence of pulmonary embolism. RESULTS Abnormal FWLS was present in 66% of mechanically ventilated COVID-19 patients and was associated with higher lung compliance (39.6 vs 29.4 mL/cmH2O, P = 0.016), lower airway plateau pressures (21 vs 24 cmH2O, P = 0.043), lower tidal volume ventilation (5.74 vs 6.17 cc/kg, P = 0.031), and reduced left ventricular function. FWLS correlated negatively with age (r = -0.414, P = 0.018) and positively with serum troponin (r = 0.402, P = 0.034). Patients with abnormal RV strain did not exhibit decreased oxygenation or increased disease severity based on inflammatory markers, vasopressor requirements, or chest imaging findings. CONCLUSIONS RV dysfunction is common among critically ill COVID-19 patients and is not related to abnormal lung mechanics or ventilatory pressures. Instead, patients with abnormal FWLS had more favorable lung compliance. RV dysfunction may be secondary to diffuse intravascular micro- and macro-thrombosis or direct myocardial damage. TRIAL REGISTRATION National Institutes of Health #NCT04306393. Registered 10 March 2020, https://clinicaltrials.gov/ct2/show/NCT04306393.
Affiliation(s)
- Lauren E. Gibson
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Raffaele Di Fenza
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Min Lang
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Martin I. Capriles
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Matthew D. Li
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Brent P. Little
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Pankaj Arora
- Division of Cardiovascular Disease, University of Alabama at Birmingham, Birmingham, AL, USA
- Ariel L. Mueller
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Fumito Ichinose
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Edward A. Bittner
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Lorenzo Berra
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- Marvin G. Chang
- Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA