1. Hindocha S, Hunter B, Linton-Reid K, George Charlton T, Chen M, Logan A, Ahmed M, Locke I, Sharma B, Doran S, Orton M, Bunce C, Power D, Ahmad S, Chan K, Ng P, Toshner R, Yasar B, Conibear J, Murphy R, Newsom-Davis T, Goodley P, Evison M, Yousaf N, Bitar G, McDonald F, Blackledge M, Aboagye E, Lee R. Validated machine learning tools to distinguish immune checkpoint inhibitor, radiotherapy, COVID-19 and other infective pneumonitis. Radiother Oncol 2024; 195:110266. [PMID: 38582181] [DOI: 10.1016/j.radonc.2024.110266]
Abstract
BACKGROUND Pneumonitis is a well-described, potentially disabling, or fatal adverse effect associated with both immune checkpoint inhibitors (ICI) and thoracic radiotherapy. Accurate differentiation between checkpoint inhibitor pneumonitis (CIP), radiation pneumonitis (RP), and infective pneumonitis (IP) is crucial for swift, appropriate, and tailored management to achieve optimal patient outcomes. However, correct diagnosis is often challenging, owing to overlapping clinical presentations and radiological patterns. METHODS In this multi-centre study of 455 patients, we used machine learning with radiomic features extracted from chest CT imaging to develop and validate five models to distinguish CIP and RP from COVID-19, non-COVID-19 infective pneumonitis, and each other. Model performance was compared to that of two radiologists. RESULTS Models to distinguish RP from COVID-19, CIP from COVID-19 and CIP from non-COVID-19 IP outperformed radiologists (test set AUCs of 0.92 vs 0.8 and 0.8; 0.68 vs 0.43 and 0.4; 0.71 vs 0.55 and 0.63, respectively). Models to distinguish RP from non-COVID-19 IP and CIP from RP were not superior to radiologists but demonstrated modest performance, with test set AUCs of 0.81 and 0.8 respectively. The CIP vs RP model performed less well on patients with prior exposure to both ICI and radiotherapy (AUC 0.54), though the radiologists also had difficulty distinguishing this test cohort (AUC 0.6 for both radiologists). CONCLUSION Our results demonstrate the potential utility of such tools as a second or concurrent reader to support oncologists, radiologists, and chest physicians in cases of diagnostic uncertainty. Further research is required for patients with exposure to both ICI and thoracic radiotherapy.
Affiliation(s)
- Sumeet Hindocha
- Early Diagnosis and Detection Centre, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK; Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Benjamin Hunter
- Early Diagnosis and Detection Centre, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Kristofer Linton-Reid
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Thomas George Charlton
- Guy's Cancer Centre, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, UK
- Mitchell Chen
- Department of Surgery and Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Andrew Logan
- Department of Surgery and Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Merina Ahmed
- Lung Unit, The Royal Marsden NHS Foundation Trust, Downs Road, Sutton SM2 5PT, UK
- Imogen Locke
- Lung Unit, The Royal Marsden NHS Foundation Trust, Downs Road, Sutton SM2 5PT, UK
- Bhupinder Sharma
- Department of Radiology, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Simon Doran
- Institute of Cancer Research NIHR Biomedical Research Centre, London, UK
- Matthew Orton
- Artificial Intelligence Imaging Hub, Royal Marsden NHS Foundation Trust, Downs Road, Sutton SM2 5PT, UK
- Catey Bunce
- Institute of Cancer Research NIHR Biomedical Research Centre, London, UK
- Danielle Power
- Department of Clinical Oncology, Imperial College Healthcare NHS Trust, Fulham Palace Road, London W6 8RF, UK
- Shahreen Ahmad
- Guy's Cancer Centre, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, UK
- Karen Chan
- Guy's Cancer Centre, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, UK
- Peng Ng
- Guy's Cancer Centre, Guy's and St Thomas' NHS Foundation Trust, Great Maze Pond, London SE1 9RT, UK
- Richard Toshner
- Interstitial Lung Disease Unit, St Bartholomew's Hospital, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, UK
- Binnaz Yasar
- Department of Clinical Oncology, St Bartholomew's Hospital, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, UK
- John Conibear
- Department of Clinical Oncology, St Bartholomew's Hospital, Barts Health NHS Trust, West Smithfield, London EC1A 7BE, UK
- Ravindhi Murphy
- Chelsea and Westminster Hospital, Chelsea and Westminster NHS Foundation Trust, 369 Fulham Road, London SW10 9NH, UK
- Tom Newsom-Davis
- Chelsea and Westminster Hospital, Chelsea and Westminster NHS Foundation Trust, 369 Fulham Road, London SW10 9NH, UK
- Patrick Goodley
- Lung Cancer & Thoracic Surgery Directorate, Wythenshawe Hospital, Manchester University NHS Foundation Trust, Greater Manchester, UK; Division of Immunology, Immunity to Infection & Respiratory Medicine, University of Manchester, Manchester, UK
- Matthew Evison
- Lung Cancer & Thoracic Surgery Directorate, Wythenshawe Hospital, Manchester University NHS Foundation Trust, Greater Manchester, UK
- Nadia Yousaf
- Lung Unit, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- George Bitar
- Department of Radiology, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Fiona McDonald
- Lung Unit, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Matthew Blackledge
- Radiotherapy and Imaging, Institute of Cancer Research, 123 Old Brompton Road, London SW7 3RP, UK
- Eric Aboagye
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Richard Lee
- Early Diagnosis and Detection Centre, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
2. Zhang Z, Wittenstein J. Advancing acute respiratory failure management through artificial intelligence: a call for thematic collection contributions. Intensive Care Med Exp 2024; 12:45. [PMID: 38713382] [PMCID: PMC11076423] [DOI: 10.1186/s40635-024-00629-4]
Affiliation(s)
- Zhongheng Zhang
- Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Jakob Wittenstein
- Department of Anesthesiology and Intensive Care Medicine, Pulmonary Engineering Group, University Hospital Carl Gustav Carus Dresden, TUD Dresden University of Technology, Dresden, Germany
3. Abad M, Casas-Roma J, Prados F. Generalizable disease detection using model ensemble on chest X-ray images. Sci Rep 2024; 14:5890. [PMID: 38467705] [PMCID: PMC10928229] [DOI: 10.1038/s41598-024-56171-6]
Abstract
In the realm of healthcare, the demand for swift and precise diagnostic tools has been steadily increasing. This study delves into a comprehensive performance analysis of three pre-trained convolutional neural network (CNN) architectures: ResNet50, DenseNet121, and Inception-ResNet-v2. To ensure the broad applicability of our approach, we curated a large-scale dataset comprising a diverse collection of chest X-ray images that included both positive and negative cases of COVID-19. The models' performance was evaluated using separate datasets for internal validation (from the same source as the training images) and external validation (from different sources). External validation uncovered a significant drop in network efficacy, registering a 10.66% reduction for ResNet50, a 36.33% decline for DenseNet121, and a 19.55% decrease for Inception-ResNet-v2 in terms of accuracy. The best results were obtained with DenseNet121, achieving the highest accuracy at 96.71% in internal validation, and Inception-ResNet-v2, attaining 76.70% accuracy in external validation. Furthermore, we introduced a model ensemble approach aimed at improving network performance when making inferences on images from diverse sources beyond their training data. The proposed method uses uncertainty-based weighting, calculating the entropy of each network's output to assign it an appropriate weight. Our results showcase the effectiveness of the ensemble method in enhancing accuracy up to 97.38% for internal validation and 81.18% for external validation, while maintaining a balanced ability to detect both positive and negative cases.
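The abstract describes the entropy-based weighting only verbally; as an illustration, here is a minimal sketch of one such scheme (the function names and the exact normalization are assumptions, not the authors' implementation):

```python
import math

def entropy(probs):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_weighted_ensemble(model_outputs):
    """Fuse per-model class-probability vectors, weighting
    low-entropy (confident) models more heavily."""
    n_classes = len(model_outputs[0])
    max_h = math.log(n_classes)  # entropy of a uniform distribution
    # Confidence weight per model: 1 - normalized entropy.
    weights = [1.0 - entropy(p) / max_h for p in model_outputs]
    total = sum(weights)
    if total == 0:  # all models maximally uncertain: fall back to the mean
        weights, total = [1.0] * len(model_outputs), float(len(model_outputs))
    return [
        sum(w * p[c] for w, p in zip(weights, model_outputs)) / total
        for c in range(n_classes)
    ]

# A confident model ([0.9, 0.1]) dominates a maximally uncertain one ([0.5, 0.5]).
fused = entropy_weighted_ensemble([[0.9, 0.1], [0.5, 0.5]])
```

In this toy case the uncertain model gets zero weight, so the fused output equals the confident model's prediction.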
Affiliation(s)
- Maider Abad
- Universitat Oberta de Catalunya, e-Health Center, Barcelona, Spain
- Jordi Casas-Roma
- Universitat Oberta de Catalunya, e-Health Center, Barcelona, Spain
- Department of Computer Science, Universitat Autònoma de Barcelona (UAB), Bellaterra, Spain
- Computer Vision Center (CVC), Universitat Autònoma de Barcelona (UAB), Bellaterra, Spain
- Ferran Prados
- Universitat Oberta de Catalunya, e-Health Center, Barcelona, Spain
- Queen Square MS Centre, Department of Neuroinflammation, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, London WC1N 3BG, UK
- Centre for Medical Image Computing (CMIC), Department of Medical Physics and Bioengineering, University College London, London WC1V 6LJ, UK
4. Wan G, Wu X, Zhang X, Sun H, Yu X. Development of a novel machine learning model based on laboratory and imaging indices to predict acute cardiac injury in cancer patients with COVID-19 infection: a retrospective observational study. J Cancer Res Clin Oncol 2023; 149:17039-17050. [PMID: 37747525] [DOI: 10.1007/s00432-023-05417-3]
Abstract
PURPOSE Due to the increased risk of acute cardiac injury (ACI) and poor prognosis in cancer patients with COVID-19 infection, our aim was to develop a novel and interpretable model for predicting ACI occurrence in cancer patients with COVID-19 infection. METHODS This retrospective observational study screened 740 cancer patients with COVID-19 infection from December 2022 to April 2023. The least absolute shrinkage and selection operator (LASSO) regression was used for the preliminary screening of the indices. To enhance the model accuracy, we introduced an alpha index to further screen and rank the indices based on their significance. Random forest (RF) was used to construct the prediction model. The Shapley Additive Explanation (SHAP) and Local Interpretable Model-Agnostic Explanation (LIME) methods were utilized to explain the model. RESULTS According to the inclusion criteria, 201 cancer patients with COVID-19, with 36 candidate indices, were included in the analysis. The top eight indices (albumin, lactate dehydrogenase, cystatin C, neutrophil count, creatine kinase isoenzyme, red blood cell distribution width, D-dimer and chest computed tomography) for predicting the occurrence of ACI in cancer patients with COVID-19 infection were included in the RF model. The model achieved an area under the curve (AUC) of 0.940, an accuracy of 0.866, a sensitivity of 0.750 and a specificity of 0.900. The calibration curve and decision curve analysis showed good calibration and clinical practicability. SHAP results demonstrated that albumin was the most important index for predicting the occurrence of ACI. LIME results showed that the model could predict the probability of ACI for each individual cancer patient infected with COVID-19. CONCLUSION We developed a novel machine-learning model that demonstrates high explainability and accuracy in predicting the occurrence of ACI in cancer patients with COVID-19 infection, using laboratory and imaging indices.
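The headline AUC of 0.940 has a probabilistic reading: the chance that a randomly chosen ACI case receives a higher model score than a randomly chosen non-case. A minimal sketch of that rank-based computation (a hypothetical helper, not the authors' code):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive scores higher than a
    random negative (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Toy example: 3 of 4 positive/negative pairs are ranked correctly.
auc = roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2])
```

This pairwise formulation is equivalent to integrating the ROC curve and makes clear why AUC is insensitive to any monotone rescaling of the scores.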
Affiliation(s)
- Guangcai Wan
- Department of Clinical Laboratory, Jilin Cancer Hospital, Changchun, 130012, China
- Xuefeng Wu
- Department of Clinical Laboratory, Jilin Cancer Hospital, Changchun, 130012, China
- Xiaowei Zhang
- Department of Clinical Laboratory, Jilin Cancer Hospital, Changchun, 130012, China
- Hongshuai Sun
- Department of Clinical Laboratory, Jilin Cancer Hospital, Changchun, 130012, China
- Xiuyan Yu
- Department of Clinical Laboratory, Jilin Cancer Hospital, Changchun, 130012, China
5. Henao JAG, Depotter A, Bower DV, Bajercius H, Todorova PT, Saint-James H, de Mortanges AP, Barroso MC, He J, Yang J, You C, Staib LH, Gange C, Ledda RE, Caminiti C, Silva M, Cortopassi IO, Dela Cruz CS, Hautz W, Bonel HM, Sverzellati N, Duncan JS, Reyes M, Poellinger A. A Multiclass Radiomics Method-Based WHO Severity Scale for Improving COVID-19 Patient Assessment and Disease Characterization From CT Scans. Invest Radiol 2023; 58:882-893. [PMID: 37493348] [PMCID: PMC10662611] [DOI: 10.1097/rli.0000000000001005]
Abstract
OBJECTIVES The aim of this study was to evaluate the severity of COVID-19 patients' disease by comparing a multiclass lung lesion model to a single-class lung lesion model and radiologists' assessments in chest computed tomography scans. MATERIALS AND METHODS The proposed method, AssessNet-19, was developed in 2 stages in this retrospective study. Four COVID-19-induced tissue lesions were manually segmented to train a 2D-U-Net network for a multiclass segmentation task, followed by extensive extraction of radiomic features from the lung lesions. LASSO regression was used to reduce the feature set, and the XGBoost algorithm was trained to classify disease severity based on the World Health Organization Clinical Progression Scale. The model was evaluated using 2 multicenter cohorts: a development cohort of 145 COVID-19-positive patients from 3 centers, used to train and test the severity prediction model with manually segmented lung lesions, and an evaluation set of 90 COVID-19-positive patients from 2 centers, used to evaluate AssessNet-19 in a fully automated fashion. RESULTS AssessNet-19 achieved an F1-score of 0.76 ± 0.02 for severity classification in the evaluation set, which was superior to the 3 expert thoracic radiologists (F1 = 0.63 ± 0.02) and the single-class lesion segmentation model (F1 = 0.64 ± 0.02). In addition, AssessNet-19 automated multiclass lesion segmentation obtained a mean Dice score of 0.70 for ground-glass opacity, 0.68 for consolidation, 0.65 for pleural effusion, and 0.30 for band-like structures compared with ground truth. Moreover, it achieved high agreement with radiologists for quantifying disease extent, with Cohen's κ of 0.94, 0.92, and 0.95.
CONCLUSIONS A novel artificial intelligence multiclass radiomics model including 4 lung lesions to assess disease severity based on the World Health Organization Clinical Progression Scale more accurately determines the severity of COVID-19 patients than a single-class model and radiologists' assessment.
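The per-class Dice scores quoted above measure volumetric overlap between a predicted segmentation mask and the ground truth. A minimal sketch of the metric on flattened binary masks (a hypothetical helper, not from the paper):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1: 2|A ∩ B| / (|A| + |B|)."""
    a, b = list(mask_a), list(mask_b)
    intersection = sum(x & y for x, y in zip(a, b))
    size = sum(a) + sum(b)
    # Two empty masks agree perfectly by convention.
    return 1.0 if size == 0 else 2.0 * intersection / size

# Toy masks overlapping in one of two foreground voxels each.
d = dice([1, 1, 0, 0], [1, 0, 1, 0])
```

Dice weights the intersection twice, so it is more forgiving than intersection-over-union for small structures, which is relevant when comparing scores across lesion classes of very different sizes (e.g. band-like structures vs ground-glass opacity).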
6. Ippolito D, Maino C, Gandola D, Franco PN, Miron R, Barbu V, Bologna M, Corso R, Breaban ME. Artificial Intelligence Applied to Chest X-ray: A Reliable Tool to Assess the Differential Diagnosis of Lung Pneumonia in the Emergency Department. Diseases 2023; 11:171. [PMID: 37987282] [PMCID: PMC10660530] [DOI: 10.3390/diseases11040171]
Abstract
BACKGROUND Considering the large number of patients with pulmonary symptoms admitted to the emergency department daily, it is essential to diagnose them correctly. It is necessary to quickly resolve the differential diagnosis between COVID-19 and typical bacterial pneumonia to address them with the best management possible. In this setting, an artificial intelligence (AI) system can help radiologists detect pneumonia more quickly. METHODS We aimed to test the diagnostic performance of an AI system in detecting COVID-19 pneumonia and typical bacterial pneumonia in patients who underwent a chest X-ray (CXR) and were admitted to the emergency department. The final dataset was composed of three sub-datasets: the first included all patients positive for COVID-19 pneumonia (n = 1140, "COVID-19+"), the second included all patients with typical bacterial pneumonia (n = 500, "pneumonia+"), and the third was composed of healthy subjects (n = 1000). Two radiologists were blinded to demographic, clinical, and laboratory data. The developed AI system was used to evaluate all CXRs randomly and was asked to classify them into three classes. Cohen's κ was used for interrater reliability analysis. The AI system's diagnostic accuracy was evaluated using a confusion matrix, and 95% CIs were reported as appropriate. RESULTS The interrater reliability analysis between the most experienced radiologist and the AI system reported an almost perfect agreement for COVID-19+ (κ = 0.822) and pneumonia+ (κ = 0.913). We found 96% sensitivity (95% CI = 94.9-96.9) and 79.8% specificity (76.4-82.9) for the radiologist and 94.7% sensitivity (93.4-95.8) and 80.2% specificity (76.9-83.2) for the AI system in the detection of COVID-19+. Moreover, we found 97.9% sensitivity (98-99.3) and 88% specificity (83.5-91.7) for the radiologist and 97.5% sensitivity (96.5-98.3) and 83.9% specificity (79-87.9) for the AI system in the detection of pneumonia+ patients. Finally, the AI system reached an accuracy of 93.8%, with a misclassification rate of 6.2% and a weighted F1 of 93.8% in detecting COVID+, pneumonia+, and healthy subjects. CONCLUSIONS The AI system demonstrated excellent diagnostic performance in identifying COVID-19 and typical bacterial pneumonia in CXRs acquired in the emergency setting.
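Cohen's κ, used above for interrater reliability, corrects raw percentage agreement for the agreement expected by chance given each rater's label frequencies. A minimal sketch with hypothetical labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same class
    # independently, given their marginal label frequencies.
    expected = sum(
        freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()
    ) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical three-class labels from an AI system and a radiologist.
k = cohens_kappa(["covid", "covid", "pneu", "normal"],
                 ["covid", "pneu", "pneu", "normal"])
```

Values above roughly 0.8, such as the κ = 0.822 and κ = 0.913 reported above, are conventionally read as almost perfect agreement.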
Affiliation(s)
- Davide Ippolito
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- School of Medicine, University of Milano-Bicocca, Via Cadore 48, 20900 Monza, Italy
- Cesare Maino
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Davide Gandola
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Paolo Niccolò Franco
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Radu Miron
- Sentic Lab, Strada Elena Doamna 20, 700398 Iași, Romania
- Vlad Barbu
- Sentic Lab, Strada Elena Doamna 20, 700398 Iași, Romania
- Rocco Corso
- Department of Diagnostic Radiology, Fondazione IRCCS San Gerardo dei Tintori, Via Pergolesi 33, 20900 Monza, Italy
- Mihaela Elena Breaban
- Faculty of Computer Science, “Alexandru Ioan Cuza” University of Iasi, Strada General Henri Mathias Berthelot 16, 700483 Iași, Romania
7. Kalantar R, Hindocha S, Hunter B, Sharma B, Khan N, Koh DM, Ahmed M, Aboagye EO, Lee RW, Blackledge MD. Non-contrast CT synthesis using patch-based cycle-consistent generative adversarial network (Cycle-GAN) for radiomics and deep learning in the era of COVID-19. Sci Rep 2023; 13:10568. [PMID: 37386097] [PMCID: PMC10310777] [DOI: 10.1038/s41598-023-36712-1]
Abstract
Handcrafted and deep learning (DL) radiomics are popular techniques used to develop computed tomography (CT) imaging-based artificial intelligence models for COVID-19 research. However, contrast heterogeneity from real-world datasets may impair model performance. Contrast-homogenous datasets present a potential solution. We developed a 3D patch-based cycle-consistent generative adversarial network (cycle-GAN) to synthesize non-contrast images from contrast CTs, as a data homogenization tool. We used a multi-centre dataset of 2,078 scans from 1,650 patients with COVID-19. Few studies have previously evaluated GAN-generated images with handcrafted radiomics, DL and human assessment tasks. We evaluated the performance of our cycle-GAN with these three approaches. In a modified Turing test, human experts identified synthetic vs acquired images with a false positive rate of 67% and a Fleiss' kappa of 0.06, attesting to the photorealism of the synthetic images. However, on testing the performance of machine learning classifiers with radiomic features, performance decreased with use of synthetic images. A marked percentage difference was noted in feature values between pre- and post-GAN non-contrast images. With DL classification, deterioration in performance was observed with synthetic images. Our results show that whilst GANs can produce images sufficient to pass human assessment, caution is advised before GAN-synthesized images are used in medical imaging applications.
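The cycle-consistency constraint at the heart of a cycle-GAN penalizes the round trip, contrast to synthetic non-contrast and back, for deviating from the input. A toy sketch of that loss term (the 1-D "generators" below are stand-ins, not the paper's networks):

```python
def l1(a, b):
    """Mean-free L1 distance between two equal-length vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def cycle_consistency_loss(x, g, f):
    """Cycle-GAN's cycle-consistency term: with G mapping contrast to
    non-contrast and F the reverse, penalize ||F(G(x)) - x||_1 plus the
    symmetric reverse-cycle term ||G(F(x)) - x||_1."""
    return l1(f(g(x)), x) + l1(g(f(x)), x)

def g(v):
    """Toy forward 'generator' (stand-in for contrast -> non-contrast)."""
    return [2.0 * t for t in v]

def f(v):
    """Toy reverse 'generator', exactly inverse to g."""
    return [0.5 * t for t in v]

# Perfectly inverse mappings give zero cycle loss.
loss = cycle_consistency_loss([1.0, -3.0], g, f)
```

In the full training objective this term is added to the adversarial losses for both directions; it is what prevents the generators from hallucinating anatomy that the reverse mapping could not undo.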
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Sumeet Hindocha
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- AI for Healthcare Centre for Doctoral Training, Imperial College London, Exhibition Road, London SW7 2BX, UK
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Early Diagnosis and Detection Team, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Benjamin Hunter
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Early Diagnosis and Detection Team, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Bhupinder Sharma
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden NHS Foundation Trust, Sutton SM2 5PT, UK
- Nasir Khan
- Department of Radiology, The Royal Marsden NHS Foundation Trust, Sutton SM2 5PT, UK
- Dow-Mu Koh
- Department of Radiology, The Royal Marsden NHS Foundation Trust, Sutton SM2 5PT, UK
- Merina Ahmed
- Lung Unit, The Royal Marsden NHS Foundation Trust, Sutton SM2 5PT, UK
- Eric O Aboagye
- Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Du Cane Road, London W12 0NN, UK
- Richard W Lee
- Early Diagnosis and Detection Team, The Royal Marsden NHS Foundation Trust, Fulham Road, London SW3 6JJ, UK
- Matthew D Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
8. Ullah Z, Usman M, Gwak J. MTSS-AAE: Multi-task semi-supervised adversarial autoencoding for COVID-19 detection based on chest X-ray images. Expert Syst Appl 2023; 216:119475. [PMID: 36619348] [PMCID: PMC9810379] [DOI: 10.1016/j.eswa.2022.119475]
Abstract
Efficient diagnosis of COVID-19 plays an important role in preventing the spread of the disease. There are three major modalities to diagnose COVID-19: polymerase chain reaction tests, computed tomography scans, and chest X-rays (CXRs). Among these, diagnosis using CXRs is the most economical approach; however, it requires extensive human expertise to diagnose COVID-19 in CXRs, which can undermine its cost-effectiveness. Computer-aided diagnosis with deep learning has the potential to perform accurate detection of COVID-19 in CXRs without human intervention while preserving its cost-effectiveness. Many efforts have been made to develop a highly accurate and robust solution. However, due to the limited amount of labeled data, existing solutions have been evaluated on small test sets. In this work, we propose a solution to this problem using a multi-task semi-supervised learning (MTSSL) framework that utilizes auxiliary tasks for which adequate data are publicly available. Specifically, we utilized Pneumonia, Lung Opacity, and Pleural Effusion as additional tasks using the CheXpert dataset. We illustrate that the primary task of COVID-19 detection, for which only limited labeled data are available, can be improved by using this additional data. We further employed an adversarial autoencoder (AAE), which has a strong capability to learn powerful and discriminative features, within our MTSSL framework to maximize the benefit of multi-task learning. In addition, the supervised classification networks in combination with the unsupervised AAE empower semi-supervised learning, which includes a discriminative part in the unsupervised AAE training pipeline. The generalization of our framework is improved by this semi-supervised learning, leading to enhanced COVID-19 detection performance. The proposed model is rigorously evaluated on the largest publicly available COVID-19 dataset, and experimental results show that it attained state-of-the-art performance.
Affiliation(s)
- Zahid Ullah
- Department of Software, Korea National University of Transportation, Chungju 27469, South Korea
- Muhammad Usman
- Department of Computer Science and Engineering, Seoul National University, Seoul 08826, South Korea
- Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, South Korea
- Department of Biomedical Engineering, Korea National University of Transportation, Chungju 27469, South Korea
- Department of AI Robotics Engineering, Korea National University of Transportation, Chungju 27469, South Korea
- Department of IT Energy Convergence (BK21 FOUR), Korea National University of Transportation, Chungju 27469, South Korea
9. Yagin FH, Cicek İB, Alkhateeb A, Yagin B, Colak C, Azzeh M, Akbulut S. Explainable artificial intelligence model for identifying COVID-19 gene biomarkers. Comput Biol Med 2023; 154:106619. [PMID: 36738712] [PMCID: PMC9889119] [DOI: 10.1016/j.compbiomed.2023.106619]
Abstract
AIM COVID-19 has revealed the need for fast and reliable methods to assist clinicians in diagnosing the disease. This article presents a model that applies explainable artificial intelligence (XAI) methods based on machine learning techniques to COVID-19 metagenomic next-generation sequencing (mNGS) samples. METHODS The dataset used in the study contains 15,979 gene expression values for 234 patients, of whom 141 (60.3%) were COVID-19 negative and 93 (39.7%) COVID-19 positive. The least absolute shrinkage and selection operator (LASSO) method was applied to select genes associated with COVID-19. The Support Vector Machine - Synthetic Minority Oversampling Technique (SVM-SMOTE) method was used to handle the class imbalance problem. Logistic regression (LR), SVM, random forest (RF), and extreme gradient boosting (XGBoost) models were constructed to predict COVID-19. An explainable approach based on local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) methods was applied to determine COVID-19-associated biomarker candidate genes and improve the final model's interpretability. RESULTS For the diagnosis of COVID-19, the XGBoost model (accuracy: 0.930) outperformed the RF (accuracy: 0.912), SVM (accuracy: 0.877), and LR (accuracy: 0.912) models. As a result of the SHAP analysis, the three most important genes associated with COVID-19 were IFI27, LGR6, and FAM83A. The LIME results showed that high IFI27 gene expression in particular contributed to increasing the probability of the positive class. CONCLUSIONS The proposed model (XGBoost) was able to predict COVID-19 successfully. The results show that machine learning combined with LIME and SHAP can explain the biomarker prediction for COVID-19 and provide clinicians with an intuitive understanding and interpretability of the impact of risk factors in the model.
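SVM-SMOTE additionally restricts oversampling to borderline minority points identified by an SVM; the sketch below illustrates only the core SMOTE step, generating a synthetic minority sample by interpolating toward a nearby minority neighbour (names and toy data are assumptions, not the study's implementation):

```python
import random

def smote_sample(minority, k=3, rng=random.Random(0)):
    """Generate one synthetic minority example: pick a minority point,
    find its k nearest minority-class neighbours, and interpolate a
    random fraction of the way toward one of them."""
    base = rng.choice(minority)
    # k nearest neighbours of `base` within the minority class
    # (squared Euclidean distance; identity comparison excludes base itself).
    neighbours = sorted(
        (p for p in minority if p is not base),
        key=lambda p: sum((x - y) ** 2 for x, y in zip(p, base)),
    )[:k]
    neighbour = rng.choice(neighbours)
    gap = rng.random()  # interpolation factor in [0, 1)
    return [x + gap * (y - x) for x, y in zip(base, neighbour)]

# Toy 2-D minority class: the synthetic point lies between two real points.
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote_sample(minority)
```

Because the new point sits on a segment between two real minority samples, SMOTE densifies the minority region rather than merely duplicating examples, which is what lets the downstream classifier see a less imbalanced training set.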
Affiliation(s)
- Fatma Hilal Yagin
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Inonu University, 44280, Malatya, Turkey
- İpek Balikci Cicek
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Inonu University, 44280, Malatya, Turkey
- Abedalrhman Alkhateeb
- Software Engineering Department, King Hussein School for Computing Sciences, Amman, Jordan
- Burak Yagin
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Inonu University, 44280, Malatya, Turkey
- Cemil Colak
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Inonu University, 44280, Malatya, Turkey
- Mohammad Azzeh
- Data Science Department, King Hussein School for Computing Sciences, Amman, Jordan
- Sami Akbulut
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Inonu University, 44280, Malatya, Turkey; Inonu University, Faculty of Medicine, Department of Surgery, 44280, Malatya, Turkey; Inonu University, Faculty of Medicine, Department of Public Health, 44280, Malatya, Turkey
10. Yang D, Ren G, Ni R, Huang YH, Lam NFD, Sun H, Wan SBN, Wong MFE, Chan KK, Tsang HCH, Xu L, Wu TC, Kong FM(S), Wáng YXJ, Qin J, Chan LWC, Ying M, Cai J. Deep learning attention-guided radiomics for COVID-19 chest radiograph classification. Quant Imaging Med Surg 2023; 13:572-584. [PMID: 36819269] [PMCID: PMC9929417] [DOI: 10.21037/qims-22-531]
Abstract
Background Accurate assessment of coronavirus disease 2019 (COVID-19) lung involvement through chest radiographs plays an important role in effective management of the infection. This study aims to develop a two-step feature merging method to integrate image features from deep learning and radiomics to differentiate COVID-19, non-COVID-19 pneumonia and normal chest radiographs (CXR). Methods In this study, a deformable convolutional neural network (deformable CNN) was developed and used as a feature extractor to obtain 1,024-dimensional deep learning latent representation (DLR) features. Then 1,069-dimensional radiomics features were extracted from the region of interest (ROI) guided by the deformable CNN's attention. The two feature sets were concatenated to generate a merged feature set for classification. For comparison, the same process was applied to the DLR-only feature set to verify the effectiveness of feature concatenation. Results Using the merged feature set resulted in an overall average accuracy of 91.0% for three-class classification, representing a statistically significant improvement of 0.6% compared to the DLR-only classification. The recall and precision of classification into the COVID-19 class were 0.926 and 0.976, respectively. The feature merging method was shown to significantly improve the classification performance as compared to using only deep learning features, regardless of the choice of classifier (P value <0.0001). The F1-scores for the three classes (normal, non-COVID-19 pneumonia, and COVID-19) were 0.892, 0.890, and 0.950, respectively. Conclusions A two-step COVID-19 classification framework integrating information from both DLR and radiomics features (guided by a deep learning attention mechanism) has been developed. The proposed feature merging method has been shown to improve the performance of chest radiograph classification as compared to using only deep learning features.
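At its core, the merging step described above is concatenation of the 1,024-dimensional DLR vector with the 1,069-dimensional radiomics vector into a single feature vector for the classifier (in practice, per-feature standardization on training-set statistics would typically precede classification, since the two blocks live on different scales). A minimal sketch with hypothetical names:

```python
def merge_features(dlr, radiomics):
    """Concatenate a deep-learning latent representation with a
    radiomics feature vector into one merged feature vector.
    Assumes both inputs are already scaled comparably."""
    return list(dlr) + list(radiomics)

# Placeholder vectors with the dimensionalities quoted in the abstract.
merged = merge_features([0.1] * 1024, [0.2] * 1069)
```

The merged 2,093-dimensional vector is then fed to any standard classifier, which is why the paper could test the merging strategy across several classifier choices.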
Affiliation(s)
- Dongrong Yang, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ge Ren, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ruiyan Ni, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yu-Hua Huang, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ngo Fung Daniel Lam, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Hongfei Sun, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Shiu Bun Nelson Wan, Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- Man Fung Esther Wong, Department of Radiology, Pamela Youde Nethersole Eastern Hospital, Hong Kong, China
- King Kwong Chan, Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Lu Xu, Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Tak Chiu Wu, Department of Radiology and Imaging, Queen Elizabeth Hospital, Hong Kong, China
- Yì Xiáng J. Wáng, Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Hong Kong, China
- Jing Qin, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Lawrence Wing Chi Chan, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Michael Ying, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai, Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
11
Chen H, Jiang Y, Ko H, Loew M. A teacher–student framework with Fourier Transform augmentation for COVID-19 infection segmentation in CT images. Biomed Signal Process Control 2023; 79:104250. [PMID: 36188130] [PMCID: PMC9510070] [DOI: 10.1016/j.bspc.2022.104250]
Abstract
Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer reduced performance on unseen COVID-19 images at test time, caused by differences in image intensity and object-region distribution between the training and test sets. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that learns domain-invariant features from lung cancer and COVID-19 images to improve the generalization of the segmentation network to COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher–student network that learns rotation-invariant features for segmentation. Experiments demonstrate that, even without access to annotations of COVID-19 CT images during training, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.
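The Fourier-based augmentation described here resembles Fourier domain adaptation: keep the source image's phase spectrum (which carries anatomy) and replace the low-frequency part of its amplitude spectrum (which carries global intensity style) with the target's. A minimal sketch under that assumption — the paper's actual module may differ in windowing and blending:

```python
import numpy as np

def fourier_style_transfer(source: np.ndarray, target: np.ndarray,
                           beta: float = 0.1) -> np.ndarray:
    """Re-style `source` (e.g. an annotated lung-cancer slice) toward
    `target` (e.g. a COVID-19 slice): swap the low-frequency FFT amplitude
    while keeping the source phase, preserving structure."""
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = source.shape
    b = int(min(h, w) * beta)          # half-width of low-frequency window
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b, cw - b:cw + b] = amp_t[ch - b:ch + b, cw - b:cw + b]
    styled = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * pha_s)))
    return np.real(styled)

rng = np.random.default_rng(1)
src = rng.random((64, 64))
tgt = rng.random((64, 64)) * 3.0       # brighter "style"
out = fourier_style_transfer(src, tgt)
print(out.shape)  # (64, 64)
```

Because the swapped window includes the DC component, the output inherits the target's mean intensity while keeping the source's spatial layout.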
Affiliation(s)
- Han Chen, School of Electrical Engineering, Korea University, Seoul, South Korea
- Yifan Jiang, School of Electrical Engineering, Korea University, Seoul, South Korea
- Hanseok Ko, School of Electrical Engineering, Korea University, Seoul, South Korea
- Murray Loew, Biomedical Engineering, George Washington University, Washington D.C., USA
12
Li H, Zeng N, Wu P, Clawson K. Cov-Net: A computer-aided diagnosis method for recognizing COVID-19 from chest X-ray images via machine vision. Expert Syst Appl 2022; 207:118029. [PMID: 35812003] [PMCID: PMC9252868] [DOI: 10.1016/j.eswa.2022.118029]
Abstract
In the context of the global COVID-19 pandemic, early detection of COVID-19 among symptomatic patients is of vital importance. In this paper, a computer-aided diagnosis (CAD) model, Cov-Net, is proposed for accurate recognition of COVID-19 from chest X-ray images via machine vision techniques, with a focus on powerful and robust feature learning. In particular, a modified residual network with embedded asymmetric convolution and an attention mechanism is selected as the backbone feature extractor, after which skip-connected dilated convolutions with varying dilation rates are applied to fuse high-level semantic and low-level detailed information. Experimental results on two public COVID-19 radiography databases demonstrate the practicality of Cov-Net for accurate COVID-19 recognition, with accuracies of 0.9966 and 0.9901, respectively. Furthermore, under the same experimental conditions, Cov-Net outperforms six other state-of-the-art computer vision algorithms, validating its superiority in building highly discriminative features. Cov-Net is therefore deemed to generalize well enough to be applicable to other CAD scenarios. This work thus has both practical value, in providing a reliable reference for radiologists, and methodological significance, in developing robust features with strong representational ability.
Affiliation(s)
- Han Li, Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Nianyin Zeng, Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Peishu Wu, Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Kathy Clawson, School of Computer Science, University of Sunderland, Saint Peter Campus, United Kingdom
13
Peng Y, Zhang T, Guo Y. Cov-TransNet: Dual branch fusion network with transformer for COVID-19 infection segmentation. Biomed Signal Process Control 2022; 80:104366. [PMCID: PMC9671472] [DOI: 10.1016/j.bspc.2022.104366]
Abstract
Segmentation of COVID-19 infection is challenging due to the blurred boundaries and low contrast between infected and non-infected areas in COVID-19 CT images, especially for small infection regions. COV-TransNet is presented in this paper to achieve high-precision segmentation of COVID-19 infection regions. The proposed segmentation network is composed of an auxiliary branch and a backbone branch. The auxiliary branch adopts a transformer to provide global information, helping the convolution layers in the backbone branch learn specific local features better. A multi-scale feature attention module is introduced to capture contextual information and adaptively enhance feature representations. Specifically, a high internal resolution is maintained during the attention calculation. Moreover, a feature activation module effectively reduces the loss of valid information during sampling. The proposed network takes full advantage of features at different depths and scales to achieve high sensitivity for lesions of varied sizes and locations. We evaluated COV-TransNet on several COVID-19 lesion segmentation datasets, including COVID-19-CT-Seg, UESTC-COVID-19, MosMedData and COVID-19-MedSeg. Comprehensive results demonstrate that COV-TransNet outperforms existing state-of-the-art segmentation methods and achieves better segmentation performance for multi-scale lesions.
14
Shiri I, Mostafaei S, Haddadi Avval A, Salimi Y, Sanaat A, Akhavanallaf A, Arabi H, Rahmim A, Zaidi H. High-dimensional multinomial multiclass severity scoring of COVID-19 pneumonia using CT radiomics features and machine learning algorithms. Sci Rep 2022; 12:14817. [PMID: 36050434] [PMCID: PMC9437017] [DOI: 10.1038/s41598-022-18994-z]
Abstract
We aimed to construct a prediction model based on computed tomography (CT) radiomics features to classify COVID-19 patients as severe, moderate, mild, or non-pneumonic. A total of 1110 patients were studied from a publicly available dataset with 4-class severity scoring performed by a radiologist (based on CT images and clinical features). The entire lungs were segmented, followed by resizing, bin discretization, and radiomics feature extraction. We utilized two feature selection algorithms, bagging random forest (BRF) and multivariate adaptive regression splines (MARS), each coupled to a multinomial logistic regression (MLR) classifier, to construct multiclass classification models. The dataset was divided into 50% (555 samples) training, 20% (223 samples) validation, and 30% (332 samples) untouched test sets. Nested cross-validation was performed on the train/validation sets to select features and tune the models; all predictive power indices were reported on the test set. The performance of the multiclass models was assessed using precision, recall, F1-score, and accuracy based on the 4 × 4 confusion matrices, and the areas under the receiver operating characteristic curves (AUCs) for multiclass classification were calculated and compared for both models. Using BRF, 23 radiomics features were selected: 11 first-order, 9 from GLCM, 1 from GLRLM, 1 from GLDM, and 1 from shape. Ten features were selected using MARS: 3 first-order, 1 from GLDM, 1 from GLRLM, 1 from GLSZM, 1 from shape, and 3 from GLCM. Mean absolute deviation, skewness, and variance (first-order), flatness (shape), cluster prominence (GLCM), and gray-level non-uniformity normalized (GLRLM) were selected by both the BRF and MARS algorithms. All features selected by BRF or MARS were significantly associated with the four-class outcome as assessed within MLR (all p values < 0.05). BRF + MLR and MARS + MLR yielded pseudo-R2 prediction performances of 0.305 and 0.253, respectively, and there was a significant difference between the feature selection models by likelihood ratio test (p = 0.046). Based on the confusion matrices for the BRF + MLR and MARS + MLR algorithms, precision was 0.856 and 0.728, recall was 0.852 and 0.722, and accuracy was 0.921 and 0.861, respectively. AUCs (95% CI) for multiclass classification were 0.846 (0.805-0.887) and 0.807 (0.752-0.861), respectively. Our models based on radiomics features coupled with machine learning accurately classified patients according to pneumonia severity, highlighting the potential of this emerging paradigm in the prognostication and management of COVID-19 patients.
Affiliation(s)
- Isaac Shiri, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Shayan Mostafaei, Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Yazdan Salimi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Amirhossein Sanaat, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Azadeh Akhavanallaf, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Hossein Arabi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Arman Rahmim, Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
15
Jia LL, Zhao JX, Pan NN, Shi LY, Zhao LP, Tian JH, Huang G. Artificial intelligence model on chest imaging to diagnose COVID-19 and other pneumonias: A systematic review and meta-analysis. Eur J Radiol Open 2022; 9:100438. [PMID: 35996746] [PMCID: PMC9385733] [DOI: 10.1016/j.ejro.2022.100438]
Abstract
Objectives When diagnosing coronavirus disease 2019 (COVID-19), radiologists cannot always make an accurate judgment because the imaging characteristics of COVID-19 and other pneumonias are similar. As machine learning advances, artificial intelligence (AI) models show promise in diagnosing COVID-19 and other pneumonias. We performed a systematic review and meta-analysis to assess the diagnostic accuracy and methodological quality of these models. Methods We searched PubMed, the Cochrane Library, Web of Science, and Embase, as well as preprints from medRxiv and bioRxiv, for studies published before December 2021, with no language restrictions. The quality of each study was assessed using the QUADAS-2 tool, the Radiomics Quality Score (RQS), and the CLAIM checklist. We used random-effects models to calculate pooled sensitivity and specificity, I2 values to assess heterogeneity, and Deeks' test to assess publication bias. Results We included 32 of the 2001 retrieved articles in the meta-analysis, comprising 6737 participants in test or validation groups. The meta-analysis showed that AI models based on chest imaging distinguish COVID-19 from other pneumonias: pooled area under the curve (AUC) 0.96 (95% CI, 0.94-0.98), sensitivity 0.92 (95% CI, 0.88-0.94), and pooled specificity 0.91 (95% CI, 0.87-0.93). The average RQS score of the 13 studies using radiomics was 7.8, accounting for 22% of the maximum score. The 19 studies using deep learning methods had an average CLAIM score of 20, slightly less than half (48.24%) of the ideal score of 42.00. Conclusions AI models for chest imaging can diagnose COVID-19 and other pneumonias well but have not yet been implemented as clinical decision-making tools. Future researchers should pay more attention to the quality of research methodology and further improve the generalizability of the developed predictive models.
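The pooled sensitivity and specificity reported here come from random-effects meta-analysis. A compact sketch of the standard DerSimonian–Laird estimator, applied to hypothetical per-study logit-transformed sensitivities (the input numbers below are invented for illustration, not taken from the review):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird): estimate between-study
    variance tau^2 from Cochran's Q, then inverse-variance-weight."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se

# Hypothetical per-study logit-sensitivities and their variances.
logits = [2.4, 2.2, 2.9, 2.0, 2.6]
variances = [0.04, 0.06, 0.05, 0.08, 0.03]
pooled, se = dersimonian_laird(logits, variances)
sens = 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion
print(round(sens, 2))
```

The back-transformed pooled value is a summary sensitivity; a 95% CI would come from `pooled ± 1.96 * se` on the logit scale before back-transforming.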
Key Words
- 2D, two-dimensional
- 3D, three-dimensional
- AI, artificial intelligence
- AUC, area under the curve
- Artificial Intelligence
- CNN, convolutional neural network
- COVID-19
- COVID-19, coronavirus disease 2019
- CRP, C-reactive protein
- CT, computed tomography
- CXR, chest X-ray
- Diagnostic Imaging
- GGO, ground-glass opacities
- KNN, K-nearest neighbor
- LASSO, least absolute shrinkage and selection operator
- MERS-CoV, Middle East respiratory syndrome coronavirus
- ML, machine learning
- Machine learning
- NLR, negative likelihood ratio
- PLR, positive likelihood ratio
- Pneumonia
- ROI, regions of interest
- RT-PCR, reverse transcriptase polymerase chain reaction
- SARS, severe acute respiratory syndrome
- SARS-CoV-2, severe acute respiratory syndrome coronavirus 2
- SROC, summary receiver operating characteristic
- SVM, support vector machine
Affiliation(s)
- Lu-Lu Jia, First Clinical School of Medicine, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Jian-Xin Zhao, First Clinical School of Medicine, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Ni-Ni Pan, First Clinical School of Medicine, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Liu-Yan Shi, First Clinical School of Medicine, Gansu University of Chinese Medicine, Lanzhou 730000, China
- Lian-Ping Zhao, Department of Radiology, Gansu Provincial Hospital, Lanzhou 730000, China
- Jin-Hui Tian, Evidence-Based Medicine Center, School of Basic Medical Sciences, Lanzhou University, Lanzhou 730000, China
- Gang Huang, Department of Radiology, Gansu Provincial Hospital, Lanzhou 730000, China (corresponding author)
16
Qin Z, Sun Y, Zhang J, Zhou L, Chen Y, Huang C. Lessons from SARS-CoV-2 and its variants (Review). Mol Med Rep 2022; 26:263. [PMID: 35730623] [PMCID: PMC9260876] [DOI: 10.3892/mmr.2022.12779]
Abstract
COVID-19 swept through mainland China by human-to-human transmission, and the rapid spread of SARS-CoV-2 and its variants, including the currently prevalent Omicron strain, poses a serious threat worldwide. The present review summarizes the epidemiological investigation and etiological analysis of the genomic, epidemiological, and pathological characteristics of the original strain and its variants, as well as progress in diagnosis and treatment. Prevention and control measures used during the current Omicron pandemic are discussed to provide further knowledge of SARS-CoV-2.
Affiliation(s)
- Ziwen Qin, Department of Respiratory Diseases, Shandong University of Traditional Chinese Medicine, Jinan, Shandong 250013, P.R. China
- Yan Sun, Department of Respiratory Diseases, Shandong Provincial Qianfoshan Hospital, Shandong University, Jinan, Shandong 250014, P.R. China
- Jian Zhang, Department of Respiratory Diseases, Shandong Provincial Qianfoshan Hospital, Shandong University, Jinan, Shandong 250014, P.R. China
- Ling Zhou, Department of Respiratory Diseases, Shandong Provincial Qianfoshan Hospital, Shandong University, Jinan, Shandong 250014, P.R. China
- Yujuan Chen, Department of Respiratory Diseases, The Second Affiliated Hospital of Shandong University of Traditional Chinese Medicine, Jinan, Shandong 250013, P.R. China
- Chuanjun Huang, Department of Respiratory Diseases, Shandong University of Traditional Chinese Medicine, Jinan, Shandong 250013, P.R. China
17
Single Channel Image Enhancement (SCIE) of White Blood Cells Based on Virtual Hexagonal Filter (VHF) Designed over Square Trellis. J Pers Med 2022; 12:1232. [PMID: 36013181] [PMCID: PMC9410214] [DOI: 10.3390/jpm12081232]
Abstract
White blood cells (WBCs) are an important constituent of blood, responsible for defending the body against infections. Abnormalities identified in WBC smears lead to the diagnosis of disease types such as leukocytosis, hepatitis, and immune system disorders. Digital image analysis for infection detection at an early stage can enable fast and precise diagnosis compared with manual inspection. However, blood cell smear images acquired from an L2-type microscope are sometimes of very low quality: manual handling, haziness, and dark areas in the image hinder an efficient and accurate diagnosis. WBC image enhancement therefore needs attention for effective diagnosis of disease. This paper proposes a novel virtual hexagonal trellis (VHT)-based image filtering method for WBC image enhancement and contrast adjustment. In this method, a 3 × 3 filter, named the virtual hexagonal filter (VHF) and based on a hexagonal structure, is formulated by interpolating real and square grid pixels. This filter is convolved with WBC ALL-IDB images for enhancement and contrast adjustment. The proposed filter improves the results both visually and statistically, and a comparison with existing image enhancement approaches confirms the validity of the proposed work.
18
Chamberlin JH, Aquino G, Nance S, Wortham A, Leaphart N, Paladugu N, Brady S, Baird H, Fiegel M, Fitzpatrick L, Kocher M, Ghesu F, Mansoor A, Hoelzer P, Zimmermann M, James WE, Dennis DJ, Houston BA, Kabakus IM, Baruah D, Schoepf UJ, Burt JR. Automated diagnosis and prognosis of COVID-19 pneumonia from initial ER chest X-rays using deep learning. BMC Infect Dis 2022; 22:637. [PMID: 35864468] [PMCID: PMC9301895] [DOI: 10.1186/s12879-022-07617-7]
Abstract
Background Airspace disease seen on chest X-rays is an important triage point for patients initially presenting to the emergency department (ED) with suspected COVID-19 infection. The purpose of this study was to evaluate a previously trained interpretable deep learning algorithm for the diagnosis and prognosis of COVID-19 pneumonia from chest X-rays obtained in the ED. Methods This retrospective study included 2456 adult patients (50% RT-PCR positive for COVID-19) who received both a chest X-ray and a SARS-CoV-2 RT-PCR test from January 2020 to March 2021 in the emergency department of a single U.S. institution. A total of 2000 patients formed an additional training cohort and 456 patients a randomized internal holdout testing cohort for a previously trained Siemens AI-Radiology Companion deep learning convolutional neural network algorithm. Three cardiothoracic fellowship-trained radiologists systematically evaluated each chest X-ray and generated an airspace-disease area-based severity score, which was compared against the same score produced by the artificial intelligence. Interobserver agreement, diagnostic accuracy, and predictive capability for inpatient outcomes were assessed; the principal statistical tests were univariate and multivariate logistic regression. Results The overall ICC was 0.820 (95% CI 0.790-0.840). The diagnostic AUC for SARS-CoV-2 RT-PCR positivity was 0.890 (95% CI 0.861-0.920) for the neural network and 0.936 (95% CI 0.918-0.960) for the radiologists. The airspace opacity score by AI alone predicted ICU admission (AUC = 0.870) and mortality (AUC = 0.829) in all patients. Adding age and BMI to a multivariate logistic model improved mortality prediction (AUC = 0.906). Conclusion The deep learning algorithm provides an accurate and interpretable assessment of disease burden in COVID-19 pneumonia on chest radiographs. The reported severity scores correlate with expert assessment and accurately predict important clinical outcomes, contributing prognostic information not currently incorporated into patient management.
Supplementary Information The online version contains supplementary material available at 10.1186/s12879-022-07617-7.
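The AUC values quoted in this abstract have a simple rank interpretation: the probability that a randomly chosen positive case (e.g. an ICU admission) receives a higher severity score than a randomly chosen negative case. A self-contained sketch with made-up severity scores:

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive case scores
    higher than a negative one; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical airspace-disease severity scores: ICU vs non-ICU patients.
icu = [7, 8, 6, 9, 5]
non_icu = [2, 3, 5, 1, 4, 6]
auc = auc_mann_whitney(icu, non_icu)
print(round(auc, 3))  # 0.933
```

An AUC of 0.870, as reported for ICU admission, means the score ranks a random ICU patient above a random non-ICU patient 87% of the time.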
Affiliation(s)
- Jordan H Chamberlin, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Gilberto Aquino, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Sophia Nance, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Andrew Wortham, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Nathan Leaphart, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Namrata Paladugu, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Sean Brady, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Henry Baird, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Matthew Fiegel, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Logan Fitzpatrick, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Madison Kocher, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- W Ennis James, Department of Internal Medicine, Division of Pulmonary, Critical Care, Allergy & Sleep Medicine, Medical University of South Carolina, Charleston, SC, USA
- D Jameson Dennis, Department of Internal Medicine, Division of Pulmonary, Critical Care, Allergy & Sleep Medicine, Medical University of South Carolina, Charleston, SC, USA
- Brian A Houston, Department of Internal Medicine, Division of Cardiology, Medical University of South Carolina, Charleston, SC, USA
- Ismail M Kabakus, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Dhiraj Baruah, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- U Joseph Schoepf, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA
- Jeremy R Burt, Department of Radiology and Radiologic Sciences, Division of Cardiothoracic Radiology, Medical University of South Carolina, Charleston, SC, USA; MUSC-ART, Cardiothoracic Imaging, 25 Courtenay Drive, MSC 226, 2nd Floor, Rm 2256, Charleston, SC, 29425, USA
19
Manafi-Farid R, Askari E, Shiri I, Pirich C, Asadi M, Khateri M, Zaidi H, Beheshti M. [18F]FDG-PET/CT radiomics and artificial intelligence in lung cancer: Technical aspects and potential clinical applications. Semin Nucl Med 2022; 52:759-780. [PMID: 35717201] [DOI: 10.1053/j.semnuclmed.2022.04.004]
Abstract
Lung cancer is the second most common cancer and the leading cause of cancer-related death worldwide. Molecular imaging using [18F]fluorodeoxyglucose positron emission tomography/computed tomography ([18F]FDG-PET/CT) plays an essential role in diagnosis, evaluation of response to treatment, and prediction of outcomes. The images are evaluated using qualitative and conventional quantitative indices, yet far more information is embedded in them, which can be extracted by sophisticated algorithms. Recently, the concept of uncovering and analyzing this invisible data, called radiomics, has been gaining attention. [18F]FDG-PET/CT radiomics is increasingly being evaluated in lung cancer to determine whether it enhances the diagnostic performance or clinical implications of [18F]FDG-PET/CT in the management of the disease. In this review, we provide a short overview of the technical aspects, as they are discussed in other articles of this special issue, and focus mainly on the diagnostic performance of [18F]FDG-PET/CT-based radiomics and the role of artificial intelligence in non-small cell lung cancer, with impacts on early detection, staging, prediction of tumor subtypes, biomarkers, and patient outcomes.
Affiliation(s)
- Reyhaneh Manafi-Farid, Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Emran Askari, Department of Nuclear Medicine, School of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran
- Isaac Shiri, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Christian Pirich, Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
- Mahboobeh Asadi, Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Maziar Khateri, Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- Mohsen Beheshti, Division of Molecular Imaging and Theranostics, Department of Nuclear Medicine, University Hospital Salzburg, Paracelsus Medical University, Salzburg, Austria
20
Shiri I, Salimi Y, Pakbin M, Hajianfar G, Avval AH, Sanaat A, Mostafaei S, Akhavanallaf A, Saberi A, Mansouri Z, Askari D, Ghasemian M, Sharifipour E, Sandoughdaran S, Sohrabi A, Sadati E, Livani S, Iranpour P, Kolahi S, Khateri M, Bijari S, Atashzar MR, Shayesteh SP, Khosravi B, Babaei MR, Jenabi E, Hasanian M, Shahhamzeh A, Foroghi Ghomi SY, Mozafari A, Teimouri A, Movaseghi F, Ahmari A, Goharpey N, Bozorgmehr R, Shirzad-Aski H, Mortazavi R, Karimi J, Mortazavi N, Besharat S, Afsharpad M, Abdollahi H, Geramifar P, Radmard AR, Arabi H, Rezaei-Kalantari K, Oveisi M, Rahmim A, Zaidi H. COVID-19 prognostic modeling using CT radiomic features and machine learning algorithms: Analysis of a multi-institutional dataset of 14,339 patients. Comput Biol Med 2022; 145:105467. [PMID: 35378436] [PMCID: PMC8964015] [DOI: 10.1016/j.compbiomed.2022.105467]
Abstract
BACKGROUND We aimed to analyze the prognostic power of CT-based radiomics models using data from 14,339 COVID-19 patients. METHODS Whole-lung segmentations were performed automatically using a deep learning-based model, and 107 intensity and texture radiomics features were extracted. We used four feature selection algorithms and seven classifiers, and evaluated the models using ten different splitting and cross-validation strategies on both non-harmonized and ComBat-harmonized datasets. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were reported. RESULTS In the test dataset (n = 4,301) consisting of CT- and/or RT-PCR-positive cases, an AUC, sensitivity, and specificity of 0.83 ± 0.01 (95% CI: 0.81-0.85), 0.81, and 0.72, respectively, were obtained with the ANOVA feature selector + Random Forest (RF) classifier. Similar results were achieved in the RT-PCR-only positive test set (n = 3,644). In the ComBat-harmonized dataset, the Relief feature selector + RF classifier achieved the highest AUC of 0.83 ± 0.01 (95% CI: 0.81-0.85), with a sensitivity and specificity of 0.77 and 0.74, respectively. ComBat harmonization did not yield a statistically significant improvement over the non-harmonized dataset. In leave-one-center-out validation, the combination of the ANOVA feature selector and the RF classifier again performed best. CONCLUSION Lung CT radiomics features can be used for robust prognostic modeling of COVID-19. The predictive power of the proposed CT radiomics model is more reliable when trained on a large, heterogeneous, multicentric dataset, and it may be used prospectively in the clinical setting to manage COVID-19 patients.
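The best-performing combination reported in this abstract (an ANOVA F-test feature selector feeding a Random Forest classifier) can be sketched with scikit-learn. This is a minimal illustration on synthetic data: the 107-feature count mirrors the abstract, but `k`, `n_estimators`, and the train/test split are illustrative assumptions, not the authors' settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the cohort: 107 radiomics features per patient.
X, y = make_classification(n_samples=500, n_features=107, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# ANOVA F-test feature selection feeding a Random Forest, the combination
# the abstract reports as best; k and n_estimators are illustrative guesses.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.2f}")
```

Wrapping the selector and classifier in a single `Pipeline` keeps the feature selection inside the cross-validation boundary, avoiding the selection leakage the paper's splitting strategies are designed to control for.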
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Masoumeh Pakbin
- Imaging Department, Qom University of Medical Sciences, Qum, Iran
- Ghasem Hajianfar
- Rajaie Cardiovascular, Medical & Research Center, Iran University of Medical Science, Tehran, Iran
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Abdollah Saberi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammadreza Ghasemian
- Department of Radiology, Shahid Beheshti Hospital, Qom University of Medical Sciences, Qum, Iran
- Ehsan Sharifipour
- Neuroscience Research Center, Qom University of Medical Sciences, Qum, Iran
- Saleh Sandoughdaran
- Men's Health and Reproductive Health Research Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ahmad Sohrabi
- Cancer Control Research Center, Cancer Control Foundation, Iran University of Medical Sciences, Tehran, Iran
- Elham Sadati
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
- Somayeh Livani
- Clinical Research Development Unit (CRDU), Sayad Shirazi Hospital, Golestan University of Medical Sciences, Gorgan, Iran
- Pooya Iranpour
- Medical Imaging Research Center, Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
- Shahriar Kolahi
- Department of Radiology, School of Medicine, Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Imam Khomeini Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Salar Bijari
- Department of Medical Physics, Faculty of Medical Sciences, Tarbiat Modares University, Tehran, Iran
- Mohammad Reza Atashzar
- Department of Immunology, School of Medicine, Fasa University of Medical Sciences, Fasa, Iran
- Sajad P. Shayesteh
- Department of Physiology, Pharmacology and Medical Physics, Alborz University of Medical Sciences, Karaj, Iran
- Bardia Khosravi
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Reza Babaei
- Department of Interventional Radiology, Firouzgar Hospital, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Centre for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Hasanian
- Department of Radiology, Arak University of Medical Sciences, Arak, Iran
- Alireza Shahhamzeh
- Clinical Research Development Center, Qom University of Medical Sciences, Qum, Iran
- Seyaed Yaser Foroghi Ghomi
- Clinical Research Development Center, Shahid Beheshti Hospital, Qom University of Medical Sciences, Qom, Iran
- Abolfazl Mozafari
- Department of Medical Sciences, Qom Branch, Islamic Azad University, Qum, Iran
- Arash Teimouri
- Medical Imaging Research Center, Department of Radiology, Shiraz University of Medical Sciences, Shiraz, Iran
- Fatemeh Movaseghi
- Department of Medical Sciences, Qom Branch, Islamic Azad University, Qum, Iran
- Azin Ahmari
- Ayatolah Khansary Hospital, Arak University of Medical Sciences, Arak, Iran
- Neda Goharpey
- Department of Radiation Oncology, Shohadaye Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Rama Bozorgmehr
- Clinical Research Development Unit, Shohadaye Tajrish Hospital, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Roozbeh Mortazavi
- Department of Internal Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Jalal Karimi
- Department of Infectious Disease, School of Medicine, Fasa University of Medical Sciences, Fasa, Iran
- Nazanin Mortazavi
- Dental Research Center, Golestan University of Medical Sciences, Gorgan, Iran
- Sima Besharat
- Golestan Research Center of Gastroenterology and Hepatology, Golestan University of Medical Sciences, Gorgan, Iran
- Mandana Afsharpad
- Cancer Control Research Center, Cancer Control Foundation, Iran University of Medical Sciences, Tehran, Iran
- Hamid Abdollahi
- Department of Radiologic Technology, Faculty of Allied Medical Sciences, Kerman University of Medical Sciences, Kerman, Iran
- Parham Geramifar
- Research Centre for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Amir Reza Radmard
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland
- Kiara Rezaei-Kalantari
- Rajaie Cardiovascular, Medical & Research Center, Iran University of Medical Science, Tehran, Iran
- Mehrdad Oveisi
- Comprehensive Cancer Centre, School of Cancer & Pharmaceutical Sciences, Faculty of Life Sciences & Medicine, King’s College London, London, United Kingdom
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, 1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark. Corresponding author: Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, CH-1211, Geneva, Switzerland
21
Chamberlin JH, Aquino G, Schoepf UJ, Nance S, Godoy F, Carson L, Giovagnoli VM, Gill CE, McGill LJ, O'Doherty J, Emrich T, Burt JR, Baruah D, Varga-Szemes A, Kabakus IM. An Interpretable Chest CT Deep Learning Algorithm for Quantification of COVID-19 Lung Disease and Prediction of Inpatient Morbidity and Mortality. Acad Radiol 2022; 29:1178-1188. [PMID: 35610114 PMCID: PMC8977389 DOI: 10.1016/j.acra.2022.03.023] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 03/17/2022] [Accepted: 03/24/2022] [Indexed: 12/23/2022]
Abstract
Rationale and Objectives The burden of coronavirus disease 2019 (COVID-19) airspace opacities is time-consuming and challenging to quantify on computed tomography. The purpose of this study was to evaluate the ability of a deep convolutional neural network (dCNN) to predict inpatient outcomes associated with COVID-19 pneumonia. Materials and Methods A previously trained dCNN was tested on an external validation cohort of 241 patients who presented to the emergency department and received a chest computed tomography scan, 93 with COVID-19 and 168 without. Airspace opacity scores were defined by the extent of airspace opacity in each lobe, totaled across both lungs. Expert and dCNN scores were evaluated for interobserver agreement, while both dCNN-identified airspace opacity scores and raw opacity values were used to predict COVID-19 diagnosis and inpatient outcomes. Results Interobserver agreement for airspace opacity scoring was 0.892 (95% CI 0.834-0.930). The probability of each outcome behaved as a logistic function of the opacity score (25% intensive care unit admission at a score of 13/25, 25% intubation at 17/25, and 25% mortality at 20/25). Length of hospitalization, intensive care unit stay, and intubation were associated with larger airspace opacity scores (p = 0.032, 0.039, and 0.036, respectively). Conclusion The tested dCNN was highly predictive of inpatient outcomes, performed at a near-expert level, and provides added value for clinicians in terms of prognostication and disease severity.
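The logistic relationship between opacity score and outcome probability reported above can be illustrated with a short sketch. Only the 25%-probability thresholds (13, 17, and 20 on the 0-25 score) come from the abstract; the curve's slope is an arbitrary assumption, since the paper's fitted parameters are not given here.

```python
import numpy as np

def outcome_probability(score, score_at_25pct, slope=0.4):
    """Logistic curve constrained to pass through P = 0.25 at the
    reported threshold score; the slope is an arbitrary assumption."""
    # Solve 0.25 = 1 / (1 + exp(-slope * (s - s0))) for the midpoint s0.
    s0 = score_at_25pct + np.log(3.0) / slope
    return 1.0 / (1.0 + np.exp(-slope * (score - s0)))

# Thresholds from the abstract on the 0-25 opacity score.
for outcome, threshold in [("ICU admission", 13), ("intubation", 17),
                           ("mortality", 20)]:
    p = outcome_probability(threshold, threshold)
    print(f"{outcome}: P(score={threshold}) = {p:.2f}")  # 0.25 by construction
```

The midpoint offset `ln(3)/slope` follows directly from requiring the logistic function to equal 1/4 at the threshold, so each curve reproduces the reported 25% point exactly regardless of the assumed slope.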
22
Xiong Y, Ma Y, Ruan L, Li D, Lu C, Huang L. Comparing different machine learning techniques for predicting COVID-19 severity. Infect Dis Poverty 2022; 11:19. [PMID: 35177120 PMCID: PMC8851750 DOI: 10.1186/s40249-022-00946-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 02/09/2022] [Indexed: 12/28/2022] Open
Abstract
Background Coronavirus disease 2019 (COVID-19) is still spreading globally. Machine learning techniques have been used in disease diagnosis and in predicting treatment outcomes, with favorable performance. The present study aims to predict COVID-19 severity at admission using different machine learning techniques, including random forest (RF), support vector machine (SVM), and logistic regression (LR), and to identify the features most important to severity. Methods A retrospective design was adopted at JinYinTan Hospital from January 26 to March 28, 2020. Eighty-six demographic, clinical, and laboratory features were screened using the LassoCV method, Spearman’s rank correlation, expert opinion, and literature evaluation. RF, SVM, and LR models were built to predict severe COVID-19, and their performance was compared by the area under the receiver operating characteristic curve (AUC). Feature importance to COVID-19 severity was then analyzed with the best-performing model. Results A total of 287 patients were enrolled, 36.6% severe and 63.4% non-severe. The median age was 60.0 years (interquartile range: 49.0–68.0 years). The three models were established using 23 features: 1 clinical, 1 chest computed tomography (CT), and 21 laboratory features. Among the three models, RF yielded the best overall performance, with an AUC of 0.970 versus 0.948 for SVM and 0.928 for LR; RF also achieved a sensitivity of 96.7%, specificity of 69.5%, and accuracy of 84.5%. SVM had a sensitivity of 93.9%, specificity of 79.0%, and accuracy of 88.5%; LR had a sensitivity of 92.3%, specificity of 72.3%, and accuracy of 85.2%. Chest CT had the highest importance to illness severity, followed by the neutrophil-to-lymphocyte ratio, lactate dehydrogenase, and D-dimer.
Conclusions Our results indicated that RF could be a useful predictive tool to identify patients with severe COVID-19, which may facilitate effective care and further optimize resources.
Supplementary Information The online version contains supplementary material available at 10.1186/s40249-022-00946-4.
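The three-model comparison described in this abstract can be sketched with scikit-learn by scoring each classifier with cross-validated AUC. This is a toy illustration: the sample size, feature count, and class balance mirror the abstract, but the data are synthetic and the hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 287 patients, 23 selected features, ~37% severe cases.
X, y = make_classification(n_samples=287, n_features=23, n_informative=8,
                           weights=[0.634], random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(random_state=0)),
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
results = {}
for name, model in models.items():
    aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    results[name] = aucs.mean()
    print(f"{name}: mean AUC = {aucs.mean():.3f}")
```

The SVM and LR are wrapped with a `StandardScaler` because both are scale-sensitive, whereas the Random Forest is not; the `roc_auc` scorer uses the SVM's decision function directly, so probability calibration is unnecessary here.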
Affiliation(s)
- Yibai Xiong
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, No. 16, Nanxiao Street, Dongzhimen, Dongcheng District, Beijing, 100700, China
- Yan Ma
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, No. 16, Nanxiao Street, Dongzhimen, Dongcheng District, Beijing, 100700, China
- Lianguo Ruan
- Department of Infectious Diseases, JinYinTan Hospital, Wuhan, 430040, China
- Dan Li
- Information Center, Chinese Center for Disease Control and Prevention, Beijing, 102206, China
- Cheng Lu
- Institute of Basic Research in Clinical Medicine, China Academy of Chinese Medical Sciences, No. 16, Nanxiao Street, Dongzhimen, Dongcheng District, Beijing, 100700, China.
- Luqi Huang
- National Resource Center for Chinese Materia Medica, China Academy of Chinese Medical Sciences, No. 16, Nanxiao Street, Dongzhimen, Dongcheng District, Beijing, 100700, China.
23
Krauze AV, Zhuge Y, Zhao R, Tasci E, Camphausen K. AI-Driven Image Analysis in Central Nervous System Tumors-Traditional Machine Learning, Deep Learning and Hybrid Models. J Biotechnol Biomed 2022; 5:1-19. [PMID: 35106480 PMCID: PMC8802234 DOI: 10.26502/jbb.2642-91280046] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
The interpretation of imaging in medicine in general, and in oncology specifically, remains problematic due to several limitations: the need to incorporate detailed clinical history, patient- and disease-specific history, clinical exam features, and previous and ongoing treatment, and the dependency on reproducible human interpretation of multiple factors with incomplete data linkage. To standardize reporting, minimize bias, expedite management, and improve outcomes, Artificial Intelligence (AI) has gained significant prominence in imaging analysis. In oncology, AI methods have consequently been explored in most cancer types, with ongoing progress in employing AI in imaging for oncology treatment, assessing treatment response, and understanding and communicating prognosis. Challenges remain, including limited available datasets and variability in imaging changes over time, compounded by growing heterogeneity in analysis approaches. We review the imaging-analysis workflow and examine how hand-crafted features (also referred to as traditional Machine Learning, ML), Deep Learning (DL) approaches, and hybrid analyses are being employed in AI-driven imaging analysis of central nervous system tumors. ML, DL, and hybrid approaches coexist, and their combination may produce superior results, although data in this space are still emerging and conclusions and pitfalls have yet to be fully explored. We note growing technical complexities that may become increasingly separated from the clinic, underscoring the acute need for clinician engagement to guide progress and to ensure that conclusions derived from AI-driven imaging analysis receive the same level of scrutiny lent to other avenues of clinical research.
Affiliation(s)
- A V Krauze
- Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- Y Zhuge
- Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- R Zhao
- University of British Columbia, Faculty of Medicine, 317 - 2194 Health Sciences Mall, Vancouver, Canada
- E Tasci
- Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
- K Camphausen
- Center for Cancer Research, National Cancer Institute, NIH, Building 10, Room B2-3637, Bethesda, USA
24
Non-contrast Cine Cardiac Magnetic Resonance image radiomics features and machine learning algorithms for myocardial infarction detection. Comput Biol Med 2021; 141:105145. [PMID: 34929466 DOI: 10.1016/j.compbiomed.2021.105145] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 12/13/2021] [Indexed: 12/22/2022]
Abstract
OBJECTIVE Robust differentiation between infarcted and normal tissue is important for clinical diagnosis and precision medicine. The aim of this work was to investigate radiomic features and to develop a machine learning algorithm for differentiating myocardial infarction (MI) from viable tissue/normal cases in the left ventricular myocardium on non-contrast Cine Cardiac Magnetic Resonance (Cine-CMR) images. METHODS Seventy-two patients (52 with MI and 20 healthy controls) were enrolled in this study. MR imaging was performed on a 1.5 T scanner with the following parameters: TR = 43.35 ms, TE = 1.22 ms, flip angle = 65°, temporal resolution 30-40 ms. The N4 bias field correction algorithm was applied to correct image inhomogeneity, and all images were segmented and verified in consensus by two cardiac imaging experts. Feature extraction was then performed within the whole left ventricular myocardium (3D volume) in the end-diastolic phase. Images were resampled to 1 × 1 × 1 mm3 voxels, and all intensities within the VOI were discretized to 64 bins. Radiomic features were normalized to Z-scores, followed by Student's t-test for comparison; a p-value < 0.05 was used as the threshold for statistically significant differences, with false discovery rate (FDR) correction applied to report q-values (FDR-adjusted p-values). The extracted features were ranked using the MSVM-RFE algorithm, and Spearman correlation between features was then used to eliminate highly correlated features (R2 > 0.80). Ten different machine learning algorithms were used for classification and evaluated with multiple metrics.
RESULTS In univariate analysis, the highest area under the receiver operating characteristic curve (AUC) was achieved by the Maximum 2D diameter slice (M2DS) shape feature (AUC = 0.88, q-value = 1.02E-7), while the average univariate AUC was 0.62 ± 0.08. In multivariate analysis, Logistic Regression (AUC = 0.93 ± 0.03, Accuracy = 0.86 ± 0.05, Recall = 0.87 ± 0.1, Precision = 0.93 ± 0.03, F1 Score = 0.90 ± 0.04) and SVM (AUC = 0.92 ± 0.05, Accuracy = 0.85 ± 0.04, Recall = 0.92 ± 0.01, Precision = 0.88 ± 0.04, F1 Score = 0.90 ± 0.02) yielded the best performance in this radiomics analysis. CONCLUSION This study demonstrated that radiomics analysis of non-contrast Cine-CMR images enables accurate detection of MI, and could potentially serve as an alternative diagnostic method to Late Gadolinium Enhancement Cardiac Magnetic Resonance (LGE-CMR).
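The redundancy-elimination step described in the methods (drop features whose squared Spearman correlation with an already-kept, higher-ranked feature exceeds 0.80) can be sketched as a greedy filter. The data here are synthetic, with one feature deliberately made a near-duplicate of another; the 72-sample size matches the abstract, but the feature count and ranking order are illustrative assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_patients, n_features = 72, 6
X = rng.normal(size=(n_patients, n_features))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=n_patients)  # near-duplicate of feature 0

# Features are assumed already sorted by MSVM-RFE rank (best first); a feature
# is kept only if its squared Spearman correlation with every previously kept
# feature stays at or below 0.80.
kept = []
for j in range(n_features):
    rho_sq = [spearmanr(X[:, j], X[:, k])[0] ** 2 for k in kept]
    if all(r <= 0.80 for r in rho_sq):
        kept.append(j)
print("kept feature indices:", kept)
```

Because the scan runs in rank order, a tie between two highly correlated features is always resolved in favor of the one ranked higher by the selection algorithm, which is the usual convention for this kind of filter.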