1
Siddiqi R, Javaid S. Deep Learning for Pneumonia Detection in Chest X-ray Images: A Comprehensive Survey. J Imaging 2024; 10:176. PMID: 39194965. DOI: 10.3390/jimaging10080176.
Abstract
This paper addresses the problem of identifying the relevant background and contextual literature on deep learning (DL) as an evolving technology, in order to provide a comprehensive analysis of its application to pneumonia detection via chest X-ray (CXR) imaging, the most common and cost-effective imaging technique available worldwide for pneumonia diagnosis. It focuses in particular on the key period associated with COVID-19, 2020-2023, to explain, analyze, and systematically evaluate the limitations of existing approaches and determine their relative effectiveness. The context in which DL is applied, both as an aid to and as an automated substitute for expert radiography professionals, who often have limited availability, is elaborated in detail. The rationale for the undertaken research is provided, along with a justification of the resources adopted and their relevance. This explanatory text and the subsequent analyses are intended to detail the problem being addressed, existing solutions, and their limitations, ranging from the specific to the general. Our analysis and evaluation agree with the generally held view that transformers, specifically vision transformers (ViTs), are the most promising technique for obtaining further effective results in pneumonia detection using CXR images. However, ViTs require extensive further research to address several limitations: biased CXR datasets, data and code availability, model explainability, systematic methods for accurate model comparison, class imbalance in CXR datasets, and the possibility of adversarial attacks, the last of which remains an area of fundamental research.
Affiliation(s)
- Raheel Siddiqi
- Computer Science Department, Karachi Campus, Bahria University, Karachi 73500, Pakistan
- Sameena Javaid
- Computer Science Department, Karachi Campus, Bahria University, Karachi 73500, Pakistan
2
Danilov VV, Laptev VV, Klyshnikov KY, Stepanov AD, Bogdanov LA, Antonova LV, Krivkina EO, Kutikhin AG, Ovcharenko EA. ML-driven segmentation of microvascular features during histological examination of tissue-engineered vascular grafts. Front Bioeng Biotechnol 2024; 12:1411680. PMID: 38988863. PMCID: PMC11233802. DOI: 10.3389/fbioe.2024.1411680.
Abstract
Introduction: The development of next-generation tissue-engineered medical devices such as tissue-engineered vascular grafts (TEVGs) is a leading trend in translational medicine. Microscopic examination is an indispensable part of animal experimentation, and histopathological analysis of regenerated tissue is crucial for assessing the outcomes of implanted medical devices. However, the objective quantification of regenerated tissues can be challenging due to their unusual and complex architecture. To address these challenges, research and development of advanced ML-driven tools for performing adequate histological analysis appears to be an extremely promising direction. Methods: We compiled a dataset of 104 representative whole slide images (WSIs) of TEVGs collected after a 6-month implantation into the sheep carotid artery. The histological examination aimed to analyze the patterns of vascular tissue regeneration in TEVGs in situ. Having performed automated slicing of these WSIs with the Entropy Masker algorithm, we filtered and then manually annotated 1,401 patches to identify 9 histological features: arteriole lumen, arteriole media, arteriole adventitia, venule lumen, venule wall, capillary lumen, capillary wall, immune cells, and nerve trunks. To segment and quantify these features, we rigorously tuned and evaluated the performance of six deep learning models (U-Net, LinkNet, FPN, PSPNet, DeepLabV3, and MA-Net). Results: After rigorous hyperparameter optimization, all six deep learning models achieved mean Dice similarity coefficients (DSC) exceeding 0.823. Notably, FPN and PSPNet exhibited the fastest convergence rates. MA-Net stood out with the highest mean DSC of 0.875, demonstrating superior performance in arteriole segmentation. DeepLabV3 performed well in segmenting venous and capillary structures, while FPN exhibited proficiency in identifying immune cells and nerve trunks. An ensemble of these three models attained an average DSC of 0.889, surpassing their individual performances. Conclusion: This study showcases the potential of ML-driven segmentation in the analysis of histological images of tissue-engineered vascular grafts. Through the creation of a unique dataset and the optimization of deep neural network hyperparameters, we developed and validated an ensemble model, establishing an effective tool for detecting key histological features essential for understanding vascular tissue regeneration. These advances herald a significant improvement in ML-assisted workflows for tissue engineering research and development.
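The Dice similarity coefficient (DSC) used to score these models, together with pixel-wise majority voting as one simple way to ensemble segmentation masks, can be sketched as follows (an illustrative sketch on flat binary masks; the paper's exact ensembling scheme is not detailed in this abstract):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat 0/1 sequences):
    2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

def majority_vote(*masks):
    """Pixel-wise majority vote across several models' binary predictions,
    one simple ensembling scheme for segmentation masks."""
    return [1 if 2 * sum(px) > len(px) else 0 for px in zip(*masks)]
```

For example, `dice([1, 1, 0, 0], [1, 0, 0, 0])` gives 2/3, and voting over three masks keeps only pixels predicted by at least two of them.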
Affiliation(s)
- Vladislav V Laptev
- Siberian State Medical University, Tomsk, Russia
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Kirill Yu Klyshnikov
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Alexander D Stepanov
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Leo A Bogdanov
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Larisa V Antonova
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Evgenia O Krivkina
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Anton G Kutikhin
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
- Evgeny A Ovcharenko
- Research Institute for Complex Issues of Cardiovascular Diseases, Kemerovo, Russia
3
Sobiecki A, Hadjiiski LM, Chan HP, Samala RK, Zhou C, Stojanovska J, Agarwal PP. Detection of Severe Lung Infection on Chest Radiographs of COVID-19 Patients: Robustness of AI Models across Multi-Institutional Data. Diagnostics (Basel) 2024; 14:341. PMID: 38337857. PMCID: PMC10855789. DOI: 10.3390/diagnostics14030341.
Abstract
The diagnosis of severe COVID-19 lung infection is important because it carries a higher risk for the patient and requires prompt treatment with oxygen therapy and hospitalization, while patients with less severe lung infection often stay on observation. Also, patients with severe infections are more likely to have long-standing residual changes in their lungs and may need follow-up imaging. We have developed deep learning neural network models for classifying severe vs. non-severe lung infections in COVID-19 patients on chest radiographs (CXR). A deep learning U-Net model was developed to segment the lungs. Inception-v1 and Inception-v4 models were trained for the classification of severe vs. non-severe COVID-19 infection. Four CXR datasets from multi-country and multi-institutional sources were used to develop and evaluate the models. The combined dataset consisted of 5748 cases and 6193 CXR images with physicians' severity ratings as the reference standard. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. We studied the reproducibility of classification performance using different combinations of training and validation data sets. We also evaluated the generalizability of the trained deep learning models using both independent internal and external test sets. On the independent test sets, the Inception-v1-based models achieved AUCs ranging from 0.81 ± 0.02 to 0.84 ± 0.0, while the Inception-v4 models achieved AUCs ranging from 0.85 ± 0.06 to 0.89 ± 0.01. These results demonstrate the promise of using deep learning models to differentiate COVID-19 patients with severe lung infection from those with non-severe infection on chest radiographs.
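The AUC figure of merit used here admits a compact rank-based computation: it equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney interpretation). A minimal stdlib sketch, illustrative rather than the authors' evaluation code:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as the fraction of (positive, negative) score pairs in which the
    positive case scores higher; ties count as half a win (Mann-Whitney U / n_pos*n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation gives 1.0, chance-level scoring gives 0.5; production code would use an O(n log n) rank-sum instead of the O(n²) double loop.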
Affiliation(s)
- André Sobiecki
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir M. Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi K. Samala
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Prachi P. Agarwal
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
4
Wu Y, Egan C, Olvera-Barrios A, Scheppke L, Peto T, Charbel Issa P, Heeren TFC, Leung I, Rajesh AE, Tufail A, Lee CS, Chew EY, Friedlander M, Lee AY. Developing a Continuous Severity Scale for Macular Telangiectasia Type 2 Using Deep Learning and Implications for Disease Grading. Ophthalmology 2024; 131:219-226. PMID: 37739233. PMCID: PMC10841914. DOI: 10.1016/j.ophtha.2023.09.016.
Abstract
PURPOSE: Deep learning (DL) models have achieved state-of-the-art medical diagnosis classification accuracy. Current models are limited by discrete diagnosis labels but could yield more information with diagnosis on a continuous scale. We developed a novel continuous severity scaling system for macular telangiectasia (MacTel) type 2 by combining a DL classification model with uniform manifold approximation and projection (UMAP). DESIGN: We used a DL network to learn a feature representation of MacTel severity from discrete severity labels and applied UMAP to embed this feature representation into 2 dimensions, thereby creating a continuous MacTel severity scale. PARTICIPANTS: A total of 2003 OCT volumes were analyzed from 1089 MacTel Project participants. METHODS: We trained a multiview DL classifier using multiple B-scans from OCT volumes to learn a previously published discrete 7-step MacTel severity scale. The classifier's last feature layer was extracted as input for UMAP, which embedded these features into a continuous 2-dimensional manifold. The DL classifier was assessed in terms of test accuracy. Rank correlation of the continuous UMAP scale against the previously published scale was calculated. Additionally, the UMAP scale was assessed via κ agreement against 5 clinical experts on 100 pairs of patient volumes: for each pair, clinical experts were asked to select the volume with more severe MacTel disease, and their selections were compared against the UMAP scale. MAIN OUTCOME MEASURES: Classification accuracy for the DL classifier and κ agreement versus clinical experts for UMAP. RESULTS: The multiview DL classifier achieved top-1 accuracy of 63.3% (186/294) on held-out test OCT volumes. The UMAP metric showed a clear continuous gradation of MacTel severity, with a Spearman rank correlation of 0.84 with the previously published scale. Furthermore, the continuous UMAP metric achieved κ agreements of 0.56 to 0.63 with 5 clinical experts, comparable with interobserver κ values. CONCLUSIONS: Our UMAP embedding generated a continuous MacTel severity scale without requiring continuous training labels. This technique can be applied to other diseases and may lead to more accurate diagnosis, improved understanding of disease progression, and identification of key imaging features for pathologic characteristics. FINANCIAL DISCLOSURE(S): Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
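The Spearman rank correlation used to validate the continuous scale is simply the Pearson correlation computed on ranks (with ties given their average rank). A minimal stdlib sketch, illustrative rather than the authors' code:

```python
def _ranks(xs):
    """Average ranks (1-based) of xs, with ties sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend over a run of tied values
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Monotonically increasing pairs give +1, reversed pairs give -1, regardless of the raw scale of either variable.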
Affiliation(s)
- Yue Wu
- Department of Ophthalmology, University of Washington, Seattle, Washington; The Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
- Catherine Egan
- Moorfields Eye Hospital, London, United Kingdom; University College London, Institute of Ophthalmology, London, United Kingdom
- Abraham Olvera-Barrios
- Moorfields Eye Hospital, London, United Kingdom; University College London, Institute of Ophthalmology, London, United Kingdom
- Lea Scheppke
- Lowy Medical Research Institute, La Jolla, California; The Scripps Research Institute, La Jolla, California
- Tunde Peto
- Center for Public Health, Queen's University Belfast, Belfast, United Kingdom
- Peter Charbel Issa
- Oxford Eye Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom; Nuffield Laboratory of Ophthalmology, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Irene Leung
- Moorfields Eye Hospital, London, United Kingdom
- Anand E Rajesh
- Department of Ophthalmology, University of Washington, Seattle, Washington; The Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
- Adnan Tufail
- Moorfields Eye Hospital, London, United Kingdom; University College London, Institute of Ophthalmology, London, United Kingdom
- Cecilia S Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; The Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
- Emily Y Chew
- Division of Epidemiology and Clinical Applications, National Eye Institute, National Institutes of Health, Bethesda, Maryland
- Martin Friedlander
- Lowy Medical Research Institute, La Jolla, California; The Scripps Research Institute, La Jolla, California
- Aaron Y Lee
- Department of Ophthalmology, University of Washington, Seattle, Washington; The Roger and Angie Karalis Johnson Retina Center, Seattle, Washington
5
Siracusano G, La Corte A, Nucera AG, Gaeta M, Chiappini M, Finocchio G. Effective processing pipeline PACE 2.0 for enhancing chest x-ray contrast and diagnostic interpretability. Sci Rep 2023; 13:22471. PMID: 38110512. PMCID: PMC10728198. DOI: 10.1038/s41598-023-49534-y.
Abstract
Preprocessing is an essential task for the correct analysis of digital medical images. In particular, X-ray images may contain artifacts, low contrast, diffraction patterns, or intensity inhomogeneities. Recently, we developed a procedure named PACE that improves chest X-ray (CXR) images to support the clinical evaluation of pneumonia caused by COVID-19. During the clinical benchmarking of this tool, some peculiar conditions were found that caused a reduction of detail over large bright regions (as in ground-glass opacities and in pleural effusions in bedridden patients), resulting in oversaturated areas. Here, we have significantly improved the overall performance of the original approach, including in those specific cases, by developing PACE2.0. It combines 2D image decomposition, non-local means denoising, gamma correction, and recursive algorithms to improve image quality. The tool was evaluated using four metrics: contrast improvement index (CII), information entropy (ENT), effective measure of enhancement (EME), and BRISQUE, with average improvements over the original radiographs of 35% in CII, 7.5% in ENT, 95.6% in EME, and 13% in BRISQUE. Additionally, the enhanced images were fed to a pre-trained DenseNet-121 model for transfer learning, increasing classification accuracy from 80% to 94% and recall from 89% to 97%. These improvements potentially enhance the interpretability of lesion detection in CXRs. PACE2.0 has the potential to become a valuable tool for clinical decision support and could help healthcare professionals detect pneumonia more accurately.
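Of the pipeline components listed, gamma correction has a particularly compact form. A minimal sketch for 8-bit grayscale values (illustrative only; PACE2.0 combines this with decomposition, denoising, and recursive steps not shown here):

```python
def gamma_correct(pixels, gamma):
    """Gamma correction for 8-bit grayscale pixels: out = 255 * (in/255) ** gamma.
    gamma < 1 brightens mid-tones (lifting dark lung fields); gamma > 1 darkens
    them (recovering detail in oversaturated bright regions)."""
    return [round(255.0 * (p / 255.0) ** gamma) for p in pixels]
```

For example, a mid-gray value of 64 maps to about 128 under gamma = 0.5, while pure black and white are fixed points for any gamma.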
Affiliation(s)
- Giulio Siracusano
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy
- Aurelio La Corte
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy
- Annamaria Giuseppina Nucera
- Unit of Radiology, Department of Advanced Diagnostic-Therapeutic Technologies, "Bianchi-Melacrino-Morelli" Hospital, Via Giuseppe Melacrino 21, 89124, Reggio Calabria, Italy
- Michele Gaeta
- Department of Biomedical Sciences, Dental and of Morphological and Functional Images, University of Messina, Via Consolare Valeria 1, 98125, Messina, Italy
- Massimo Chiappini
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy
- Maris Scarl, Via Vigna Murata 606, 00143, Rome, Italy
- Giovanni Finocchio
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy
- Department of Mathematical and Computer Sciences, Physical Sciences and Earth Sciences, University of Messina, V.le F. Stagno D'Alcontres 31, 98166, Messina, Italy
6
Schaudt D, von Schwerin R, Hafner A, Riedel P, Reichert M, von Schwerin M, Beer M, Kloth C. Augmentation strategies for an imbalanced learning problem on a novel COVID-19 severity dataset. Sci Rep 2023; 13:18299. PMID: 37880333. PMCID: PMC10600145. DOI: 10.1038/s41598-023-45532-2.
Abstract
Since the beginning of the COVID-19 pandemic, many different machine learning models have been developed to detect and verify COVID-19 pneumonia based on chest X-ray images. Although promising, binary models have only limited implications for medical treatment, whereas the prediction of disease severity suggests more suitable and specific treatment options. In this study, we publish severity scores for the 2358 COVID-19 positive images in the COVIDx8B dataset, creating one of the largest collections of publicly available COVID-19 severity data. Furthermore, we train and evaluate deep learning models on the newly created dataset to provide a first benchmark for the severity classification task. One of the main challenges of this dataset is the skewed class distribution, resulting in undesirable model performance for the most severe cases. We therefore propose and examine different augmentation strategies, specifically targeting majority and minority classes. Our augmentation strategies show significant improvements in precision and recall values for the rare and most severe cases. While the models might not yet fulfill medical requirements, they serve as an appropriate starting point for further research with the proposed dataset to optimize clinical resource allocation and treatment.
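Minority-class augmentation of the kind described can be sketched generically: append perturbed copies of rare-class samples until the class is better represented. The `augment` callback below is a hypothetical stand-in for whatever image transformations are actually used; this is an illustration of the general rebalancing idea, not the authors' specific strategy:

```python
import random

def oversample_minority(samples, labels, target_label, factor, augment, seed=0):
    """Rebalance a skewed dataset by appending `factor` * len(minority) augmented
    copies of minority-class samples.  `augment` is any callable that returns a
    perturbed copy of a sample (e.g. a flipped or jittered image)."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    minority = [s for s, y in zip(samples, labels) if y == target_label]
    n_extra = factor * len(minority)
    extra = [augment(rng.choice(minority)) for _ in range(n_extra)]
    return samples + extra, labels + [target_label] * n_extra
```

With an identity `augment`, three samples labeled [0, 0, 1] and factor 2 become five samples labeled [0, 0, 1, 1, 1]; in practice the augmentation should produce genuinely varied copies to avoid overfitting the duplicates.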
Affiliation(s)
- Daniel Schaudt
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Reinhold von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Alexander Hafner
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Pascal Riedel
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Manfred Reichert
- Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081, Ulm, Baden-Wurttemberg, Germany
- Marianne von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Meinrad Beer
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
- Christopher Kloth
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
7
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. PMID: 37608852. PMCID: PMC10440543. DOI: 10.1117/1.jmi.10.4.044504.
Abstract
Purpose: Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach: The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients, as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performance was evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results: Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) when predicting 48 h or more in advance, using the AI prognostic marker derived from CXR images. Conclusions: This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
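A by-patient split like the 64%/16%/20% one described assigns partitions per patient, so no patient's exams leak between training and test. A minimal sketch (illustrative; the seed and assignment order are assumptions, not the authors' procedure):

```python
import random

def split_by_patient(exam_patient_ids, fractions=(0.64, 0.16, 0.20), seed=0):
    """Assign each exam to train/val/test by PATIENT: every exam belonging to the
    same patient lands in the same partition, preventing patient-level leakage."""
    patients = sorted(set(exam_patient_ids))
    rng = random.Random(seed)
    rng.shuffle(patients)  # randomize which patients fall in which partition
    n = len(patients)
    n_train = int(fractions[0] * n)
    n_val = int(fractions[1] * n)
    part = {}
    for i, p in enumerate(patients):
        part[p] = "train" if i < n_train else ("val" if i < n_train + n_val else "test")
    return [part[p] for p in exam_patient_ids]
```

Splitting at the exam level instead would let two images of one patient appear in both train and test, optimistically biasing the reported AUC.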
Affiliation(s)
- Hui Li
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Qiyuan Hu
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Heather M. Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Jordan D. Fuhrman
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L. Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
8
Yoo SJ, Kim H, Witanto JN, Inui S, Yoon JH, Lee KD, Choi YW, Goo JM, Yoon SH. Generative adversarial network for automatic quantification of Coronavirus disease 2019 pneumonia on chest radiographs. Eur J Radiol 2023; 164:110858. PMID: 37209462. DOI: 10.1016/j.ejrad.2023.110858.
Abstract
PURPOSE: To develop a generative adversarial network (GAN) to automatically quantify COVID-19 pneumonia on chest radiographs. MATERIALS AND METHODS: This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans in 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from the whole, segmented lung, and pneumonia pixels of each CT scan. Two GANs were sequentially trained to generate lung images from radiographs and to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and with quantitative CT-driven pneumonia extent (four datasets, n = 54-375), along with analyzing the measurement difference between the GAN and CT extents. Three datasets (n = 243-1481), in which unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78% of patients, respectively, were used to examine the predictive power of GAN-driven pneumonia extent. RESULTS: GAN-driven radiographic pneumonia extent correlated with the severity score (0.611) and with CT-driven extent (0.640). The 95% limits of agreement between GAN- and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively. CONCLUSION: The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.
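The extent measure defined above (pneumonia area divided by lung area, as a percentage) reduces to a simple ratio of mask areas once the GAN has produced the lung and pneumonia images. A sketch on flat binary masks (illustrative only; the GANs that produce the masks are the substance of the paper):

```python
def pneumonia_extent(pneumonia_mask, lung_mask):
    """Pneumonia extent in percent: pneumonia pixel count / lung pixel count * 100.
    Masks are flat sequences of 0/1 pixels; pneumonia is assumed to lie within
    the lung so the ratio stays in [0, 100]."""
    lung_area = sum(lung_mask)
    if lung_area == 0:
        return 0.0  # no lung detected; define extent as 0 rather than divide by zero
    return 100.0 * sum(pneumonia_mask) / lung_area
```

One pneumonia pixel inside a four-pixel lung gives an extent of 25%, matching the 0-100% scale described.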
Affiliation(s)
- Seung-Jin Yoo
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Hyungjin Kim
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Shohei Inui
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Radiology, Japan Self-Defense Forces Central Hospital, Tokyo, Japan
- Jeong-Hwa Yoon
- Institute of Health Policy and Management, Medical Research Center, Seoul National University, Seoul, South Korea
- Ki-Deok Lee
- Division of Infectious Diseases, Department of Internal Medicine, Myongji Hospital, Goyang, Korea
- Yo Won Choi
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea; MEDICALIP Co. Ltd., Seoul, Korea
9
Al-Zyoud W, Erekat D, Saraiji R. COVID-19 chest X-ray image analysis by threshold-based segmentation. Heliyon 2023; 9:e14453. PMID: 36919086. PMCID: PMC9998128. DOI: 10.1016/j.heliyon.2023.e14453.
Abstract
COVID-19 is a severe acute respiratory syndrome that has caused a major ongoing pandemic worldwide. Imaging systems such as conventional chest X-ray (CXR) and computed tomography (CT) have proven essential for patients due to the lack of information about the complications that can result from this disease. In this study, the aim was to develop and evaluate a method for automatic diagnosis of COVID-19 using binary segmentation of chest X-ray images. The study used frontal chest X-ray images of 27 infected and 19 uninfected individuals from the Kaggle COVID-19 Radiography Database and applied binary segmentation and quartering in MATLAB to analyze the images. The binary images of the lung were split into four quarters: Q1 = right upper quarter, Q2 = left upper quarter, Q3 = right lower quarter, and Q4 = left lower quarter. The results showed that COVID-19 patients had a higher percentage of attenuation in the lower lobes of the lungs (p-value < 0.00001) compared to healthy individuals, which is likely due to ground-glass opacities and consolidations caused by the infection. The ratios of white pixels in the four quarters of the X-ray images were calculated, and it was found that the left lower quarter had the highest number of white pixels, but without statistical significance compared to the right lower quarter (p-value = 0.102792). This supports the theory that COVID-19 primarily affects the lower and lateral fields of the lungs and suggests that disease involvement is concentrated mostly in the lower left quarter of the lungs. Overall, this study contributes to the understanding of the impact of COVID-19 on the respiratory system and can help in the development of accurate diagnostic methods.
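The thresholding and quartering steps described can be sketched directly (an illustration on a nested-list grayscale image; the study's MATLAB implementation and its anatomical quarter labels Q1-Q4 differ from the array-order labels used here):

```python
def binarize(img, threshold):
    """Threshold-based binary segmentation: pixel -> 1 (white) if >= threshold, else 0."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

def quarter_white_ratios(binary):
    """White-pixel ratio in each quarter of a binary image, in array order:
    upper-left, upper-right, lower-left, lower-right."""
    h, w = len(binary), len(binary[0])
    mid_r, mid_c = h // 2, w // 2

    def ratio(r0, r1, c0, c1):
        cells = [binary[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        return sum(cells) / len(cells)

    return [ratio(0, mid_r, 0, mid_c), ratio(0, mid_r, mid_c, w),
            ratio(mid_r, h, 0, mid_c), ratio(mid_r, h, mid_c, w)]
```

Comparing per-quarter ratios between patient groups is then a matter of collecting these values and applying a significance test, as the study does.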
Affiliation(s)
- Walid Al-Zyoud
- Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, 11180 Amman, Jordan
- Dana Erekat
- Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, 11180 Amman, Jordan
- Rama Saraiji
- Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, 11180 Amman, Jordan
10
Hamad QS, Samma H, Suandi SA. Feature selection of pre-trained shallow CNN using the QLESCA optimizer: COVID-19 detection as a case study. Appl Intell 2023; 53:1-23. PMID: 36777882. PMCID: PMC9900578. DOI: 10.1007/s10489-022-04446-8.
Abstract
According to the World Health Organization, millions of infections and many deaths have been recorded worldwide since the emergence of the coronavirus disease (COVID-19). Since 2020, many computer science researchers have used convolutional neural networks (CNNs) to develop frameworks to detect this disease. However, poor feature extraction from chest X-ray images and the high computational cost of the available models make an accurate and fast COVID-19 detection framework difficult to achieve. Moreover, poor feature extraction causes the 'curse of dimensionality', which negatively affects model performance. Feature selection is typically considered a preprocessing mechanism for finding an optimal subset of features from the full feature set in the data mining process. Thus, the major purpose of this study is to offer an accurate and efficient approach for extracting COVID-19 features from chest X-rays that is also less computationally expensive than earlier approaches. To achieve this goal, we design a feature extraction mechanism based on a shallow convolutional neural network (SCNN) and use an effective feature selection method built on the newly developed optimization algorithm, the Q-Learning Embedded Sine Cosine Algorithm (QLESCA). Support vector machines (SVMs) are used as the classifier. Five publicly available chest X-ray image datasets, consisting of 4848 COVID-19 images and 8669 non-COVID-19 images, are used to train and evaluate the proposed model. The performance of the QLESCA is evaluated against nine recent optimization algorithms. The proposed method achieves the highest accuracy of 97.8086% while reducing the number of features from 100 to 38. Experiments prove that the accuracy of the model improves when the QLESCA is used as the dimensionality reduction technique, by selecting relevant features.
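Wrapper-style feature selection of this kind searches over binary masks, scoring each candidate subset by training a classifier on the kept features. A sketch of the mask application and a commonly used accuracy-vs-subset-size fitness (the fitness formula is an assumption for illustration, not necessarily the paper's exact objective):

```python
def apply_feature_mask(X, mask):
    """Keep only the feature columns whose mask bit is 1 -- the binary subset
    encoding that a metaheuristic optimizer (QLESCA in the paper) searches over."""
    keep = [j for j, bit in enumerate(mask) if bit]
    return [[row[j] for j in keep] for row in X]

def fitness(accuracy, n_selected, n_total, alpha=0.99):
    """A common wrapper-selection fitness: reward classifier accuracy while
    penalizing larger subsets.  alpha weights accuracy against compactness."""
    return alpha * accuracy + (1 - alpha) * (1 - n_selected / n_total)
```

The optimizer proposes masks, the SVM is trained on the masked features, and the fitness of each mask guides the search; reducing 100 features to 38, as reported, corresponds to a mask with 38 ones.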
Affiliation(s)
- Qusay Shihab Hamad
- Intelligent Biometric Group, School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia
- University of Information Technology and Communications (UOITC), Baghdad, Iraq
- Hussein Samma
- SDAIA-KFUPM Joint Research Center for Artificial Intelligence (JRC-AI), King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
- Shahrel Azmin Suandi
- Intelligent Biometric Group, School of Electrical and Electronic Engineering, Engineering Campus, Universiti Sains Malaysia, 14300 Nibong Tebal, Penang, Malaysia