1
Wang X, Lu Z, Huang S, Ting Y, Ting JSZ, Chen W, Tan CH, Huang W. TransMVAN: Multi-view Aggregation Network with Transformer for Pneumonia Diagnosis. J Imaging Inform Med 2024. [PMID: 38977615] [DOI: 10.1007/s10278-024-01169-9]
Abstract
Automated and accurate classification of pneumonia plays a crucial role in improving the performance of computer-aided diagnosis systems for chest X-ray images. Nevertheless, it is a challenging task because the complex structural information of lung abnormalities is difficult to learn from chest X-ray images. In this paper, we propose a multi-view aggregation network with Transformer (TransMVAN) for pneumonia classification in chest X-ray images. Specifically, we incorporate knowledge from glance and focus views to enrich the feature representation of lung abnormalities. Moreover, to capture the complex relationships among different lung regions, we propose a bi-directional multi-scale vision Transformer (biMSVT), which propagates informative messages between different lung regions in two directions. In addition, we propose a gated multi-view aggregation (GMVA) scheme to adaptively select feature information from the glance and focus views for further improvement of pneumonia diagnosis. Our proposed method achieves AUCs of 0.9645 and 0.9550 for pneumonia classification on two different chest X-ray image datasets. In addition, it achieves an AUC of 0.9761 for distinguishing positive from negative polymerase chain reaction (PCR) cases, and an AUC of 0.9741 for classifying non-COVID-19 pneumonia, COVID-19 pneumonia, and normal cases. Experimental results demonstrate the advantage of our method over the comparison methods for pneumonia diagnosis from chest X-ray images.
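The gated aggregation idea described in this abstract can be illustrated with a toy sketch (not the authors' implementation; all names, weights, and dimensions here are invented for illustration): a sigmoid gate computed from both view embeddings adaptively weights each feature dimension of the glance and focus views.

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_fusion(glance, focus, W, b):
    """Fuse two view embeddings with a learned sigmoid gate.

    gate = sigmoid(W @ [glance; focus] + b) weighs each feature dimension;
    the fused vector interpolates elementwise between the two views.
    """
    z = np.concatenate([glance, focus])
    gate = 1.0 / (1.0 + np.exp(-(W @ z + b)))  # per-dimension gate in (0, 1)
    return gate * glance + (1.0 - gate) * focus

d = 8
glance = rng.normal(size=d)        # embedding from the whole-image "glance" view
focus = rng.normal(size=d)         # embedding from the cropped lung "focus" view
W = rng.normal(size=(d, 2 * d)) * 0.1  # stand-in for learned gate parameters
b = np.zeros(d)

fused = gated_fusion(glance, focus, W, b)
print(fused.shape)  # (8,)
```

Because the gate lies in (0, 1), every fused feature is a convex combination of the corresponding glance and focus features.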
Affiliation(s)
- Xiaohong Wang
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
- Zhongkang Lu
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
- Su Huang
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
- Yonghan Ting
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Jordan Sim Zheng Ting
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Wenxiang Chen
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, 308433, Singapore, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, 308232, Singapore, Singapore
- Weimin Huang
- Institute for Infocomm Research (I²R), A*STAR, 138632, Singapore, Singapore
2
Cao R, Liu Y, Wen X, Liao C, Wang X, Gao Y, Tan T. Reinvestigating the performance of artificial intelligence classification algorithms on COVID-19 X-Ray and CT images. iScience 2024; 27:109712. [PMID: 38689643] [PMCID: PMC11059117] [DOI: 10.1016/j.isci.2024.109712]
Abstract
There are concerns that artificial intelligence (AI) algorithms may create underdiagnosis bias by mislabeling individuals with certain attributes (e.g., female or young) as healthy. Addressing this bias is crucial given the urgent need for AI diagnostics when facing rapidly spreading infectious diseases such as COVID-19. We find that prevalent AI diagnostic models show elevated underdiagnosis rates among specific patient populations, and that the rate is higher still in some intersectional subgroups (for example, females aged 20-40 years). Additionally, we find that training AI models on heterogeneous datasets (positive and negative samples drawn from different datasets) may lead to poor model generalization: the model's classification performance varies significantly across test sets, with accuracy on the best-performing test set more than 40% higher than on the worst. In conclusion, we developed an AI bias analysis pipeline to help researchers recognize and address biases that impact medical equality and ethics.
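The subgroup underdiagnosis analysis described here amounts to computing the false-negative rate within each patient attribute group. A minimal sketch with made-up labels and predictions (not the paper's pipeline or data):

```python
import numpy as np

# Hypothetical labels/predictions: 1 = diseased. An "underdiagnosis" is a
# diseased case the model predicts as healthy (a false negative).
y_true = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0, 1, 1])
sex = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])

def underdiagnosis_rate(y_true, y_pred, mask):
    """False-negative rate restricted to the subgroup selected by `mask`."""
    diseased = mask & (y_true == 1)
    if diseased.sum() == 0:
        return float("nan")
    return float(((y_pred == 0) & diseased).sum() / diseased.sum())

rates = {g: underdiagnosis_rate(y_true, y_pred, sex == g) for g in ("F", "M")}
print(rates)
```

Intersectional subgroups (e.g., sex crossed with an age band) are handled the same way by AND-ing additional masks.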
Affiliation(s)
- Rui Cao
- School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Yanan Liu
- School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Xin Wen
- School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Caiqing Liao
- School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Xin Wang
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, Amsterdam 1066 CX, the Netherlands
- Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA Nijmegen, the Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, MD, Maastricht 6200, the Netherlands
- Yuan Gao
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, Amsterdam 1066 CX, the Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, MD, Maastricht 6200, the Netherlands
- Tao Tan
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, Amsterdam 1066 CX, the Netherlands
- Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA Nijmegen, the Netherlands
- Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
3
Imagawa K, Shiomoto K. Evaluation of Effectiveness of Self-Supervised Learning in Chest X-Ray Imaging to Reduce Annotated Images. J Imaging Inform Med 2024. [PMID: 38459399] [DOI: 10.1007/s10278-024-00975-5]
Abstract
A significant challenge in machine learning-based medical image analysis is the scarcity of medical images. Obtaining a large number of labeled medical images is difficult because annotation is a time-consuming process that requires specialized knowledge. In addition, inappropriate annotation processes can increase model bias. Self-supervised learning (SSL) is a type of unsupervised learning that extracts image representations, and it can therefore be an effective way to reduce the number of labeled images needed. In this study, we investigated the feasibility of reducing the number of labeled images given a limited set of unlabeled medical images. Unlabeled chest X-ray (CXR) images were pretrained using the SimCLR framework, and the representations were then fine-tuned with supervised learning for the target task. A total of 2000 task-specific CXR images were used for binary classification of coronavirus disease 2019 (COVID-19) versus normal cases. The results demonstrate that performance can be maintained after pretraining on task-specific unlabeled CXR images even when the number of labeled CXR images is reduced by approximately 40%, and that it is significantly better than without pretraining. In contrast, when only a small number of labeled CXR images is available, a large number of unlabeled images is required for pretraining to maintain performance, regardless of task specificity. In summary, to reduce the number of labeled images using SimCLR, both the number of unlabeled images and the task-specific characteristics of the target images must be considered.
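SimCLR's training signal is the NT-Xent contrastive loss, which pulls two augmented views of the same image together in embedding space while pushing other images away. A minimal NumPy sketch of that loss on random stand-in embeddings (illustrative only; the actual framework trains a deep encoder on batches of augmented images):

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss, as in SimCLR.

    `z` stacks the projections of two augmented views: rows i and i+N form a
    positive pair; every other row in the batch serves as a negative.
    """
    n2 = z.shape[0]
    n = n2 // 2
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    pos = np.concatenate([np.arange(n, n2), np.arange(0, n)])  # each row's partner
    logits = sim - sim.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(n2), pos].mean())

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))  # 4 hypothetical image pairs, 16-dim projections
loss = nt_xent_loss(z)
print(loss)
```

Minimizing this loss over many unlabeled images is what yields the representations that are later fine-tuned with a small labeled set.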
Affiliation(s)
- Kuniki Imagawa
- Faculty of Information Technology, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
- Kohei Shiomoto
- Faculty of Information Technology, Tokyo City University, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
4
Sobiecki A, Hadjiiski LM, Chan HP, Samala RK, Zhou C, Stojanovska J, Agarwal PP. Detection of Severe Lung Infection on Chest Radiographs of COVID-19 Patients: Robustness of AI Models across Multi-Institutional Data. Diagnostics (Basel) 2024; 14:341. [PMID: 38337857] [PMCID: PMC10855789] [DOI: 10.3390/diagnostics14030341]
Abstract
The diagnosis of severe COVID-19 lung infection is important because it carries a higher risk for the patient and requires prompt treatment with oxygen therapy and hospitalization, while patients with less severe lung infection often remain under observation. Severe infections are also more likely to leave long-standing residual changes in the lungs and may need follow-up imaging. We developed deep learning neural network models for classifying severe vs. non-severe lung infection in COVID-19 patients on chest radiographs (CXR). A deep learning U-Net model was developed to segment the lungs, and Inception-v1 and Inception-v4 models were trained to classify severe vs. non-severe COVID-19 infection. Four CXR datasets from multi-country, multi-institutional sources were used to develop and evaluate the models. The combined dataset consisted of 5748 cases and 6193 CXR images with physicians' severity ratings as the reference standard. The area under the receiver operating characteristic curve (AUC) was used to evaluate model performance. We studied the reproducibility of classification performance using different combinations of training and validation sets, and we evaluated the generalizability of the trained models on both independent internal and external test sets. On the independent test sets, the Inception-v1 models achieved AUCs between 0.81 ± 0.02 and 0.84 ± 0.0, while the Inception-v4 models achieved AUCs between 0.85 ± 0.06 and 0.89 ± 0.01. These results demonstrate the promise of deep learning models for differentiating severe from non-severe lung infection on chest radiographs of COVID-19 patients.
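The AUC used as the performance metric here can be computed directly from the Mann-Whitney U statistic; a small self-contained sketch with hypothetical scores (generic, not the study's evaluation code):

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive case scores higher than a random negative case (ties count 1/2)."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # all positive/negative pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical model scores for 3 severe (1) and 3 non-severe (0) cases.
y = np.array([0, 0, 1, 1, 1, 0])
s = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2])
auc = auc_mann_whitney(y, s)
print(auc)  # 8 of 9 positive/negative pairs are ranked correctly
```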
Affiliation(s)
- André Sobiecki
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Lubomir M. Hadjiiski
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Heang-Ping Chan
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Ravi K. Samala
- Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Chuan Zhou
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
- Prachi P. Agarwal
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109, USA
5
Bassi PRAS, Dertkigil SSJ, Cavalli A. Improving deep neural network generalization and robustness to background bias via layer-wise relevance propagation optimization. Nat Commun 2024; 15:291. [PMID: 38177129] [PMCID: PMC10767127] [DOI: 10.1038/s41467-023-44371-z]
Abstract
Features in images' backgrounds can spuriously correlate with the images' classes, representing background bias. Such features can influence the classifier's decisions, causing shortcut learning (the Clever Hans effect). The phenomenon produces deep neural networks (DNNs) that perform well on standard evaluation datasets but generalize poorly to real-world data. Layer-wise Relevance Propagation (LRP) explains DNNs' decisions. Here, we show that optimizing LRP heatmaps can minimize the influence of background bias on deep classifiers, hindering shortcut learning. Because it does not increase run-time computational cost, the approach is light and fast, and it applies to virtually any classification architecture. After injecting synthetic bias into images' backgrounds, we compared our approach (dubbed ISNet) to eight state-of-the-art DNNs, quantitatively demonstrating its superior robustness to background bias. Mixed datasets are common for COVID-19 and tuberculosis classification with chest X-rays, fostering background bias. By focusing on the lungs, the ISNet reduced shortcut learning, and its generalization performance on external (out-of-distribution) test databases significantly surpassed all implemented benchmark models.
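For intuition, the LRP heatmaps being optimized are produced by redistributing an output score backward through the network. A toy epsilon-rule sketch for a two-layer ReLU network (illustrative only, not the ISNet code; the weights, sizes, and inputs are invented):

```python
import numpy as np

def lrp_epsilon(x, W1, b1, W2, b2, eps=1e-6):
    """Toy epsilon-rule LRP for a 2-layer ReLU network.

    The winning logit's score is redistributed to hidden units, then to input
    features, in proportion to each contribution z_ij = w_ij * a_j.
    """
    a1 = np.maximum(0.0, W1 @ x + b1)          # hidden activations
    out = W2 @ a1 + b2                         # output logits
    k = int(np.argmax(out))                    # explain the winning class
    # output -> hidden
    z2 = W2[k] * a1                            # per-hidden-unit contributions
    denom2 = z2.sum() + b2[k]
    r_hidden = z2 * (out[k] / (denom2 + eps * np.sign(denom2)))
    # hidden -> input
    z1 = W1 * x[None, :]                       # per-input contributions, per unit
    denom1 = z1.sum(axis=1) + b1
    r_input = (z1 * (r_hidden / (denom1 + eps * np.sign(denom1)))[:, None]).sum(axis=0)
    return r_input, out, k

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=6)              # positive inputs keep units alive
W1, b1 = rng.uniform(0.1, 1.0, size=(4, 6)), np.zeros(4)
W2, b2 = rng.normal(size=(3, 4)), np.zeros(3)
relevance, logits, k = lrp_epsilon(x, W1, b1, W2, b2)
print(relevance.shape)  # (6,)
```

With zero biases the input relevance approximately conserves the explained logit; in the paper's setting, a loss on such heatmaps penalizes relevance that falls outside the lungs.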
Affiliation(s)
- Pedro R A S Bassi
- Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Center for Biomolecular Nanotechnologies, Istituto Italiano di Tecnologia, 73010, Arnesano (LE), Italy
- Sergio S J Dertkigil
- School of Medical Sciences, University of Campinas (UNICAMP), Campinas (SP), Brazil
- Andrea Cavalli
- Alma Mater Studiorum - University of Bologna, Bologna, Italy
- Istituto Italiano di Tecnologia, 16163, Genova (GE), Italy
6
Azad R, Kazerouni A, Heidari M, Aghdam EK, Molaei A, Jia Y, Jose A, Roy R, Merhof D. Advances in medical image analysis with vision Transformers: A comprehensive review. Med Image Anal 2024; 91:103000. [PMID: 37883822] [DOI: 10.1016/j.media.2023.103000]
Abstract
The remarkable performance of the Transformer architecture in natural language processing has recently also triggered broad interest in computer vision. Among other merits, Transformers have been shown to be capable of learning long-range dependencies and spatial correlations, a clear advantage over convolutional neural networks (CNNs), which have so far been the de facto standard in computer vision. Thus, Transformers have become an integral part of modern medical image analysis. In this review, we provide an encyclopedic overview of the applications of Transformers in medical imaging. Specifically, we present a systematic and thorough review of relevant recent Transformer literature for different medical image analysis tasks, including classification, segmentation, detection, registration, synthesis, and clinical report generation. For each of these applications, we investigate the novelty, strengths, and weaknesses of the proposed strategies and develop taxonomies highlighting key properties and contributions. Further, where applicable, we outline current benchmarks on different datasets. Finally, we summarize key challenges and discuss future research directions. We also provide the cited papers with their corresponding implementations at https://github.com/mindflow-institue/Awesome-Transformer.
Affiliation(s)
- Reza Azad
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Amirhossein Kazerouni
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Moein Heidari
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Amirali Molaei
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Yiwei Jia
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Abin Jose
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Rijo Roy
- Faculty of Electrical Engineering and Information Technology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof
- Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
7
Benatti SV, Venturelli S, Crotti G, Ghirardi A, Binda F, Savardi M, Previtali G, Seghezzi M, Marozzi R, Corsi A, Bonaffini PA, Gori M, Falanga A, Signoroni A, Alessio MG, Zucchi A, Barbui T, Rizzi M. Clinical variables associated with late-onset thrombotic and cardiovascular events, after SARS-CoV-2 infection, in a cohort of patients from the first epidemic wave: an 18-month analysis on the "Surviving-COVID" cohort from Bergamo, Italy. Front Cardiovasc Med 2023; 10:1280584. [PMID: 38099229] [PMCID: PMC10720075] [DOI: 10.3389/fcvm.2023.1280584]
Abstract
Importance: Population studies have recorded an increased, unexplained risk of post-acute cardiovascular and thrombotic events, up to 1 year after acute severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection.
Objectives: To search for clinical variables and biomarkers associated with late post-acute thrombotic and cardiovascular events after SARS-CoV-2 infection.
Design: Retrospective cohort study.
Setting: Third-level referral hospital in Bergamo (Italy).
Participants: Analysis of an existing database of adult patients who received care for SARS-CoV-2 infection at our institution between 20 February and 30 September 2020, followed up on a single date ("entry date") at 3-6 months.
Exposure: Initial infection by SARS-CoV-2.
Main outcomes and measures: The primary outcome was the occurrence, in the 18 months after the entry date, of a composite endpoint defined by International Classification of Diseases, 9th edition (ICD-9) codes for at least one of: cerebral/cardiac ischemia, venous/arterial thrombosis (any site), pulmonary embolism, cardiac arrhythmia, or heart failure. Measures (as recorded on the entry date) included history of initial infection, symptoms, current medications, pulmonary function test, blood test results, and semi-quantitative radiographic lung damage (BRIXIA score). Individual clinical data were matched to hospitalizations, voluntary vaccination against SARS-CoV-2 (according to regulations and product availability), and documented reinfections in the following 18 months, as recorded in the provincial Health Authority database. A multivariable Cox proportional hazards model (including vaccine doses as a time-dependent variable) was fitted, adjusting for potential confounders. We report associations as hazard ratios (HR) and 95% confidence intervals (CI).
Results: Among 1,515 patients (948 men, 62.6%; median age 59, interquartile range 50-69), we identified 84 endpoint events occurring in 75 patients (5%): 30 arterial thromboses, 11 venous thromboses, 28 arrhythmic events, and 24 heart failure events. From the multivariable Cox model, we found the following significant associations with the outcome: previous occurrence of any outcome event in the 18 months before infection (HR: 2.38; 95% CI: 1.23-4.62); BRIXIA score ≥ 3 (HR: 2.43; 95% CI: 1.30-4.55); neutrophil-to-lymphocyte ratio ≥ 3.3 (HR: 2.60; 95% CI: 1.43-4.72); and estimated glomerular filtration rate < 45 ml/min/1.73 m² (HR: 3.84; 95% CI: 1.49-9.91).
Conclusions and relevance: We identified four clinical variables associated with the occurrence of post-acute thrombotic and cardiovascular events after SARS-CoV-2 infection. Further research is needed to confirm these results.
Affiliation(s)
- S. V. Benatti
- Infectious Diseases Unit, ASST Papa Giovanni XXIII, Bergamo, Italy
- S. Venturelli
- Infectious Diseases Unit, ASST Papa Giovanni XXIII, Bergamo, Italy
- Scuola di Medicina, Università degli Studi di Milano-Bicocca, Milano, Italy
- G. Crotti
- ATS Bergamo, Ufficio Epidemiologico, Bergamo, Italy
- A. Ghirardi
- Fondazione per la Ricerca Ospedale di Bergamo (FROM)—ETS, Bergamo, Italy
- F. Binda
- Infectious Diseases Unit, ASST Papa Giovanni XXIII, Bergamo, Italy
- M. Savardi
- Dipartimento di Specialità Medico Chirurgiche, Scienze Radiologiche e Sanità Pubblica, Università Degli Studi di Brescia, Brescia, Italy
- G. Previtali
- Central Laboratory, ASST Papa Giovanni XXIII, Bergamo, Italy
- M. Seghezzi
- Central Laboratory, ASST Papa Giovanni XXIII, Bergamo, Italy
- R. Marozzi
- Central Laboratory, ASST Papa Giovanni XXIII, Bergamo, Italy
- A. Corsi
- Scuola di Medicina, Università degli Studi di Milano-Bicocca, Milano, Italy
- Scuola di Specializzazione in Radiologia, Università Degli Studi di Milano-Bicocca, Milano, Italy
- P. A. Bonaffini
- Scuola di Medicina, Università degli Studi di Milano-Bicocca, Milano, Italy
- Radiology Unit, ASST Papa Giovanni XXIII, Bergamo, Italy
- M. Gori
- Cardiology Unit, ASST Papa Giovanni XXIII, Bergamo, Italy
- A. Falanga
- Scuola di Medicina, Università degli Studi di Milano-Bicocca, Milano, Italy
- Immunohematology and Transfusion Medicine, ASST Papa Giovanni XXIII, Bergamo, Italy
- A. Signoroni
- Dipartimento di Specialità Medico Chirurgiche, Scienze Radiologiche e Sanità Pubblica, Università Degli Studi di Brescia, Brescia, Italy
- M. G. Alessio
- Central Laboratory, ASST Papa Giovanni XXIII, Bergamo, Italy
- A. Zucchi
- ATS Bergamo, Ufficio Epidemiologico, Bergamo, Italy
- T. Barbui
- Fondazione per la Ricerca Ospedale di Bergamo (FROM)—ETS, Bergamo, Italy
- M. Rizzi
- Infectious Diseases Unit, ASST Papa Giovanni XXIII, Bergamo, Italy
- Scuola di Medicina, Università degli Studi di Milano-Bicocca, Milano, Italy
8
Sun J, Shi W, Giuste FO, Vaghani YS, Tang L, Wang MD. Improving explainable AI with patch perturbation-based evaluation pipeline: a COVID-19 X-ray image analysis case study. Sci Rep 2023; 13:19488. [PMID: 37945586] [PMCID: PMC10636093] [DOI: 10.1038/s41598-023-46493-2]
Abstract
Recent advances in artificial intelligence (AI) have sparked interest in developing explainable AI (XAI) methods for clinical decision support systems, especially in translational research. Although using XAI methods may enhance trust in black-box models, evaluating their effectiveness has been challenging, primarily due to the absence of human (expert) intervention, additional annotations, and automated strategies. In order to conduct a thorough assessment, we propose a patch perturbation-based approach to automatically evaluate the quality of explanations in medical imaging analysis. To eliminate the need for human efforts in conventional evaluation methods, our approach executes poisoning attacks during model retraining by generating both static and dynamic triggers. We then propose a comprehensive set of evaluation metrics during the model inference stage to facilitate the evaluation from multiple perspectives, covering a wide range of correctness, completeness, consistency, and complexity. In addition, we include an extensive case study to showcase the proposed evaluation strategy by applying widely-used XAI methods on COVID-19 X-ray imaging classification tasks, as well as a thorough review of existing XAI methods in medical imaging analysis with evaluation availability. The proposed patch perturbation-based workflow offers model developers an automated and generalizable evaluation strategy to identify potential pitfalls and optimize their proposed explainable solutions, while also aiding end-users in comparing and selecting appropriate XAI methods that meet specific clinical needs in real-world clinical research and practice.
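The simplest form of patch perturbation occludes one patch at a time and records how much the model's score changes; regions whose occlusion causes large drops are the ones the model relied on. A toy sketch with a stand-in "model" (not the paper's pipeline, which retrains with poisoned static and dynamic triggers):

```python
import numpy as np

def patch_importance(image, score_fn, patch=4):
    """Slide an occluding patch over the image and record the score drop.

    Larger drops mean the occluded region mattered more to the prediction.
    Returns one importance value per patch position.
    """
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0   # zero out one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy "model": total intensity in the top-left quadrant of an 8x8 image.
score = lambda img: float(img[:4, :4].sum())
img = np.ones((8, 8))
heat = patch_importance(img, score, patch=4)
print(heat)
```

Here only the top-left patch matters to the toy model, so the heatmap is nonzero only there; comparing such perturbation heatmaps against an XAI method's explanation is the basic idea behind the paper's correctness metrics.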
Affiliation(s)
- Jimin Sun
- School of Computer Science and Engineering, Georgia Institute of Technology, Atlanta, 30322, USA
- Wenqi Shi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, 30322, USA
- Felipe O Giuste
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, 30322, USA
- Yog S Vaghani
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, 30322, USA
- Lingzi Tang
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, 30322, USA
- May D Wang
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, 30322, USA
9
Schaudt D, von Schwerin R, Hafner A, Riedel P, Reichert M, von Schwerin M, Beer M, Kloth C. Augmentation strategies for an imbalanced learning problem on a novel COVID-19 severity dataset. Sci Rep 2023; 13:18299. [PMID: 37880333] [PMCID: PMC10600145] [DOI: 10.1038/s41598-023-45532-2]
Abstract
Since the beginning of the COVID-19 pandemic, many different machine learning models have been developed to detect and verify COVID-19 pneumonia based on chest X-ray images. Although promising, binary models have only limited implications for medical treatment, whereas the prediction of disease severity suggests more suitable and specific treatment options. In this study, we publish severity scores for the 2358 COVID-19 positive images in the COVIDx8B dataset, creating one of the largest collections of publicly available COVID-19 severity data. Furthermore, we train and evaluate deep learning models on the newly created dataset to provide a first benchmark for the severity classification task. One of the main challenges of this dataset is the skewed class distribution, resulting in undesirable model performance for the most severe cases. We therefore propose and examine different augmentation strategies, specifically targeting majority and minority classes. Our augmentation strategies show significant improvements in precision and recall values for the rare and most severe cases. While the models might not yet fulfill medical requirements, they serve as an appropriate starting point for further research with the proposed dataset to optimize clinical resource allocation and treatment.
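A minority-class augmentation strategy of the kind discussed can be sketched as resampling the rare severity classes, with light jitter as a stand-in "augmentation", until the class distribution is balanced (illustrative only; the paper's augmentations operate on images, not on feature vectors, and its strategies target majority and minority classes separately):

```python
import numpy as np

def oversample_minority(X, y, rng):
    """Balance a skewed dataset by resampling every class, with jitter,
    up to the size of the largest class."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    Xs, ys = [X], [y]
    for c, n in zip(classes, counts):
        idx = np.where(y == c)[0]
        extra = rng.choice(idx, size=target - n, replace=True)
        # light Gaussian jitter so resampled rows are not exact duplicates
        noisy = X[extra] + rng.normal(scale=0.01, size=X[extra].shape)
        Xs.append(noisy)
        ys.append(np.full(target - n, c))
    return np.concatenate(Xs), np.concatenate(ys)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
y = np.array([0] * 8 + [2] * 2)   # class 2 ("most severe") is rare
Xb, yb = oversample_minority(X, y, rng)
print(np.unique(yb, return_counts=True))
```

After balancing, every class contributes equally to the training loss, which is what improves precision and recall on the rare, most severe cases.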
Affiliation(s)
- Daniel Schaudt
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Reinhold von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Alexander Hafner
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Pascal Riedel
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Manfred Reichert
- Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081, Ulm, Baden-Wurttemberg, Germany
- Marianne von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Meinrad Beer
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
- Christopher Kloth
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
10
Liang Z, Xue Z, Rajaraman S, Feng Y, Antani S. Automatic Quantification of COVID-19 Pulmonary Edema by Self-supervised Contrastive Learning. In: Medical Image Learning with Limited and Noisy Data: Second International Workshop, MILLanD 2023, Held in Conjunction with MICCAI 2023, Vancouver, BC, Canada, October 8, 2023, Proceedings. 2023; 14307:128-137. [PMID: 38415180] [PMCID: PMC10896252] [DOI: 10.1007/978-3-031-44917-8_12]
Abstract
We propose a self-supervised machine learning method to automatically rate the severity of pulmonary edema on frontal chest X-ray radiographs (CXR), which can be related to COVID-19 viral pneumonia, using the modified radiographic assessment of lung edema (mRALE) scoring system. The new model was first optimized with the simple Siamese network (SimSiam) architecture, with an ImageNet-pretrained ResNet-50 as the backbone. The encoder projected a 2048-dimension embedding as representation features to a downstream fully connected deep neural network for mRALE score prediction. A 5-fold cross-validation with 2,599 frontal CXRs was used to examine the new model's performance in comparison with a non-pretrained SimSiam encoder and a ResNet-50 trained from scratch. The mean absolute error (MAE) of the new model is 5.05 (95% CI 5.03-5.08), the mean squared error (MSE) is 66.67 (95% CI 66.29-67.06), and the Spearman's correlation coefficient (Spearman ρ) against the expert-annotated scores is 0.77 (95% CI 0.75-0.79). All performance metrics of the new model are superior to those of the two comparators (P<0.01), while the MSE and Spearman ρ of the two comparators show no statistical difference (P>0.05). The model also achieved a prediction probability concordance of 0.811 and a quadratic weighted kappa of 0.739 with the medical expert annotations in external validation. We conclude that self-supervised contrastive learning is an effective strategy for automated mRALE scoring: it offers a new way to improve machine learning performance while minimizing expert involvement in quantitative medical image pattern learning.
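The three reported metrics (MAE, MSE, and Spearman's ρ) can be computed with a few lines of NumPy; a generic sketch on hypothetical score pairs (for brevity, ties in the rank computation are not handled, so this is not a drop-in replacement for a statistics library):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, and Spearman's rho for predicted vs. reference scores."""
    err = y_pred - y_true
    mae = float(np.abs(err).mean())
    mse = float((err ** 2).mean())

    # Spearman's rho = Pearson correlation of the ranks (no tie handling here)
    def rank(a):
        order = np.argsort(a)
        r = np.empty_like(order, dtype=float)
        r[order] = np.arange(len(a), dtype=float)
        return r

    rho = float(np.corrcoef(rank(y_true), rank(y_pred))[0, 1])
    return mae, mse, rho

# Hypothetical expert scores vs. model predictions.
y_true = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
y_pred = np.array([1.0, 4.0, 12.0, 14.0, 22.0])
mae, mse, rho = regression_metrics(y_true, y_pred)
print(mae, mse, rho)
```

Because the toy predictions preserve the ordering of the reference scores, Spearman's ρ is 1 even though MAE and MSE are nonzero; the two kinds of metric capture different aspects of scoring quality.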
Affiliation(s)
- Zhaohui Liang
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sivaramakrishnan Rajaraman
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Yang Feng
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- Computational Health Research Branch, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
11
Nizam NB, Siddiquee SM, Shirin M, Bhuiyan MIH, Hasan T. COVID-19 Severity Prediction from Chest X-ray Images Using an Anatomy-Aware Deep Learning Model. J Digit Imaging 2023; 36:2100-2112. [PMID: 37369941] [PMCID: PMC10502002] [DOI: 10.1007/s10278-023-00861-6]
Abstract
The COVID-19 pandemic has been adversely affecting patient management systems in hospitals around the world. Radiological imaging, especially chest X-ray and lung Computed Tomography (CT) scans, plays a vital role in the severity analysis of hospitalized COVID-19 patients. However, with an increasing number of patients and a lack of skilled radiologists, automated assessment of COVID-19 severity using medical image analysis has become increasingly important. Chest X-ray (CXR) imaging plays a significant role in assessing the severity of pneumonia and is the most frequently used diagnostic imaging modality in the world, especially in low-resource hospitals. Previous methods that automatically predict the severity of COVID-19 pneumonia mainly focus on feature pooling from pre-trained CXR models without explicitly considering the underlying human anatomical attributes. This paper proposes an anatomy-aware (AA) deep learning model that learns generic features from X-ray images while considering the underlying anatomical information. Utilizing a pre-trained model and lung segmentation masks, the model generates a feature vector including disease-level features and lung involvement scores. We have used four different open-source datasets, along with an in-house annotated test set, for training and evaluation of the proposed method. The proposed method improves the geographical extent score by 11% in terms of mean squared error (MSE) while preserving the benchmark result in lung opacity score. The results demonstrate the effectiveness of the proposed AA model in COVID-19 severity prediction from chest X-ray images. The algorithm can be used in hospitals in low-resource settings for COVID-19 severity prediction, especially where there is a lack of skilled radiologists.
Affiliation(s)
- Nusrat Binta Nizam
- mHealth Research Group, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Sadi Mohammad Siddiquee
- mHealth Research Group, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Mahbuba Shirin
- Department of Radiology and Imaging, Bangabandhu Sheikh Mujib Medical University, Shahbagh, Dhaka, 1000, Bangladesh
- Mohammed Imamul Hassan Bhuiyan
- Department of Electrical and Electronics Engineering (EEE), Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
- Taufiq Hasan
- mHealth Research Group, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Center for Bioengineering Innovation and Design (CBID), Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
12
Liu Z, Lv Q, Yang Z, Li Y, Lee CH, Shen L. Recent progress in transformer-based medical image analysis. Comput Biol Med 2023; 164:107268. [PMID: 37494821] [DOI: 10.1016/j.compbiomed.2023.107268]
Abstract
The transformer is primarily used in the field of natural language processing. Recently, it has been adopted and shows promise in the computer vision (CV) field. Medical image analysis (MIA), as a critical branch of CV, also greatly benefits from this state-of-the-art technique. In this review, we first recap the core component of the transformer, the attention mechanism, and the detailed structures of the transformer. After that, we depict the recent progress of the transformer in the field of MIA. We organize the applications in a sequence of different tasks, including classification, segmentation, captioning, registration, detection, enhancement, localization, and synthesis. The mainstream classification and segmentation tasks are further divided into eleven medical image modalities. A large number of experiments studied in this review illustrate that the transformer-based method outperforms existing methods through comparisons with multiple evaluation metrics. Finally, we discuss the open challenges and future opportunities in this field. This task-modality review with the latest contents, detailed information, and comprehensive comparison may greatly benefit the broad MIA community.
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore.
- Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Ziduo Yang
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, 518107, China
- Yifan Li
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore
- Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
13
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023; 47:301-320. [PMID: 38625639] [DOI: 10.1080/03091902.2024.2321846]
Abstract
Researchers and scientists can use computational models to turn linked data into useful information, aiding in disease diagnosis, examination, and viral containment, thanks to recent artificial intelligence and machine learning breakthroughs. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. In this regard, we examine a large number of critical studies conducted by various academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions in investigating different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can impact this context, including diagnosis (using chest X-ray and CT imaging), severity assessment, tracking, treatment, and the drug industry. Furthermore, we analyse the limits, restrictions, and hazards of these approaches.
Affiliation(s)
- Naser Zaeri
- Faculty of Computer Studies, Arab Open University, Kuwait
14
Multi-head deep learning framework for pulmonary disease detection and severity scoring with modified progressive learning. Biomed Signal Process Control 2023; 85:104855. [PMID: 36987448] [PMCID: PMC10036214] [DOI: 10.1016/j.bspc.2023.104855]
Abstract
Chest X-rays (CXR) are the most commonly used imaging methodology in radiology to diagnose pulmonary diseases, with close to 2 billion CXRs taken every year. The recent upsurge of COVID-19 and its variants, accompanied by pneumonia and tuberculosis, can be fatal in some cases, and lives could be saved through early detection and appropriate intervention for advanced cases. Thus, CXRs can be used for automated severity grading of pulmonary diseases to aid radiologists in making better and more informed diagnoses. In this article, we propose a single framework for disease classification and severity scoring produced by segmenting the lungs into six regions. We present a modified progressive learning technique in which the amount of augmentation at each step is capped. Our base network in the framework is first trained using modified progressive learning and can then be tweaked for new data sets. Furthermore, the segmentation task makes use of an attention map generated within and by the network itself. This attention mechanism makes it possible to achieve segmentation results on par with networks having an order of magnitude more parameters. We also propose severity score grading for four thoracic diseases that provides a single-digit score corresponding to the spread of opacity in different lung segments, developed with the help of radiologists. The proposed framework is evaluated using the BRAX data set for segmentation and classification into six classes, with severity grading for a subset of the classes. On the BRAX validation data set, we achieve F1 scores of 0.924 and 0.939 without and with fine-tuning, respectively. A mean matching score of 80.8% is obtained for severity score grading, while an average area under the receiver operating characteristic curve of 0.88 is achieved for classification.
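As a toy illustration of how a six-region lung segmentation can yield a single-digit severity score: count the regions whose predicted opacity exceeds a threshold. The threshold and counting rule below are our assumptions for illustration, not the paper's exact grading scheme:

```python
def severity_score(region_opacity_probs, threshold=0.5):
    """Map per-region opacity probabilities for the six lung regions
    (e.g. upper/middle/lower zones of each lung) to a 0-6 score by
    counting regions whose probability reaches the threshold."""
    assert len(region_opacity_probs) == 6, "expects exactly six lung regions"
    return sum(p >= threshold for p in region_opacity_probs)
```

For example, opacity in three of the six regions yields a score of 3; a published scheme would calibrate the threshold and possibly weight the regions differently.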
15
Zhu H, Zhu Z, Wang S, Zhang Y. CovC-ReDRNet: A Deep Learning Model for COVID-19 Classification. Machine Learning and Knowledge Extraction 2023; 5:684-712. [PMID: 38560420] [PMCID: PMC7615781] [DOI: 10.3390/make5030037]
Abstract
Since the COVID-19 pandemic outbreak, over 760 million confirmed cases and over 6.8 million deaths have been reported globally, according to the World Health Organization. While the SARS-CoV-2 virus carried by COVID-19 patients can be identified through the reverse transcription-polymerase chain reaction (RT-PCR) test with high accuracy, clinical misdiagnosis between COVID-19 and pneumonia patients remains a challenge. Therefore, we developed a novel CovC-ReDRNet model to distinguish COVID-19 patients from pneumonia patients as well as normal cases. ResNet-18 was introduced as the backbone model and tailored for the feature representation afterward. In our feature-based randomized neural network (RNN) framework, the feature representation automatically pairs with the deep random vector functional link network (dRVFL) as the optimal classifier, producing a CovC-ReDRNet model for the classification task. Results based on five-fold cross-validation reveal that our method achieved a macro-averaged (MA) sensitivity, specificity, accuracy, precision, and F1-score of 94.94%, 97.01%, 97.56%, 96.81%, and 95.84%, respectively. Ablation studies evidence the superiority of ResNet-18 over different backbone networks, RNNs over traditional classifiers, and deep RNNs over shallow RNNs. Moreover, our proposed model achieved a better MA accuracy than the state-of-the-art (SOTA) methods, the highest score of which was 95.57%. To conclude, our CovC-ReDRNet model can be perceived as an advanced computer-aided diagnostic model with high speed and high accuracy for classifying and predicting COVID-19 diseases.
Affiliation(s)
- Hanruo Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Ziquan Zhu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo 454000, China
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
16
Yoo SJ, Kim H, Witanto JN, Inui S, Yoon JH, Lee KD, Choi YW, Goo JM, Yoon SH. Generative adversarial network for automatic quantification of Coronavirus disease 2019 pneumonia on chest radiographs. Eur J Radiol 2023; 164:110858. [PMID: 37209462] [DOI: 10.1016/j.ejrad.2023.110858]
Abstract
PURPOSE To develop a generative adversarial network (GAN) to quantify COVID-19 pneumonia on chest radiographs automatically. MATERIALS AND METHODS This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans from 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from whole, segmented lung, and pneumonia pixels of each CT scan. Two GANs were sequentially trained to generate lung images from radiographs and to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and quantitative CT-driven pneumonia extent (four datasets, n = 54-375), along with analyzing the measurement difference between the GAN and CT extents. Three datasets (n = 243-1481), in which unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78% of cases, respectively, were used to examine the predictive power of GAN-driven pneumonia extent. RESULTS GAN-driven radiographic pneumonia extent was correlated with the severity score (0.611) and CT-driven extent (0.640). The 95% limits of agreement between GAN and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively. CONCLUSION The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.
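The pneumonia extent used throughout this study is simply the pneumonia area divided by the lung area, expressed as a percentage. A minimal sketch of that ratio on binary masks (illustrative only, not the authors' GAN pipeline):

```python
def pneumonia_extent(lung_mask, pneumonia_mask):
    """Pneumonia extent in percent: pneumonia pixels / lung pixels.
    Masks are binary 2D arrays (nested lists of 0/1) of equal shape."""
    lung_area = sum(sum(row) for row in lung_mask)
    pneumonia_area = sum(sum(row) for row in pneumonia_mask)
    return 100.0 * pneumonia_area / lung_area if lung_area else 0.0
```

In the paper, the two masks come from the sequentially trained GANs rather than from manual segmentation; the ratio itself is the same.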
Affiliation(s)
- Seung-Jin Yoo
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Hyungjin Kim
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
- Shohei Inui
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Radiology, Japan Self-Defense Forces Central Hospital, Tokyo, Japan
- Jeong-Hwa Yoon
- Institute of Health Policy and Management, Medical Research Center, Seoul National University, Seoul, South Korea
- Ki-Deok Lee
- Division of Infectious Diseases, Department of Internal Medicine, Myongji Hospital, Goyang, Korea
- Yo Won Choi
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; MEDICALIP Co. Ltd., Seoul, Korea
17
Sunnetci KM, Alkan A. Biphasic majority voting-based comparative COVID-19 diagnosis using chest X-ray images. Expert Systems with Applications 2023; 216:119430. [PMID: 36570382] [PMCID: PMC9767662] [DOI: 10.1016/j.eswa.2022.119430]
Abstract
The COVID-19 pandemic has been affecting the world since December 2019, and the number of infected people continues to rise rapidly. Chest X-ray images are clinical adjuncts that can be used in the diagnosis of COVID-19 disease. Because of the rapid worldwide spread of COVID-19 and the limited number of expert radiologists, the proposed method uses automatic rather than manual diagnosis. In the paper, COVID-19 Positive/Negative (2275 Positive, 4626 Negative) and Normal/Pneumonia (2313 Normal, 2313 Pneumonia) cases are diagnosed using chest X-ray images. Herein, 80% and 20% of the images are used in the training and validation sets, respectively. In the proposed method, six different classifiers are trained using chest X-ray images, and the five most successful classifiers are used in both phases. In Phase-1 and Phase-2, image features are extracted using the Bag of Features method for Cosine K-Nearest Neighbor (KNN), Linear Discriminant, Logistic Regression, Bagged Trees Ensemble, and Medium Gaussian Support Vector Machine (SVM), excluding SqueezeNet Deep Learning (K = 2000 and K = 1500 for Phase-1 and Phase-2, respectively). In both phases, the five most successful classifiers are determined, and images are classified with the help of the Majority Voting (Mathematical Evaluation) method. The application of the proposed method is designed for users to diagnose COVID-19 Positive, Normal, and Pneumonia cases. The results show that the accuracy values obtained by the Majority Voting (Mathematical Evaluation) method for Phase-1 and Phase-2 are 99.86% and 99.28%, respectively, indicating that the accuracy of the whole system is 99.63%. For Phase-1 and Phase-2, Specificity (%), Precision (%), Recall (%), F1 Score (%), Area Under Curve (AUC), and Matthews Correlation Coefficient (MCC) are 99.98-99.83-99.07-99.51-0.9974-0.9855 and 99.73-99.69-98.63-99.23-0.9928-0.9518, respectively.
For the whole system, Specificity (%), Precision (%), Recall (%), F1 Score (%), AUC, and MCC are 99.88, 99.78, 98.90, 99.40, 0.9956, and 0.9720, respectively. Compared with studies in the literature, the proposed model outperforms its counterparts, as the best performance metrics for this dataset were obtained in this study. In addition, since the biphasic majority voting technique is used, the proposed model is more reliable. On the other hand, although there are tens of thousands of studies on this subject, the usability of most of these models is debatable since they lack graphical user interface applications. In artificial intelligence technologies, the usability of developed models matters alongside their performance, because such models are generally used by people with limited knowledge of artificial intelligence.
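The fusion step described above can be sketched as plain majority voting over the five classifiers' labels. The routing below (Phase-2 runs only when Phase-1 votes Negative) is our assumption for illustration, not a detail confirmed by the abstract:

```python
from collections import Counter

def majority_vote(labels):
    """Return the label predicted by the most classifiers
    (ties resolved by first appearance, CPython 3.7+)."""
    return Counter(labels).most_common(1)[0][0]

def biphasic_diagnosis(phase1_votes, phase2_votes):
    """Phase-1 screens COVID-19 Positive vs. Negative; Phase-2 then
    separates Normal from Pneumonia for the negatives."""
    if majority_vote(phase1_votes) == "COVID-19 Positive":
        return "COVID-19 Positive"
    return majority_vote(phase2_votes)
```

With five voters per phase there is always a strict majority for two-class decisions, which is part of what makes the biphasic design robust.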
Affiliation(s)
- Kubilay Muhammed Sunnetci
- Department of Electrical and Electronics Engineering, Osmaniye Korkut Ata University, Osmaniye, Turkey
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey
- Ahmet Alkan
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş, Turkey
18
Panigutti C, Beretta A, Fadda D, Giannotti F, Pedreschi D, Perotti A, Rinzivillo S. Co-design of human-centered, explainable AI for clinical decision support. ACM Trans Interact Intell Syst 2023. [DOI: 10.1145/3587271]
Abstract
eXplainable AI (XAI) involves two intertwined but separate challenges: the development of techniques to extract explanations from black-box AI models, and the way such explanations are presented to users, i.e., the explanation user interface. Despite its importance, the second aspect has received limited attention so far in the literature. Effective AI explanation interfaces are fundamental for allowing human decision-makers to take advantage of and oversee high-risk AI systems effectively. Following an iterative design approach, we present the first cycle of prototyping-testing-redesigning of an explainable AI technique and its explanation user interface for clinical Decision Support Systems (DSS). We first present an XAI technique that meets the technical requirements of the healthcare domain: sequential, ontology-linked patient data, and multi-label classification tasks. We demonstrate its applicability to explain a clinical DSS, and we design a first prototype of an explanation user interface. Next, we test this prototype with healthcare providers and collect their feedback, with a two-fold outcome: first, we obtain evidence that explanations increase users' trust in the XAI system, and second, we obtain useful insights on the perceived deficiencies of their interaction with the system, so that we can re-design a better, more human-centered explanation interface.
Affiliation(s)
- Cecilia Panigutti
- Università di Pisa, Italy and European Commission, Joint Research Centre (JRC), Italy
19
Automated prediction of COVID-19 severity upon admission by chest X-ray images and clinical metadata aiming at accuracy and explainability. Sci Rep 2023; 13:4226. [PMID: 36918593] [PMCID: PMC10012307] [DOI: 10.1038/s41598-023-30505-2]
Abstract
In the past few years, COVID-19 posed a huge threat to healthcare systems around the world. One of the first waves of the pandemic hit Northern Italy severely, resulting in high casualties and in the near breakdown of primary care. Due to these facts, the Covid CXR Hackathon (Artificial Intelligence for Covid-19 prognosis: aiming at accuracy and explainability) challenge was launched at the beginning of February 2022, releasing a new imaging dataset with additional clinical metadata for each accompanying chest X-ray (CXR). In this article we summarize our techniques for correctly diagnosing chest X-ray images collected upon admission for severity of COVID-19 outcome. In addition to X-ray imagery, clinical metadata was provided, and the challenge also aimed at creating an explainable model. We created a best-performing as well as an explainable model that makes an effort to map clinical metadata to image features whilst predicting the prognosis. We also performed many ablation studies in order to identify crucial parts of the models and the predictive power of each feature in the datasets. We conclude that CXRs at admission do not significantly improve the predictive power of the metadata by themselves and contain mostly information that is also mutually present in the blood samples and other clinical factors collected at admission.
20
Guarrasi V, Soda P. Multi-objective optimization determines when, which and how to fuse deep networks: An application to predict COVID-19 outcomes. Comput Biol Med 2023; 154:106625. [PMID: 36738713] [PMCID: PMC9892294] [DOI: 10.1016/j.compbiomed.2023.106625]
Abstract
The COVID-19 pandemic has caused millions of cases and deaths, and the AI-related scientific community, after being involved with detecting COVID-19 signs in medical images, is now directing its efforts towards methods that can predict the progression of the disease. This task is multimodal by its very nature and, recently, baseline results achieved on the publicly available AIforCOVID dataset have shown that chest X-ray scans and clinical information are useful to identify patients at risk of severe outcomes. While deep learning has shown superior performance in several medical fields, in most cases it considers unimodal data only. In this respect, when, which and how to fuse the different modalities is an open challenge in multimodal deep learning. To cope with these three questions, here we present a novel approach optimizing the setup of a multimodal end-to-end model. It exploits Pareto multi-objective optimization, working with a performance metric and the diversity score of the multiple candidate unimodal neural networks to be fused. We test our method on the AIforCOVID dataset, attaining state-of-the-art results, not only outperforming the baseline performance but also being robust to external validation. Moreover, exploiting XAI algorithms, we establish a hierarchy among the modalities and extract the features' intra-modality importance, enriching trust in the predictions made by the model.
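Pareto multi-objective selection over candidate networks can be illustrated with a minimal non-dominated-set routine. The `(name, performance, diversity)` tuples below are hypothetical stand-ins for the paper's actual metrics, with both objectives maximized:

```python
def pareto_front(candidates):
    """Return the non-dominated candidates, where each candidate is a
    (name, performance, diversity) tuple and a candidate is dominated
    if another is at least as good on both objectives and strictly
    better on at least one."""
    front = []
    for i, (_, p1, d1) in enumerate(candidates):
        dominated = any(
            p2 >= p1 and d2 >= d1 and (p2 > p1 or d2 > d1)
            for j, (_, p2, d2) in enumerate(candidates)
            if j != i
        )
        if not dominated:
            front.append(candidates[i])
    return front
```

The resulting front contains the fusion candidates worth considering: no member can be improved on one objective without losing on the other.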
Affiliation(s)
- Valerio Guarrasi
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy; Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Italy.
- Paolo Soda
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy; Department of Radiation Sciences, Radiation Physics, Biomedical Engineering, Umeå University, Umeå, Sweden
21
Arias-Garzón D, Tabares-Soto R, Bernal-Salcedo J, Ruz GA. Biases associated with database structure for COVID-19 detection in X-ray images. Sci Rep 2023; 13:3477. [PMID: 36859430] [PMCID: PMC9975856] [DOI: 10.1038/s41598-023-30174-1]
Abstract
Several artificial intelligence algorithms have been developed for COVID-19-related topics. A common one is COVID-19 diagnosis using chest X-rays, where the eagerness to obtain early results has triggered the construction of a series of datasets in which bias management has not been thorough from the point of view of patient information, capture conditions, class imbalance, and careless mixtures of multiple datasets. This paper analyses 19 datasets of COVID-19 chest X-ray images, identifying potential biases. Moreover, computational experiments were conducted using one of the most popular datasets in this domain, which achieves 96.19% classification accuracy on the complete dataset; nevertheless, when evaluated with the ethical tool Aequitas, it fails on all the metrics. Ethical tools enhanced with some distribution and image quality considerations are the keys to developing or choosing a dataset with fewer bias issues. We aim to provide broad research on dataset problems, tools, and suggestions for future dataset developments and COVID-19 applications using chest X-ray images.
Affiliation(s)
- Daniel Arias-Garzón
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Colombia
- Reinel Tabares-Soto
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Colombia; Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, 7941169, Santiago, Chile; Departamento de Sistemas e Informática, Universidad de Caldas, Manizales, 170001, Colombia
- Joshua Bernal-Salcedo
- Departamento de Electrónica y Automatización, Universidad Autónoma de Manizales, Manizales, 170001, Colombia
- Gonzalo A. Ruz
- Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, 7941169, Santiago, Chile; Center of Applied Ecology and Sustainability (CAPES), 8331150, Santiago, Chile; Data Observatory Foundation, 7941169, Santiago, Chile
22
Meng Y, Bridge J, Addison C, Wang M, Merritt C, Franks S, Mackey M, Messenger S, Sun R, Fitzmaurice T, McCann C, Li Q, Zhao Y, Zheng Y. Bilateral adaptive graph convolutional network on CT based Covid-19 diagnosis with uncertainty-aware consensus-assisted multiple instance learning. Med Image Anal 2023; 84:102722. [PMID: 36574737] [DOI: 10.1016/j.media.2022.102722]
Abstract
Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph convolutional network (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model to segment the lungs from individual slices and mask the lungs as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice to automatically select a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's superiority in tackling cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, outperforming existing approaches in terms of learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
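The uncertainty-driven slice selection can be approximated by ranking slices by predictive entropy and keeping a fixed number of confident ones. This sketch covers only the uncertainty half of UC-MIL (the consensus term is omitted), under our own entropy assumption:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a softmax output; low entropy = confident."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_reliable_slices(per_slice_probs, k):
    """Keep the k slices with the most confident (lowest-entropy)
    predictions, returned in original slice order so the volume's
    anatomical ordering is preserved."""
    ranked = sorted(range(len(per_slice_probs)),
                    key=lambda i: predictive_entropy(per_slice_probs[i]))
    return sorted(ranked[:k])
```

Fixing k makes the downstream graph model's input size independent of how many slices a given CT volume contains, which is the point of this selection step.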
Affiliation(s)
- Yanda Meng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Joshua Bridge
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom
- Cliff Addison
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Manhui Wang
- Advanced Research Computing, University of Liverpool, Liverpool, United Kingdom
- Stu Franks
- Alces Flight Limited, Bicester, United Kingdom
- Maria Mackey
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Steve Messenger
- Amazon Web Services, 60 Holborn Viaduct, London, United Kingdom
- Renrong Sun
- Department of Radiology, Hubei Provincial Hospital of Integrated Chinese and Western Medicine, Hubei University of Chinese Medicine, Wuhan, China
- Thomas Fitzmaurice
- Adult Cystic Fibrosis Unit, Liverpool Heart and Chest Hospital NHS Foundation Trust, Liverpool, United Kingdom
- Caroline McCann
- Radiology, Liverpool Heart and Chest Hospital NHS Foundation Trust, United Kingdom
- Qiang Li
- The Affiliated People's Hospital of Ningbo University, Ningbo, China
- Yitian Zhao
- The Affiliated People's Hospital of Ningbo University, Ningbo, China; Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Science, Ningbo, China
- Yalin Zheng
- Department of Eye and Vision Science, University of Liverpool, Liverpool, United Kingdom; Liverpool Centre for Cardiovascular Science, University of Liverpool and Liverpool Heart & Chest Hospital, Liverpool, United Kingdom
| |
Collapse
|
23
|
Khan A, Akram MU, Nazir S. Automated grading of chest x-ray images for viral pneumonia with convolutional neural networks ensemble and region of interest localization. PLoS One 2023; 18:e0280352. [PMID: 36649367 PMCID: PMC9844910 DOI: 10.1371/journal.pone.0280352] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 12/27/2022] [Indexed: 01/18/2023] Open
Abstract
Following its initial identification on December 31, 2019, COVID-19 quickly spread around the world as a pandemic, claiming more than six million lives. An early diagnosis with appropriate intervention can help prevent deaths and serious illness, as the distinguishing symptoms that set COVID-19 apart from pneumonia and influenza frequently do not show up until after the patient has already suffered significant damage. A chest X-ray (CXR), one of the most widely used imaging modalities, offers a non-invasive method of detection. CXR image analysis can also reveal additional disorders, such as pneumonia, which show up as anomalies in the lungs. These CXRs can therefore be used for automated grading, aiding doctors in making a better diagnosis. To classify a CXR image into the Negative for Pneumonia, Typical, Indeterminate, and Atypical categories, we used the publicly available CXR competition dataset SIIM-FISABIO-RSNA COVID-19 from Kaggle. The proposed architecture employed an ensemble of EfficientNetV2-L models for classification, trained via transfer learning from ImageNet-21K weights on various subsets of the data (code for the proposed methodology is available at: https://github.com/asadkhan1221/siim-covid19.git). To identify and localise opacities, an ensemble of YOLO detectors was combined using Weighted Boxes Fusion (WBF). Significant generalisability gains were achieved by adding classification auxiliary heads to the CNN backbone. The method was improved further by utilising test-time augmentation for both classifiers and localizers. The Mean Average Precision results show that the proposed deep learning model achieves 0.617 and 0.609 on the public and private sets respectively, which is comparable to other techniques for this Kaggle dataset.
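The Weighted Boxes Fusion step above merges detections from several models by clustering overlapping boxes and averaging each cluster. A minimal greedy sketch (the published WBF algorithm adds refinements such as score rescaling; the 0.55 IoU threshold and mean-confidence fusion here are illustrative assumptions):

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def fuse(cluster):
    """Confidence-weighted average of a cluster's boxes."""
    w = sum(s for _, s in cluster)
    box = tuple(sum(b[k] * s for b, s in cluster) / w for k in range(4))
    return box, w / len(cluster)  # fused box and mean confidence

def weighted_boxes_fusion(boxes: List[Box], scores: List[float],
                          iou_thr: float = 0.55):
    """Greedy WBF: visit boxes by descending score, join a cluster whose
    fused box overlaps enough, otherwise start a new cluster."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    clusters = []  # each cluster: list of (box, score)
    for i in order:
        for c in clusters:
            if iou(fuse(c)[0], boxes[i]) >= iou_thr:
                c.append((boxes[i], scores[i]))
                break
        else:
            clusters.append([(boxes[i], scores[i])])
    return [fuse(c) for c in clusters]
```

Unlike non-maximum suppression, which discards overlapping boxes, WBF blends their coordinates, so every model's vote contributes to the final opacity location.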
Affiliation(s)
- Asad Khan
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
- Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, United Kingdom
24
A few-shot approach for COVID-19 screening in standard and portable chest X-ray images. Sci Rep 2022; 12:21511. [PMID: 36513713 PMCID: PMC9745688 DOI: 10.1038/s41598-022-25754-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2022] [Accepted: 12/05/2022] [Indexed: 12/15/2022] Open
Abstract
Reliable and effective diagnostic systems are of vital importance for COVID-19, specifically for triage and screening procedures. In this work, a fully automatic diagnostic system based on chest X-ray (CXR) images has been proposed. It relies on the few-shot paradigm, which allows the system to work with small databases. Furthermore, three components have been added to improve diagnostic performance: (1) a region proposal network, which makes the system focus on the lungs; (2) a novel cost function, which adds expert knowledge by assigning a specific penalty to each kind of misdiagnosis; and (3) an ensembling procedure integrating multiple image comparisons to produce more reliable diagnoses. Moreover, the COVID-SC dataset has been introduced, comprising almost 1,100 anteroposterior CXR images: 439 negative and 653 positive according to the RT-PCR test. Expert radiologists divided the negative images into three categories (normal lungs, COVID-related diseases, and other diseases) and the positive images into four severity levels, making this the most complete COVID-19 dataset in terms of patient diversity. The proposed system has been compared with state-of-the-art methods on the public COVIDGR-1.0 database, achieving the highest accuracy (81.13% ± 2.76%) and the most robust results. An ablation study proved that each system component contributes to the overall performance. The procedure has also been validated on the COVID-SC dataset under different scenarios, with accuracies ranging from 70.81% to 87.40%. In conclusion, our proposal provides accuracy appropriate for the early detection of COVID-19.
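The expert-knowledge cost function in component (2) can be illustrated with a class-dependent penalty matrix; the two-class setup and the weights below are illustrative assumptions, not the paper's actual penalties:

```python
import numpy as np

# Hypothetical penalty matrix: rows = true class, cols = predicted class.
# Missing a COVID-positive case (row 1, col 0) is penalised hardest;
# the exact weights are illustrative, not those of the paper.
CLASSES = ["negative", "positive"]
COST = np.array([[0.0, 1.0],   # true negative: a false alarm costs 1
                 [5.0, 0.0]])  # true positive: a missed case costs 5

def expected_cost(y_true: np.ndarray, probs: np.ndarray) -> float:
    """Mean expected misdiagnosis penalty: for each sample, weight the
    predicted class probabilities by the penalty row of its true class."""
    return float(np.mean(np.sum(COST[y_true] * probs, axis=1)))
```

Minimising such a loss pushes the model to trade a few extra false alarms for fewer missed positive cases.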
25
Cov-caldas: A new COVID-19 chest X-Ray dataset from state of Caldas-Colombia. Sci Data 2022; 9:757. [PMID: 36476596 PMCID: PMC9729565 DOI: 10.1038/s41597-022-01576-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Accepted: 07/21/2022] [Indexed: 12/13/2022] Open
Abstract
The emergence of COVID-19 as a global pandemic forced researchers worldwide in various disciplines to investigate and propose efficient strategies and/or technologies to prevent COVID-19 from further spreading. One of the main challenges to be overcome is the fast and efficient detection of COVID-19 using deep learning approaches and medical images such as chest computed tomography (CT) and chest X-ray images. To contribute to this challenge, a new dataset was collected in collaboration with "S.E.S Hospital Universitario de Caldas" ( https://hospitaldecaldas.com/ ) from Colombia and organized following the Medical Imaging Data Structure (MIDS) format. The dataset contains 7,307 chest X-ray images, divided into 3,077 COVID-19-positive and 4,230 COVID-19-negative images. Images were subjected to a selection and anonymization process to allow the scientific community to use them freely. Finally, different convolutional neural networks were used to perform technical validation. This dataset contributes to the scientific community by tackling significant limitations regarding data quality and availability for the detection of COVID-19.
26
Li H, Zeng N, Wu P, Clawson K. Cov-Net: A computer-aided diagnosis method for recognizing COVID-19 from chest X-ray images via machine vision. EXPERT SYSTEMS WITH APPLICATIONS 2022; 207:118029. [PMID: 35812003 PMCID: PMC9252868 DOI: 10.1016/j.eswa.2022.118029] [Citation(s) in RCA: 35] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 06/17/2022] [Accepted: 06/29/2022] [Indexed: 05/05/2023]
Abstract
In the context of the global COVID-19 pandemic, which threatens the lives of all human beings, early detection of COVID-19 among symptomatic patients is of vital importance. In this paper, a computer-aided diagnosis (CAD) model, Cov-Net, is proposed for accurate recognition of COVID-19 from chest X-ray images via machine vision techniques, concentrating on powerful and robust feature learning. In particular, a modified residual network with embedded asymmetric convolution and an attention mechanism is selected as the backbone of the feature extractor, after which skip-connected dilated convolutions with varying dilation rates are applied to achieve sufficient fusion of high-level semantic and low-level detailed information. Experimental results on two public COVID-19 radiography databases demonstrate the practicality of the proposed Cov-Net for accurate COVID-19 recognition, with accuracies of 0.9966 and 0.9901, respectively. Furthermore, under the same experimental conditions, Cov-Net outperforms six other state-of-the-art computer vision algorithms, which validates its superiority and competitiveness in building highly discriminative features. Hence, Cov-Net is deemed to have good generalization ability and may be applied to other CAD scenarios. Consequently, this work has both practical value, in providing a reliable reference to the radiologist, and theoretical significance, in developing methods to build robust features with strong representation ability.
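The dilated convolutions mentioned above enlarge the receptive field by spacing kernel taps apart rather than adding parameters. A minimal 1-D sketch with 'same' padding (illustrative only; Cov-Net itself uses 2-D convolutions inside a residual backbone):

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, kernel: np.ndarray, dilation: int) -> np.ndarray:
    """1-D dilated convolution with 'same' padding: the kernel taps are
    spaced `dilation` samples apart, so a 3-tap kernel at dilation d
    covers a window of 2*d + 1 samples."""
    k = len(kernel)
    pad = (k - 1) * dilation // 2        # assumes odd kernel length
    xp = np.pad(x, pad)
    return np.array([
        sum(kernel[j] * xp[i + j * dilation] for j in range(k))
        for i in range(len(x))
    ])
```

Running the same input through several dilation rates and concatenating the outputs is one way to realise the multi-rate fusion the abstract describes.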
Affiliation(s)
- Han Li
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Peishu Wu
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Kathy Clawson
- School of Computer Science, University of Sunderland, Saint Peter Campus, United Kingdom
27
Developing medical imaging AI for emerging infectious diseases. Nat Commun 2022; 13:7060. [PMID: 36400764 PMCID: PMC9672573 DOI: 10.1038/s41467-022-34234-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Accepted: 10/19/2022] [Indexed: 11/19/2022] Open
Abstract
Very few of the COVID-19 ML models were fit for deployment in real-world settings. In this Comment, Huang et al. discuss the main steps required to develop clinically useful models in the context of an emerging infectious disease.
28
New patch-based strategy for COVID-19 automatic identification using chest x-ray images. HEALTH AND TECHNOLOGY 2022; 12:1117-1132. [PMCID: PMC9647770 DOI: 10.1007/s12553-022-00704-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 10/09/2022] [Indexed: 11/11/2022]
Abstract
Purpose: The development of a robust model for automatic identification of COVID-19 based on chest X-rays has been a widely addressed topic over the last couple of years; however, the scarcity of good-quality image sets, and their limited size, have proven to be an important obstacle to obtaining reliable models. In fact, models proposed so far have suffered from over-fitting to erroneous features instead of learning lung features, a phenomenon known as shortcut learning. In this research, a new image classification methodology is proposed that attempts to mitigate this problem. Methods: To this end, a set of images was annotated by expert radiologists. The lung region was then segmented, and a new classification strategy is proposed based on a patch partitioning that improves the resolution seen by the convolutional neural network. In addition, a set of native images, used as an external evaluation set, is released. Results: The best results were obtained for the 6-patch splitting variant, with 0.887 accuracy, 0.850 recall, and 0.848 F1 score on the external validation set. Conclusion: The results show that the proposed strategy maintains similar values between internal and external validation, which gives our model generalization power and makes it suitable for use in hospital settings. Supplementary Information: The online version contains supplementary material available at 10.1007/s12553-022-00704-4.
29
Sharma A, Mishra PK. Covid-MANet: Multi-task attention network for explainable diagnosis and severity assessment of COVID-19 from CXR images. PATTERN RECOGNITION 2022; 131:108826. [PMID: 35698723 PMCID: PMC9170279 DOI: 10.1016/j.patcog.2022.108826] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 04/24/2022] [Accepted: 06/02/2022] [Indexed: 05/17/2023]
Abstract
The devastating outbreak of Coronavirus Disease (COVID-19) cases in early 2020 led the world to face a health crisis. The exponential reproduction rate of COVID-19 can only be reduced by correct early diagnosis of infection cases. Initial research findings reported that radiological examinations using the CT and CXR modalities successfully reduce the false negatives of the RT-PCR test. This research study aims to develop an explainable diagnosis system for the detection and infection-region quantification of COVID-19. Existing research studies have successfully explored deep learning approaches with high performance measures but lacked generalization and interpretability for COVID-19 diagnosis. In this study, we address these issues with Covid-MANet, an automated end-to-end multi-task attention network that works on five classes in three stages for COVID-19 infection screening. The first stage of Covid-MANet localizes the model's attention to the relevant lung region for disease recognition. The second stage differentiates COVID-19 cases from bacterial pneumonia, viral pneumonia, normal, and tuberculosis cases. To improve interpretation and explainability, three experiments were conducted to explore the most coherent and appropriate classification approach; moreover, the multi-scale attention model MA-DenseNet201 is proposed for the classification of COVID-19 cases. The final stage of Covid-MANet quantifies the proportion of infection and the severity of COVID-19 in the lungs. COVID-19 cases are graded into more specific severity levels, namely mild, moderate, severe, and critical, as per the score assigned by the RALE scoring system. The MA-DenseNet201 classification model outperforms eight state-of-the-art CNN models in terms of sensitivity and interpretation with the lung localization network.
COVID-19 infection segmentation by UNet with a DenseNet121 encoder achieves a Dice score of 86.15%, outperforming UNet, UNet++, Attention UNet, and R2UNet with VGG16, ResNet50, and DenseNet201 encoders. The proposed network not only classifies images based on the predicted label but also highlights the infection by segmentation/localization of model-focused regions to support explainable decisions. The MA-DenseNet201 model with a segmentation-based cropping approach achieves a maximum interpretation of 96% with a COVID-19 sensitivity of 97.75%. Finally, based on class-varied sensitivity analysis, the Covid-MANet ensemble of MA-DenseNet201, ResNet50, and MobileNet achieves 95.05% accuracy and 98.75% COVID-19 sensitivity. The proposed model is externally validated on an unseen dataset, yielding 98.17% COVID-19 sensitivity.
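Grading severity from a quantified infection proportion amounts to thresholding a score into the four levels named above. A minimal sketch, assuming an infection ratio in [0, 1] and illustrative cut-offs (not the paper's RALE-calibrated thresholds):

```python
def severity_grade(infection_ratio: float) -> str:
    """Map the segmented infection proportion of the lungs to one of the
    four severity levels. The 0.25 / 0.50 / 0.75 cut-offs are
    illustrative assumptions, not the paper's calibrated values."""
    if infection_ratio < 0.25:
        return "mild"
    if infection_ratio < 0.50:
        return "moderate"
    if infection_ratio < 0.75:
        return "severe"
    return "critical"
```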
Affiliation(s)
- Ajay Sharma
- Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
- Pramod Kumar Mishra
- Department of Computer Science, Institute of Science, Banaras Hindu University, Varanasi 221005, India
30
Mittal S, Venugopal VK, Agarwal VK, Malhotra M, Chatha JS, Kapur S, Gupta A, Batra V, Majumdar P, Malhotra A, Thakral K, Chhabra S, Vatsa M, Singh R, Chaudhury S. A novel abnormality annotation database for COVID-19 affected frontal lung X-rays. PLoS One 2022; 17:e0271931. [PMID: 36240175 PMCID: PMC9565456 DOI: 10.1371/journal.pone.0271931] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Accepted: 07/10/2022] [Indexed: 12/23/2022] Open
Abstract
Consistent clinical observations of characteristic findings of COVID-19 pneumonia on chest X-rays have attracted the research community to strive to provide a fast and reliable method for screening suspected patients. Several machine learning algorithms have been proposed to find the abnormalities in the lungs using chest X-rays specific to COVID-19 pneumonia and distinguish them from other etiologies of pneumonia. However, despite the enormous magnitude of the pandemic, there are very few public databases of COVID-19 pneumonia, and to the best of our knowledge, there is no database with annotation of abnormalities on the chest X-rays of COVID-19-affected patients. Annotated databases of X-rays can be of significant value in the design and development of algorithms for disease prediction. Further, explainability analysis for the performance of existing or new deep learning algorithms will be enhanced significantly with access to ground-truth abnormality annotations. The proposed COVID Abnormality Annotation for X-Rays (CAAXR) database is built upon the BIMCV-COVID19+ database, a large-scale dataset containing COVID-19+ chest X-rays. The primary contribution of this study is the annotation of the abnormalities in over 1,700 frontal chest X-rays. Further, we define protocols for semantic segmentation as well as classification for robust evaluation of algorithms. We provide benchmark results on the defined protocols using popular deep learning models such as DenseNet, ResNet, MobileNet, and VGG for classification, and UNet, SegNet, and Mask R-CNN for semantic segmentation. The class-wise accuracy, sensitivity, and AUC-ROC scores are reported for the classification models, and the IoU and Dice scores are reported for the segmentation models.
Affiliation(s)
- Surbhi Mittal
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Puspita Majumdar
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Department of Computer Science, IIIT Delhi, New Delhi, India
- Aakarsh Malhotra
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Department of Computer Science, IIIT Delhi, New Delhi, India
- Kartik Thakral
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Saheb Chhabra
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Department of Computer Science, IIIT Delhi, New Delhi, India
- Mayank Vatsa
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Richa Singh
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
- Santanu Chaudhury
- Department of Computer Science, IIT Jodhpur, Karwar, Rajasthan, India
31
He S, Feng Y, Grant PE, Ou Y. Deep Relation Learning for Regression and Its Application to Brain Age Estimation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2304-2317. [PMID: 35320092 PMCID: PMC9782832 DOI: 10.1109/tmi.2022.3161739] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Most deep learning models for temporal regression directly output the estimation based on single input images, ignoring the relationships between different images. In this paper, we propose deep relation learning for regression, aiming to learn different relations between a pair of input images. Four non-linear relations are considered: "cumulative relation," "relative relation," "maximal relation," and "minimal relation." These four relations are learned simultaneously from one deep neural network with two parts: feature extraction and relation regression. We use an efficient convolutional neural network to extract deep features from the pair of input images and apply a Transformer for relation learning. The proposed method is evaluated on a merged dataset of 6,049 subjects aged 0-97 years using 5-fold cross-validation for the task of brain age estimation. The experimental results show that the proposed method achieved a mean absolute error (MAE) of 2.38 years, which is lower than the MAEs of 8 other state-of-the-art algorithms with statistical significance (p < 0.05) in a two-sided paired t-test.
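Reading the four relations as the sum, signed difference, maximum, and minimum of the pair's ground-truth ages (a plausible interpretation of the names above, not spelled out in the abstract), the regression targets for one pair can be sketched as:

```python
def relation_targets(y1: float, y2: float) -> dict:
    """Pairwise regression targets for a pair of scans with ground-truth
    ages y1 and y2; the mapping of relation names to sum / difference /
    max / min is an assumption based on the relation names alone."""
    return {
        "cumulative": y1 + y2,   # combined age of the pair
        "relative":   y1 - y2,   # signed age difference
        "maximal":    max(y1, y2),
        "minimal":    min(y1, y2),
    }
```

One network head per relation would then regress these four values jointly from the paired features.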
32
Zicari RV. Assessing Trustworthy AI in Times of COVID-19: Deep Learning for Predicting a Multiregional Score Conveying the Degree of Lung Compromise in COVID-19 Patients. IEEE TRANSACTIONS ON TECHNOLOGY AND SOCIETY 2022; 3:272-289. [PMID: 36573115 PMCID: PMC9762021 DOI: 10.1109/tts.2022.3195114] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 07/13/2022] [Accepted: 07/18/2022] [Indexed: 12/30/2022]
Abstract
This article's main contributions are twofold: 1) to demonstrate how to apply the European Union's High-Level Expert Group (EU HLEG) guidelines for trustworthy AI in practice in the healthcare domain, and 2) to investigate the research question of what "trustworthy AI" means at the time of the COVID-19 pandemic. To this end, we present the results of a post-hoc self-assessment of the trustworthiness of an AI system for predicting a multiregional score conveying the degree of lung compromise in COVID-19 patients, developed and verified by an interdisciplinary team with members from academia, public hospitals, and industry during the pandemic. The AI system aims to help radiologists estimate and communicate the severity of damage in a patient's lung from chest X-rays. It has been experimentally deployed in the radiology department of the ASST Spedali Civili clinic in Brescia, Italy, since December 2020. The methodology we applied for our post-hoc assessment, called Z-Inspection®, uses sociotechnical scenarios to identify ethical, technical, and domain-specific issues in the use of the AI system in the context of the pandemic.
Affiliation(s)
- Roberto V. Zicari
- Department of Business Management and Analytics, Arcada University of Applied Sciences, Helsinki, Finland
33
Automatic scoring of COVID-19 severity in X-ray imaging based on a novel deep learning workflow. Sci Rep 2022; 12:12791. [PMID: 35896761 PMCID: PMC9326426 DOI: 10.1038/s41598-022-15013-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Accepted: 06/16/2022] [Indexed: 11/30/2022] Open
Abstract
In this study, we propose a two-stage workflow for the segmentation and scoring of lung diseases. The workflow mirrors the quantification, qualification, and visual assessment of lung diseases that radiologists and clinicians perform on X-ray images. It comprises two core stages devoted to lung and disease segmentation, plus a post-processing stage devoted to scoring; this final block estimates segment scores and computes the overall severity score of a patient. The models of the proposed workflow were trained and tested on four publicly available X-ray datasets of COVID-19 patients and two X-ray datasets of patients with no pulmonary pathology. Based on a combined dataset consisting of 580 COVID-19 patients and 784 patients with no disorders, our best-performing algorithm combines DeepLabV3+ for lung segmentation with MA-Net for disease segmentation. The proposed algorithm's mean absolute error (MAE) of 0.30 is significantly lower than that of the established COVID-19 algorithms BS-net and COVID-Net-S, which have MAEs of 2.52 and 1.83, respectively. Moreover, the proposed two-stage workflow was not only more accurate but also computationally efficient: it was approximately 11 times faster than the mentioned methods. In summary, we propose an accurate, time-efficient, and versatile approach for segmentation and scoring of lung diseases, illustrated for COVID-19 and with broader future applications for pneumonia, tuberculosis, pneumothorax, amongst others.
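The scoring block can be sketched as a toy aggregation over lung segments: score each segment by how much of its lung area the disease mask covers, then sum. The horizontal-band segmentation, six-segment count, and 25% threshold below are illustrative assumptions, not the paper's scheme:

```python
import numpy as np

def severity_score(disease_mask: np.ndarray, lung_mask: np.ndarray,
                   n_segments: int = 6) -> int:
    """Toy post-processing stage: split the lung field into horizontal
    bands, score each band 1 if disease covers more than a quarter of
    its lung pixels, and return the sum of band scores."""
    bands_d = np.array_split(disease_mask, n_segments, axis=0)
    bands_l = np.array_split(lung_mask, n_segments, axis=0)
    score = 0
    for d, l in zip(bands_d, bands_l):
        lung_px = l.sum()
        if lung_px and d.sum() / lung_px > 0.25:
            score += 1
    return score
```

Comparing such per-patient scores against radiologist-assigned scores is what the reported MAE of 0.30 measures.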
34
Chandra TB, Singh BK, Jain D. Disease Localization and Severity Assessment in Chest X-Ray Images using Multi-Stage Superpixels Classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 222:106947. [PMID: 35749885 PMCID: PMC9403875 DOI: 10.1016/j.cmpb.2022.106947] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 05/25/2022] [Accepted: 06/08/2022] [Indexed: 05/13/2023]
Abstract
BACKGROUND AND OBJECTIVES Chest X-ray (CXR) is a non-invasive imaging modality used in the prognosis and management of chronic lung disorders such as tuberculosis (TB), pneumonia, and coronavirus disease (COVID-19). The radiomic features associated with different disease manifestations assist in detection, localization, and grading the severity of infected lung regions. The majority of existing computer-aided diagnosis (CAD) systems use these features for the classification task, and only a few works have been dedicated to disease localization and severity scoring. Moreover, existing deep learning approaches use class activation maps and saliency maps, which generate only rough localizations. This study aims to generate a compact disease boundary and infection map and to grade infection severity using the proposed multi-stage superpixel-classification-based disease localization and severity assessment framework. METHODS The proposed method uses the simple linear iterative clustering (SLIC) technique to subdivide the lung field into small superpixels. First, different radiomic texture features and the proposed shape features are extracted and combined to train different benchmark classifiers in a multi-stage framework. Subsequently, the predicted class labels are used to generate an infection map, mark the disease boundary, and grade the infection severity. The performance is evaluated using the publicly available Montgomery dataset and validated using Friedman average ranking and the Holm and Nemenyi post-hoc procedures. RESULTS The proposed multi-stage classification approach achieved accuracy (ACC) = 95.52%, F-measure (FM) = 95.48%, and area under the curve (AUC) = 0.955 for Stage-I and ACC = 85.35%, FM = 85.20%, AUC = 0.853 for Stage-II on the calibration dataset, and ACC = 93.41%, FM = 95.32%, AUC = 0.936 for Stage-I and ACC = 84.02%, FM = 71.01%, AUC = 0.795 for Stage-II on the validation dataset.
The model also demonstrated an average Jaccard index (JI) of 0.82 and a Pearson's correlation coefficient (r) of 0.9589. CONCLUSIONS The classification results on the calibration and validation datasets confirm the promising performance of the proposed framework. The average JI shows promising potential to localize the disease, and the good agreement between the radiologist score and the predicted severity score (r) confirms the robustness of the method. Finally, the statistical tests justified the significance of the obtained results.
Affiliation(s)
- Tej Bahadur Chandra
- Department of Computer Applications, National Institute of Technology Raipur, Chhattisgarh, India.
- Bikesh Kumar Singh
- Department of Biomedical Engineering, National Institute of Technology Raipur, Chhattisgarh, India
- Deepak Jain
- Department of Radiodiagnosis, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur, Chhattisgarh, India
35
Dense Convolutional Network and Its Application in Medical Image Analysis. BIOMED RESEARCH INTERNATIONAL 2022; 2022:2384830. [PMID: 35509707 PMCID: PMC9060995 DOI: 10.1155/2022/2384830] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/28/2022] [Accepted: 03/23/2022] [Indexed: 12/28/2022]
Abstract
Dense convolutional network (DenseNet) has been a hot topic in deep learning research in recent years and has good applications in medical image analysis. In this paper, DenseNet is summarized from the following aspects. First, the basic principle of DenseNet is introduced; second, the development of DenseNet is summarized and analyzed from five aspects: broadened DenseNet structures, lightweight DenseNet structures, dense units, dense connection modes, and attention mechanisms; finally, the application of DenseNet in the field of medical image analysis is summarized from three aspects: pattern recognition, image segmentation, and object detection. The network structures of DenseNet are systematically summarized in this paper, which has positive significance for the research and development of DenseNet.
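The basic principle of DenseNet, concatenating every layer's output with all preceding feature maps, can be sketched in a few lines. This toy NumPy version substitutes a random linear map for the real conv-BN-ReLU unit, but the channel bookkeeping is the real thing: with input channels C and growth rate k, an L-layer dense block outputs C + L*k channels.

```python
import numpy as np

def dense_block(x: np.ndarray, n_layers: int, growth_rate: int,
                seed: int = 0) -> np.ndarray:
    """Minimal dense connectivity: each layer sees the concatenation of
    all previous feature maps along the channel axis and contributes
    `growth_rate` new channels (a random linear map + ReLU stands in
    for the conv-BN-ReLU unit of a real DenseNet)."""
    rng = np.random.default_rng(seed)
    feats = x  # shape (H, W, C) feature maps
    for _ in range(n_layers):
        w = rng.standard_normal((feats.shape[-1], growth_rate)) * 0.1
        new = np.maximum(feats @ w, 0.0)               # toy conv + ReLU
        feats = np.concatenate([feats, new], axis=-1)  # dense connection
    return feats
```

Because every layer receives all earlier feature maps, gradients and features are reused throughout the block, which is the property the surveyed variants build on.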
36
Hsieh PC, Chao YC, Tsai KW, Li CH, Tzeng IS, Wu YK, Shih CY. Efficacy and Safety of Complementary Therapy With Jing Si Herbal Tea in Patients With Mild-To-Moderate COVID-19: A Prospective Cohort Study. Front Nutr 2022; 9:832321. [PMID: 35369061 PMCID: PMC8967163 DOI: 10.3389/fnut.2022.832321] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 01/24/2022] [Indexed: 01/02/2023] Open
Abstract
Background: Since late 2019, there has been a global COVID-19 pandemic. To preserve medical capacity and decrease adverse health effects, preventing the progression of COVID-19 to severe status is essential. Jing-Si Herbal Tea (JSHT), a novel traditional Chinese medicine formula, was developed to treat COVID-19. This study examined the clinical efficacy and safety of JSHT in patients with mild-to-moderate COVID-19. Methods: In this prospective cohort study, we enrolled 260 patients with mild-to-moderate COVID-19, divided into the JSHT (n = 117) and control (n = 143) groups. Both groups received standard management; the JSHT group was additionally treated with JSHT as a complementary therapy. Results: Compared with standard management alone, JSHT combined with standard management more effectively improved the reverse transcription-polymerase chain reaction cycle threshold value, C-reactive protein level, and Brixia score in adult patients with mild-to-moderate COVID-19, especially in male and older patients (those aged ≥60 years). The results revealed that patients treated with JSHT combined with standard management had 51%, 70%, and 100% lower risks of intubation, Medisave Care Unit admission, and mortality, respectively, compared with those receiving standard management only. Conclusions: JSHT combined with standard management more effectively reduced the SARS-CoV-2 viral load and systemic inflammation and alleviated lung infiltrates in patients with mild-to-moderate COVID-19, especially in male and older patients (those aged ≥60 years). JSHT combined with standard management may prevent critical status and mortality in patients with mild-to-moderate COVID-19 and is a promising complementary therapy for these patients.
Affiliation(s)
- Po-Chun Hsieh
- Department of Chinese Medicine, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- You-Chen Chao
- Department of Internal Medicine, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- School of Medicine, Tzu Chi University, Hualien, Taiwan
- Kuo-Wang Tsai
- Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- Chung-Hsien Li
- Department of Internal Medicine, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- I-Shiang Tzeng
- Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- Yao-Kuang Wu
- School of Medicine, Tzu Chi University, Hualien, Taiwan
- Division of Pulmonary Medicine, Department of Internal Medicine, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- Correspondence: Yao-Kuang Wu
37
Tricarico D, Calandri M, Barba M, Piatti C, Geninatti C, Basile D, Gatti M, Melis M, Veltri A. Convolutional Neural Network-Based Automatic Analysis of Chest Radiographs for the Detection of COVID-19 Pneumonia: A Prioritizing Tool in the Emergency Department, Phase I Study and Preliminary "Real Life" Results. Diagnostics (Basel) 2022; 12:diagnostics12030570. [PMID: 35328122 PMCID: PMC8947382 DOI: 10.3390/diagnostics12030570] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2021] [Revised: 02/11/2022] [Accepted: 02/15/2022] [Indexed: 12/26/2022] Open
Abstract
The aim of our study is the development of an automatic tool for the prioritization of the COVID-19 diagnostic workflow in the emergency department by analyzing chest X-rays (CXRs). The Convolutional Neural Network (CNN)-based method we propose has been tested retrospectively on a single-center set of 542 CXRs evaluated by experienced radiologists. The SARS-CoV-2 positive dataset (n = 234) consists of CXRs collected between March and April 2020, with the COVID-19 infection being confirmed by an RT-PCR test within 24 h. The SARS-CoV-2 negative dataset (n = 308) includes CXRs from 2019, therefore prior to the pandemic. For each image, the CNN computes COVID-19 risk indicators, identifying COVID-19 cases and prioritizing the urgent ones. After installing the software into the hospital RIS, a preliminary comparison between local daily COVID-19 cases and predicted risk indicators for 2918 CXRs in the same period was performed. Significant improvements were obtained for both prioritization and identification using the proposed method. Mean Average Precision (MAP) increased (p < 1.21 × 10⁻²¹) from 43.79% with random sorting to 71.75% with our method. CNN sensitivity was 78.23%, higher than the radiologists’ 61.1%; specificity was 64.20%. In the real-life setting, the predicted risk indicators had a correlation of 0.873 with local daily COVID-19 cases. The proposed CNN-based system effectively prioritizes CXRs according to COVID-19 risk in an experimental setting; preliminary real-life results revealed high concordance with local pandemic incidence.
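The prioritization metric reported above, average precision of a ranked worklist, can be illustrated with a small sketch. The function and queues below are hypothetical illustrations, not the paper's implementation:

```python
def average_precision(ranked_labels):
    """Average precision of a ranked worklist, where 1 marks a COVID-19
    positive study and 0 a negative one; positives sorted toward the
    front of the reading queue raise the score."""
    hits, precisions = 0, []
    for rank, label in enumerate(ranked_labels, start=1):
        if label == 1:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

# A prioritized queue scores higher than a randomly sorted one:
random_sort = [0, 1, 0, 1, 1, 0]
prioritized = [1, 1, 1, 0, 0, 0]
print(average_precision(random_sort))   # ≈ 0.533
print(average_precision(prioritized))   # 1.0
```

Averaging this quantity over many daily worklists gives the Mean Average Precision the study compares against random sorting.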
Affiliation(s)
- Davide Tricarico
- AITEM Artificial Intelligence Technologies Multipurpose s.r.l., Corso Castelfidardo 36, 10129 Turin, Italy
- Department of Mathematics “G. Peano”, University of Turin, Via Carlo Alberto 10, 10123 Turin, Italy
- Marco Calandri
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Matteo Barba
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Correspondence:
- Clara Piatti
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Carlotta Geninatti
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Domenico Basile
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
- Marco Gatti
- Radiology Unit, Department of Surgical Sciences, University of Turin, Città della Salute e della Scienza di Torino, Corso Bramante, 88/90, 10126 Turin, Italy
- Massimiliano Melis
- AITEM Artificial Intelligence Technologies Multipurpose s.r.l., Corso Castelfidardo 36, 10129 Turin, Italy
- Andrea Veltri
- Diagnostic and Interventional Radiology Unit, Oncology Department, San Luigi Gonzaga University Hospital, University of Turin, Regione Gonzole 10, 10043 Orbassano, Turin, Italy
38
Imagawa K, Shiomoto K. Performance change with the number of training data: A case study on the binary classification of COVID-19 chest X-ray by using convolutional neural networks. Comput Biol Med 2022; 142:105251. [PMID: 35093727 DOI: 10.1016/j.compbiomed.2022.105251] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/15/2022] [Accepted: 01/19/2022] [Indexed: 12/24/2022]
Abstract
One of the features of artificial intelligence/machine learning-based medical devices lies in their ability to learn from real-world data. However, obtaining a large number of training data in the early phase is difficult, and device performance may change after first introduction into the market. To bring safe and effective devices to market in a timely manner, an appropriate post-market performance change plan must be established at the time of premarket approval. In this work, we evaluate how performance changes with the number of training data. Two publicly available datasets are used: one consisting of 4000 COVID-19 images and another comprising 4000 Normal images. The combined dataset was split into 7000 images for training and validation and 1000 images for testing. The training and validation data were then sampled into 16 datasets of different sizes. Two different convolutional neural networks, namely AlexNet and ResNet34, with and without a fine-tuning method, were used to classify the two image types. The area under the curve (AUC), sensitivity, and specificity were evaluated for each dataset. Our results show that all performance measures improved rapidly as the number of training data was increased and then reached an equilibrium state. AlexNet outperformed ResNet34 when the number of images was small; the difference tended to decrease as the number of training data increased, and the fine-tuning method improved all performance measures. In conclusion, the appropriate model and method should be selected considering the intended performance and the available number of data.
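The per-dataset AUC evaluated in the study can be computed without any curve integration via the rank-sum (Mann-Whitney U) formulation. The scores below are illustrative, not the paper's data:

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC as the probability that a randomly drawn COVID-19-positive
    image receives a higher classifier score than a randomly drawn Normal
    image; ties count half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores on a few test images:
print(roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2]))  # ≈ 0.889
```

Recomputing this value for each of the 16 training-set sizes traces the learning curve the authors describe, rising quickly and then flattening into an equilibrium state.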
Affiliation(s)
- Kuniki Imagawa
- Tokyo City University, Faculty of Information Technology, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
- Kohei Shiomoto
- Tokyo City University, Faculty of Information Technology, 1-28-1 Tamazutsumi, Setagaya-ku, Tokyo, 158-8557, Japan
39
Guarrasi V, D'Amico NC, Sicilia R, Cordelli E, Soda P. Pareto optimization of deep networks for COVID-19 diagnosis from chest X-rays. PATTERN RECOGNITION 2022; 121:108242. [PMID: 34393277 PMCID: PMC8351284 DOI: 10.1016/j.patcog.2021.108242] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 07/26/2021] [Accepted: 08/08/2021] [Indexed: 05/05/2023]
Abstract
The year 2020 was characterized by the COVID-19 pandemic, which had caused, by the end of March 2021, more than 2.5 million deaths worldwide. Since the beginning, besides the laboratory test used as the gold standard, many studies have applied deep learning algorithms to chest X-ray images to recognize COVID-19-infected patients. In this context, we found that convolutional neural networks perform well on a single dataset but struggle to generalize to other data sources. To overcome this limitation, we propose a late fusion approach in which we combine the outputs of several state-of-the-art CNNs, introducing a novel method that allows us to construct an optimum ensemble by determining which and how many base learners should be aggregated. This choice is driven by a two-objective function that maximizes, on a validation set, both the accuracy and the diversity of the ensemble itself. A wide set of experiments on several publicly available datasets, accounting for more than 92,000 images, shows that the proposed approach provides average recognition rates of up to 93.54% when tested on external datasets.
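The two-objective selection can be sketched in a few lines: enumerate candidate ensembles, score each for validation accuracy and pairwise diversity, and keep only the Pareto-optimal ones. The toy labels, model predictions, and scoring functions below are illustrative stand-ins, not the paper's models or its exact objective definitions:

```python
from itertools import combinations

# Toy validation labels and per-model predictions (illustrative only).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
models = {
    "cnn_a": [1, 0, 1, 0, 0, 1, 1, 0],
    "cnn_b": [1, 1, 1, 1, 0, 0, 0, 0],
    "cnn_c": [0, 0, 1, 1, 1, 1, 0, 0],
}

def accuracy(members):
    # Strict-majority vote of the ensemble members.
    votes = [int(2 * sum(models[m][i] for m in members) > len(members))
             for i in range(len(y_true))]
    return sum(v == t for v, t in zip(votes, y_true)) / len(y_true)

def diversity(members):
    # Mean pairwise disagreement rate between members.
    pairs = list(combinations(sorted(members), 2))
    return sum(sum(a != b for a, b in zip(models[m1], models[m2]))
               for m1, m2 in pairs) / (len(pairs) * len(y_true))

candidates = [c for r in range(2, len(models) + 1)
              for c in combinations(sorted(models), r)]
scores = {c: (accuracy(c), diversity(c)) for c in candidates}
# Keep ensembles not dominated in (accuracy, diversity):
pareto = [c for c in candidates
          if not any(scores[o][0] >= scores[c][0] and
                     scores[o][1] >= scores[c][1] and
                     scores[o] != scores[c] for o in candidates)]
print(pareto)  # → [('cnn_a', 'cnn_b', 'cnn_c')]
```

In this toy setup each base model is only 75% accurate but the three disagree on different samples, so the full, diverse ensemble dominates every pair and survives as the sole Pareto-optimal choice.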
Affiliation(s)
- Valerio Guarrasi
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Department of Computer, Control, and Management Engineering, Sapienza University of Rome, Italy
- Natascha Claudia D'Amico
- Department of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano S.p.A., Milan, Italy
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Rosa Sicilia
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Ermanno Cordelli
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
- Paolo Soda
- Unit of Computer Systems and Bioinformatics, Department of Engineering, University Campus Bio-Medico of Rome, Italy
40
Garcia Santa Cruz B, Bossa MN, Sölter J, Husch AD. Public Covid-19 X-ray datasets and their impact on model bias - A systematic review of a significant problem. Med Image Anal 2021. [PMID: 34597937 DOI: 10.1101/2021.02.15.21251775] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Computer-aided diagnosis and stratification of COVID-19 based on chest X-ray suffer from weak bias assessment and limited quality control. Undetected bias induced by inappropriate use of datasets and improper consideration of confounders prevents the translation of prediction models into clinical practice. By adapting established tools for model evaluation to the task of evaluating datasets, this study provides a systematic appraisal of publicly available COVID-19 chest X-ray datasets, determining their potential use and evaluating potential sources of bias. Only 9 out of more than a hundred identified datasets met at least the criteria for proper assessment of risk of bias and could be analysed in detail. Remarkably, most of the datasets utilised in the 201 papers published in peer-reviewed journals are not among these 9 datasets, leading to models with a high risk of bias. This raises concerns about the suitability of such models for clinical use. This systematic review highlights the limited description of datasets employed for modelling and aids researchers in selecting the most suitable datasets for their task.
Affiliation(s)
- Beatriz Garcia Santa Cruz
- Centre Hospitalier de Luxembourg, 4, Rue Ernest Barble, Luxembourg L-1210, Luxembourg; Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Matías Nicolás Bossa
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg; Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, Brussels B-1050, Belgium
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
- Andreas Dominik Husch
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, 7, Avenue des Hauts Fourneaux, Esch-sur-Alzette L-4362, Luxembourg
41
Gidde PS, Prasad SS, Singh AP, Bhatheja N, Prakash S, Singh P, Saboo A, Takhar R, Gupta S, Saurav S, M V R, Singh A, Sardana V, Mahajan H, Kalyanpur A, Mandal AS, Mahajan V, Agrawal A, Agrawal A, Venugopal VK, Singh S, Dash D. Validation of expert system enhanced deep learning algorithm for automated screening for COVID-Pneumonia on chest X-rays. Sci Rep 2021; 11:23210. [PMID: 34853342 PMCID: PMC8636645 DOI: 10.1038/s41598-021-02003-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 11/01/2021] [Indexed: 12/15/2022] Open
Abstract
The SARS-CoV-2 pandemic exposed the limitations of artificial intelligence-based medical imaging systems. Earlier in the pandemic, the absence of sufficient training data prevented effective deep learning (DL) solutions for the diagnosis of COVID-19 based on X-ray data. Here, addressing the lacunae in the existing literature and algorithms given the paucity of initial training data, we describe CovBaseAI, an explainable tool using an ensemble of three DL models and an expert decision system (EDS) for COVID-pneumonia diagnosis, trained entirely on pre-COVID-19 datasets. The performance and explainability of CovBaseAI were validated on two independent datasets: first, 1401 randomly selected chest X-rays (CxRs) from an Indian quarantine center, to assess effectiveness in excluding radiological COVID-pneumonia requiring higher care; second, a curated dataset of 434 RT-PCR-positive cases and 471 non-COVID/normal historical scans, to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% on the quarantine-center data. However, sensitivity was 0.66-0.90 depending on whether RT-PCR or radiologist opinion was taken as ground truth. This work provides new insights into the usage of EDS with DL methods and the ability of algorithms to confidently predict COVID-pneumonia, while reinforcing the established learning that benchmarking based on RT-PCR may not serve as reliable ground truth in radiological diagnosis. Such tools can pave the path for multi-modal high-throughput detection of COVID-pneumonia in screening and referral.
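Pairing a DL ensemble with an expert decision system amounts to a gating rule layered over the ensemble score. The function, threshold, and rule below are hypothetical illustrations of that pattern, not CovBaseAI's actual logic:

```python
def covid_pneumonia_call(model_probs, opacity_on_cxr, threshold=0.5):
    """Average an ensemble's COVID-pneumonia probabilities, then let a
    rule-based expert decision system (EDS) demote a positive call that
    lacks supporting radiological findings. Entirely illustrative."""
    ensemble_prob = sum(model_probs) / len(model_probs)
    if ensemble_prob >= threshold and not opacity_on_cxr:
        # EDS rule: a high score without visible opacity is flagged for
        # radiologist review instead of being auto-reported.
        return "indeterminate"
    return "positive" if ensemble_prob >= threshold else "negative"

print(covid_pneumonia_call([0.8, 0.7, 0.9], opacity_on_cxr=True))   # positive
print(covid_pneumonia_call([0.8, 0.7, 0.9], opacity_on_cxr=False))  # indeterminate
print(covid_pneumonia_call([0.2, 0.1, 0.3], opacity_on_cxr=True))   # negative
```

The design point is that the rule system constrains the learned models, which matters when, as the abstract notes, the RT-PCR labels themselves are an unreliable ground truth.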
Affiliation(s)
- Shyam Sunder Prasad
- CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Ajay Pratap Singh
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Nitin Bhatheja
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Satyartha Prakash
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Prateek Singh
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Aakash Saboo
- Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Rohit Takhar
- Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Salil Gupta
- Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Sumeet Saurav
- CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Raghunandanan M V
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Viren Sardana
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Harsh Mahajan
- Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Arjun Kalyanpur
- Teleradiology Solutions, 7G, Opposite Graphite India, Whitefield, Bangalore, Karnataka, 560048, India
- Atanendu Shekhar Mandal
- CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Vidur Mahajan
- Centre for Advanced Research in Imaging, Neurosciences Genomics (CARING), New Delhi, India
- Anurag Agrawal
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Anjali Agrawal
- Teleradiology Solutions, 12B Sriram Road, Civil Lines, Delhi, 110054, India
- Sanjay Singh
- CSIR-Central Electronics Engineering Research Institute, Pilani, Rajasthan, 333031, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
- Debasis Dash
- CSIR-Institute of Genomics and Integrative Biology, Mathura Road, New Delhi, 110025, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, 201002, India
42
Gudigar A, Raghavendra U, Nayak S, Ooi CP, Chan WY, Gangavarapu MR, Dharmik C, Samanth J, Kadri NA, Hasikin K, Barua PD, Chakraborty S, Ciaccio EJ, Acharya UR. Role of Artificial Intelligence in COVID-19 Detection. SENSORS (BASEL, SWITZERLAND) 2021; 21:8045. [PMID: 34884045 PMCID: PMC8659534 DOI: 10.3390/s21238045] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 11/26/2021] [Accepted: 11/26/2021] [Indexed: 12/15/2022]
Abstract
The global pandemic of coronavirus disease (COVID-19) has caused millions of deaths and affected the livelihoods of many more people. Early and rapid detection of COVID-19 is a challenging task for the medical community, but it is also crucial in stopping the spread of the SARS-CoV-2 virus. Prior successes of artificial intelligence (AI) in various fields of science have encouraged researchers to further address this problem. AI techniques applied to various medical imaging modalities, including X-ray, computed tomography (CT), and ultrasound (US), have greatly helped to curb the COVID-19 outbreak by assisting with early diagnosis. We carried out a systematic review of state-of-the-art AI techniques applied to X-ray, CT, and US images to detect COVID-19. In this paper, we discuss the approaches used by various authors and the significance of these research efforts, the potential challenges, and future trends related to the implementation of an AI system for disease detection during the COVID-19 pandemic.
Affiliation(s)
- Anjan Gudigar
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Sneha Nayak
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore 599494, Singapore
- Wai Yee Chan
- Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur 50603, Malaysia
- Mokshagna Rohit Gangavarapu
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Chinmay Dharmik
- Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth
- Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal 576104, India
- Nahrizul Adib Kadri
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Khairunnisa Hasikin
- Department of Biomedical Engineering, Faculty of Engineering, University of Malaya, Kuala Lumpur 50603, Malaysia
- Prabal Datta Barua
- Cogninet Brain Team, Cogninet Australia, Sydney, NSW 2010, Australia
- School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Toowoomba, QLD 4350, Australia
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Subrata Chakraborty
- Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Edward J. Ciaccio
- Department of Medicine, Columbia University Medical Center, New York, NY 10032, USA
- U. Rajendra Acharya
- School of Engineering, Ngee Ann Polytechnic, Singapore 599489, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung 41354, Taiwan
- International Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto 860-8555, Japan
43
Candemir S, Nguyen XV, Folio LR, Prevedello LM. Training Strategies for Radiology Deep Learning Models in Data-limited Scenarios. Radiol Artif Intell 2021; 3:e210014. [PMID: 34870217 PMCID: PMC8637222 DOI: 10.1148/ryai.2021210014] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 09/08/2021] [Accepted: 09/16/2021] [Indexed: 12/22/2022]
Abstract
Data-driven approaches have great potential to shape future practices in radiology. The most straightforward strategy for obtaining clinically accurate models is to use large, well-curated, and annotated datasets. However, patient privacy constraints, tedious annotation processes, and the limited availability of radiologists pose challenges to building such datasets. This review details model training strategies in scenarios with limited data, insufficiently labeled data, and/or limited expert resources. It discusses strategies to enlarge the data sample, decrease the time burden of manual supervised labeling, adjust the neural network architecture to improve model performance, apply semisupervised approaches, and leverage efficiencies from pretrained models. Keywords: Computer-aided Detection/Diagnosis, Transfer Learning, Limited Annotated Data, Augmentation, Synthetic Data, Semisupervised Learning, Federated Learning, Few-Shot Learning, Class Imbalance.
Affiliation(s)
- Sema Candemir
- From the Department of Radiology, The Ohio State University College of Medicine, 395 W 12th Ave, Columbus, OH 43212 (S.C., X.V.N., L.M.P.); and Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Md (L.R.F.)
- Xuan V. Nguyen
- From the Department of Radiology, The Ohio State University College of Medicine, 395 W 12th Ave, Columbus, OH 43212 (S.C., X.V.N., L.M.P.); and Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Md (L.R.F.)
- Les R. Folio
- From the Department of Radiology, The Ohio State University College of Medicine, 395 W 12th Ave, Columbus, OH 43212 (S.C., X.V.N., L.M.P.); and Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Md (L.R.F.)
- Luciano M. Prevedello
- From the Department of Radiology, The Ohio State University College of Medicine, 395 W 12th Ave, Columbus, OH 43212 (S.C., X.V.N., L.M.P.); and Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Md (L.R.F.)
44
Radiology Implementation Considerations for Artificial Intelligence (AI) Applied to COVID-19, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2021; 219:15-23. [PMID: 34612681 DOI: 10.2214/ajr.21.26717] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Hundreds of imaging-based artificial intelligence (AI) models have been developed in response to the COVID-19 pandemic. AI systems that incorporate imaging have shown promise in primary detection, severity grading, and prognostication of outcomes in COVID-19, and have enabled integration of imaging with a broad range of additional clinical and epidemiologic data. However, systematic reviews of AI models applied to COVID-19 medical imaging have highlighted problems in the field, including methodologic issues and problems in real-world deployment. Clinical use of such models should be informed by both the promise and potential pitfalls of implementation. How does a practicing radiologist make sense of this complex topic, and what factors should be considered in the implementation of AI tools for imaging of COVID-19? This critical review aims to help the radiologist understand the nuances that impact the clinical deployment of AI for imaging of COVID-19. We review imaging use cases for AI models in COVID-19 (e.g., diagnosis, severity assessment, and prognostication) and explore considerations for AI model development and testing, deployment infrastructure, clinical user interfaces, quality control, and institutional review board and regulatory approvals, with a practical focus on what a radiologist should consider when implementing an AI tool for COVID-19.
45
Abdelwhab Ouahab. Multimodal Convolutional Neural Networks for Detection of Covid-19 Using Chest X-Ray and CT Images. OPTICAL MEMORY AND NEURAL NETWORKS 2021. [PMCID: PMC8715284 DOI: 10.3103/s1060992x21040044] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
COVID-19 first appeared in 2019 in Wuhan, China, and expanded widely and rapidly all over the world. Since then, it has had a strong effect on people’s daily lives, the world economy, and public health. Fast prediction of COVID-19 can assist physicians in choosing the right treatment. In this paper, we propose a classification of COVID-19 using models based on a Convolutional Neural Network (CNN). We propose two models to detect COVID-19. The first uses a CNN with CT or X-ray images separately. The second uses a CNN with both CT and X-ray images at the same time. The datasets used contain X-ray and CT images divided into three classes: COVID-19, Normal, and Pneumonia. Each image class has 1045 images for training and 300 for testing. All these datasets are available in the Kaggle repository. To evaluate the proposed models, we calculate the confusion matrix, accuracy, precision, recall, and F1 score. The model that uses a CNN with both X-ray and CT images achieves the best accuracy of 0.99. We deduced that using CT images is more efficient than using X-ray images to predict COVID-19, and that combining CT and X-ray images to detect COVID-19 is more efficient than using only CT or X-ray images. The proposed models could effectively assist radiologists in predicting COVID-19.
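The second model's joint use of CT and X-ray inputs is a fusion problem: features from each modality are combined before a single scoring head. This sketch shows the simplest variant, concatenating per-modality feature vectors into one linear-plus-sigmoid scorer; the feature values and weights are hypothetical, and a real multimodal CNN would learn its fused feature maps rather than take hand-set weights:

```python
import math

def fuse_and_score(ct_features, xray_features, weights, bias=0.0):
    """Concatenate per-modality feature vectors and apply a single linear
    scoring layer with a sigmoid, an illustrative stand-in for the fully
    connected head of a multimodal CNN."""
    fused = list(ct_features) + list(xray_features)
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return 1.0 / (1.0 + math.exp(-score))  # probability-like output

# Hypothetical 2-D features per modality and hand-set weights:
p = fuse_and_score([0.9, 0.2], [0.8, 0.1], weights=[1.0, -1.0, 1.0, -1.0])
print(round(p, 3))  # → 0.802
```

Because the fused vector sees evidence from both modalities at once, a weak signal in one scan can be reinforced or contradicted by the other, which is the intuition behind the combined model's higher accuracy.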
46
Hryniewska W, Bombiński P, Szatkowski P, Tomaszewska P, Przelaskowski A, Biecek P. Checklist for responsible deep learning modeling of medical images based on COVID-19 detection studies. PATTERN RECOGNITION 2021; 118:108035. [PMID: 34054148 PMCID: PMC8139442 DOI: 10.1016/j.patcog.2021.108035] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 04/23/2021] [Accepted: 05/11/2021] [Indexed: 05/21/2023]
Abstract
The sudden outbreak and uncontrolled spread of COVID-19 are among the most important global problems today. In a short period of time, they have led to the development of many deep neural network models for COVID-19 detection with modules for explainability. In this work, we carry out a systematic analysis of various aspects of the proposed models. Our analysis revealed numerous mistakes made at different stages of data acquisition, model development, and explanation construction. We overview the approaches proposed in the surveyed machine learning articles and indicate typical errors emerging from a lack of deep understanding of the radiography domain. We present the perspectives of both radiologists, as experts in the field, and deep learning engineers dealing with model explanations. The final result is a proposed checklist with the minimum conditions to be met by a reliable COVID-19 diagnostic model.
Affiliation(s)
- Weronika Hryniewska
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
- Patryk Szatkowski
- Department of Pediatric Radiology, Medical University of Warsaw, Poland
- Paulina Tomaszewska
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
- Artur Przelaskowski
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
- Przemysław Biecek
- Faculty of Mathematics and Information Science, Warsaw University of Technology, Poland
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Poland
47
Pal A, Ali A, Young TR, Oostenbrink J, Prabhakar A, Prabhakar A, Deacon N, Arnold A, Eltayeb A, Yap C, Young DM, Tang A, Lakshmanan S, Lim YY, Pokarowski M, Kakodkar P. Comprehensive literature review on the radiographic findings, imaging modalities, and the role of radiology in the COVID-19 pandemic. World J Radiol 2021; 13:258-282. [PMID: 34630913 PMCID: PMC8473437 DOI: 10.4329/wjr.v13.i9.258] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/06/2021] [Revised: 03/28/2021] [Accepted: 08/04/2021] [Indexed: 02/06/2023] Open
Abstract
Since the outbreak of the coronavirus disease 2019 (COVID-19) pandemic, over 103,214,008 cases have been reported, with more than 2,231,158 deaths as of January 31, 2021. Although the gold standard for diagnosis of this disease remains the reverse-transcription polymerase chain reaction of nasopharyngeal and oropharyngeal swabs, its false-negative rates have ignited the use of medical imaging as an important adjunct or alternative. Medical imaging assists in identifying the pathogenesis, the degree of pulmonary damage, and the characteristic features in each imaging modality. This literature review collates the characteristic radiographic findings of COVID-19 in various imaging modalities, with a primary focus on chest radiography, computed tomography (CT), and ultrasound scans. Given their higher sensitivity and greater proficiency in detecting characteristic findings during the early stages, CT scans are more reliable in diagnosis and serve as a practical method for following up the disease time course. As research rapidly expands, we have emphasized the CO-RADS classification system as a tool to aid in communicating the likelihood of COVID-19 suspicion among healthcare workers. Additionally, the utilization of other scoring systems such as MuLBSTA, Radiological Assessment of Lung Edema, and Brixia in this pandemic is reviewed, as they integrate the radiographic findings into an objective scoring system to risk-stratify the patients and predict the severity of disease. Furthermore, current progress in the utilization of artificial intelligence via radiomics is evaluated. Lastly, the lessons from the first wave and preparations for the second wave, from the point of view of radiology, are summarized.
Affiliation(s)
- Aman Pal, Abulhassan Ali, Timothy R Young, Juan Oostenbrink, Akul Prabhakar, Amogh Prabhakar, Nina Deacon, Amar Arnold, Ahmed Eltayeb, Charles Yap, Subramanian Lakshmanan, Ying Yi Lim, Pramath Kakodkar: School of Medicine, National University of Ireland Galway, Galway H91 TK33, Ireland
- David M Young: Department of Computer Science, Yale University, New Haven, CT 06520, United States
- Alan Tang: Department of Health Science, Duke University, Durham, NC 27708, United States
- Martha Pokarowski: The Hospital for Sick Children, University of Toronto, Toronto M5S, Ontario, Canada
|
48
|
Asada K, Komatsu M, Shimoyama R, Takasawa K, Shinkai N, Sakai A, Bolatkan A, Yamada M, Takahashi S, Machino H, Kobayashi K, Kaneko S, Hamamoto R. Application of Artificial Intelligence in COVID-19 Diagnosis and Therapeutics. J Pers Med 2021; 11:886. [PMID: 34575663 PMCID: PMC8471764 DOI: 10.3390/jpm11090886] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 09/01/2021] [Accepted: 09/02/2021] [Indexed: 12/12/2022] Open
Abstract
The coronavirus disease 2019 (COVID-19) pandemic began at the end of December 2019, giving rise to a high rate of infections and COVID-19-associated deaths worldwide. It was first reported in Wuhan, China, and since then, not only global leaders, organizations, and pharmaceutical/biotech companies, but also researchers, have directed their efforts toward overcoming this threat. The use of artificial intelligence (AI) has recently surged internationally and has been applied to a wide range of problems. The benefits of using AI are now widely accepted, and many studies have shown great success in medical research on tasks such as the classification, detection, and prediction of disease, and even of patient outcomes. In fact, AI technology has been actively employed in various ways in COVID-19 research, and several clinical applications of AI-equipped medical devices for the diagnosis of COVID-19 have already been reported. Hence, in this review, we summarize the latest studies that focus on medical imaging analysis, drug discovery, and therapeutics such as vaccine development and public health decision-making using AI. This survey clarifies the advantages of using AI in the fight against COVID-19 and provides future directions for tackling the COVID-19 pandemic using AI techniques.
Affiliation(s)
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan (Asada, Komatsu, Shimoyama, Takasawa, Shinkai, Bolatkan, Takahashi, Machino, Kobayashi, Kaneko, Hamamoto)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan (all authors)
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan (Shinkai, Sakai, Hamamoto)
- Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan (Yamada)
|
49
|
Abstract
COVID-19, an infectious coronavirus disease, caused a pandemic with countless deaths. From the outset, clinical institutes have explored computed tomography as an effective and complementary screening tool alongside reverse transcriptase-polymerase chain reaction testing. Deep learning techniques have shown promising results in similar medical tasks and may therefore provide solutions for COVID-19 based on medical images of patients. We aim to contribute to the research in this field by: (i) comparing different architectures on a public, extended reference dataset to find the most suitable; (ii) proposing a patient-oriented investigation of the best-performing networks; and (iii) evaluating their robustness in a real-world scenario, represented by cross-dataset experiments. We evaluated ten well-known convolutional neural networks on two public datasets. The results show that, on the reference dataset, the most suitable architecture is VGG19, which (i) achieved 98.87% accuracy in the network comparison; (ii) obtained 95.91% accuracy on patient status classification, even though it misclassified some patients that other networks classified correctly; and (iii) dropped to 70.15% accuracy in the cross-dataset experiments, exposing the limitations of deep learning approaches in a real-world scenario and the need for further work on robustness. Thus, the VGG19 architecture showed promising performance in the classification of COVID-19 cases. Nonetheless, this architecture could be improved further through modification or an additional preprocessing step. Finally, the cross-dataset experiments exposed the critical weakness of classifying images from heterogeneous data sources, consistent with a real-world scenario.
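The patient-oriented evaluation mentioned above requires turning per-image predictions into a per-patient status. A minimal sketch of one plausible aggregation rule, majority vote, follows (the function name and the vote rule are illustrative assumptions; the paper's exact aggregation may differ):

```python
from collections import Counter, defaultdict

def patient_status(image_preds):
    """Aggregate per-image predictions into a per-patient call.

    image_preds: iterable of (patient_id, label) pairs, where label 1 means
    COVID-19 positive and 0 negative. Returns {patient_id: majority label}.
    """
    by_patient = defaultdict(list)
    for pid, label in image_preds:
        by_patient[pid].append(label)
    # Majority vote per patient; Counter breaks ties by first-seen label.
    return {pid: Counter(labels).most_common(1)[0][0]
            for pid, labels in by_patient.items()}

preds = [("p1", 1), ("p1", 1), ("p1", 0), ("p2", 0), ("p2", 0)]
print(patient_status(preds))  # -> {'p1': 1, 'p2': 0}
```

Aggregating over a patient's images smooths out single-image errors, which is one way a per-patient accuracy (95.91%) can differ from per-image figures.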
|
50
|
Application of deep learning to identify COVID-19 infection in posteroanterior chest X-rays. Clin Imaging 2021; 80:268-273. [PMID: 34425544 PMCID: PMC8302887 DOI: 10.1016/j.clinimag.2021.07.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 07/06/2021] [Accepted: 07/19/2021] [Indexed: 12/11/2022]
Abstract
Introduction: The objective of this study was to assess seven configurations of six convolutional deep neural network architectures for classification of chest X-rays (CXRs) as COVID-19 positive or negative.
Methods: The primary dataset consisted of 294 COVID-19 positive and 294 COVID-19 negative CXRs, the latter comprising roughly equal numbers of pneumonia, emphysema, fibrosis, and healthy images. We used six common convolutional neural network architectures: VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3. We studied six models (one per architecture) that were pre-trained on a vast repository of generic (non-CXR) images, as well as a seventh, a DenseNet121 model pre-trained on a repository of CXR images. For each model, we replaced the output layers with custom fully connected layers for the task of binary classification of images as COVID-19 positive or negative. Performance metrics were calculated on a hold-out test set with CXRs from patients who were not included in the training/validation set.
Results: When pre-trained on generic images, the VGG16, DenseNet121, DenseNet201, MobileNet, NasNetMobile, and InceptionV3 architectures respectively produced hold-out test set areas under the receiver operating characteristic curve (AUROCs) of 0.98, 0.95, 0.97, 0.95, 0.99, and 0.96 for COVID-19 classification of CXRs. The X-ray pre-trained DenseNet121 model, in comparison, had a test set AUROC of 0.87.
Discussion: Common convolutional neural network architectures with parameters pre-trained on generic images yield high-performance, well-calibrated COVID-19 CXR classification.
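The AUROC values reported above can be computed directly from classifier scores via the rank-based Mann-Whitney formulation. The following is a minimal, self-contained sketch, not the study's code (`auroc` is an illustrative name):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 0 (COVID-19 negative) or 1 (COVID-19 positive) per image.
    scores: classifier outputs, higher meaning more likely positive.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

Because AUROC depends only on the ranking of scores, it is insensitive to the choice of decision threshold, which is why it is the standard metric for comparing these pre-trained architectures.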
|