1. Meng X, Sun K, Xu J, He X, Shen D. Multi-Modal Modality-Masked Diffusion Network for Brain MRI Synthesis With Random Modality Missing. IEEE Transactions on Medical Imaging 2024; 43:2587-2598. PMID: 38393846. DOI: 10.1109/tmi.2024.3368664.
Abstract
Synthesizing unavailable imaging modalities from available ones can generate modality-specific complementary information and enable multi-modality-based medical image diagnosis or treatment. Existing generative methods for medical image synthesis are usually based on cross-modal translation between acquired and missing modalities. These methods are typically dedicated to a specific missing modality and perform synthesis in one shot, so they can neither flexibly handle a varying number of missing modalities nor effectively construct the mapping across modalities. To address these issues, we propose a unified Multi-modal Modality-masked Diffusion Network (M2DN) that tackles multi-modal synthesis from the perspective of "progressive whole-modality inpainting" rather than "cross-modal translation". Specifically, M2DN treats the missing modalities as random noise and handles all modalities jointly in each reverse diffusion step. The proposed joint synthesis scheme performs synthesis for the missing modalities and self-reconstruction for the available ones, which not only enables synthesis under arbitrary missing scenarios, but also facilitates the construction of a common latent space and enhances the model's representation ability. In addition, we introduce a modality-mask scheme that explicitly encodes the availability status of each incoming modality in a binary mask, which is used as a condition for the diffusion model to further enhance synthesis performance under arbitrary missing scenarios. We carry out experiments on two public brain MRI datasets for synthesis and downstream segmentation tasks. Experimental results demonstrate that M2DN significantly outperforms state-of-the-art models and generalizes well to arbitrary missing modalities.
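The "progressive whole-modality inpainting" idea can be illustrated with a toy sketch: stack all modalities into one array and, at each reverse diffusion step, denoise the whole stack while clamping the acquired modalities (per a binary availability mask) back to their measured data, so only the missing ones are inpainted. The denoiser, noise schedule, and shapes below are simplified assumptions for illustration, not the paper's M2DN:

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reverse_step(x_t, mask, denoise_fn, t, noise_scale=0.1):
    """One toy reverse-diffusion step over the stacked modalities.

    x_t  : (M, H, W) array holding all modalities jointly
    mask : (M,) binary availability mask (1 = acquired, 0 = missing)
    """
    x_pred = denoise_fn(x_t, mask, t)          # joint prediction for all modalities
    x_next = x_pred + noise_scale * (t / 10.0) * rng.standard_normal(x_t.shape)
    m = mask[:, None, None]
    # Self-reconstruction: clamp acquired modalities back to their data, so
    # only the missing ones are progressively inpainted across steps.
    return m * x_t + (1 - m) * x_next

def toy_denoiser(x, mask, t):
    # Stand-in "network": pull missing modalities toward the mean of acquired ones.
    target = x[mask == 1].mean(axis=0)
    return np.where(mask[:, None, None] == 1, x, 0.5 * x + 0.5 * target)

x = rng.standard_normal((3, 8, 8))    # 3 modalities; the third is "missing"
mask = np.array([1, 1, 0])
x0 = x.copy()
for t in range(10, 0, -1):            # crude 10-step reverse schedule
    x = masked_reverse_step(x, mask, toy_denoiser, t)
```

Note how the binary mask plays the same conditioning role as the paper's modality-mask scheme: available channels pass through unchanged at every step, while the missing channel is filled in progressively.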
2. Meng H, Wang TD, Zhuo LY, Hao JW, Sui LY, Yang W, Zang LL, Cui JJ, Wang JN, Yin XP. Quantitative radiomics analysis of imaging features in adults and children Mycoplasma pneumonia. Front Med (Lausanne) 2024; 11:1409477. PMID: 38831994; PMCID: PMC11146305. DOI: 10.3389/fmed.2024.1409477.
Abstract
Purpose This study aims to explore the value of clinical features, CT imaging signs, and radiomics features in differentiating between adults and children with Mycoplasma pneumonia, and to seek quantitative radiomic representations of CT imaging signs. Materials and methods In a retrospective analysis of 981 patients with Mycoplasma pneumonia from November 2021 to December 2023, 590 internal cases (450 adults, 140 children), randomly divided into a training set and a validation set at an 8:2 ratio, and 391 external test cases (121 adults, 270 children) were included. Using univariate analysis, CT imaging signs and clinical features with significant differences (p < 0.05) were selected. After the lesion area was segmented on the CT images as the region of interest, 1,904 radiomic features were extracted. Pearson correlation coefficient (PCC) analysis and the least absolute shrinkage and selection operator (LASSO) were then used to select the radiomic features. Based on the selected features, multivariable logistic regression analysis was used to establish the clinical model, CT image model, radiomics model, and combined model. The predictive performance of each model was evaluated using ROC curves, AUC, sensitivity, specificity, accuracy, and precision; AUCs were compared between models using the DeLong test. In addition, the radiomics features and the quantitative and qualitative CT image features were analyzed using Pearson correlation analysis and analysis of variance, respectively. Results Among the individual models, the radiomics model, built from 45 selected features, achieved the highest AUCs in the training, validation, and external test sets: 0.995 (0.992, 0.998), 0.952 (0.921, 0.978), and 0.969 (0.953, 0.982), respectively. Across all models, the combined model achieved the highest AUCs: 0.996 (0.993, 0.998), 0.972 (0.942, 0.995), and 0.986 (0.976, 0.993) in the training, validation, and test sets, respectively. In addition, we selected 11 radiomics features and CT image features with a correlation coefficient r greater than 0.35. Conclusion The combined model has good diagnostic performance for differentiating between adults and children with Mycoplasma pneumonia, and different CT imaging signs can be quantitatively represented by radiomics.
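The feature-selection pipeline described above (Pearson filtering, then LASSO, then a multivariable logistic model) can be sketched with scikit-learn on synthetic stand-in data. The sample size, label rule, and the |r| > 0.9 filter cutoff below are illustrative assumptions; only the 1,904-feature count comes from the abstract:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 200, 1904                                  # 1,904 features as in the abstract
X = rng.standard_normal((n, p))
y = (X[:, 0] + 0.8 * X[:, 1] > 0).astype(int)     # toy label driven by two features

Xs = StandardScaler().fit_transform(X)

# Step 1: Pearson filter - drop one feature of each highly correlated pair (|r| > 0.9).
corr = np.corrcoef(Xs, rowvar=False)
upper = np.triu(np.abs(corr), k=1)
drop = np.unique(np.where(upper > 0.9)[1])
keep = np.setdiff1d(np.arange(p), drop)
Xk = Xs[:, keep]

# Step 2: LASSO keeps only features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(Xk, y)
selected = np.flatnonzero(lasso.coef_)

# Step 3: multivariable logistic model on the surviving features.
clf = LogisticRegression(max_iter=1000).fit(Xk[:, selected], y)
```

On real radiomics tables the Pearson step prunes many redundant texture features before LASSO, which keeps the LASSO path stable at these feature-to-sample ratios.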
Affiliation(s)
- Huan Meng
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
- Tian-Da Wang
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
- Li-Yong Zhuo
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
- Jia-Wei Hao
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
- Lian-yu Sui
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
- Wei Yang
- Department of Radiology, Baoding First Central Hospital, Baoding, China
- Li-Li Zang
- Department of Radiology, Baoding Children's Hospital, Baoding, China
- Jing-Jing Cui
- Department of Research and Development, United Imaging Intelligence (Beijing) Co., Beijing, China
- Jia-Ning Wang
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
- Xiao-Ping Yin
- Clinical Medicine School of Hebei University, Baoding, China
- Department of Radiology, Affiliated Hospital of Hebei University, Baoding, China
- Hebei Key Laboratory of Precise Imaging of Inflammation Related Tumors, Baoding, China
3. Mathkor DM, Mathkor N, Bassfar Z, Bantun F, Slama P, Ahmad F, Haque S. Multirole of the internet of medical things (IoMT) in biomedical systems for managing smart healthcare systems: An overview of current and future innovative trends. J Infect Public Health 2024; 17:559-572. PMID: 38367570. DOI: 10.1016/j.jiph.2024.01.013.
Abstract
The Internet of Medical Things (IoMT), an emerging subset of the Internet of Things (IoT) often called IoT in healthcare, refers to medical devices and applications with internet connectivity. It is rapidly gaining researchers' attention due to its wide-ranging applicability in biomedical systems for smart healthcare. IoMT facilitates remote healthcare delivery and plays a crucial role within the healthcare industry in enhancing the precision, reliability, consistency, and productivity of electronic devices used for various healthcare purposes. It comprises a conceptualized architecture for providing information-retrieval strategies that extract data from patient records using sensors, supporting biomedical analysis and diagnostics against manifold diseases and offering cost-effective medical solutions, quick hospital treatments, and personalized healthcare. This article provides a comprehensive overview of IoMT, with special emphasis on current and future trends in biomedical systems, such as deep learning, machine learning, blockchain, artificial intelligence, radio-frequency identification, and Industry 5.0.
Affiliation(s)
- Darin Mansor Mathkor
- Research and Scientific Studies Unit, Department of Nursing, College of Nursing and Health Sciences, Jazan University, Jazan 45142, Saudi Arabia
- Noof Mathkor
- Department of Pathology, Ministry of National Guard Health Affairs (MNGHA), Riyadh, Saudi Arabia
- Zaid Bassfar
- Department of Information Technology, Faculty of Computers and Information Technology, University of Tabuk, Tabuk, Saudi Arabia
- Farkad Bantun
- Department of Microbiology, Faculty of Medicine, Umm Al-Qura University, Makkah, Saudi Arabia
- Petr Slama
- Laboratory of Animal Immunology and Biotechnology, Department of Animal Morphology, Physiology and Genetics, Mendel University in Brno, 61300 Brno, Czech Republic
- Faraz Ahmad
- Department of Biotechnology, School of Bio Sciences and Technology, Vellore Institute of Technology, Vellore 632014, India
- Shafiul Haque
- Research and Scientific Studies Unit, Department of Nursing, College of Nursing and Health Sciences, Jazan University, Jazan 45142, Saudi Arabia; Gilbert and Rose-Marie Chagoury School of Medicine, Lebanese American University, Beirut, Lebanon; Centre of Medical and Bio-Allied Health Sciences Research, Ajman University, Ajman, United Arab Emirates
4. Gu Y, Pan Y, Fang Z, Ma L, Zhu Y, Androjna C, Zhong K, Yu X, Shen D. Deep learning-assisted preclinical MR fingerprinting for sub-millimeter T1 and T2 mapping of entire macaque brain. Magn Reson Med 2024; 91:1149-1164. PMID: 37929695. DOI: 10.1002/mrm.29905.
Abstract
PURPOSE Preclinical MR fingerprinting (MRF) suffers from long acquisition times for organ-level coverage due to demanding image resolution and limited undersampling capacity. This study aims to develop a deep learning-assisted fast MRF framework for sub-millimeter T1 and T2 mapping of the entire macaque brain on a preclinical 9.4 T MR system. METHODS Three-dimensional MRF images were reconstructed by singular value decomposition (SVD)-compressed reconstruction. T1 and T2 mapping for each axial slice exploited a self-attention-assisted residual U-Net to suppress aliasing-induced quantification errors, together with transmit-field (B1+) measurements for robustness against B1+ inhomogeneity. Supervised network training used MRF images simulated via virtual parametric maps and the desired undersampling scheme; this strategy bypassed the difficulty of acquiring fully sampled preclinical MRF data to guide network training. The proposed fast MRF framework was tested on experimental data acquired from ex vivo and in vivo macaque brains. RESULTS The trained network showed reasonable adaptability to experimental MRF images, enabling robust delineation of various T1 and T2 distributions in brain tissues. Furthermore, the proposed MRF framework outperformed several existing fast MRF methods in handling aliasing artifacts and capturing detailed cerebral structures in the mapping results. Parametric mapping of the entire macaque brain at a nominal resolution of 0.35 × 0.35 × 1 mm³ can be realized via a 20-min 3D MRF scan, sixfold faster than the baseline protocol. CONCLUSION Introducing deep learning into the MRF framework paves the way for efficient organ-level high-resolution quantitative MRI in preclinical applications.
Affiliation(s)
- Yuning Gu
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yongsheng Pan
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Zhenghan Fang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Lei Ma
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yuran Zhu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Charlie Androjna
- Cleveland Clinic Pre-Clinical Magnetic Resonance Imaging Center, Cleveland Clinic Foundation, Cleveland, Ohio, USA
- Kai Zhong
- High Magnetic Field Laboratory, Chinese Academy of Sciences, Hefei, China
- Anhui Province Key Laboratory of High Field Magnetic Resonance Imaging, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Biomedical Engineering Department, Peking University, Beijing, China
- Xin Yu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Dinggang Shen
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence, Shanghai, China
- Shanghai Clinical Research and Trial Center, ShanghaiTech University, Shanghai, China
5. Li Y, Deng W, Zhou Y, Luo Y, Wu Y, Wen J, Cheng L, Liang X, Wu T, Wang F, Huang Z, Tan C, Liu Y. A nomogram based on clinical factors and CT radiomics for predicting anti-MDA5+ DM complicated by RP-ILD. Rheumatology (Oxford) 2024; 63:809-816. PMID: 37267146. DOI: 10.1093/rheumatology/kead263.
Abstract
OBJECTIVES Anti-melanoma differentiation-associated gene 5 antibody-positive (anti-MDA5+) DM complicated by rapidly progressive interstitial lung disease (RP-ILD) has a high incidence and poor prognosis. The objective of this study was to establish a model for the prediction and early diagnosis of anti-MDA5+ DM-associated RP-ILD based on clinical manifestations and imaging features. METHODS A total of 103 patients with anti-MDA5+ DM were included and randomly split into training and testing sets of 72 and 31 patients, respectively. After image analysis, we collected clinical, imaging, and radiomics features from each patient. Feature selection was performed first with the minimum redundancy maximum relevance algorithm and then with the best subset selection method; the final remaining features comprised the radscore. A clinical model and an imaging model were then constructed with the selected independent risk factors for the prediction of non-RP-ILD versus RP-ILD. We also combined these models in different ways and compared their predictive abilities, and a nomogram was established. The predictive performance of the models was assessed based on receiver operating characteristic curves, calibration curves, discriminability, and clinical utility. RESULTS The analyses showed that two clinical factors, dyspnoea (P < 0.001) and duration of illness in months (P = 0.001), and three radiomics features (P = 0.001, 0.044, and 0.008, respectively) were independent predictors of non-RP-ILD versus RP-ILD. No imaging features differed significantly between the two groups. The radiomics model built with the three radiomics features performed worse than the clinical model, with areas under the curve (AUCs) of 0.805 and 0.754 in the training and test sets, respectively. The clinical model demonstrated good predictive ability for RP-ILD in anti-MDA5+ DM patients, with an AUC, sensitivity, specificity, and accuracy of 0.954, 0.931, 0.837, and 0.847 in the training set and 0.890, 0.875, 0.800, and 0.774 in the testing set, respectively. The combination model built with clinical and radiomics features performed slightly better than the clinical model, with an AUC, sensitivity, specificity, and accuracy of 0.994, 0.966, 0.977, and 0.931 in the training set and 0.890, 0.812, 1.000, and 0.839 in the testing set, respectively. The calibration curve and decision curve analyses showed satisfactory consistency and clinical utility of the nomogram. CONCLUSION Our results suggest that the combination model built with clinical and radiomics features can reliably predict the occurrence of RP-ILD in anti-MDA5+ DM patients.
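The metrics used to compare these models (AUC, sensitivity, specificity, accuracy) can be computed from first principles. The toy scores and the 0.6/0.4 weighting that fuses a clinical predictor with a radscore below are assumptions for illustration, not the study's fitted coefficients:

```python
import numpy as np

def auc_score(y, s):
    """Rank-based AUC: P(score of a random positive > score of a random negative)."""
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def sens_spec_acc(y, s, thr):
    """Sensitivity, specificity, accuracy at one operating threshold."""
    pred = (s >= thr).astype(int)
    tp = ((pred == 1) & (y == 1)).sum()
    fn = ((pred == 0) & (y == 1)).sum()
    tn = ((pred == 0) & (y == 0)).sum()
    fp = ((pred == 1) & (y == 0)).sum()
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y)

# Toy "combined" score: weighted sum of a clinical predictor and a radscore.
y = np.array([0, 0, 0, 1, 1, 1])
clinical = np.array([0.1, 0.4, 0.2, 0.7, 0.6, 0.9])
radscore = np.array([0.2, 0.1, 0.5, 0.8, 0.4, 0.7])
combined = 0.6 * clinical + 0.4 * radscore
auc = auc_score(y, combined)
sens, spec, acc = sens_spec_acc(y, combined, thr=0.5)
```

The rank formulation of AUC makes explicit why it is threshold-free, while sensitivity/specificity/accuracy all depend on the chosen cutoff.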
Affiliation(s)
- Yanhong Li
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Wen Deng
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Yu Zhou
- Department of Respiratory and Critical Care Medicine, Chengdu First People's Hospital, Chengdu, China
- Yubin Luo
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Yinlan Wu
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Ji Wen
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Lu Cheng
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Xiuping Liang
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Tong Wu
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Fang Wang
- Department of Research and Development, Shanghai United Imaging Intelligence, Shanghai, China
- Zixing Huang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Chunyu Tan
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
- Yi Liu
- Department of Rheumatology and Immunology, West China Hospital, Sichuan University, Chengdu, China
- Rare Diseases Center, West China Hospital, Sichuan University, Chengdu, China
- Institute of Immunology and Inflammation, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Chengdu, China
6. Teng L, Wang B, Xu X, Zhang J, Mei L, Feng Q, Shen D. Beam-wise dose composition learning for head and neck cancer dose prediction in radiotherapy. Med Image Anal 2024; 92:103045. PMID: 38071865. DOI: 10.1016/j.media.2023.103045.
Abstract
Automatic and accurate dose distribution prediction plays an important role in radiotherapy planning. Although previous methods can provide promising performance, most do not consider the beam-shaped radiation of treatment delivery in clinical practice, which leads to inaccurate prediction, especially along beam paths. To solve this problem, we propose a beam-wise dose composition learning (BDCL) method for dose prediction in head and neck (H&N) radiotherapy planning. Specifically, a global dose network is first utilized to predict coarse dose values in the whole-image space. We then generate individual beam masks to decompose the coarse dose distribution into multiple field doses, called beam voters, which are further refined by a subsequent beam dose network and reassembled to form the final dose distribution. In particular, we design an overlap-consistency module to keep high-level features similar in regions where different beam voters overlap. To make the predicted dose distribution more consistent with real radiotherapy plans, we also propose a dose-volume histogram (DVH) calibration process to facilitate feature learning in clinically important regions, and we further apply an edge enhancement procedure to strengthen the learning of features extracted from dose falloff regions. Experimental results on a public H&N cancer dataset from the AAPM OpenKBP challenge show that our method outperforms other state-of-the-art approaches by significant margins. Source code is released at https://github.com/TL9792/BDCLDosePrediction.
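The decompose-refine-reassemble flow can be sketched with purely geometric stand-ins: hypothetical fan-shaped beam masks (real masks would come from the plan's beam geometry), an identity function standing in for the beam dose network, and overlap handled by averaging:

```python
import numpy as np

def beam_masks(shape, n_beams, iso):
    """Hypothetical fan-shaped binary masks, one per equally spaced beam angle."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    ang = np.mod(np.arctan2(yy - iso[0], xx - iso[1]), 2 * np.pi)  # in [0, 2*pi)
    edges = np.linspace(0, 2 * np.pi, n_beams + 1)
    return [((ang >= a) & (ang < b)) for a, b in zip(edges[:-1], edges[1:])]

def compose(coarse_dose, masks, refine):
    """Decompose the coarse dose into beam voters, refine each, and reassemble."""
    voters = [refine(coarse_dose * m) for m in masks]
    total = np.sum(voters, axis=0)
    cover = np.sum(masks, axis=0)            # average wherever fans overlap
    return total / np.maximum(cover, 1)

coarse = np.random.default_rng(0).random((64, 64))
masks = beam_masks((64, 64), 9, iso=(32, 32))
final = compose(coarse, masks, refine=lambda d: d)   # identity stand-in refiner
```

With the identity refiner and non-overlapping fans, the reassembled dose equals the coarse input, which is a useful sanity check before substituting a learned per-beam refinement network.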
Affiliation(s)
- Lin Teng
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Bin Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Xuanang Xu
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Lanzhuju Mei
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200230, China; Shanghai Clinical Research and Trial Center, Shanghai 201210, China
7. Moosavi AS, Mahboobi A, Arabzadeh F, Ramezani N, Moosavi HS, Mehrpoor G. Segmentation and classification of lungs CT-scan for detecting COVID-19 abnormalities by deep learning technique: U-Net model. J Family Med Prim Care 2024; 13:691-698. PMID: 38605799; PMCID: PMC11006039. DOI: 10.4103/jfmpc.jfmpc_695_23.
Abstract
Background Artificial intelligence (AI) techniques have proven useful for the prompt analysis and description of infected areas in radiological images. Our aim in this study was to design a web-based application for detecting and labeling infected tissues on CT (computed tomography) lung images of patients based on deep learning (DL), a branch of AI. Materials and Methods The U-Net architecture, one of the DL networks, was used as a hybrid model with the pre-trained densely connected convolutional network 121 (DenseNet121) architecture for the segmentation process. The proposed model was built on CT-scan images of 1,031 persons from Ibn Sina Hospital, Iran, in 2021 and some publicly available datasets. The network was trained using 6,000 slices, validated on 1,000 slices, and tested on 150 slices. Accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC) were calculated to evaluate model performance. Results The results indicate the acceptable ability of the U-Net-DenseNet121 model to detect COVID-19 abnormality (accuracy = 0.88 and AUC = 0.96 for a threshold of 0.13; accuracy = 0.88 and AUC = 0.90 for a threshold of 0.2). Based on this model, we developed the "Imaging-Tech" web-based application for use at hospitals and clinics to make our project's output more practical and attractive in the market. Conclusion We designed a DL-based model for the segmentation of COVID-19 CT-scan images and, based on this model, constructed a web-based application that, according to the results, is a reliable detector of infected tissue in lung CT-scans. The availability of such tools would help automate, prioritize, accelerate, and broaden the treatment of COVID-19 patients globally.
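Segmentation outputs like these are probability maps that must be thresholded at an operating point (the abstract reports 0.13 and 0.2) before scoring. A common overlap metric for the resulting binary masks is the Dice coefficient; the toy maps below are assumptions for illustration:

```python
import numpy as np

def dice(prob_map, gt_mask, thr=0.13):
    """Dice overlap after thresholding a probability map at an operating point.
    thr=0.13 mirrors one of the thresholds reported above; it is only an example."""
    p = prob_map >= thr
    g = gt_mask.astype(bool)
    denom = p.sum() + g.sum()
    return 2.0 * (p & g).sum() / denom if denom else 1.0

# Toy 8x8 case: predicted lesion block partially overlapping the ground truth.
prob = np.zeros((8, 8))
prob[2:6, 2:6] = 0.9
gt = np.zeros((8, 8), dtype=bool)
gt[3:7, 3:7] = True
score = dice(prob, gt)
```

Here both regions cover 16 pixels and overlap on 9, giving Dice = 18/32 = 0.5625; sweeping `thr` traces out the trade-off that the reported AUC summarizes.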
Affiliation(s)
- Ashraf Mahboobi
- Department of Radiology, Babol University of Medical Sciences, Babol, Iran
- Farzin Arabzadeh
- Department of Radiology, Dr. Arabzadeh Radiology and Sonography Clinic, Behbahan, Iran
- Nazanin Ramezani
- School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Helia S. Moosavi
- Bachelor's degree in Computer Science, University of Toronto, ON, Canada
- Golbarg Mehrpoor
- Department of Rheumatology, Alborz University of Medical Sciences, Karaj, Iran
8. Li Y, Shen Y, Zhang J, Song S, Li Z, Ke J, Shen D. A Hierarchical Graph V-Net With Semi-Supervised Pre-Training for Histological Image Based Breast Cancer Classification. IEEE Transactions on Medical Imaging 2023; 42:3907-3918. PMID: 37725717. DOI: 10.1109/tmi.2023.3317132.
Abstract
Numerous patch-based methods have recently been proposed for histological image-based breast cancer classification. However, their performance can be highly affected by ignoring spatial contextual information in the whole slide image (WSI). To address this issue, we propose a novel hierarchical Graph V-Net that integrates 1) patch-level pre-training and 2) context-based fine-tuning within a hierarchical graph network. Specifically, a semi-supervised framework based on knowledge distillation is first developed to pre-train a patch encoder for extracting disease-relevant features. Then, a hierarchical Graph V-Net is designed to construct a hierarchical graph representation from neighboring/similar individual patches for coarse-to-fine classification, where each graph node (corresponding to one patch) carries the extracted disease-relevant features and its training target is the average label of all pixels in the corresponding patch. To evaluate the proposed hierarchical Graph V-Net, we collected a large dataset of 560 WSIs: 30 labeled WSIs from the BACH dataset (with our further refinement), plus 30 labeled WSIs and 500 unlabeled WSIs from Yunnan Cancer Hospital. The 500 unlabeled WSIs were employed for patch-level pre-training to improve feature representation, while the 60 labeled WSIs were used to train and test the proposed model. Both comparative assessment and ablation studies demonstrate the superiority of our hierarchical Graph V-Net over state-of-the-art methods in classifying breast cancer from WSIs. The source code and our annotations for the BACH dataset have been released at https://github.com/lyhkevin/Graph-V-Net.
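The node-target construction described above (each patch node trained against the average label of its pixels, with edges linking neighboring patches) can be sketched for a single graph level; the full method builds a multi-level hierarchy, which this simplified single-level sketch does not attempt:

```python
import numpy as np

def patch_graph(label_map, patch=32):
    """Partition a WSI label map into a grid of patches; each node's training
    target is the average label of its pixels, and edges join 4-neighbors."""
    H, W = label_map.shape
    gh, gw = H // patch, W // patch
    crop = label_map[:gh * patch, :gw * patch]
    targets = crop.reshape(gh, patch, gw, patch).mean(axis=(1, 3)).ravel()
    edges = []
    for i in range(gh):
        for j in range(gw):
            n = i * gw + j
            if j + 1 < gw:
                edges.append((n, n + 1))       # right neighbor
            if i + 1 < gh:
                edges.append((n, n + gw))      # bottom neighbor
    return targets, edges

labels = np.zeros((64, 64))
labels[:32, :] = 1                              # top half of the toy slide "tumor"
targets, edges = patch_graph(labels, patch=32)  # 2x2 grid of nodes
```

Soft (fractional) node targets arise naturally at tissue boundaries, which is what lets the graph classifier learn coarse-to-fine label maps rather than hard per-patch votes.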
9. Zhang J, Cui Z, Zhou L, Sun Y, Li Z, Liu Z, Shen D. Breast Fibroglandular Tissue Segmentation for Automated BPE Quantification With Iterative Cycle-Consistent Semi-Supervised Learning. IEEE Transactions on Medical Imaging 2023; 42:3944-3955. PMID: 37756174. DOI: 10.1109/tmi.2023.3319646.
Abstract
Background Parenchymal Enhancement (BPE) quantification in Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) plays a pivotal role in clinical breast cancer diagnosis and prognosis. However, emerging deep learning-based breast fibroglandular tissue segmentation, a crucial step in automated BPE quantification, often suffers from limited training samples with accurate annotations. To address this challenge, we propose a novel iterative cycle-consistent semi-supervised framework that improves segmentation performance by using a large number of paired pre-/post-contrast images without annotations. Specifically, we design a reconstruction network, cascaded with the segmentation network, to learn a mapping from the pre-contrast images and segmentation predictions to the post-contrast images. We can thus implicitly use the reconstruction task to explore the inter-relationship between these two-phase images, which in turn guides the segmentation task. Moreover, the reconstructed post-contrast images across multiple auto-context modeling-based iterations can be viewed as new augmentations, facilitating cycle-consistent constraints on each segmentation output. Extensive experiments on two datasets with different data distributions show high segmentation and BPE quantification accuracy compared with other state-of-the-art semi-supervised methods. Importantly, our method achieves an 11.80-fold improvement in quantification accuracy while being 10 times faster than clinical physicians, demonstrating its potential for automated BPE quantification. The code is available at https://github.com/ZhangJD-ong/Iterative-Cycle-consistent-Semi-supervised-Learning-for-fibroglandular-tissue-segmentation.
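Once the fibroglandular tissue (FGT) mask is available, one simple way to quantify BPE is the fraction of FGT voxels whose signal rises between the pre- and post-contrast phases. The 20% relative-enhancement cutoff below is an assumed illustrative rule, not the paper's quantification protocol:

```python
import numpy as np

def bpe_percent(pre, post, fgt_mask, rel_thr=0.2):
    """Percentage of fibroglandular-tissue voxels enhancing by more than rel_thr.
    rel_thr=0.2 (20% relative enhancement) is an assumed example cutoff."""
    rel = (post - pre) / np.maximum(pre, 1e-6)   # relative signal change per voxel
    enhanced = (rel > rel_thr) & fgt_mask
    return 100.0 * enhanced.sum() / max(int(fgt_mask.sum()), 1)

# Toy phantom: the top half of the slice enhances by 50%, the rest is flat.
pre = np.ones((16, 16))
post = np.ones((16, 16))
post[:8, :] = 1.5
fgt = np.ones((16, 16), dtype=bool)              # pretend the whole slice is FGT
score = bpe_percent(pre, post, fgt)
```

This makes clear why segmentation errors propagate directly into BPE: every voxel wrongly included in (or excluded from) `fgt_mask` changes both the numerator and the denominator of the percentage.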
10. Zhunissova U, Dzierżak R, Omiotek Z, Lytvynenko V. A Novel COVID-19 Diagnosis Approach Utilizing a Comprehensive Set of Diagnostic Information (CSDI). J Clin Med 2023; 12:6912. PMID: 37959377; PMCID: PMC10649663. DOI: 10.3390/jcm12216912.
Abstract
The aim of the study was to develop a computerized method for distinguishing COVID-19-affected cases from cases of pneumonia. This task continues to be a real challenge in the practice of diagnosing COVID-19 disease. In the study, a new approach was proposed, using a comprehensive set of diagnostic information (CSDI) including, among other things, medical history, demographic data, signs and symptoms of the disease, and laboratory results. These data have the advantage of being much more reliable compared with data based on a single source of information, such as radiological imaging. On this basis, a comprehensive process of building predictive models was carried out, including such steps as data preprocessing, feature selection, training, and evaluation of classification models. During the study, 9 different methods for feature selection were used, while the grid search method and 12 popular classification algorithms were employed to build classification models. The most effective model achieved a classification accuracy (ACC) of 85%, a sensitivity (TPR) equal to 83%, and a specificity (TNR) of 88%. The model was built using the random forest method with 15 features selected using the recursive feature elimination selection method. The results provide an opportunity to build a computer system to assist the physician in the diagnosis of the COVID-19 disease.
Affiliation(s)
- Ulzhalgas Zhunissova
- Department of Biostatistics, Bioinformatics and Information Technologies, Astana Medical University, Beibitshilik Street 49A, Astana 010000, Kazakhstan
- Róża Dzierżak
- Faculty of Electrical Engineering and Computer Science, Lublin University of Technology, Nadbystrzycka 38 A, 20-618 Lublin, Poland
- Zbigniew Omiotek
- Faculty of Electrical Engineering and Computer Science, Lublin University of Technology, Nadbystrzycka 38 A, 20-618 Lublin, Poland
- Volodymyr Lytvynenko
- Department of Informatics and Computer Science, Kherson National Technical University, Beryslavs’ke Hwy, 24, 730082 Kherson, Kherson Oblast, Ukraine
11
Schaudt D, von Schwerin R, Hafner A, Riedel P, Reichert M, von Schwerin M, Beer M, Kloth C. Augmentation strategies for an imbalanced learning problem on a novel COVID-19 severity dataset. Sci Rep 2023; 13:18299. [PMID: 37880333 PMCID: PMC10600145 DOI: 10.1038/s41598-023-45532-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2023] [Accepted: 10/20/2023] [Indexed: 10/27/2023] Open
Abstract
Since the beginning of the COVID-19 pandemic, many different machine learning models have been developed to detect and verify COVID-19 pneumonia based on chest X-ray images. Although promising, binary models have only limited implications for medical treatment, whereas the prediction of disease severity suggests more suitable and specific treatment options. In this study, we publish severity scores for the 2358 COVID-19 positive images in the COVIDx8B dataset, creating one of the largest collections of publicly available COVID-19 severity data. Furthermore, we train and evaluate deep learning models on the newly created dataset to provide a first benchmark for the severity classification task. One of the main challenges of this dataset is the skewed class distribution, resulting in undesirable model performance for the most severe cases. We therefore propose and examine different augmentation strategies, specifically targeting majority and minority classes. Our augmentation strategies show significant improvements in precision and recall values for the rare and most severe cases. While the models might not yet fulfill medical requirements, they serve as an appropriate starting point for further research with the proposed dataset to optimize clinical resource allocation and treatment.
Affiliation(s)
- Daniel Schaudt
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Reinhold von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Alexander Hafner
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Pascal Riedel
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Manfred Reichert
- Institute of Databases and Information Systems, Ulm University, James-Franck-Ring, 89081, Ulm, Baden-Wurttemberg, Germany
- Marianne von Schwerin
- Department of Computer Science, Ulm University of Applied Science, Albert-Einstein-Allee 55, 89081, Ulm, Baden-Wurttemberg, Germany
- Meinrad Beer
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
- Christopher Kloth
- Department of Radiology, University Hospital of Ulm, Albert-Einstein-Allee 23, 89081, Ulm, Baden-Wurttemberg, Germany
12
Lin M, Lin N, Yu S, Sha Y, Zeng Y, Liu A, Niu Y. Automated Prediction of Early Recurrence in Advanced Sinonasal Squamous Cell Carcinoma With Deep Learning and Multi-parametric MRI-based Radiomics Nomogram. Acad Radiol 2023; 30:2201-2211. [PMID: 36925335 DOI: 10.1016/j.acra.2022.11.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 11/12/2022] [Accepted: 11/13/2022] [Indexed: 03/16/2023]
Abstract
RATIONALE AND OBJECTIVES Preoperative prediction of recurrence risk in patients with advanced sinonasal squamous cell carcinoma (SNSCC) is critical for individualized treatment. This study aimed to evaluate the predictive ability of radiomics signatures (RS) based on deep learning and multiparametric MRI for the risk of 2-year recurrence in advanced SNSCC. MATERIALS AND METHODS Preoperative MRI datasets were retrospectively collected from 265 SNSCC patients (145 recurrences), including T2-weighted (T2W), contrast-enhanced T1-weighted (T1c), and diffusion-weighted (DW) sequences. Patients were divided into a training cohort (n = 165) and a test cohort (n = 70). A deep learning segmentation model based on VB-Net was used to segment regions of interest (ROIs) on preoperative MRI, and radiomics features were extracted from the automatically segmented ROIs. Least absolute shrinkage and selection operator (LASSO) and logistic regression (LR) were applied for feature selection and radiomics score construction. Combined with meaningful clinicopathological predictors, a nomogram was developed and its performance was evaluated. In addition, X-tile software was used to divide patients into high-risk and low-risk early relapse (ER) subgroups, and recurrence-free survival (RFS) probability was assessed for each subgroup. RESULTS The radiomics score, T stage, histological grade, and Ki-67 were independent predictors. The segmentation models for the T2WI, T1c, and apparent diffusion coefficient (ADC) sequences achieved Dice coefficients of 0.720, 0.727, and 0.756, respectively, in the test cohort. RS-T2, RS-T1c, and RS-ADC were derived from single-parameter MRI. RS-Combined (combining T2WI, T1c, and ADC features) was derived from multiparametric MRI and reached an area under the curve (AUC) of 0.854 (0.749-0.927) and an accuracy of 74.3% (0.624-0.840) in the test cohort. The calibration curve and decision curve analysis (DCA) illustrate its value in clinical practice. Kaplan-Meier analysis showed that the 2-year RFS rate for low-risk patients was significantly greater than that for high-risk patients in both the training and testing cohorts (p < 0.001). CONCLUSION Automated nomograms based on multi-sequence MRI help to predict ER in SNSCC patients preoperatively.
Affiliation(s)
- Mengyan Lin
- Shanghai Institute of Medical Imaging, Shanghai, China
- Naier Lin
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Sihui Yu
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Yan Sha
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Yan Zeng
- Department of Research Center, Shanghai United Imaging Intelligence Inc., Shanghai, China
- Aie Liu
- Department of Research Center, Shanghai United Imaging Intelligence Inc., Shanghai, China
- Yue Niu
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
13
Chen M, Yi S, Yang M, Yang Z, Zhang X. UNet segmentation network of COVID-19 CT images with multi-scale attention. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:16762-16785. [PMID: 37920033 DOI: 10.3934/mbe.2023747] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/04/2023]
Abstract
In recent years, the global outbreak of COVID-19 has posed a serious threat to human life and safety, and methods for lesion segmentation in COVID-19 images are highly valuable for maximizing physicians' diagnostic efficiency. To address the problems of existing deep learning models, such as low segmentation accuracy, poor generalization performance, large parameter counts, and difficult deployment, we propose a UNet segmentation network integrating multi-scale attention for COVID-19 CT images. Specifically, the UNet model is used as the base network, and a multi-scale convolutional attention structure is proposed in the encoder stage to enhance the network's ability to capture multi-scale information. Second, a local channel attention module is proposed that extracts spatial information by modeling local relationships to generate channel-domain weights, supplementing detailed information about the target region, reducing information redundancy, and enhancing important information. Moreover, the encoder uses the Meta-ACON activation function to avoid overfitting and improve the model's representational ability. Extensive experimental results on publicly available mixed datasets show that, compared with current mainstream image segmentation algorithms, the proposed method more effectively improves the accuracy and generalization performance of COVID-19 lesion segmentation and provides help for medical diagnosis and analysis.
Affiliation(s)
- Mingju Chen
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Sihang Yi
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Mei Yang
- Zigong Third People's Hospital, Zigong 643000, China
- Zhiwen Yang
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
- Xingyue Zhang
- School of Automation and Information Engineering, Sichuan University of Science & Engineering, Yibin 644002, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science & Engineering, Yibin 644002, China
14
Park D, Jang R, Chung MJ, An HJ, Bak S, Choi E, Hwang D. Development and validation of a hybrid deep learning-machine learning approach for severity assessment of COVID-19 and other pneumonias. Sci Rep 2023; 13:13420. [PMID: 37591967 PMCID: PMC10435445 DOI: 10.1038/s41598-023-40506-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2023] [Accepted: 08/11/2023] [Indexed: 08/19/2023] Open
Abstract
The Coronavirus Disease 2019 (COVID-19) is transitioning into the endemic phase. Nonetheless, it is crucial to remain mindful that pandemics related to infectious respiratory diseases (IRDs) can emerge unpredictably. Therefore, we aimed to develop and validate a severity assessment model for IRDs, including COVID-19, influenza, and novel influenza, using CT images on a multi-centre dataset. Of the 805 COVID-19 patients collected from a single centre, 649 were used for training and 156 for internal validation (D1). Additionally, two external validation sets were obtained from 7 cohorts: 1138 patients with COVID-19 (D2) and 233 patients with influenza and novel influenza (D3). A hybrid model, referred to as Hybrid-DDM, was constructed by combining two deep learning models and a machine learning model. Across datasets D1, D2, and D3, Hybrid-DDM exhibited significantly improved performance compared to the baseline model. The areas under the receiver operating characteristic curves (AUCs) were 0.830 versus 0.767 (p = 0.036) in D1, 0.801 versus 0.753 (p < 0.001) in D2, and 0.774 versus 0.668 (p < 0.001) in D3. This study indicates that the Hybrid-DDM model, trained using COVID-19 patient data, is effective and can also be applicable to patients with other types of viral pneumonia.
Affiliation(s)
- Doohyun Park
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Myung Jin Chung
- Medical AI Research Center, Samsung Medical Center, Seoul, 06351, Republic of Korea
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, 06351, Republic of Korea
- Euijoon Choi
- Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
- Dosik Hwang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, 02792, Republic of Korea
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Republic of Korea
15
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023; 47:301-320. [PMID: 38625639 DOI: 10.1080/03091902.2024.2321846] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 02/18/2024] [Indexed: 04/17/2024]
Abstract
Researchers and scientists can use computational models to turn linked data into useful information, aiding disease diagnosis, examination, and viral containment, thanks to recent breakthroughs in artificial intelligence and machine learning. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. In this regard, we examine a large number of critical studies conducted by various academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions for different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can have an impact in this context, including diagnosis (using chest X-ray and CT imaging), severity assessment, tracking, treatment, and the drug industry. Furthermore, we analyse the limits, restrictions, and hazards of these approaches.
Affiliation(s)
- Naser Zaeri
- Faculty of Computer Studies, Arab Open University, Kuwait
16
Vinod DN, Prabaharan SRS. Elucidation of infection asperity of CT scan images of COVID-19 positive cases: A Machine Learning perspective. SCIENTIFIC AFRICAN 2023; 20:e01681. [PMID: 37192886 PMCID: PMC10150416 DOI: 10.1016/j.sciaf.2023.e01681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 03/19/2023] [Accepted: 04/30/2023] [Indexed: 05/18/2023] Open
Abstract
Owing to the highly infectious nature of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), an enormous number of individuals wait in line for Computed Tomography (CT) scan assessment, which overburdens medical practitioners and radiologists, adversely affects patients' diagnosis and treatment, and hampers containment of the epidemic. Medical facilities such as intensive care units and mechanical ventilators are strained by this highly infectious disease. It is therefore imperative to characterize patients according to their severity levels. This article presents a novel implementation of a threshold-based image segmentation technique and a random forest classifier for identifying the severity of COVID-19 infection. With the help of the image segmentation model and the machine learning classifier, we can identify and classify COVID-19 individuals into three severity classes, namely early, progressive, and advanced, with an accuracy of 95.5% using a chest CT scan image database. Experimental outcomes on an adequately large number of CT scan images demonstrate the adequacy of the developed machine learning mechanism for identifying coronavirus severity.
Affiliation(s)
- Dasari Naga Vinod
- Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai, Tamilnadu 600062, India
- S R S Prabaharan
- Sathyabama Centre for Advanced Studies, Sathyabama Institute of Science and Technology, Rajiv Gandhi Salai, Chennai, Tamilnadu 600119, India
17
Xu J, Cao Z, Miao C, Zhang M, Xu X. Predicting omicron pneumonia severity and outcome: a single-center study in Hangzhou, China. Front Med (Lausanne) 2023; 10:1192376. [PMID: 37305146 PMCID: PMC10250627 DOI: 10.3389/fmed.2023.1192376] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 05/08/2023] [Indexed: 06/13/2023] Open
Abstract
Background In December 2022, there was a large Omicron epidemic in Hangzhou, China. Many people were diagnosed with Omicron pneumonia with variable symptom severity and outcome. Computed tomography (CT) imaging has proven to be an important tool for COVID-19 pneumonia screening and quantification. We hypothesized that CT-based machine learning algorithms can predict disease severity and outcome in Omicron pneumonia, and we compared their performance with that of pneumonia severity index (PSI)-related clinical and biological features. Methods Our study included 238 patients with the Omicron variant who were admitted to our hospital in China from 15 December 2022 to 16 January 2023 (the first wave after the dynamic zero-COVID strategy ended). All patients had a positive real-time polymerase chain reaction (PCR) or lateral flow antigen test for SARS-CoV-2 after vaccination and no previous SARS-CoV-2 infections. We recorded patient baseline information pertaining to demographics, comorbid conditions, vital signs, and available laboratory data. All CT images were processed with a commercial artificial intelligence (AI) algorithm to obtain the volume and percentage of consolidation and infiltration related to Omicron pneumonia. A support vector machine (SVM) model was used to predict disease severity and outcome. Results The receiver operating characteristic (ROC) area under the curve (AUC) of the machine learning classifier using PSI-related features was 0.85 (accuracy = 87.40%, p < 0.001) for predicting severity, while that using CT-based features was only 0.70 (accuracy = 76.47%, p = 0.014). Combining the two did not increase the AUC, which was 0.84 (accuracy = 84.03%, p < 0.001). For outcome prediction, the classifier reached an AUC of 0.85 using PSI-related features (accuracy = 85.29%, p < 0.001), higher than that using CT-based features (AUC = 0.67, accuracy = 75.21%, p < 0.001); the integrated model showed a slightly higher AUC of 0.86 (accuracy = 86.13%, p < 0.001). Oxygen saturation, IL-6, and CT infiltration were highly important for predicting both severity and outcome. Conclusion Our study provides a comprehensive analysis and comparison of baseline chest CT and clinical assessment for disease severity and outcome prediction in Omicron pneumonia. The predictive model accurately predicts the severity and outcome of Omicron infection, with oxygen saturation, IL-6, and infiltration on chest CT found to be important biomarkers. This approach has the potential to provide frontline physicians with an objective tool to manage Omicron patients more effectively in time-sensitive, stressful, and potentially resource-constrained environments.
Affiliation(s)
- Jingjing Xu
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Zhengye Cao
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Chunqin Miao
- Party and Hospital Administration Office, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Minming Zhang
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Xiaojun Xu
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
18
Subramanian M, Sathishkumar VE, Cho J, Shanmugavadivel K. Learning without forgetting by leveraging transfer learning for detecting COVID-19 infection from CT images. Sci Rep 2023; 13:8516. [PMID: 37231044 DOI: 10.1038/s41598-023-34908-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 05/09/2023] [Indexed: 05/27/2023] Open
Abstract
COVID-19, a global pandemic, has killed thousands in the last three years. Pathogenic laboratory testing is the gold standard but has a high false-negative rate, making alternative diagnostic procedures necessary in the fight against it. Computed Tomography (CT) scans help diagnose and monitor COVID-19, especially in severe cases, but visual inspection of CT images takes time and effort. In this study, we employ Convolutional Neural Networks (CNNs) to detect coronavirus infection from CT images. The proposed study applied transfer learning to three pre-trained deep CNN models, namely VGG-16, ResNet, and wide ResNet, to diagnose and detect COVID-19 infection from CT images. However, when pre-trained models are retrained on new data, they tend to lose their ability to generalize to the original datasets. The novel aspect of this work is the integration of deep CNN architectures with Learning without Forgetting (LwF) to enhance the model's generalization capability on both trained and new data samples. LwF allows the network to train on a new dataset while preserving its original competencies. The deep CNN models with LwF are evaluated on the original images and on CT scans of individuals infected with the Delta variant of SARS-CoV-2. The experimental results show that, of the three fine-tuned CNN models with the LwF method, the wide ResNet model performs best, classifying the original and Delta-variant datasets with accuracies of 93.08% and 92.32%, respectively.
Affiliation(s)
- Malliga Subramanian
- Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode, Tamil Nadu, India
- Jaehyuk Cho
- Department of Software Engineering, Jeonbuk National University, Jeongu-si, Republic of Korea
- Kogilavani Shanmugavadivel
- Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode, Tamil Nadu, India
19
Yang J, Li X, Cheng JZ, Xue Z, Shi F, Ji Y, Wang X, Yang F. Segment aorta and localize landmarks simultaneously on noncontrast CT using a multitask learning framework for patients without severe vascular disease. Comput Biol Med 2023; 160:107002. [PMID: 37187136 DOI: 10.1016/j.compbiomed.2023.107002] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2022] [Revised: 03/29/2023] [Accepted: 05/02/2023] [Indexed: 05/17/2023]
Abstract
BACKGROUND Non-contrast chest CT is widely used for lung cancer screening, and its images carry potential information about the thoracic aorta. Morphological assessment of the thoracic aorta may have potential value for presymptomatic detection of thoracic aortic-related diseases and risk prediction of future adverse events. However, due to low vasculature contrast in such images, visual assessment of aortic morphology is challenging and highly dependent on physicians' experience. PURPOSE The main objective of this study is to propose a novel deep learning-based multi-task framework for simultaneous aortic segmentation and localization of key landmarks on unenhanced chest CT. The secondary objective is to use the algorithm to measure quantitative features of thoracic aortic morphology. METHODS The proposed network is composed of two subnets that carry out segmentation and landmark detection, respectively. The segmentation subnet demarcates the aortic sinuses of Valsalva, the aortic trunk, and the aortic branches, whereas the detection subnet locates five landmarks on the aorta to facilitate morphology measurement. The two subnets share a common encoder and run their decoders in parallel, taking full advantage of the synergy between the segmentation and landmark detection tasks. Furthermore, a volume-of-interest (VOI) module and squeeze-and-excitation (SE) blocks with attention mechanisms are incorporated to further boost feature learning. RESULTS Benefiting from the multi-task framework, we achieved a mean Dice score of 0.95, an average symmetric surface distance of 0.53 mm, and a Hausdorff distance of 2.13 mm for aortic segmentation, along with a mean squared error (MSE) of 3.23 mm for landmark localization, in 40 test cases. CONCLUSION We proposed a multi-task learning framework that performs thoracic aorta segmentation and landmark localization simultaneously and achieves good results. It can support quantitative measurement of aortic morphology for further analysis of aortic diseases, such as hypertension.
Affiliation(s)
- Jinrong Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
- Xiang Li
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, 201807, China
- Jie-Zhi Cheng
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, 201807, China
- Zhong Xue
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, 201807, China
- Feng Shi
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, 201807, China
- Yuqing Ji
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, 201807, China
- Xuechun Wang
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, 201807, China
- Fan Yang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China
20
Liu Y, Chen B, Zhang Z, Yu H, Ru S, Chen X, Lu G. Self-paced Multi-view Learning for CT-based severity assessment of COVID-19. Biomed Signal Process Control 2023; 83:104672. [PMID: 36777556 PMCID: PMC9905104 DOI: 10.1016/j.bspc.2023.104672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 01/30/2023] [Accepted: 02/04/2023] [Indexed: 02/11/2023]
Abstract
Prior studies on the task of severity assessment of COVID-19 (SA-COVID) usually suffer from domain-specific cognitive deficits: they mainly focus on visual cues based on single cognitive functions but fail to reconcile valuable information from alternative views. Inspired by the cognitive process of radiologists, this paper shifts naturally from single-symptom measurements to multi-view analysis and proposes a novel Self-paced Multi-view Learning (SPML) framework for automated SA-COVID. Specifically, the proposed SPML framework first comprehensively aggregates multi-view contexts of lung infection with different measurement paradigms, i.e., a Global Feature Branch, a Texture Feature Branch, and a Volume Feature Branch. In this way, clues from multiple perspectives are taken into account to reflect the most essential pathological manifestations in CT images. To alleviate small-sample learning problems, we also introduce an optimization with a self-paced learning strategy that cognitively increases the characterization capability of training samples by learning from simple to complex. In contrast to traditional batch-wise learning, a pure self-paced approach further guarantees the efficiency and accuracy of SPML when dealing with small and biased samples. Furthermore, we construct a well-established SA-COVID dataset that contains 300 CT images with fine annotations. Extensive experiments on this dataset demonstrate that SPML consistently outperforms state-of-the-art baselines. The SA-COVID dataset is publicly released at https://github.com/YishuLiu/SA-COVID.
Affiliation(s)
- Yishu Liu
- Harbin Institute of Technology, Shenzhen, 518055, China
- Bingzhi Chen
- South China Normal University, Guangzhou, 510631, China
- Zheng Zhang
- Harbin Institute of Technology, Shenzhen, 518055, China
- Hongbing Yu
- Nanshan District Chronic Disease Prevention and Control Hospital, Shenzhen, 518055, China
- Shouhang Ru
- Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Xiaosheng Chen
- Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Guangming Lu
- Harbin Institute of Technology, Shenzhen, 518055, China
21
Agrawal T, Choudhary P. COVID-SegNet: encoder-decoder-based architecture for COVID-19 lesion segmentation in chest X-ray. MULTIMEDIA SYSTEMS 2023; 29:1-14. [PMID: 37360154 PMCID: PMC10115388 DOI: 10.1007/s00530-023-01096-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 04/10/2023] [Indexed: 06/28/2023]
Abstract
The coronavirus disease 2019, initially named 2019-nCoV (COVID-19), was declared a global pandemic by the World Health Organization in March 2020. Because of the growing number of COVID-19 patients, the world's health infrastructure has collapsed, and computer-aided diagnosis has become a necessity. Most models proposed for COVID-19 detection in chest X-rays perform image-level analysis and do not identify the infected region in the images for an accurate and precise diagnosis. Lesion segmentation helps medical experts identify the infected region in the lungs. Therefore, in this paper, a UNet-based encoder-decoder architecture is proposed for COVID-19 lesion segmentation in chest X-rays. To improve performance, the proposed model employs an attention mechanism and a convolution-based atrous spatial pyramid pooling module. The proposed model obtained Dice similarity coefficient and Jaccard index values of 0.8325 and 0.7132, respectively, and outperformed the state-of-the-art UNet model. An ablation study highlights the contribution of the attention mechanism and of the small dilation rates in the atrous spatial pyramid pooling module.
Affiliation(s)
- Tarun Agrawal
- Department of Computer Science and Engineering, National Institute of Technology Hamirpur, Hamirpur, Himachal Pradesh 177005, India
- Prakash Choudhary
- Department of Computer Science and Engineering, Central University of Rajasthan, Ajmer, Rajasthan, India

22
Wu J, Xia Y, Wang X, Wei Y, Liu A, Innanje A, Zheng M, Chen L, Shi J, Wang L, Zhan Y, Zhou XS, Xue Z, Shi F, Shen D. uRP: An integrated research platform for one-stop analysis of medical images. FRONTIERS IN RADIOLOGY 2023; 3:1153784. [PMID: 37492386 PMCID: PMC10365282 DOI: 10.3389/fradi.2023.1153784] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/30/2023] [Accepted: 03/31/2023] [Indexed: 07/27/2023]
Abstract
Introduction: Medical image analysis is of tremendous importance in clinical diagnosis, treatment planning, and prognosis assessment. However, the image analysis process usually involves multiple modality-specific software tools and relies on rigorous manual operations, which is time-consuming and potentially poorly reproducible. Methods: We present an integrated platform, the uAI Research Portal (uRP), to achieve one-stop analysis of multimodal images such as CT, MRI, and PET for clinical research applications. The proposed uRP adopts a modularized architecture to be multifunctional, extensible, and customizable. Results and Discussion: The uRP offers three advantages: (1) it spans a wealth of algorithms for image processing, including semi-automatic delineation, automatic segmentation, registration, classification, quantitative analysis, and image visualization, to realize a one-stop analytic pipeline; (2) it integrates a variety of functional modules, which can be directly applied, combined, or customized for specific application domains, such as brain, pneumonia, and knee-joint analyses; and (3) it enables full-stack analysis of one disease, including diagnosis, treatment planning, and prognosis assessment, as well as full-spectrum coverage of multiple disease applications. With the continuous development and inclusion of advanced algorithms, we expect this platform to greatly simplify the clinical scientific research process and promote more and better discoveries.
Affiliation(s)
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yuwei Xia
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xuechun Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Ying Wei
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Aie Liu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Arun Innanje
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Meng Zheng
- Department of Research and Development, United Imaging Intelligence Co., Ltd., Cambridge, MA, United States
- Lei Chen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jing Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Liye Wang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Zhong Xue
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai Clinical Research and Trial Center, Shanghai, China

23
Ahmad J, Saudagar AKJ, Malik KM, Khan MB, AlTameem A, Alkhathami M, Hasanat MHA. Prognosis Prediction in COVID-19 Patients through Deep Feature Space Reasoning. Diagnostics (Basel) 2023; 13:diagnostics13081387. [PMID: 37189488 DOI: 10.3390/diagnostics13081387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 03/05/2023] [Accepted: 03/17/2023] [Indexed: 05/17/2023] Open
Abstract
The COVID-19 pandemic has presented a unique challenge for physicians worldwide as they grapple with limited data and uncertainty in diagnosing and predicting disease outcomes. In such dire circumstances, the need for innovative methods that can aid in making informed decisions with limited data is more critical than ever. Using prediction with limited COVID-19 data as a case study, we present a complete framework for progression and prognosis prediction from chest X-rays (CXRs) through reasoning in a COVID-specific deep feature space. The proposed approach relies on a pre-trained deep learning model that has been fine-tuned specifically for COVID-19 CXRs to identify infection-sensitive features from chest radiographs. Using a neuronal attention-based mechanism, the proposed method determines dominant neural activations that lead to a feature subspace in which neurons are more sensitive to COVID-related abnormalities. This process allows the input CXRs to be projected into a high-dimensional feature space in which age and clinical attributes, such as comorbidities, are associated with each CXR. The proposed method can accurately retrieve relevant cases from electronic health records (EHRs) using visual similarity, age group, and comorbidity similarities. These cases are then analyzed to gather evidence for reasoning, including diagnosis and treatment. Using a two-stage reasoning process based on the Dempster-Shafer theory of evidence, the proposed method can accurately predict the severity, progression, and prognosis of a COVID-19 patient when sufficient evidence is available. Experimental results on two large datasets show that the proposed method achieves 88% precision, 79% recall, and an 83.7% F-score on the test sets.
Affiliation(s)
- Jamil Ahmad
- Department of Computer Science, Islamia College Peshawar, Peshawar 25120, Pakistan
- Khalid Mahmood Malik
- Department of Computer Science and Engineering, Oakland University, Rochester, MI 48309, USA
- Muhammad Badruddin Khan
- Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Abdullah AlTameem
- Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mohammed Alkhathami
- Information Systems Department, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia

24
Wang X, Wang J, Shan F, Zhan Y, Shi J, Shen D. Severity prediction of pulmonary diseases using chest CT scans via cost-sensitive label multi-kernel distribution learning. Comput Biol Med 2023; 159:106890. [PMID: 37116240 DOI: 10.1016/j.compbiomed.2023.106890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 03/16/2023] [Accepted: 04/01/2023] [Indexed: 04/30/2023]
Abstract
BACKGROUND AND OBJECTIVES: The progression of pulmonary diseases is a complex process. Timely prediction of whether a patient will progress to the severe stage is critical for administering appropriate hospital treatment at an early stage. However, this task suffers from an "insufficient and incomplete" data issue, since it is clinically impossible to obtain adequate training samples for one patient on each day. Besides, the training samples are extremely imbalanced, since the patients who progress to the severe stage are far fewer than those who do not. METHOD: We consider the severity prediction of pulmonary diseases as a time-estimation problem based on CT scans. To handle the issue of "insufficient and incomplete" training samples, we introduce label distribution learning (LDL). Specifically, we generate a label distribution for each patient, making a CT image contribute not only to the learning of its chronological day but also to the learning of its neighboring days. In addition, a cost-sensitive mechanism is introduced to address the data-imbalance issue. To identify the importance of pulmonary segments in pulmonary disease severity prediction, multi-kernel learning in a composite kernel space is further incorporated, and particle swarm optimization (PSO) is used to find the optimal kernel weights. RESULTS: We compare the performance of the proposed CS-LD-MKSVR algorithm with several classical machine learning and deep learning (DL) algorithms. The proposed method obtained the best classification results on the in-house data, fully indicating its effectiveness in pulmonary disease severity prediction. CONTRIBUTIONS: The severity prediction of pulmonary diseases is considered as a time-estimation problem, and a label distribution is introduced to describe the conversion time from the non-severe to the severe stage. The cost-sensitive mechanism is also introduced to handle the data-imbalance issue and further improve classification performance.
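The label-distribution idea described above — letting each CT scan contribute most to its own chronological day and, with decaying weight, to neighboring days — can be sketched as a discrete Gaussian over candidate days. This is an illustrative sketch under assumed parameters; the function name, the Gaussian form, and `sigma` are ours, not taken from the paper:

```python
import numpy as np

def day_label_distribution(true_day: int, n_days: int, sigma: float = 1.5) -> np.ndarray:
    """Discrete Gaussian label distribution over candidate conversion days.

    A scan acquired on `true_day` contributes most to that day's label, and
    with decaying weight to its neighboring days.
    """
    days = np.arange(n_days)
    weights = np.exp(-0.5 * ((days - true_day) / sigma) ** 2)
    return weights / weights.sum()  # degrees of description sum to 1

dist = day_label_distribution(true_day=4, n_days=10)
print(dist.argmax())          # peaks at day 4
print(round(dist.sum(), 6))   # normalized to 1.0
```

Training a regressor against such soft targets, rather than a one-hot day label, is what lets a single scan inform several neighboring days despite sparse per-patient data.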
Affiliation(s)
- Xin Wang
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Jun Wang
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Fei Shan
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai, 201508, China
- Yiqiang Zhan
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Shanghai, 200232, China
- Jun Shi
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Engineering, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China
- Dinggang Shen
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Shanghai, 200232, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, 201210, China

25
Rodriguez-Obregon DE, Mejia-Rodriguez AR, Cendejas-Zaragoza L, Gutiérrez Mejía J, Arce-Santana ER, Charleston-Villalobos S, Aljama-Corrales T, Gabutti A, Santos-Díaz A. Semi-Supervised COVID-19 Volumetric Pulmonary Lesion Estimation on CT Images using Probabilistic Active Contour and CNN Segmentation. Biomed Signal Process Control 2023; 85:104905. [PMID: 36993838 PMCID: PMC10030333 DOI: 10.1016/j.bspc.2023.104905] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 03/11/2023] [Accepted: 03/18/2023] [Indexed: 03/24/2023]
Abstract
Purpose: A semi-supervised two-step methodology is proposed to obtain a volumetric estimation of COVID-19-related lesions on computed tomography (CT) images. Methods: First, damaged tissue was segmented from CT images using a probabilistic active-contour approach. Second, lung parenchyma was extracted using a previously trained U-Net. Finally, the volumetric estimation of COVID-19 lesions was calculated using the lung parenchyma masks. Our approach was validated on a publicly available dataset containing 20 previously labeled and manually segmented COVID-19 CT images. It was then applied to CT scans of 295 COVID-19 patients admitted to an intensive care unit. We compared the lesion estimation between deceased and surviving patients for high- and low-resolution images. Results: A comparable median Dice similarity coefficient of 0.66 was achieved for the 20 validation images. For the 295-image dataset, results show a significant difference in lesion percentages between deceased and surviving patients, with p-values of 9.1×10⁻⁴ for low-resolution and 5.1×10⁻⁵ for high-resolution images. Furthermore, the difference in lesion percentages between high- and low-resolution images was 10% on average. Conclusion: The proposed approach can help estimate the size of lesions caused by COVID-19 in CT images and may be considered an alternative for obtaining a volumetric segmentation of this novel disease without requiring large amounts of labeled COVID-19 data to train an artificial-intelligence algorithm. The low variation between the estimated percentages of lesions in high- and low-resolution CT images suggests that the proposed approach is robust and may provide valuable information to differentiate between surviving and deceased patients.
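The final volumetric-estimation step — counting lesion voxels inside the lung-parenchyma mask and converting the count to a physical volume and a percentage — can be sketched as follows. This is an assumed, simplified reading of the pipeline; the function name and the isotropic voxel spacing are illustrative, not the authors' code:

```python
import numpy as np

def lesion_stats(lesion_mask, lung_mask, voxel_spacing_mm=(1.0, 1.0, 1.0)):
    """Lesion volume (cm^3) and lesion percentage (%) from binary masks."""
    voxel_cm3 = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> cm^3
    # Restrict the lesion mask to voxels inside the lung parenchyma.
    lesion_in_lung = np.logical_and(lesion_mask.astype(bool), lung_mask.astype(bool))
    volume_cm3 = lesion_in_lung.sum() * voxel_cm3
    percentage = 100.0 * lesion_in_lung.sum() / max(lung_mask.astype(bool).sum(), 1)
    return volume_cm3, percentage

# Toy volume: a 1000-voxel lung with a 50-voxel lesion, 1 mm isotropic spacing.
lung = np.ones((10, 10, 10), dtype=bool)
lesion = np.zeros_like(lung)
lesion[0, :5, :10] = True
voi, poi = lesion_stats(lesion, lung)
print(voi)  # 50 voxels * 0.001 cm^3 = 0.05
print(poi)  # 50/1000 voxels = 5.0
```

Computing the percentage relative to the lung mask, rather than the whole scan, is what makes the estimate comparable across patients with different lung sizes.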
Affiliation(s)
- Leopoldo Cendejas-Zaragoza
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Mexico City, Mexico
- Juan Gutiérrez Mejía
- Tecnologico de Monterrey, School of Medicine and Health Sciences, Mexico City, Mexico
- Alejandro Gabutti
- Department of Radiology and Imaging, Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán, Mexico City, Mexico
- Alejandro Santos-Díaz
- Tecnologico de Monterrey, School of Engineering and Sciences, Mexico City, Mexico
- Tecnologico de Monterrey, School of Medicine and Health Sciences, Monterrey, Mexico

26
SuperMini-seg: An ultra lightweight network for COVID-19 lung infection segmentation from CT images. Biomed Signal Process Control 2023; 85:104896. [PMID: 36998783 PMCID: PMC10028361 DOI: 10.1016/j.bspc.2023.104896] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 01/31/2023] [Accepted: 03/18/2023] [Indexed: 03/24/2023]
Abstract
The automatic segmentation of lung lesions from COVID-19 computed tomography (CT) images is helpful in establishing a quantitative model to diagnose and treat COVID-19. To this end, this study proposes a lightweight segmentation network called SuperMini-seg. We propose a new module, the transformer parallel convolution block (TPCB), which introduces both transformer and convolution operations in one module. SuperMini-seg adopts a double-branch parallel structure to downsample the image and designs a gated attention mechanism in the middle of the two parallel branches. The attentive hierarchical spatial pyramid (AHSP) module and a criss-cross attention module are also adopted, and the model contains only about 100K parameters. The model is also scalable: the parameter count of SuperMini-seg-V2 is reduced to around 70K. Compared with other advanced methods, the segmentation accuracy nearly reached that of state-of-the-art methods, while the computational efficiency remained high, which is convenient for practical deployment.
27
Lai M, Wang K, Ding C, Yin Y, Lin X, Xu C, Hu Z, Peng Z. Impact of inactivated COVID-19 vaccines on lung injury in B.1.617.2 (Delta) variant-infected patients. Ann Clin Microbiol Antimicrob 2023; 22:22. [PMID: 36944961 PMCID: PMC10029781 DOI: 10.1186/s12941-023-00569-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2022] [Accepted: 02/19/2023] [Indexed: 03/23/2023] Open
Abstract
BACKGROUND: Chest computed tomography (CT) is an important strategy for quantifying the severity of COVID-19 pneumonia. To what extent inactivated COVID-19 vaccines affect COVID-19 pneumonia on chest CT is not clear. METHODS: This study recruited 357 SARS-CoV-2 B.1.617.2 (Delta) variant-infected patients admitted to the Second Hospital of Nanjing from July to August 2021. An artificial-intelligence-assisted CT imaging system was used to quantify the severity of COVID-19 pneumonia. We compared the volume of infection (VOI), percentage of infection (POI), and chest CT scores among patients with different vaccination statuses. RESULTS: Of the 357 Delta variant-infected patients included in the analysis, 105 were unvaccinated, 72 were partially vaccinated, and 180 were fully vaccinated. Fully vaccinated patients had the least lung injury, as quantified by VOI (median VOI of 222.4 cm3, 126.6 cm3, and 39.9 cm3 in unvaccinated, partially vaccinated, and fully vaccinated patients, respectively; p < 0.001), POI (median POI of 7.60%, 3.55%, and 1.20%, respectively; p < 0.001), and chest CT scores (median CT score of 8.00, 6.00, and 4.00, respectively; p < 0.001). After adjustment for age, sex, comorbidity, time from illness onset to hospitalization, and viral load, full vaccination but not partial vaccination was significantly associated with less lung injury as quantified by VOI (adjusted coefficient [95% CI] for full vaccination: -106.10 [-167.30, -44.89]; p < 0.001), POI (adjusted coefficient [95% CI]: -3.88 [-5.96, -1.79]; p = 0.001), and chest CT scores (adjusted coefficient [95% CI]: -1.81 [-2.72, -0.91]; p < 0.001). The reduction in pulmonary injury was more profound in fully vaccinated patients of older age, with underlying diseases, and of female sex, as demonstrated by relatively larger absolute values of the adjusted coefficients. Finally, even within the non-severe COVID-19 population, fully vaccinated patients had less lung injury. CONCLUSION: Full vaccination, but not partial vaccination, significantly protected against lung injury manifested on chest CT. Our study provides additional evidence to encourage a full course of vaccination.
Affiliation(s)
- Miao Lai
- School of Public Health, Nanjing Medical University, 101 Longmian Ave, Nanjing, 211166, China
- Kai Wang
- The First Affiliated Hospital of Nanjing Medical University, Nanjing, 210029, Jiangsu, China
- Chengyuan Ding
- Center for Global Health, School of Public Health, Nanjing Medical University, Nanjing, 211166, China
- Yi Yin
- School of Public Health, Nanjing Medical University, 101 Longmian Ave, Nanjing, 211166, China
- Xiaoling Lin
- Center for Global Health, School of Public Health, Nanjing Medical University, Nanjing, 211166, China
- Chuanjun Xu
- Department of Radiology, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, 210003, China
- Zhiliang Hu
- Center for Global Health, School of Public Health, Nanjing Medical University, Nanjing, 211166, China
- Department of Infectious Diseases, The Second Hospital of Nanjing, Nanjing University of Chinese Medicine, Nanjing, 210003, China
- Zhihang Peng
- School of Public Health, Nanjing Medical University, 101 Longmian Ave, Nanjing, 211166, China

28
Suganya D, Kalpana R. Prognosticating various acute COVID lung disorders in COVID-19 patients using chest CT images. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 2023; 119:105820. [PMID: 36644478 PMCID: PMC9829610 DOI: 10.1016/j.engappai.2023.105820] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 12/12/2022] [Accepted: 01/02/2023] [Indexed: 06/17/2023]
Abstract
The global spread of coronavirus illness has surged dramatically, resulting in a catastrophic pandemic. Despite this, accurate screening remains a significant challenge due to difficulties in categorizing infection regions and the minuscule difference between typical pneumonia and COVID (coronavirus disease) pneumonia. A Mask Region-based Convolutional Neural Network (Mask R-CNN) is proposed to classify chest computed tomography (CT) images as COVID-positive or COVID-negative. COVID-19 has a direct effect on the lungs, causing damage to the alveoli, which leads to various lung complications. By fusing multi-class data, the severity level of patients can be classified using a meta-learning few-shot learning technique with a 50-layer deep residual network (ResNet-50) as the base classifier, tested on COVID-positive chest CT image data. From these classes, it is possible to predict the onset of acute COVID lung disorders such as sepsis, acute respiratory distress syndrome (ARDS), COVID pneumonia, and COVID bronchitis. The first classification method diagnoses whether a patient is affected by COVID-19; it achieves a mean average precision (mAP) of 91.52% and a G-mean of 97.69%, with a classification accuracy of 98.60%. The second classification method detects various acute lung disorders based on severity and provides better performance in all four stages: the average accuracy is 95.4%, the multiclass G-mean reaches 94.02%, and the AUC is 93.27%, compared with cutting-edge techniques. This enables healthcare professionals to correctly assess severity for potential treatments.
Affiliation(s)
- Suganya D
- Department of Computer Science and Engineering, Puducherry Technological University, Puducherry 605014, India
- Kalpana R
- Department of Computer Science and Engineering, Puducherry Technological University, Puducherry 605014, India

29
Wang X, Yang B, Pan X, Liu F, Zhang S. BPCN: bilateral progressive compensation network for lung infection image segmentation. Phys Med Biol 2023; 68. [PMID: 36580682 DOI: 10.1088/1361-6560/acaf21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Accepted: 12/29/2022] [Indexed: 12/31/2022]
Abstract
Lung infection image segmentation is a key technology for autonomous understanding of potential illness. However, current approaches usually lose low-level details, which leads to a considerable accuracy decrease for lung infection areas with varied shapes and sizes. In this paper, we propose the bilateral progressive compensation network (BPCN), which improves the accuracy of lung lesion segmentation through complementary learning of spatial and semantic features. The proposed BPCN is mainly composed of two deep branches: one performs multi-scale progressive fusion for main region features; the other is a flow-field-based adaptive body-edge aggregation operation that explicitly learns detail features of lung infection areas as a supplement to the region features. In addition, we propose a bilateral spatial-channel down-sampling to generate hierarchical complementary features, which avoids losing discriminative features through pooling operations. Experimental results show that our proposed network outperforms state-of-the-art segmentation methods for lung infection segmentation on two public image datasets, with or without a pseudo-label training strategy.
Affiliation(s)
- Xiaoyan Wang
- Zhejiang University of Technology, Zhejiang Province, People's Republic of China
- Baoqi Yang
- Zhejiang University of Technology, Zhejiang Province, People's Republic of China
- Xiang Pan
- Zhejiang University of Technology, Zhejiang Province, People's Republic of China
- Fuchang Liu
- Hangzhou Normal University, Zhejiang Province, People's Republic of China
- Sanyuan Zhang
- Zhejiang University, Zhejiang Province, People's Republic of China

30
Khan A, Khan SH, Saif M, Batool A, Sohail A, Waleed Khan M. A Survey of Deep Learning Techniques for the Analysis of COVID-19 and their usability for Detecting Omicron. J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165724] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Affiliation(s)
- Asifullah Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- PIEAS Artificial Intelligence Center (PAIC), Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
- Center for Mathematical Sciences, Pakistan Institute of Engineering & Applied Sciences, Islamabad, Pakistan
- Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Systems Engineering, University of Engineering and Applied Sciences (UEAS), Swat, Pakistan
- Mahrukh Saif
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Asiya Batool
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Anabia Sohail
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Computer Science, Faculty of Computing & Artificial Intelligence, Air University, Islamabad, Pakistan
- Muhammad Waleed Khan
- Pattern Recognition Lab, Department of Computer & Information Sciences, Pakistan Institute of Engineering & Applied Sciences, Nilore, Islamabad, Pakistan
- Department of Mechanical and Aerospace Engineering, Columbus, OH, USA

31
Asnawi MH, Pravitasari AA, Darmawan G, Hendrawati T, Yulita IN, Suprijadi J, Nugraha FAL. Lung and Infection CT-Scan-Based Segmentation with 3D UNet Architecture and Its Modification. Healthcare (Basel) 2023; 11:healthcare11020213. [PMID: 36673581 PMCID: PMC9859364 DOI: 10.3390/healthcare11020213] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2022] [Revised: 12/28/2022] [Accepted: 01/04/2023] [Indexed: 01/12/2023] Open
Abstract
COVID-19 is a disease that has spread across the world since December 2019. It has had a negative impact on individuals, governments, and even the global economy, which caused the WHO to declare COVID-19 a Public Health Emergency of International Concern (PHEIC). Until now, there has been no medicine that can completely cure COVID-19. Therefore, to prevent the spread and reduce the negative impact of COVID-19, an accurate and fast test is needed. Chest radiographic imaging technology, such as CXR and CT scans, plays a significant role in the diagnosis of COVID-19. In this study, CT-scan segmentation is carried out using the 3D version of the most recommended segmentation algorithm for biomedical images, namely 3D UNet, and three architectures derived from 3D UNet modifications, namely 3D ResUNet, 3D VGGUNet, and 3D DenseUNet. These four architectures are used in two segmentation cases: binary-class segmentation, where each architecture segments the lung area from a CT scan; and multi-class segmentation, where each architecture segments the lung and infection areas from a CT scan. Before entering the model, the dataset is preprocessed by applying a min-max scaler to scale pixel values to the range zero to one, and the CLAHE method is applied to reduce intensity inhomogeneity and noise in the data. Of the four models tested, surprisingly, the original 3D UNet produced the most satisfactory results compared with the other three architectures, although it required more iterations to reach maximum results. For the binary-class segmentation case, 3D UNet produced an IoU score, Dice score, and accuracy of 94.32%, 97.05%, and 99.37%, respectively. For the multi-class segmentation case, 3D UNet produced an IoU score, Dice score, and accuracy of 81.58%, 88.61%, and 98.78%, respectively. The use of 3D segmentation architectures will be very helpful for medical personnel because, apart from assisting the diagnosis of COVID-19, they can also determine the severity of the disease through 3D infection projections.
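The min-max preprocessing step mentioned above (scaling pixel values to the range zero to one) can be sketched as below; the function name and the toy Hounsfield-unit patch are illustrative assumptions, not the authors' code:

```python
import numpy as np

def minmax_scale(volume: np.ndarray) -> np.ndarray:
    """Scale voxel intensities linearly to [0, 1]."""
    vmin, vmax = volume.min(), volume.max()
    if vmax == vmin:                      # constant volume: avoid divide-by-zero
        return np.zeros_like(volume, dtype=np.float64)
    return (volume - vmin) / (vmax - vmin)

ct = np.array([[-1000.0, -500.0], [0.0, 400.0]])  # toy Hounsfield-unit patch
scaled = minmax_scale(ct)
print(scaled.min(), scaled.max())  # 0.0 1.0
```

Scaling per volume keeps network inputs in a fixed numeric range regardless of each scan's original intensity window.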
Affiliation(s)
- Mohammad Hamid Asnawi
- Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia
- Anindya Apriliyanti Pravitasari
- Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia
- Gumgum Darmawan
- Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia
- Triyani Hendrawati
- Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia
- Intan Nurma Yulita
- Department of Computer Science, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia
- Jadi Suprijadi
- Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia
- Farid Azhar Lutfi Nugraha
- Department of Statistics, Faculty of Mathematics and Natural Sciences, Universitas Padjadjaran, Bandung 45363, Indonesia

32
Nguyen-Trong K, Nguyen-Hoang K. Multi-modal approach for COVID-19 detection using coughs and self-reported symptoms. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-222863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
COVID-19 (coronavirus disease of 2019) is one of the most challenging healthcare crises of the twenty-first century. The pandemic has had many negative impacts on all aspects of life and livelihoods. Despite the recent development of relevant vaccines, such as Pfizer/BioNTech mRNA, AstraZeneca, and Moderna, the emergence of new virus mutations and their fast infection rates still pose significant threats to public health. In this context, early detection of the disease is an important factor in reducing its effect and quickly controlling the spread of the pandemic. Nevertheless, many countries still rely on methods that are either expensive and time-consuming (i.e., reverse-transcription polymerase chain reaction) or uncomfortable and difficult for self-testing (i.e., rapid antigen nasal tests). Recently, deep learning methods have been proposed as a potential solution for COVID-19 analysis. However, previous works usually focus on a single symptom, which can omit critical information for disease diagnosis. Therefore, in this study, we propose a multi-modal method to detect COVID-19 using cough sounds and self-reported symptoms. The proposed method consists of five neural networks to deal with different input features, including a CNN-biLSTM for MFCC features, EfficientNetV2 for Mel-spectrogram images, an MLP for self-reported symptoms, C-YAMNet for cough detection, and RNNoise for noise canceling. Experimental results demonstrate that our method outperformed other state-of-the-art methods, with a high AUC, accuracy, and F1-score of 98.6%, 96.9%, and 96.9% on the testing set.
Affiliation(s)
- Khanh Nguyen-Trong
- Faculty of Information Technology, Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
- Khoi Nguyen-Hoang
- Faculty of Information Technology, Posts and Telecommunications Institute of Technology, Hanoi, Viet Nam
33
Lasker A, Obaidullah SM, Chakraborty C, Roy K. Application of Machine Learning and Deep Learning Techniques for COVID-19 Screening Using Radiological Imaging: A Comprehensive Review. SN COMPUTER SCIENCE 2022; 4:65. [PMID: 36467853 PMCID: PMC9702883 DOI: 10.1007/s42979-022-01464-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Accepted: 10/18/2022] [Indexed: 11/26/2022]
Abstract
The lung, one of the most important organs in the human body, is often affected by SARS-type diseases, among which COVID-19 has been the most fatal in recent times. SARS-CoV-2 caused a pandemic that spread rapidly through communities, causing respiratory problems. In this situation, radiological imaging-based screening [mostly chest X-ray and computed tomography (CT) modalities] has been used for rapid screening of the disease, as it is a non-invasive approach. Owing to the scarcity of physicians, chest specialists, and expert doctors, several researchers have developed technology-enabled disease screening techniques with the help of artificial intelligence and machine learning (AI/ML). Researchers have introduced many AI/ML/DL (deep learning) algorithms for computer-assisted detection of COVID-19 using chest X-ray and CT images. In this paper, a comprehensive review has been conducted to summarize the works related to applications of AI/ML/DL for diagnostic prediction of COVID-19, mainly using X-ray and CT images. Following the PRISMA guidelines, a total of 265 articles were selected from 1715 articles published up to the third quarter of 2021. Furthermore, this review summarizes and compares the various ML/DL techniques, datasets, and their results using X-ray and CT imaging. A detailed discussion is provided on the novelty of the published works, along with their advantages and limitations.
Affiliation(s)
- Asifuzzaman Lasker
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Sk Md Obaidullah
- Department of Computer Science & Engineering, Aliah University, Kolkata, India
- Chandan Chakraborty
- Department of Computer Science & Engineering, National Institute of Technical Teachers’ Training & Research Kolkata, Kolkata, India
- Kaushik Roy
- Department of Computer Science, West Bengal State University, Barasat, India
34
A Multi-centric Evaluation of Deep Learning Models for Segmentation of COVID-19 Lung Lesions on Chest CT Scans. IRANIAN JOURNAL OF RADIOLOGY 2022. [DOI: 10.5812/iranjradiol-117992] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
Background: Chest computed tomography (CT) scan is one of the most common tools used for the diagnosis of patients with coronavirus disease 2019 (COVID-19). While segmentation of COVID-19 lung lesions by radiologists can be time-consuming, the application of advanced deep learning techniques for automated segmentation can be a promising step toward the management of this infection and similar diseases in the future. Objectives: This study aimed to evaluate the performance and generalizability of deep learning-based models for the automated segmentation of COVID-19 lung lesions. Patients and Methods: Four datasets (2 private and 2 public) were used in this study. The first and second private datasets included 297 (147 healthy and 150 COVID-19 cases) and 82 COVID-19 subjects. The public datasets included the COVID19-P20 (20 COVID-19 cases from 2 centers) and the MosMedData datasets (50 COVID-19 patients from a single center). Model comparisons were made based on the Dice similarity coefficient (DSC), receiver operating characteristic (ROC) curve, and area under the curve (AUC). The predicted CT severity scores by the model were compared with those of radiologists by measuring the Pearson’s correlation coefficients (PCC). Also, DSC was used to compare the inter-rater agreement of the model and expert against that of 2 experts on an unseen dataset. Finally, the generalizability of the model was evaluated, and a simple calibration strategy was proposed. Results: The VGG16-UNet model showed the best performance across both private datasets, with a DSC of 84.23% ± 1.73% on the first private dataset and 56.61% ± 1.48% on the second private dataset. Similar results were obtained on public datasets, with a DSC of 60.10% ± 2.34% on the COVID19-P20 dataset and 66.28% ± 2.80% on a combined dataset of COVID19-P20 and MosMedData. 
The Pearson correlation coefficients between the model's predicted CT severity scores and those of radiologists were 0.89 and 0.85 on the first private dataset and 0.77 and 0.74 on the second private dataset for the right and left lungs, respectively. Moreover, the model trained on the first private dataset was examined on the second private dataset and compared against the radiologist, which revealed a performance gap of 5.74% based on DSCs. A calibration strategy was employed to reduce this gap to 0.53%. Conclusion: The results demonstrated the potential of the proposed model in localizing COVID-19 lesions on CT scans across multiple datasets; its accuracy was comparable to that of radiologists and could assist them in diagnostic and treatment procedures. The effect of model calibration on performance on an unseen dataset was also reported, increasing the DSC by more than 5%.
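Model comparisons in this study are based on the Dice similarity coefficient (DSC). A minimal sketch of how the DSC is computed for a pair of binary masks (the standard definition, not code from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # 2|A ∩ B| / (|A| + |B|); eps avoids division by zero on empty masks.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, |pred| = 3, |target| = 3, so DSC ≈ 0.667
```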
35
Developing and validating a machine learning prognostic model for alerting to imminent deterioration of hospitalized patients with COVID-19. Sci Rep 2022; 12:19220. [PMID: 36357439 PMCID: PMC9648491 DOI: 10.1038/s41598-022-23553-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2022] [Accepted: 11/02/2022] [Indexed: 11/12/2022] Open
Abstract
Our study aimed to develop and validate a new approach, embodied in a machine learning-based model, for sequentially monitoring hospitalized COVID-19 patients and directing professional attention to patients whose deterioration is imminent. Model development employed real-world patient data (598 prediction events for 210 patients), internal validation (315 prediction events for 97 patients), and external validation (1373 prediction events for 307 patients). Results show significant divergence in the longitudinal values of eight routinely collected blood parameters several days before deterioration. Our model uses these signals to predict the personal likelihood of transition from non-severe to severe status within well-specified short time windows. Internal validation of the model's prediction accuracy showed ROC AUCs of 0.8 and 0.79 for prediction scopes of 48 and 96 h, respectively; external validation showed ROC AUCs of 0.7 and 0.73 for the same prediction scopes. The results indicate the feasibility of predicting the forthcoming deterioration of non-severe COVID-19 patients from eight routinely collected blood parameters: neutrophil, lymphocyte, monocyte, and platelet counts, the neutrophil-to-lymphocyte ratio, CRP, LDH, and D-dimer. A prospective clinical study and an impact assessment will allow implementation of this model in the clinic to improve care, streamline resources, and ease hospital burden by focusing medical attention on potentially deteriorating patients in a timely manner.
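The ROC AUCs reported above can be computed without tracing the full curve, using the rank-sum (Mann-Whitney U) formulation. A small illustrative sketch, not the authors' code:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    scores: predicted risk per event; labels: 1 = deteriorated, 0 = not.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    # Tied scores get the average of their ranks.
    for s in np.unique(scores):
        mask = scores == s
        ranks[mask] = ranks[mask].mean()
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

An AUC of 1.0 means every deteriorating patient was scored above every stable one; 0.5 is chance level.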
36
Peng Y, Zhang T, Guo Y. Cov-TransNet: Dual branch fusion network with transformer for COVID-19 infection segmentation. Biomed Signal Process Control 2022; 80:104366. [PMCID: PMC9671472 DOI: 10.1016/j.bspc.2022.104366] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 09/06/2022] [Accepted: 10/30/2022] [Indexed: 11/09/2022]
Abstract
Segmentation of COVID-19 infection is a challenging task due to the blurred boundaries and low contrast between infected and non-infected areas in COVID-19 CT images, especially for small infection regions. In this paper, COV-TransNet is presented to achieve high-precision segmentation of COVID-19 infection regions. The proposed segmentation network is composed of an auxiliary branch and a backbone branch. The auxiliary branch adopts a transformer to provide global information, helping the convolution layers in the backbone branch learn specific local features better. A multi-scale feature attention module is introduced to capture contextual information and adaptively enhance feature representations. Notably, a high internal resolution is maintained during the attention calculation. Moreover, a feature activation module effectively reduces the loss of valid information during sampling. The proposed network can take full advantage of different depths and multi-scale features to achieve high sensitivity in identifying lesions of varied sizes and locations. We experiment on several datasets for the COVID-19 lesion segmentation task, including COVID-19-CT-Seg, UESTC-COVID-19, MosMedData and COVID-19-MedSeg. Comprehensive results demonstrate that COV-TransNet outperforms existing state-of-the-art segmentation methods and achieves better segmentation performance for multi-scale lesions.
37
Lu X, Xu Y, Yuan W. DBF-Net: a semi-supervised dual-task balanced fusion network for segmenting infected regions from lung CT images. EVOLVING SYSTEMS 2022; 14:519-532. [PMID: 37193370 PMCID: PMC9483907 DOI: 10.1007/s12530-022-09466-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Accepted: 09/11/2022] [Indexed: 11/25/2022]
Abstract
Accurate segmentation of infected regions in lung computed tomography (CT) images is essential to improve the timeliness and effectiveness of treatment for coronavirus disease 2019 (COVID-19). However, the main difficulties in developing lung lesion segmentation for COVID-19 remain the fuzzy boundary of the lung-infected region, the low contrast between the infected region and normal tissue, and the difficulty of obtaining labeled data. To this end, we propose a novel dual-task consistent network framework that uses multiple inputs to continuously learn and extract lung infection region features, and which generates reliable label images (pseudo-labels) to expand the dataset. Specifically, we periodically feed multiple sets of raw and data-enhanced images into the two trunk branches of the network; the characteristics of the lung infection region are extracted by a lightweight double convolution (LDC) module and fusiform equilibrium fusion pyramid (FEFP) convolution in the backbone. According to the learned features, the infected regions are segmented and pseudo-labels are produced following a semi-supervised learning strategy, which effectively alleviates the problem of unlabeled data. Our proposed semi-supervised dual-task balanced fusion network (DBF-Net) creates pseudo-labels on the COVID-SemiSeg dataset and the COVID-19 CT segmentation dataset. Furthermore, we perform lung infection segmentation with the DBF-Net model, achieving a segmentation sensitivity of 70.6% and specificity of 92.8%. The results indicate that the proposed network greatly enhances the segmentation of COVID-19 infection.
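The pseudo-label idea described above can be reduced, in its simplest form, to confidence thresholding on the network's per-pixel probabilities. A minimal sketch with illustrative thresholds (the paper's actual pseudo-label generation is more involved):

```python
import numpy as np

def make_pseudo_labels(probs, lo=0.1, hi=0.9):
    """Turn per-pixel foreground probabilities into pseudo-labels.

    Pixels the model is confident about (prob >= hi or prob <= lo) get a
    hard label; the rest are marked -1 and excluded from the loss.
    The thresholds are illustrative, not taken from the paper.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.full(probs.shape, -1, dtype=int)
    labels[probs >= hi] = 1   # confident foreground
    labels[probs <= lo] = 0   # confident background
    return labels

pl = make_pseudo_labels(np.array([0.95, 0.5, 0.05]))
# → [1, -1, 0]
```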
Affiliation(s)
- Xiaoyan Lu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People’s Republic of China
- Yang Xu
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People’s Republic of China
- Guiyang Aluminum Magnesium Design and Research Institute Co., Ltd, Guiyang, Guizhou, People’s Republic of China
- Wenhao Yuan
- College of Big Data and Information Engineering, Guizhou University, Guiyang, Guizhou, People’s Republic of China
38
Roth HR, Xu Z, Tor-Díez C, Sanchez Jacob R, Zember J, Molto J, Li W, Xu S, Turkbey B, Turkbey E, Yang D, Harouni A, Rieke N, Hu S, Isensee F, Tang C, Yu Q, Sölter J, Zheng T, Liauchuk V, Zhou Z, Moltz JH, Oliveira B, Xia Y, Maier-Hein KH, Li Q, Husch A, Zhang L, Kovalev V, Kang L, Hering A, Vilaça JL, Flores M, Xu D, Wood B, Linguraru MG. Rapid artificial intelligence solutions in a pandemic-The COVID-19-20 Lung CT Lesion Segmentation Challenge. Med Image Anal 2022; 82:102605. [PMID: 36156419 PMCID: PMC9444848 DOI: 10.1016/j.media.2022.102605] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 07/01/2022] [Accepted: 08/25/2022] [Indexed: 11/30/2022]
Abstract
Artificial intelligence (AI) methods for the automatic detection and quantification of COVID-19 lesions in chest computed tomography (CT) might play an important role in the monitoring and management of the disease. We organized an international challenge and competition for the development and comparison of AI algorithms for this task, which we supported with public data and state-of-the-art benchmark methods. Board-certified radiologists annotated 295 public images from two sources (A and B) for algorithm training (n=199, source A), validation (n=50, source A) and testing (n=23, source A; n=23, source B). There were 1,096 registered teams, of which 225 and 98 completed the validation and testing phases, respectively. The challenge showed that AI models could be rapidly designed by diverse teams with the potential to measure disease or facilitate timely and patient-specific interventions. This paper provides an overview and the major outcomes of the COVID-19 Lung CT Lesion Segmentation Challenge — 2020.
Affiliation(s)
- Holger R Roth
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Ziyue Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Carlos Tor-Díez
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
- Ramon Sanchez Jacob
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- Jonathan Zember
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- Jose Molto
- Division of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- Wenqi Li
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Sheng Xu
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Baris Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Evrim Turkbey
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Dong Yang
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Ahmed Harouni
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Nicola Rieke
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Shishuai Hu
- School of Computer Science and Engineering, Northwestern Polytechnical University, China
- Fabian Isensee
- Applied Computer Vision Lab, Helmholtz Imaging, Heidelberg, Germany; Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Qinji Yu
- Shanghai Jiao Tong University, China
- Jan Sölter
- Luxembourg Centre for Systems Biomedicine, University of Luxembourg, Luxembourg
- Tong Zheng
- School of Informatics, Nagoya University, Japan
- Vitali Liauchuk
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus
- Ziqi Zhou
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
- Bruno Oliveira
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, Braga, Portugal; ICVS/3B's - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal; 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Yong Xia
- School of Computer Science and Engineering, Northwestern Polytechnical University, China
- Klaus H Maier-Hein
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Qikai Li
- Shanghai Jiao Tong University, China
- Andreas Husch
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Vassili Kovalev
- Biomedical Image Analysis Department, United Institute of Informatics Problems, Belarus
- Li Kang
- Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, China
- Alessa Hering
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- João L Vilaça
- 2Ai - School of Technology, IPCA, Barcelos, Portugal
- Mona Flores
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Daguang Xu
- NVIDIA, Bethesda, MD, USA; Santa Clara, CA, USA; Munich, Germany
- Bradford Wood
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA; School of Medicine and Health Sciences, George Washington University, Washington, DC, USA
39
Two-stage hybrid network for segmentation of COVID-19 pneumonia lesions in CT images: a multicenter study. Med Biol Eng Comput 2022; 60:2721-2736. [PMID: 35856130 PMCID: PMC9294771 DOI: 10.1007/s11517-022-02619-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 06/15/2022] [Indexed: 12/15/2022]
Abstract
COVID-19 has been spreading continuously since its outbreak, and detection of its manifestations in the lung via chest computed tomography (CT) imaging is an indispensable step in investigating the diagnosis and prognosis of COVID-19. Automatic and accurate segmentation of infected lesions is highly desirable for fast and accurate diagnosis and further assessment of COVID-19 pneumonia. However, two-dimensional methods generally neglect the inter-slice context, while three-dimensional methods usually have high GPU memory consumption and calculation cost. To address these limitations, we propose a two-stage hybrid UNet to automatically segment infected regions, evaluated on multicenter data obtained from seven hospitals. Moreover, we train a 3D-ResNet for COVID-19 pneumonia screening. In the segmentation tasks, the Dice coefficient reaches 97.23% for lung segmentation and 84.58% for lesion segmentation. In the classification task, our model identifies COVID-19 pneumonia with an area under the receiver-operating characteristic curve of 0.92, an accuracy of 92.44%, a sensitivity of 93.94%, and a specificity of 92.45%. Compared with other state-of-the-art methods, the proposed approach could serve as an efficient assisting tool for radiologists in COVID-19 diagnosis from CT images.
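The sensitivity and specificity figures above follow directly from the confusion counts of a binary prediction. A small reference sketch (standard definitions, not the authors' code):

```python
import numpy as np

def sens_spec(pred, target):
    """Sensitivity (TP rate) and specificity (TN rate) of a binary prediction."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.sum(pred & target)      # predicted positive, truly positive
    tn = np.sum(~pred & ~target)    # predicted negative, truly negative
    fn = np.sum(~pred & target)     # missed positives
    fp = np.sum(pred & ~target)     # false alarms
    return tp / (tp + fn), tn / (tn + fp)
```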
40
Liu H, Wang J, Geng Y, Li K, Wu H, Chen J, Chai X, Li S, Zheng D. Fine-Grained Assessment of COVID-19 Severity Based on Clinico-Radiological Data Using Machine Learning. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:10665. [PMID: 36078380 PMCID: PMC9518491 DOI: 10.3390/ijerph191710665] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/21/2022] [Accepted: 08/24/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND The severe and critical cases of COVID-19 had high mortality rates. Clinical features, laboratory data, and radiological features provide important references for the assessment of COVID-19 severity. Machine learning analysis of clinico-radiological features, especially quantitative computed tomography (CT) image analysis results, may achieve the early, accurate, and fine-grained assessment of COVID-19 severity, which is an urgent clinical need. OBJECTIVE To evaluate whether machine learning algorithms using CT-based clinico-radiological features can achieve the accurate fine-grained assessment of COVID-19 severity. METHODS Clinico-radiological features were collected from 78 COVID-19 patients with different severities. A neural network was developed to automatically measure the lesion volume from CT images. Severity was clinically diagnosed using two-type (severe and non-severe) and fine-grained four-type (mild, regular, severe, critical) classifications, respectively. To investigate the key features of COVID-19 severity, statistical analyses were performed between patients' clinico-radiological features and severity. Four machine learning algorithms (decision tree, random forest, SVM, and XGBoost) were trained and applied in the assessment of COVID-19 severity using clinico-radiological features. RESULTS The CT imaging features (CT score and lesion volume) were significantly related to COVID-19 severity (p < 0.05 in the statistical analysis for both the two-type and fine-grained four-type classifications). The CT imaging features significantly improved the accuracy of the machine learning algorithms in assessing COVID-19 severity in the fine-grained four-type classification. With the CT analysis results added, the four-type classification achieved performance comparable to the two-type one.
CONCLUSIONS CT-based clinico-radiological features can provide an important reference for the accurate fine-grained assessment of illness severity using machine learning to achieve the early triage of COVID-19 patients.
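The two-type labels are a coarsening of the four-type ones, so both evaluations can share one set of predictions. A toy sketch of that mapping and the resulting accuracies (toy data, not from the study):

```python
import numpy as np

# Fine-grained four-type labels collapse onto the coarse two-type labels.
FOUR_TO_TWO = {"mild": "non-severe", "regular": "non-severe",
               "severe": "severe", "critical": "severe"}

def accuracy(pred, true):
    return float(np.mean(np.asarray(pred) == np.asarray(true)))

# Toy example: four patients, four-type predictions from some classifier.
true4 = ["mild", "severe", "critical", "regular"]
pred4 = ["mild", "critical", "critical", "severe"]

acc4 = accuracy(pred4, true4)                      # exact four-type hits
acc2 = accuracy([FOUR_TO_TWO[p] for p in pred4],   # coarse labels agree more often
                [FOUR_TO_TWO[t] for t in true4])
```

Confusing "severe" with "critical" is penalized in the four-type score but not in the two-type one, which is why the coarse accuracy is always at least as high.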
Affiliation(s)
- Haipeng Liu
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
- Jiangtao Wang
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
- Yayuan Geng
- Scientific Research Department, HY Medical Technology, B-2 Building, Dongsheng Science Park, Beijing 100192, China
- Kunwei Li
- Department of Radiology, The Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai 519000, China
- Han Wu
- College of Engineering, Mathematics and Physical Sciences, Streatham Campus, University of Exeter, North Park Road, Exeter EX4 4QF, UK
- Jian Chen
- Department of Radiology, The Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai 519000, China
- Xiangfei Chai
- Scientific Research Department, HY Medical Technology, B-2 Building, Dongsheng Science Park, Beijing 100192, China
- Shaolin Li
- Department of Radiology, The Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai 519000, China
- Guangdong Provincial Key Laboratory of Biomedical Imaging, The Fifth Affiliated Hospital, Sun Yat-sen University, Zhuhai 519000, China
- Dingchang Zheng
- Research Centre for Intelligent Healthcare, Coventry University, Coventry CV1 5FB, UK
41
Zammit J, Fung DLX, Liu Q, Leung CKS, Hu P. Semi-supervised COVID-19 CT image segmentation using deep generative models. BMC Bioinformatics 2022; 23:343. [PMID: 35974325 PMCID: PMC9381397 DOI: 10.1186/s12859-022-04878-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2022] [Accepted: 08/03/2022] [Indexed: 11/29/2022] Open
Abstract
Background A recurring problem in image segmentation is a lack of labelled data. This problem is especially acute in the segmentation of lung computed tomography (CT) scans of patients with Coronavirus Disease 2019 (COVID-19), for a simple reason: the disease had not been prevalent long enough to generate a great number of labels. Semi-supervised learning promises a way to learn from unlabelled data and has seen tremendous advancements in recent years. However, due to the complexity of its label space, those advancements cannot be directly applied to image segmentation. That said, it is this same complexity that makes it extremely expensive to obtain pixel-level labels, making semi-supervised learning all the more appealing. This study seeks to bridge this gap by proposing a novel model that combines the image segmentation abilities of deep convolutional networks with the semi-supervised learning abilities of generative models, applied to chest CT images of patients with COVID-19. Results We propose a novel generative model called the shared variational autoencoder (SVAE). The SVAE utilizes a five-layer deep hierarchy of latent variables with deep convolutional mappings between them, resulting in a generative model well suited to lung CT images. We then add a novel component to the final layer of the SVAE that forces the model to reconstruct the input image using a segmentation that must match the ground-truth segmentation whenever one is present. We name this final model StitchNet. Conclusion We compare StitchNet to other image segmentation models on a high-quality dataset of CT images from COVID-19 patients and show that our model achieves comparable performance. We also explore the potential limitations and advantages of our proposed algorithm and propose some future research directions for this challenging issue.
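Variational autoencoders such as the SVAE rest on two standard ingredients: the reparameterization trick and a closed-form KL term for diagonal Gaussians. A minimal NumPy sketch of both (generic VAE math, not the StitchNet implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, so the draw stays differentiable
    with respect to the encoder outputs mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
```

When the posterior exactly matches the prior (mu = 0, log_var = 0) the KL term vanishes, which is the regularization target the encoder is pulled toward.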
Affiliation(s)
- Judah Zammit
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Daryl L X Fung
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Qian Liu
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Department of Biochemistry and Medical Genetics, University of Manitoba, Room 308 - Basic Medical Sciences Building, 745 Bannatyne Avenue, Winnipeg, MB, R3E 0J3, Canada
- Carson Kai-Sang Leung
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Pingzhao Hu
- Department of Computer Science, University of Manitoba, Winnipeg, MB, R3T 2N2, Canada
- Department of Biochemistry and Medical Genetics, University of Manitoba, Room 308 - Basic Medical Sciences Building, 745 Bannatyne Avenue, Winnipeg, MB, R3E 0J3, Canada
- CancerCare Manitoba Research Institute, Winnipeg, MB, Canada
42
Gomes R, Kamrowski C, Langlois J, Rozario P, Dircks I, Grottodden K, Martinez M, Tee WZ, Sargeant K, LaFleur C, Haley M. A Comprehensive Review of Machine Learning Used to Combat COVID-19. Diagnostics (Basel) 2022; 12:diagnostics12081853. [PMID: 36010204 PMCID: PMC9406981 DOI: 10.3390/diagnostics12081853] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 07/22/2022] [Accepted: 07/26/2022] [Indexed: 12/19/2022] Open
Abstract
Coronavirus disease (COVID-19) has had a significant impact on global health since the start of the pandemic in 2019. As of June 2022, over 539 million cases have been confirmed worldwide with over 6.3 million deaths as a result. Artificial Intelligence (AI) solutions such as machine learning and deep learning have played a major part in this pandemic for the diagnosis and treatment of COVID-19. In this research, we review these modern tools deployed to solve a variety of complex problems. We explore research that focused on analyzing medical images using AI models for identification, classification, and tissue segmentation of the disease. We also explore prognostic models that were developed to predict health outcomes and optimize the allocation of scarce medical resources. Longitudinal studies were conducted to better understand COVID-19 and its effects on patients over a period of time. This comprehensive review of the different AI methods and modeling efforts will shed light on the role that AI has played and what path it intends to take in the fight against COVID-19.
Affiliation(s)
- Rahul Gomes
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Connor Kamrowski
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Jordan Langlois
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Papia Rozario
- Department of Geography and Anthropology, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Ian Dircks
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Keegan Grottodden
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Matthew Martinez
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Wei Zhong Tee
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Kyle Sargeant
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Corbin LaFleur
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
- Mitchell Haley
- Department of Computer Science, University of Wisconsin-Eau Claire, Eau Claire, WI 54701, USA
43
Latif G, Morsy H, Hassan A, Alghazo J. Novel Coronavirus and Common Pneumonia Detection from CT Scans Using Deep Learning-Based Extracted Features. Viruses 2022; 14:v14081667. [PMID: 36016288 PMCID: PMC9414828 DOI: 10.3390/v14081667] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Revised: 07/23/2022] [Accepted: 07/26/2022] [Indexed: 11/23/2022] Open
Abstract
COVID-19, which was declared a pandemic on 11 March 2020, is still infecting millions to date, as the vaccines that have been developed do not prevent the disease but rather reduce the severity of the symptoms. Until a vaccine is developed that can prevent COVID-19 infection, the testing of individuals will be a continuous process. Medical personnel monitor and treat all health conditions; hence, the time-consuming process of monitoring and testing all individuals for COVID-19 becomes an impossible task, especially as COVID-19 shares symptoms with the common cold and pneumonia. Some over-the-counter tests have been developed and sold, but they are unreliable and add an additional burden, because false-positive cases have to visit hospitals and undergo specialized diagnostic tests to confirm the diagnosis. Therefore, systems that can detect and diagnose COVID-19 automatically, without human intervention, remain an urgent priority, and will remain so because the same technology can be used for future pandemics and other health conditions. In this paper, we propose a modified machine learning (ML) process that integrates deep learning (DL) algorithms for feature extraction with well-known classifiers to accurately detect and diagnose COVID-19 from chest CT scans. Publicly available datasets were obtained from the China Consortium of Chest CT Image Investigation (CC-CCII). The highest average accuracy obtained was 99.9% using the modified ML process, when 2000 features were extracted using GoogleNet and ResNet18 with a support vector machine (SVM) classifier. The results obtained using the modified ML process were higher than those of similar methods reported in the extant literature using the same or comparably sized datasets; thus, this study is considered of added value to the current body of knowledge.
Further research in this field is required to develop methods that can be applied in hospitals and can better equip mankind to be prepared for any future pandemics.
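As an illustration of the pipeline this abstract describes (deep features fed to a classical classifier), here is a minimal Python sketch. The feature extractors are toy stand-ins for the GoogleNet/ResNet18 backbones, and a nearest-centroid rule stands in for the SVM; none of this is the authors' code.

```python
# Sketch of the "deep features -> classical classifier" pipeline.
# googlenet_features / resnet18_features are hypothetical stand-ins for
# the real pretrained backbones; a nearest-centroid rule replaces the SVM.

def googlenet_features(image):
    # Toy 2-dim "embedding" of a flattened scan.
    return [sum(image), max(image)]

def resnet18_features(image):
    return [min(image), float(len(image))]

def extract(image):
    # The paper concatenates features from both backbones (2000 in total);
    # here the concatenation yields a 4-dim vector.
    return googlenet_features(image) + resnet18_features(image)

def fit_centroids(X, y):
    # Per-class mean feature vector (stand-in for training an SVM).
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(vecs) for col in zip(*vecs)]
            for c, vecs in groups.items()}

def predict(centroids, x):
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda c: dist(centroids[c], x))

# Toy "scans": class 0 = low intensity, class 1 = high intensity.
scans = [[1, 2, 1], [2, 1, 2], [8, 9, 8], [9, 8, 9]]
labels = [0, 0, 1, 1]
model = fit_centroids([extract(s) for s in scans], labels)
print(predict(model, extract([1, 1, 2])))  # -> 0
```

The point of the design is that the expensive, pretrained CNNs are used only as fixed feature extractors, so the final classifier can be retrained cheaply on small medical datasets.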
Affiliation(s)
- Ghazanfar Latif
  - Computer Science Department, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
  - Department of Computer Sciences and Mathematics, Université du Québec à Chicoutimi, 555 Boulevard de l'Université, Chicoutimi, QC G7H 2B1, Canada
- Hamdy Morsy
  - Department of Applied Natural Sciences, College of Community, Qassim University, Buraydah 52571, Saudi Arabia
  - Department of Electronics and Communications, College of Engineering, Helwan University, Cairo 11792, Egypt
- Asmaa Hassan
  - Faculty of Medicine, Helwan University, Helwan 11795, Egypt
- Jaafar Alghazo
  - Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA
44
Cai S, Lin X, Sun Y, Lin Z, Wang X, Lin N, Zhao X. Quantitative parameters obtained from gadobenate dimeglumine-enhanced MRI at the hepatobiliary phase can predict post-hepatectomy liver failure and overall survival in patients with hepatocellular carcinoma. Eur J Radiol 2022; 154:110449. [PMID: 35901599 DOI: 10.1016/j.ejrad.2022.110449] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Revised: 07/03/2022] [Accepted: 07/19/2022] [Indexed: 02/06/2023]
Abstract
PURPOSE To determine the value of quantitative parameters obtained from gadobenate dimeglumine-enhanced magnetic resonance imaging (MRI) at the hepatobiliary phase for predicting post-hepatectomy liver failure and overall survival in patients with hepatocellular carcinoma. METHOD This multicenter retrospective study included 307 patients who underwent gadobenate dimeglumine-enhanced MRI. The quantitative liver-to-portal vein contrast ratio (LPC) and liver-spleen contrast ratio (LSC) at the hepatobiliary phase were measured. Logistic regression analyses were used to evaluate risk factors for post-hepatectomy liver failure. The capacity of the LPC and LSC to predict post-hepatectomy liver failure was evaluated via receiver operating characteristic (ROC) curves. Cox proportional hazards regression was used to identify prognostic factors for overall survival (OS). RESULTS Post-hepatectomy liver failure was observed in 69 patients (22.5%). The LPC and LSC were independent risk factors for the development of post-hepatectomy liver failure, with areas under the ROC curves of 0.882 and 0.782, respectively; the predictive performance of the LPC was superior to that of the LSC. The LPC and LSC were also significant prognostic factors for OS, with cut-off values of 1.07 and 0.89, respectively. The 5-year OS rate was higher in patients with LPC > 1.07 or LSC > 0.89 than in patients with LPC ≤ 1.07 or LSC ≤ 0.89. CONCLUSIONS The quantitative parameters obtained from gadobenate dimeglumine-enhanced MRI at the hepatobiliary phase are effective imaging biomarkers for predicting both post-hepatectomy liver failure and overall survival in patients with hepatocellular carcinoma.
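The study scores each quantitative parameter (LPC, LSC) as a predictor of liver failure by the area under the ROC curve. A minimal sketch of that evaluation, computing AUC through its Mann-Whitney U equivalence; the biomarker values and failure labels below are illustrative, not the study's data:

```python
# AUC via the Mann-Whitney equivalence: the probability that a randomly
# chosen positive case outscores a randomly chosen negative case.

def roc_auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy hepatobiliary-phase ratios; lower values taken to mean higher risk,
# so the risk score is the negated ratio.
lpc = [1.30, 1.21, 1.12, 0.95, 0.88, 0.80]
failure = [0, 0, 0, 0, 1, 1]          # 1 = post-hepatectomy liver failure
risk = [-v for v in lpc]
print(roc_auc(risk, failure))  # -> 1.0 (perfect separation in this toy set)
```

The reported AUCs of 0.882 (LPC) and 0.782 (LSC) are exactly this quantity computed on the real cohort.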
Affiliation(s)
- Shuo Cai
  - Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong Province 250021, China
- Xiangtao Lin
  - Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong Province 250021, China
- Yan Sun
  - Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, Shandong Province 250021, China
- Zhengyu Lin
  - Department of Interventional Radiology, First Affiliated Hospital of Fujian Medical University, Fuzhou, Fujian Province 350000, China
- Ximing Wang
  - Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong Province 250021, China
- Nan Lin
  - Department of Medical Imaging, Shandong Public Health Clinical Center, Jinan, Shandong Province 250021, China
- Xinya Zhao
  - Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong Province 250021, China
45
Heidari A, Toumaj S, Navimipour NJ, Unal M. A privacy-aware method for COVID-19 detection in chest CT images using lightweight deep conventional neural network and blockchain. Comput Biol Med 2022; 145:105461. [PMID: 35366470 PMCID: PMC8958272 DOI: 10.1016/j.compbiomed.2022.105461] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 03/13/2022] [Accepted: 03/24/2022] [Indexed: 12/16/2022]
Abstract
With the global spread of the COVID-19 epidemic, a reliable method is required for identifying COVID-19 victims. The biggest issue in detecting the virus is the lack of testing kits that are both reliable and affordable, and the virus's rapid dissemination makes it difficult for medical professionals to find positive patients. The next real-life issue is sharing data with hospitals around the world while respecting each organization's privacy concerns: the primary challenges for training a global Deep Learning (DL) model are creating a collaborative platform and preserving personal confidentiality. This paper provides a model that receives a small quantity of data from various sources, such as organizations or hospital departments, and trains a global DL model utilizing blockchain-based Convolutional Neural Networks (CNNs). In addition, we use the Transfer Learning (TL) technique to initialize layers rather than initializing them randomly, and to determine which layers should be removed before selection. The blockchain system verifies the data, and the DL method trains the model globally while preserving each institution's confidentiality. Furthermore, we gathered real, new COVID-19 patient data. Finally, we ran extensive experiments utilizing Python and its libraries, such as Scikit-Learn and TensorFlow, to assess the proposed method. We evaluated the work on five datasets, from the Boukan Dr. Shahid Gholipour, Tabriz Emam Reza, Mahabad Emam Khomeini, Maragheh Dr. Beheshti, and Miandoab Abbasi hospitals, and our technique outperforms state-of-the-art methods on average in terms of precision (2.7%), recall (3.1%), F1 (2.9%), and accuracy (2.8%).
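The collaborative-training idea in this abstract (each hospital trains locally; only model weights are shared and averaged, so scans never leave the institution) can be sketched as follows. The single-parameter linear model is purely illustrative, and the blockchain verification layer of the paper is omitted:

```python
# Federated-averaging sketch: each site takes a local gradient step on
# its private (x, y) data, and only the resulting weights are pooled.

def local_update(w, data, lr=0.1):
    # One least-squares gradient step on y ~ w * x; data stays on-site.
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_round(global_w, site_datasets):
    local_ws = [local_update(global_w, d) for d in site_datasets]
    return sum(local_ws) / len(local_ws)  # only weights leave the sites

sites = [[(1.0, 2.0), (2.0, 4.0)],   # hospital A's private data
         [(1.0, 2.2), (3.0, 6.0)]]   # hospital B's private data
w = 0.0
for _ in range(50):
    w = federated_round(w, sites)
print(round(w, 2))  # -> 2.01, close to the pooled least-squares slope
```

The privacy property is structural: `federated_round` receives only per-site weights, never the `(x, y)` pairs, which is the confidentiality guarantee the abstract appeals to.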
Affiliation(s)
- Arash Heidari
  - Department of Computer Engineering, Tabriz Branch, Islamic Azad University, Tabriz, Iran
  - Department of Computer Engineering, Shabestar Branch, Islamic Azad University, Shabestar, Iran
- Shiva Toumaj
  - Urmia University of Medical Sciences, Urmia, Iran
- Nima Jafari Navimipour (corresponding author)
  - Department of Computer Engineering, Kadir Has University, Istanbul, Turkey
- Mehmet Unal
  - Department of Computer Engineering, Nisantasi University, Istanbul, Turkey
46
Gupta P, Siddiqui MK, Huang X, Morales-Menendez R, Panwar H, Terashima-Marin H, Wajid MS. COVID-WideNet-A capsule network for COVID-19 detection. Appl Soft Comput 2022; 122:108780. [PMID: 35369122 PMCID: PMC8962064 DOI: 10.1016/j.asoc.2022.108780] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2021] [Revised: 02/08/2022] [Accepted: 03/22/2022] [Indexed: 02/03/2023]
Abstract
Ever since the outbreak of COVID-19, the entire world has been grappling with panic over its rapid spread. Consequently, it is of the utmost importance to detect its presence: timely diagnostic testing leads to the quick identification, treatment, and isolation of infected people. A number of deep learning classifiers have been shown to provide encouraging results, with higher accuracy than the conventional RT-PCR test. Chest radiography, particularly using X-ray images, is a prime imaging modality for detecting suspected COVID-19 patients; however, the performance of these approaches still needs to be improved. In this paper, we propose a capsule network called COVID-WideNet for diagnosing COVID-19 cases using chest X-ray (CXR) images. Experimental results demonstrate that a discriminatively trained, multi-layer capsule network achieves state-of-the-art performance on the COVIDx dataset. In particular, COVID-WideNet performs better than other CNN-based approaches for the diagnosis of COVID-19-infected patients. Further, the proposed COVID-WideNet has 20 times fewer trainable parameters than other CNN-based models. This results in fast and efficient diagnosis of COVID-19 symptoms, achieving an Area Under the Curve (AUC) of 0.95 and 91% accuracy, sensitivity, and specificity. This may also assist radiologists in detecting COVID-19 and its variants, such as Delta.
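For readers unfamiliar with capsule networks, the vector nonlinearity they are built on can be sketched quickly. This is the standard "squash" function from the capsule-network literature, not code from this paper: it keeps a capsule vector's direction but compresses its length into (0, 1), so the length can act as a detection probability.

```python
import math

def squash(v, eps=1e-9):
    # Scale v by |v|^2 / (1 + |v|^2), preserving direction.
    sq = sum(x * x for x in v)
    norm = math.sqrt(sq)
    scale = sq / (1.0 + sq) / (norm + eps)
    return [scale * x for x in v]

out = squash([3.0, 4.0])            # input length 5
print(round(math.hypot(*out), 3))   # -> 0.962 (= 25/26, always < 1)
```

Long input vectors map to lengths near 1 and short ones near 0, which is how a capsule expresses confidence that the entity it represents is present.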
Affiliation(s)
- P.K. Gupta
  - Department of Computer Science and Engineering, Jaypee University of Information Technology, Waknaghat, Solan, HP 173234, India
- Mohammad Khubeb Siddiqui (corresponding author)
  - School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey, N.L., Mexico
- Xiaodi Huang
  - School of Computing Mathematics and Engineering, Charles Sturt University, Albury, NSW, Australia
- Harsh Panwar
  - Queen Mary University of London, Mile End Rd, Bethnal Green, London, United Kingdom
- Hugo Terashima-Marin
  - School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey, N.L., Mexico
- Mohammad Saif Wajid
  - School of Engineering and Sciences, Tecnologico de Monterrey, Monterrey, N.L., Mexico
47
Aggarwal P, Mishra NK, Fatimah B, Singh P, Gupta A, Joshi SD. COVID-19 image classification using deep learning: Advances, challenges and opportunities. Comput Biol Med 2022; 144:105350. [PMID: 35305501 PMCID: PMC8890789 DOI: 10.1016/j.compbiomed.2022.105350] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 02/10/2022] [Accepted: 02/22/2022] [Indexed: 12/16/2022]
Abstract
Corona Virus Disease-2019 (COVID-19), caused by Severe Acute Respiratory Syndrome-Corona Virus-2 (SARS-CoV-2), is a highly contagious disease that has affected the lives of millions around the world. Chest X-Ray (CXR) and Computed Tomography (CT) imaging modalities are widely used to obtain a fast and accurate diagnosis of COVID-19. However, manual identification of the infection from radiographic images is extremely challenging because it is time-consuming and highly prone to human error. Artificial Intelligence (AI) techniques have shown potential and are being exploited further in the development of automated, accurate solutions for COVID-19 detection. Among AI methodologies, Deep Learning (DL) algorithms, particularly Convolutional Neural Networks (CNNs), have gained significant popularity for the classification of COVID-19. This paper summarizes and reviews a number of significant research publications on the DL-based classification of COVID-19 through CXR and CT images. We also present an outline of the current state-of-the-art advances and a critical discussion of open challenges. We conclude our study by enumerating some future directions of research in COVID-19 imaging classification.
Affiliation(s)
- Binish Fatimah
  - The Department of ECE, CMR Institute of Technology, Bengaluru, India
- Pushpendra Singh (corresponding author)
  - The Department of ECE, National Institute of Technology Hamirpur, HP, India
- Anubha Gupta
  - The Department of ECE, IIIT-Delhi, Delhi 110020, India
- Shiv Dutt Joshi
  - The Department of EE, Indian Institute of Technology Delhi, Delhi 110016, India
48
Kör H, Erbay H, Yurttakal AH. Diagnosing and differentiating viral pneumonia and COVID-19 using X-ray images. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:39041-39057. [PMID: 35493416 PMCID: PMC9042669 DOI: 10.1007/s11042-022-13071-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 01/29/2022] [Accepted: 04/04/2022] [Indexed: 05/31/2023]
Abstract
Coronavirus-caused diseases are common worldwide and can harm both human health and the world economy. Most people will encounter a coronavirus at some point in their lives, and infection may result in pneumonia. Nowadays, the world is fighting against the new coronavirus, COVID-19. Its rate of spread is high, and the disease caught the world unprepared. In most regions of the world, COVID-19 testing is not possible due to the absence of diagnostic kits, and even where kits exist, their false-negative rate (giving a negative result for a person infected with COVID-19) is high. Moreover, early detection of COVID-19 is crucial to keeping its morbidity and mortality rates low. The symptoms of different pneumonias are alike, and COVID-19 is no exception. The chest X-ray is the main reference in diagnosing pneumonia; thus, the need for radiologists has increased considerably, not only to detect COVID-19 but also to identify the other abnormalities it causes. Herein, a transfer learning-based multi-class convolutional neural network model is proposed for the automatic detection of pneumonia and for differentiating non-COVID-19 pneumonia from COVID-19. The model, which takes chest X-ray images as input, extracts radiographic patterns from the images, turning them into valuable information, and monitors structural differences in the lungs caused by the diseases. The model was developed using two public datasets: the Cohen dataset and the Kermany dataset. It achieves an average training accuracy of 0.9886, an average training recall of 0.9829, and an average training precision of 0.9837, with average training false-positive and false-negative rates of 0.0085 and 0.0171, respectively. On the test set, the average accuracy, recall, and precision are 97.78%, 96.67%, and 96.67%, respectively. According to these results, the proposed model is promising: it can quickly and accurately classify chest images and help doctors as a second reader in their final decision.
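The averaged accuracy, recall, and precision figures above are macro-averages over a multi-class confusion matrix (e.g. normal / non-COVID-19 pneumonia / COVID-19). A small sketch with illustrative counts, not the paper's actual confusion matrix:

```python
# Macro-averaged metrics from a square confusion matrix:
# rows = true class, columns = predicted class.

def macro_metrics(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(n)) / total
    # Per-class recall = diagonal / row sum, averaged over classes.
    recall = sum(cm[i][i] / sum(cm[i]) for i in range(n)) / n
    # Per-class precision = diagonal / column sum, averaged over classes.
    precision = sum(cm[i][i] / sum(cm[r][i] for r in range(n))
                    for i in range(n)) / n
    return accuracy, recall, precision

cm = [[48, 1, 1],    # illustrative 3-class counts
      [2, 47, 1],
      [0, 1, 49]]
acc, rec, prec = macro_metrics(cm)
print(round(acc, 4), round(rec, 4), round(prec, 4))  # -> 0.96 0.96 0.96
```

Macro-averaging weights each class equally, which matters for COVID-19 datasets where the positive class is usually far smaller than the others.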
Affiliation(s)
- Hakan Kör
  - Department of Computer Engineering, Engineering Faculty, Hitit University, Çorum, Turkey
- Hasan Erbay
  - Computer Engineering Department, Engineering Faculty, University of Turkish Aeronautical Association, 06790 Etimesgut, Ankara, Turkey
- Ahmet Haşim Yurttakal
  - Computer Engineering Department, Engineering Faculty, Afyon Kocatepe University, 03204 Erenler, Afyon, Turkey
49
Shakhovska N, Yakovyna V, Chopyak V. A new hybrid ensemble machine-learning model for severity risk assessment and post-COVID prediction system. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2022; 19:6102-6123. [PMID: 35603393 DOI: 10.3934/mbe.2022285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Starting in December 2019, the COVID-19 pandemic has globally strained medical resources and caused significant mortality. It is commonly recognized that the severity of SARS-CoV-2 disease depends on both comorbidity and the state of the patient's immune system, which is reflected in several biomarkers. The development of early diagnosis and disease severity prediction methods can reduce the burden on the health care system and increase the effectiveness of treatment and rehabilitation of patients with severe cases. This study aims to develop and validate an ensemble machine-learning model based on clinical and immunological features for severity risk assessment and post-COVID rehabilitation duration prediction for SARS-CoV-2 patients. The dataset, consisting of 35 features and 122 instances, was collected from the Lviv regional rehabilitation center. It contains age, gender, weight, height, BMI, CAT, the 6-minute walking test, pulse, external respiration function, oxygen saturation, and 15 immunological markers, used to predict the relationship between disease duration and biomarkers with a machine learning approach. The predictions are assessed through the area under the receiver-operating curve, classification accuracy, precision, recall, and F1 score. A new hybrid ensemble feature selection model for a post-COVID prediction system is proposed as an automatic feature cut-off rank identifier, and a three-layer, high-accuracy stacking ensemble classification model for intelligent analysis of short medical datasets is presented. Together with the weak predictors, association rules allowed the classification quality to be improved. The proposed ensemble uses a random forest model as an aggregator to generalize the results of the weak predictors. The performance of the three-layer stacking ensemble classification model (AUC 0.978; CA 0.920; F1 score 0.921; precision 0.924; recall 0.920) was higher than that of five machine learning models: a tree algorithm with forward pruning; a Naïve Bayes classifier; a support vector machine with RBF kernel; logistic regression; and a calibrated learner with sigmoid function and decision threshold optimization. Aging-related biomarkers (CD3+, CD4+, CD8+, CD22+) were examined to predict post-COVID rehabilitation duration. The best accuracy was reached by the support vector machine with a linear kernel (MAPE = 0.0787) and the random forest model (RMSE = 1.822). The proposed three-layer stacking ensemble classification model predicted SARS-CoV-2 disease severity based on cytokines and physiological biomarkers. The results indicate that changes in the studied biomarkers associated with disease severity can be used to monitor severity and forecast rehabilitation duration.
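The stacking idea above, in which weak level-one predictors emit predictions that a level-two aggregator combines, can be sketched as follows. The single-biomarker threshold rules are illustrative stand-ins for the weak predictors, and a majority vote replaces the paper's random forest aggregator:

```python
# Two-level stacking sketch: level one is a set of weak rules, level two
# is an aggregator over their outputs.

def make_threshold_learner(feature_idx, threshold):
    # Weak rule: predict 1 (severe) when one biomarker exceeds a cut-off.
    return lambda x: 1 if x[feature_idx] > threshold else 0

def stack_predict(base_learners, aggregator, x):
    level1 = [clf(x) for clf in base_learners]  # weak predictions
    return aggregator(level1)                   # level-two combination

majority = lambda votes: 1 if sum(votes) * 2 > len(votes) else 0

learners = [make_threshold_learner(0, 5.0),    # e.g. a cytokine level
            make_threshold_learner(1, 0.4),    # e.g. a cell-count ratio
            make_threshold_learner(2, 100.0)]  # e.g. a physiological marker
print(stack_predict(learners, majority, [6.0, 0.5, 90.0]))  # -> 1
```

In the paper the aggregator is itself a trained model (a random forest over the level-one outputs), which lets the ensemble learn which weak predictors to trust rather than weighting them equally.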
Affiliation(s)
- Natalya Shakhovska
  - Department of Artificial Intelligence, Lviv Polytechnic National University, Lviv 79013, Ukraine
- Vitaliy Yakovyna
  - Department of Artificial Intelligence, Lviv Polytechnic National University, Lviv 79013, Ukraine
  - Faculty of Mathematics and Computer Science, University of Warmia and Mazury, Olsztyn 10719, Poland
- Valentyna Chopyak
  - Department of Clinical Immunology and Allergology, Danylo Halytskyi Lviv National University, Lviv 79010, Ukraine
50
Yousefzadeh M, Zolghadri M, Hasanpour M, Salimi F, Jafari R, Vaziri Bozorg M, Haseli S, Mahmoudi Aqeel Abadi A, Naseri S, Ay M, Nazem-Zadeh MR. Statistical analysis of COVID-19 infection severity in lung lobes from chest CT. INFORMATICS IN MEDICINE UNLOCKED 2022; 30:100935. [PMID: 35382230 PMCID: PMC8970623 DOI: 10.1016/j.imu.2022.100935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 03/28/2022] [Accepted: 03/29/2022] [Indexed: 11/23/2022] Open
Abstract
Detection of the COVID-19 virus is possible through reverse transcription-polymerase chain reaction (RT-PCR) kits and computed tomography (CT) images of the lungs. CT images provide a faster diagnosis than the RT-PCR method. In addition to its low false-negative rate, CT is also used for prognosis, in determining the severity of the disease and the proposed treatment method. In this study, we estimated a probability density function (PDF) to examine the infections caused by the virus. We collected 232 chest CT scans of suspected patients and had them labeled by two radiologists into six classes: a healthy class and five classes of different infection severity. To segment the lung lobes, we used a pre-trained U-Net model with an average Dice similarity coefficient (DSC) greater than 0.96. First, we extracted the PDF to grade the infection of each lobe and selected five specific thresholds as the feature vector. We then passed this feature vector to a support vector machine (SVM) model to make the final prediction of infection severity. Using t-test statistics, we calculated p-values at different pixel thresholds and report the significant differences in pixel values; in most cases, the p-value was less than 0.05. Our model was developed on roughly labeled data without any manual segmentation and estimated lung infection involvement with an area under the curve (AUC) in the range of [0.64, 0.87]. The introduced model can be used to generate a systematic automated report for individual patients infected by COVID-19.
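The feature construction this abstract describes, summarizing each lobe's intensity distribution by a handful of threshold-based values before handing them to the SVM, can be sketched as follows. One simple reading is the fraction of voxels exceeding each threshold; the threshold values and voxel intensities below are illustrative, not the study's:

```python
# Summarize a lobe's intensity distribution as a short feature vector:
# for each fixed threshold, the fraction of voxels exceeding it.

def threshold_features(voxels, thresholds=(0.2, 0.4, 0.6, 0.8, 0.9)):
    n = len(voxels)
    return [sum(v > t for v in voxels) / n for t in thresholds]

lobe = [0.1, 0.3, 0.5, 0.7, 0.95]   # toy normalized voxel intensities
print(threshold_features(lobe))     # -> [0.8, 0.6, 0.4, 0.2, 0.2]
```

Collapsing each lobe to five numbers like this is what makes a small, roughly labeled dataset workable: the SVM sees a fixed-length, low-dimensional vector instead of raw voxels.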