1. Khoshbakht S, Zare S, Khatuni M, Ghodsirad M, Bayat M, Mirabootalebi FS, Pirayesh E, Amoui M, Norouzi G. Diagnostic Value of 99mTc-Ubiquicidin Scintigraphy in Differentiating Bacterial from Nonbacterial Pneumonia. Cancer Biother Radiopharm 2025. PMID: 40040519; DOI: 10.1089/cbr.2024.0202.
Abstract
Purpose: Differentiating purely viral from bacterial etiologies continues to be a challenging yet key step in the management of community-acquired pneumonia (CAP), further highlighted since the COVID-19 pandemic. This study aims to evaluate the utility of 99mTc-ubiquicidin (UBI) in the differentiation of bacterial from nonbacterial pneumonia. Methods: A total of 30 patients with CAP were allocated into group A, bacterial pneumonia (n = 15), and group B, viral pneumonia (n = 15). All patients underwent a 99mTc-UBI scan, with planar and single-photon emission computed tomography (SPECT) images of the thorax acquired at 30 and 180 min postinjection. Target-to-background (T/B) ratios were calculated, with values >1.4 interpreted as positive for bacterial infection. Results were correlated with computed tomography (CT) and polymerase chain reaction (PCR) findings. Results: The UBI scan was positive in 43.3% (n = 13) of patients, with sensitivity, specificity, and accuracy of 86.7%, 100%, and 93.3%, respectively, and close correlation with chest CT and PCR results (p < 0.001). Planar images were generally not helpful. Receiver operating characteristic curve analysis indicated similar diagnostic performance for 30-min and 3-h SPECT images when T/B thresholds of 1.2 and 1.33, respectively, were applied. Conclusions: 99mTc-UBI SPECT is a promising modality for differentiating purely viral from bacterial or superimposed bacterial pneumonia and provides reliable evidence either to mandate or withhold administration of antibiotics in patients with CAP.
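The reported diagnostic metrics follow directly from the counts implied by the abstract (13 true positives and 2 false negatives among the 15 bacterial cases; 15 true negatives and no false positives among the 15 viral cases). A minimal sketch of that arithmetic, using the standard definitions:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard definitions: sensitivity = TP/(TP+FN),
    specificity = TN/(TN+FP), accuracy = (TP+TN)/total."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts implied by the abstract: 15 bacterial cases (13 scan-positive),
# 15 viral cases (all scan-negative)
sens, spec, acc = diagnostic_metrics(tp=13, fn=2, tn=15, fp=0)
# sens ≈ 0.867, spec = 1.0, acc ≈ 0.933, matching the reported 86.7%, 100%, 93.3%
```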
Affiliation(s)
- Sepideh Khoshbakht: Department of Nuclear Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran; Clinical Research Development Unit of Shohada-e Tajrish Medical Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Saba Zare: Department of Nuclear Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran
- Mahdi Khatuni: Department of Internal Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran
- Mohammadali Ghodsirad: Department of Nuclear Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran; Clinical Research Development Unit of Shohada-e Tajrish Medical Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohadeseh Bayat: Department of Nuclear Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran
- Elahe Pirayesh: Department of Nuclear Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran; Clinical Research Development Unit of Shohada-e Tajrish Medical Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mahasti Amoui: Department of Nuclear Medicine, Shohada-e Tajrish Hospital, Shahid Beheshti Medical University, Tehran, Iran; Clinical Research Development Unit of Shohada-e Tajrish Medical Center, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ghazal Norouzi: Department of Nuclear Medicine, The Ottawa Hospital, University of Ottawa, Faculty of Medicine, Ottawa, Canada
2. Ahmad IS, Dai J, Xie Y, Liang X. Deep learning models for CT image classification: a comprehensive literature review. Quant Imaging Med Surg 2025; 15:962-1011. PMID: 39838987; PMCID: PMC11744119; DOI: 10.21037/qims-24-1400.
Abstract
Background and Objective Computed tomography (CT) imaging plays a crucial role in the early detection and diagnosis of life-threatening diseases, particularly in respiratory illnesses and oncology. The rapid advancement of deep learning (DL) has revolutionized CT image analysis, enhancing diagnostic accuracy and efficiency. This review explores the impact of advanced DL methodologies in CT imaging, with a particular focus on their applications in coronavirus disease 2019 (COVID-19) detection and lung nodule classification. Methods A comprehensive literature search was conducted, examining the evolution of DL architectures in medical imaging from conventional convolutional neural networks (CNNs) to sophisticated foundational models (FMs). We reviewed publications from major databases, focusing on developments in CT image analysis using DL from 2013 to 2023. Our search criteria included all types of articles, with a focus on peer-reviewed research papers and review articles in English. Key Content and Findings The review reveals that DL, particularly advanced architectures like FMs, has transformed CT image analysis by streamlining interpretation processes and enhancing diagnostic capabilities. We found significant advancements in addressing global health challenges, especially during the COVID-19 pandemic, and in ongoing efforts for lung cancer screening. The review also addresses technical challenges in CT image analysis, including data variability, the need for large high-quality datasets, and computational demands. Innovative strategies such as transfer learning, data augmentation, and distributed computing are explored as solutions to these challenges. Conclusions This review underscores the pivotal role of DL in advancing CT image analysis, particularly for COVID-19 and lung nodule detection. The integration of DL models into clinical workflows shows promising potential to enhance diagnostic accuracy and efficiency. 
However, challenges remain in areas of interpretability, validation, and regulatory compliance. The review advocates for continued research, interdisciplinary collaboration, and ethical considerations as DL technologies become integral to clinical practice. While traditional imaging techniques remain vital, the integration of DL represents a significant advancement in medical diagnostics, with far-reaching implications for future research, clinical practice, and healthcare policy.
Affiliation(s)
- Isah Salim Ahmad: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Jingjing Dai: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Yaoqin Xie: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Xiaokun Liang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
3. Aravinda CV, Sudeepa KB, Pradeep S, Suraksha P, Lin M. Leveraging compact convolutional transformers for enhanced COVID-19 detection in chest X-rays: a Grad-CAM visualization approach. Front Big Data 2024; 7:1489020. PMID: 39736985; PMCID: PMC11683681; DOI: 10.3389/fdata.2024.1489020.
Affiliation(s)
- Aravinda C. V: Department of Computer Science and Engineering, NITTE Mahalinga Adyantaya Memorial Institute of Technology, NITTE Deemed to Be University, Karkala, Karnataka, India
- Sudeepa K. B: Department of Computer Science and Engineering, NITTE Mahalinga Adyantaya Memorial Institute of Technology, NITTE Deemed to Be University, Karkala, Karnataka, India
- S. Pradeep: Department of Computer Science and Engineering, Government Engineering College, Chamarajanagar, Karnataka, India
- P. Suraksha: Department of Computer Science and Engineering, Vidhya Vardhaka College of Engineering, Mysore, Karnataka, India
- Meng Lin: Department of Electronic and Computer Engineering (The Graduate School of Science and Engineering), Ritsumeikan University, Kusatsu, Shiga, Japan
4. Jian M, Wu R, Xu W, Zhi H, Tao C, Chen H, Li X. VascuConNet: an enhanced connectivity network for vascular segmentation. Med Biol Eng Comput 2024; 62:3543-3554. PMID: 38898202; DOI: 10.1007/s11517-024-03150-8.
Abstract
Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes is a pivotal challenge in medical image segmentation, driven by the demands of clinical applications that focus on disease localization and quantification. In this study, a novel segmentation model is designed specifically for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long short-term memory (LSTM) network. U-Net is characterized by a small number of parameters and high segmentation efficiency, while the LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation information enhancement module, inserted into the model's bottom layer, to obtain feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function consisting of connectivity loss, boundary loss, and cross-entropy loss. Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets: CHASE_DB1, DRIVE, and ARIA.
Affiliation(s)
- Muwei Jian: School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China; School of Information Science and Technology, Linyi University, Linyi, China
- Ronghua Wu: School of Information Science and Technology, Linyi University, Linyi, China
- Wenjin Xu: School of Information Science and Technology, Linyi University, Linyi, China
- Huixiang Zhi: School of Information Science and Technology, Linyi University, Linyi, China
- Chen Tao: School of Information Science and Technology, Linyi University, Linyi, China
- Hongyu Chen: School of Information Science and Technology, Linyi University, Linyi, China
- Xiaoguang Li: Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing, China
5. Bani Baker Q, Hammad M, Al-Smadi M, Al-Jarrah H, Al-Hamouri R, Al-Zboon SA. Enhanced COVID-19 Detection from X-ray Images with Convolutional Neural Network and Transfer Learning. J Imaging 2024; 10:250. PMID: 39452413; PMCID: PMC11508642; DOI: 10.3390/jimaging10100250.
Abstract
The global spread of Coronavirus disease (COVID-19) prompted urgent research into scalable and effective detection methods to curb its outbreak. Early diagnosis of COVID-19 patients has emerged as a pivotal strategy for mitigating the spread of the disease. Automated COVID-19 detection using chest X-ray (CXR) imaging has significant potential for facilitating large-scale screening and epidemic control efforts. This paper introduces a novel approach that employs state-of-the-art convolutional neural network (CNN) models for accurate COVID-19 detection. The employed datasets each comprised 15,000 X-ray images. We addressed both binary (Normal vs. Abnormal) and multi-class (Normal, COVID-19, Pneumonia) classification tasks. Comprehensive evaluations were performed using six distinct CNN-based models (Xception, Inception-V3, ResNet50, VGG19, DenseNet201, and InceptionResNet-V2) for both tasks. The Xception model demonstrated exceptional performance, achieving 98.13% accuracy, 98.14% precision, 97.65% recall, and a 97.89% F1-score in binary classification, while in multi-class classification it yielded 87.73% accuracy, 90.20% precision, 87.73% recall, and an 87.49% F1-score. Moreover, the other models, such as ResNet50, demonstrated competitive performance compared with many recent works.
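The binary-task figures above are internally consistent: the F1-score is the harmonic mean of precision and recall, and the reported 98.14% precision and 97.65% recall reproduce the stated 97.89% F1. A quick check:

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Xception binary-classification figures from the abstract
f1 = f1_score(0.9814, 0.9765)
# f1 ≈ 0.9789, consistent with the reported 97.89% F1-score
```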
Affiliation(s)
- Qanita Bani Baker: Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Mahmoud Hammad: Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Mohammed Al-Smadi: Digital Learning and Online Education Office (DLOE), Qatar University, Doha 2713, Qatar
- Heba Al-Jarrah: Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Rahaf Al-Hamouri: Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
- Sa’ad A. Al-Zboon: Faculty of Computer and Information Technology, Jordan University of Science and Technology, P.O. Box 3030, Irbid 22110, Jordan
6. Chou HY, Lin YC, Hsieh SY, Chou HH, Lai CS, Wang B, Tsai YS. Deep Learning Model for Prediction of Bronchopulmonary Dysplasia in Preterm Infants Using Chest Radiographs. J Imaging Inform Med 2024; 37:2063-2073. PMID: 38499706; PMCID: PMC11522213; DOI: 10.1007/s10278-024-01050-9.
Abstract
Bronchopulmonary dysplasia (BPD) is common in preterm infants and may result in pulmonary vascular disease, compromising lung function. This study aimed to employ artificial intelligence (AI) techniques to help physicians accurately diagnose BPD in preterm infants in a timely and efficient manner. This retrospective study involves two datasets: a lung region segmentation dataset comprising 1491 chest radiographs of infants, and a BPD prediction dataset comprising 1021 chest radiographs of preterm infants. Transfer learning of a pre-trained machine learning model was employed for lung region segmentation and image fusion for BPD prediction to enhance the performance of the AI model. The lung segmentation model uses transfer learning to achieve a dice score of 0.960 for preterm infants with ≤ 168 h postnatal age. The BPD prediction model exhibited superior diagnostic performance compared to that of experts and demonstrated consistent performance for chest radiographs obtained at ≤ 24 h postnatal age, and those obtained at 25 to 168 h postnatal age. This study is the first to use deep learning on preterm chest radiographs for lung segmentation to develop a BPD prediction model with an early detection time of less than 24 h. Additionally, this study compared the model's performance according to both NICHD and Jensen criteria for BPD. Results demonstrate that the AI model surpasses the diagnostic accuracy of experts in predicting lung development in preterm infants.
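Segmentation quality in studies like this one is scored with the Dice coefficient (the 0.960 figure above). A minimal, self-contained sketch of the metric, computed here over sets of foreground pixel coordinates rather than image arrays:

```python
def dice_coefficient(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks,
    represented here as sets of foreground pixel coordinates."""
    if not pred and not target:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * len(pred & target) / (len(pred) + len(target))

# Toy example: predicted mask overlaps the target in 3 pixels
pred = {(0, 0), (0, 1), (1, 0), (1, 1)}
target = {(0, 0), (0, 1), (1, 0)}
d = dice_coefficient(pred, target)  # 2*3 / (4+3) = 6/7 ≈ 0.857
```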
Affiliation(s)
- Hao-Yang Chou: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Yung-Chieh Lin: Department of Pediatrics, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, 704, Taiwan
- Sun-Yuan Hsieh: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, 70101, Taiwan; Institute of Medical Informatics, National Cheng Kung University, Tainan, 70101, Taiwan; Institute of Manufacturing Information and Systems, National Cheng Kung University, Tainan, 70101, Taiwan; Department of Computer Science and Information Engineering, National Chi Nan University, Nantou, 54561, Taiwan; Institute of Information Science, Academia Sinica, Taipei, 115, Taiwan; Research Center for Information Technology Innovation, Academia Sinica, Taipei, 115, Taiwan
- Hsin-Hung Chou: Department of Computer Science and Information Engineering, National Chi Nan University, Nantou, 54561, Taiwan
- Cheng-Shih Lai: Department of Medical Imaging, National Cheng Kung University Hospital, Tainan, 701401, Taiwan
- Bow Wang: Department of Medical Imaging, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, 704, Taiwan
- Yi-Shan Tsai: Department of Medical Imaging, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, 704, Taiwan
7. Rai S, Bhatt JS, Patra SK. An AI-Based Low-Risk Lung Health Image Visualization Framework Using LR-ULDCT. J Imaging Inform Med 2024; 37:2047-2062. PMID: 38491236; PMCID: PMC11522248; DOI: 10.1007/s10278-024-01062-5.
Abstract
In this article, we propose an AI-based low-risk visualization framework for lung health monitoring using low-resolution ultra-low-dose CT (LR-ULDCT). We present a novel deep cascade processing workflow that achieves diagnostic visualization on LR-ULDCT (<0.3 mSv) on par with high-resolution CT (HRCT), which entails on the order of 100 mSv of radiation. To this end, we build a low-risk and affordable deep cascade network comprising three sequential deep processes: restoration, super-resolution (SR), and segmentation. Given a degraded LR-ULDCT, the first network learns a restoration function, without supervision, from augmented patch-based dictionaries and residuals. The restored version is then super-resolved to the target (sensor) resolution. Here, we combine perceptual and adversarial losses in a novel GAN to establish closeness between the probability distributions of the generated SR-ULDCT and the restored LR-ULDCT. The SR-ULDCT is then presented to the segmentation network, which first separates the chest portion from the SR-ULDCT and then performs lobe-wise colorization. Finally, we extract the five lobes to account for the presence of ground-glass opacity (GGO) in the lung. Hence, our AI-based system provides low-risk visualization of the degraded input LR-ULDCT at various stages, i.e., restored LR-ULDCT, restored SR-ULDCT, and segmented SR-ULDCT, and achieves the diagnostic power of HRCT. We perform case studies on real datasets of COVID-19, pneumonia, and pulmonary edema/congestion, comparing our results with the state-of-the-art. Ablation experiments are conducted to better visualize the different operating pipelines. Finally, we present a verification report by fourteen (14) experienced radiologists and pulmonologists.
Affiliation(s)
- Swati Rai: Indian Institute of Information Technology Vadodara, Vadodara, India
- Jignesh S Bhatt: Indian Institute of Information Technology Vadodara, Vadodara, India
8. Zhou Y, Mei S, Wang J, Xu Q, Zhang Z, Qin S, Feng J, Li C, Xing S, Wang W, Zhang X, Li F, Zhou Q, He Z, Gao Y. Development and validation of a deep learning-based framework for automated lung CT segmentation and acute respiratory distress syndrome prediction: a multicenter cohort study. EClinicalMedicine 2024; 75:102772. PMID: 39170939; PMCID: PMC11338113; DOI: 10.1016/j.eclinm.2024.102772.
Abstract
Background Acute respiratory distress syndrome (ARDS) is a life-threatening condition with a high incidence and mortality rate in intensive care unit (ICU) admissions. Early identification of patients at high risk for developing ARDS is crucial for timely intervention and improved clinical outcomes. However, the complex pathophysiology of ARDS makes early prediction challenging. This study aimed to develop an artificial intelligence (AI) model for automated lung lesion segmentation and early prediction of ARDS to facilitate timely intervention in the intensive care unit. Methods A total of 928 ICU patients with chest computed tomography (CT) scans were included from November 2018 to November 2021 at three centers in China. Patients were divided into a retrospective cohort for model development and internal validation, and three independent cohorts for external validation. A deep learning-based framework using the UNet Transformer (UNETR) model was developed to perform the segmentation of lung lesions and early prediction of ARDS. We employed various data augmentation techniques using the Medical Open Network for AI (MONAI) framework, enhancing the training sample diversity and improving the model's generalization capabilities. The performance of the deep learning-based framework was compared with a Densenet-based image classification network and evaluated in external and prospective validation cohorts. The segmentation performance was assessed using the Dice coefficient (DC), and the prediction performance was assessed using area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. The contributions of different features to ARDS prediction were visualized using Shapley Explanation Plots. This study was registered with the China Clinical Trial Registration Centre (ChiCTR2200058700). Findings The segmentation task using the deep learning framework achieved a DC of 0.734 ± 0.137 in the validation set. 
For the prediction task, the deep learning-based framework achieved AUCs of 0.916 [0.858-0.961], 0.865 [0.774-0.945], 0.901 [0.835-0.955], and 0.876 [0.804-0.936] in the internal validation cohort, external validation cohort I, external validation cohort II, and prospective validation cohort, respectively. It outperformed the Densenet-based image classification network in terms of prediction accuracy. Moreover, the ARDS prediction model identified lung lesion features and clinical parameters such as C-reactive protein, albumin, bilirubin, platelet count, and age as significant contributors to ARDS prediction. Interpretation The deep learning-based framework using the UNETR model demonstrated high accuracy and robustness in lung lesion segmentation and early ARDS prediction, and had good generalization ability and clinical applicability. Funding This study was supported by grants from the Shanghai Renji Hospital Clinical Research Innovation and Cultivation Fund (RJPY-DZX-008) and Shanghai Science and Technology Development Funds (22YF1423300).
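The AUCs reported above summarize ranking performance. As a self-contained illustration (not the study's code), the AUC can be computed directly from labels and predicted scores via the Mann-Whitney identity: the probability that a randomly chosen positive case is scored above a randomly chosen negative one, with ties counted as half:

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney identity: fraction of (positive, negative)
    pairs in which the positive outscores the negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
# 3 of the 4 positive-negative pairs are correctly ordered, so auc = 0.75
```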
Affiliation(s)
- Yang Zhou: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shuya Mei: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiemin Wang: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qiaoyi Xu: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhiyun Zhang: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shaojie Qin: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jinhua Feng: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Congye Li: Department of Critical Care Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shunpeng Xing: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wei Wang: Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Xiaolin Zhang: Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Feng Li: Shanghai Public Health Clinical Center, Fudan University, Shanghai, China
- Quanhong Zhou: Department of Critical Care Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhengyu He: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yuan Gao: Department of Critical Care Medicine, Ren Ji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
9. Du T, Sun Y, Wang X, Jiang T, Xu N, Boukhers Z, Grzegorzek M, Sun H, Li C. A non-enhanced CT-based deep learning diagnostic system for COVID-19 infection at high risk among lung cancer patients. Front Med (Lausanne) 2024; 11:1444708. PMID: 39188873; PMCID: PMC11345710; DOI: 10.3389/fmed.2024.1444708.
Abstract
Background Pneumonia and lung cancer have a mutually reinforcing relationship. Lung cancer patients are prone to contracting COVID-19, with poorer prognoses. Additionally, COVID-19 infection can impact anticancer treatments for lung cancer patients. Developing an early diagnostic system for COVID-19 pneumonia can help improve the prognosis of lung cancer patients with COVID-19 infection. Method This study proposes a neural network for COVID-19 diagnosis based on non-enhanced CT scans, consisting of two 3D convolutional neural networks (CNN) connected in series to form two diagnostic modules. The first diagnostic module classifies COVID-19 pneumonia patients from other pneumonia patients, while the second diagnostic module distinguishes severe COVID-19 patients from ordinary COVID-19 patients. We also analyzed the correlation between the deep learning features of the two diagnostic modules and various laboratory parameters, including KL-6. Result The first diagnostic module achieved an accuracy of 0.9669 on the training set and 0.8884 on the test set, while the second diagnostic module achieved an accuracy of 0.9722 on the training set and 0.9184 on the test set. Strong correlation was observed between the deep learning parameters of the second diagnostic module and KL-6. Conclusion Our neural network can differentiate between COVID-19 pneumonia and other pneumonias on CT images, while also distinguishing between ordinary COVID-19 patients and those with white lung. Patients with white lung in COVID-19 have greater alveolar damage compared to ordinary COVID-19 patients, and our deep learning features can serve as an imaging biomarker.
Affiliation(s)
- Tianming Du: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yihao Sun: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Xinghao Wang: Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Institute of Medical Informatics, University of Lübeck, Lübeck, Germany; Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Tao Jiang: Institute of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Ning Xu: School of Arts and Design, Liaoning Petrochemical University, Fushun, Liaoning, China
- Zeyd Boukhers: Fraunhofer Institute for Applied Information Technology FIT, Sankt Augustin, Germany
- Marcin Grzegorzek: Institute of Medical Informatics, University of Lübeck, Lübeck, Germany; German Research Center for Artificial Intelligence, Lübeck, Germany
- Hongzan Sun: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Chen Li: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
10. Sannasi Chakravarthy SR, Bharanidharan N, Vinothini C, Vinoth Kumar V, Mahesh TR, Guluwadi S. Adaptive Mish activation and Ranger optimizer-based SEA-ResNet50 model with explainable AI for multiclass classification of COVID-19 chest X-ray images. BMC Med Imaging 2024; 24:206. PMID: 39123118; PMCID: PMC11313131; DOI: 10.1186/s12880-024-01394-2.
Abstract
COVID-19 is a recent global health crisis that has profoundly affected lifestyles. Detecting such diseases among similar thoracic anomalies in medical images is a challenging task, so an end-to-end automated system is much needed in clinical practice. To this end, this work proposes a Squeeze-and-Excitation Attention-based ResNet50 (SEA-ResNet50) model for detecting COVID-19 from chest X-ray data. The idea is to improve the residual units of ResNet50 using a squeeze-and-excitation attention mechanism. For further enhancement, the Ranger optimizer and an adaptive Mish activation function are employed to improve the feature learning of the SEA-ResNet50 model. For evaluation, two publicly available COVID-19 radiographic datasets are utilized. The chest X-ray input images are augmented during experimentation for robust evaluation against four output classes: normal, pneumonia, lung opacity, and COVID-19. The SEA-ResNet50 model is then compared against the VGG-16, Xception, ResNet18, ResNet50, and DenseNet121 architectures. The proposed SEA-ResNet50 framework, together with the Ranger optimizer and adaptive Mish activation, provided maximum classification accuracies of 98.38% (multiclass) and 99.29% (binary classification) compared with the existing CNN architectures, and achieved the highest Kappa validation scores of 0.975 (multiclass) and 0.98 (binary classification). Furthermore, saliency maps of the abnormal regions are visualized using explainable artificial intelligence (XAI), enhancing interpretability in disease diagnosis.
Affiliation(s)
- S R Sannasi Chakravarthy
  - Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam, India
- N Bharanidharan
  - School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- C Vinothini
  - Department of Computer Science and Engineering, Dayananda Sagar College of Engineering, Bangalore, India
- Venkatesan Vinoth Kumar
  - School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, 632014, India
- T R Mahesh
  - Department of Computer Science and Engineering, JAIN (Deemed-to-Be University), Bengaluru, 562112, India
- Suresh Guluwadi
  - Adama Science and Technology University, Adama, 302120, Ethiopia
11
Ragab DA, Fayed S, Ghatwary N. DeepCSFusion: Deep Compressive Sensing Fusion for Efficient COVID-19 Classification. J Imaging Inform Med 2024; 37:1346-1358. [PMID: 38381386 PMCID: PMC11300776 DOI: 10.1007/s10278-024-01011-2]
Abstract
The COVID-19 epidemic, which started in 2019, has resulted in millions of deaths worldwide. During the pandemic, the medical research community made wide use of computational analysis of medical data, specifically deep learning models. Deploying such models on devices with constrained resources is a significant challenge because of the storage demands of large networks. Accordingly, this paper proposes a novel compression strategy that compresses deep features at compression ratios of 10% to 90% while accurately classifying COVID-19 and non-COVID-19 computed tomography (CT) scans. The compression was extensively validated using various available deep learning methods to extract the most suitable features from different models. Finally, the proposed DeepCSFusion model compresses the extracted features and applies fusion to achieve the highest classification accuracy with fewer features. DeepCSFusion was validated on the publicly available "SARS-CoV-2 CT" dataset of 1252 CT scans. This study demonstrates that DeepCSFusion reduced computational time while achieving an overall accuracy of 99.3%, outperforming state-of-the-art pipelines on various classification measures.
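The core compressive-sensing step (projecting a long deep-feature vector onto a much smaller number of random directions) can be illustrated as follows. The function name is hypothetical and the Gaussian measurement matrix is a standard textbook choice, not necessarily the one used in DeepCSFusion.

```python
import random

def compress_features(x, ratio, seed=0):
    # Compressive sensing sketch: project a feature vector x (length n)
    # onto m = ratio * n random Gaussian directions, with m << n.
    rng = random.Random(seed)
    n = len(x)
    m = max(1, int(ratio * n))
    # Rows scaled by 1/sqrt(m) so measurement energy stays comparable
    phi = [[rng.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)] for _ in range(m)]
    return [sum(p_i * x_i for p_i, x_i in zip(row, x)) for row in phi]
```

A 10% ratio keeps 10 measurements per 100 features, which is the storage saving the abstract targets for resource-constrained devices.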
Affiliation(s)
- Dina A Ragab
  - Electronics & Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Smart Village Campus, Giza, Egypt
- Salema Fayed
  - Computer Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Smart Village Campus, Giza, Egypt
- Noha Ghatwary
  - Computer Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Smart Village Campus, Giza, Egypt
12
Rashid PQ, Türker İ. Lung Disease Detection Using U-Net Feature Extractor Cascaded by Graph Convolutional Network. Diagnostics (Basel) 2024; 14:1313. [PMID: 38928728 PMCID: PMC11202625 DOI: 10.3390/diagnostics14121313]
Abstract
Computed tomography (CT) scans have recently emerged as a major technique for the fast diagnosis of lung diseases via image classification techniques. In this study, we propose a method for the diagnosis of COVID-19 disease with improved accuracy by utilizing graph convolutional networks (GCN) at various layer formations and kernel sizes to extract features from CT scan images. We apply a U-Net model to aid in segmentation and feature extraction. In contrast with previous research retrieving deep features from convolutional filters and pooling layers, which fail to fully consider the spatial connectivity of the nodes, we employ GCNs for classification and prediction to capture spatial connectivity patterns, which provides a significant association benefit. We handle the extracted deep features to form an adjacency matrix that contains a graph structure and pass it to a GCN along with the original image graph and the largest kernel graph. We combine these graphs to form one block of the graph input and then pass it through a GCN with an additional dropout layer to avoid overfitting. Our findings show that the suggested framework, called the feature-extracted graph convolutional network (FGCN), performs better in identifying lung diseases compared to recently proposed deep learning architectures that are not based on graph representations. The proposed model also outperforms a variety of transfer learning models commonly used for medical diagnosis tasks, highlighting the abstraction potential of the graph representation over traditional methods.
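A single graph-convolution step of the kind the FGCN stacks can be sketched as below. This is a minimal sketch of the standard symmetric-normalization GCN propagation rule (add self-loops, normalize, aggregate, apply a linear map and ReLU), not the authors' code; all names are illustrative.

```python
def gcn_layer(adj, feats, weight):
    # One graph-convolution step: symmetrically normalise A + I,
    # aggregate neighbour features, then apply a linear map + ReLU.
    n = len(adj)
    a_hat = [[adj[i][j] + (1.0 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    norm = [[a_hat[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(n)]
            for i in range(n)]
    # Aggregate: norm @ feats
    agg = [[sum(norm[i][k] * feats[k][j] for k in range(n))
            for j in range(len(feats[0]))] for i in range(n)]
    # Linear map + ReLU: relu(agg @ weight)
    return [[max(0.0, sum(agg[i][k] * weight[k][j]
                          for k in range(len(weight))))
             for j in range(len(weight[0]))] for i in range(n)]
```

The aggregation over `norm` is what captures the spatial connectivity between nodes that, per the abstract, plain convolution-plus-pooling pipelines miss.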
Affiliation(s)
- İlker Türker
  - Department of Computer Engineering, Karabuk University, 78050 Karabuk, Turkey
13
Cao R, Liu Y, Wen X, Liao C, Wang X, Gao Y, Tan T. Reinvestigating the performance of artificial intelligence classification algorithms on COVID-19 X-Ray and CT images. iScience 2024; 27:109712. [PMID: 38689643 PMCID: PMC11059117 DOI: 10.1016/j.isci.2024.109712]
Abstract
There are concerns that artificial intelligence (AI) algorithms may create underdiagnosis bias by mislabeling patients with certain attributes (e.g., female and young) as healthy. Addressing this bias is crucial given the urgent need for AI diagnostics when facing rapidly spreading infectious diseases such as COVID-19. We find that prevalent AI diagnostic models show elevated underdiagnosis rates among specific patient populations, and that the rate is higher still in some intersectional populations (for example, females aged 20-40 years). We also find that training AI models on heterogeneous datasets (positive and negative samples drawn from different datasets) may lead to poor model generalization: classification performance varies significantly across test sets, with the accuracy of the best-performing set over 40% higher than that of the worst. In conclusion, we developed an AI bias analysis pipeline to help researchers recognize and address biases that affect medical equality and ethics.
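A subgroup underdiagnosis rate of the kind this pipeline reports can be computed as the false-negative rate restricted to one demographic group. The sketch below assumes a simple record layout (`group`, `label`, `pred` keys) that is purely illustrative, not the authors' data schema.

```python
def underdiagnosis_rate(records, group):
    # Underdiagnosis = a truly positive patient predicted healthy;
    # measured as the false-negative rate within one demographic subgroup.
    pos = [r for r in records if r["group"] == group and r["label"] == 1]
    if not pos:
        return None  # no positives in this subgroup: rate undefined
    return sum(1 for r in pos if r["pred"] == 0) / len(pos)
```

Comparing this rate across subgroups (and their intersections, e.g. sex x age band) is what exposes the bias pattern the abstract describes.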
Affiliation(s)
- Rui Cao
  - School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Yanan Liu
  - School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Xin Wen
  - School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Caiqing Liao
  - School of Software, Taiyuan University of Technology, Taiyuan 030024, China
- Xin Wang
  - Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, Amsterdam 1066 CX, the Netherlands
  - Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA Nijmegen, the Netherlands
  - GROW School for Oncology and Developmental Biology, Maastricht University, MD, Maastricht 6200, the Netherlands
- Yuan Gao
  - Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, Amsterdam 1066 CX, the Netherlands
  - GROW School for Oncology and Developmental Biology, Maastricht University, MD, Maastricht 6200, the Netherlands
- Tao Tan
  - Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, Amsterdam 1066 CX, the Netherlands
  - Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA Nijmegen, the Netherlands
  - Faculty of Applied Sciences, Macao Polytechnic University, Macao SAR 999078, China
14
Huang M, Han K, Liu W, Wang Z, Liu X, Guo Q. Advancing microplastic surveillance through photoacoustic imaging and deep learning techniques. J Hazard Mater 2024; 470:134188. [PMID: 38579587 DOI: 10.1016/j.jhazmat.2024.134188]
Abstract
Microplastic contamination presents a significant global environmental threat, yet scientific understanding of its morphological distribution within ecosystems remains limited. This study introduces a pioneering method for comprehensive microplastic assessment and environmental monitoring, integrating photoacoustic imaging and advanced deep learning techniques. Rigorous curation of diverse microplastic datasets enhances model training, yielding a high-resolution imaging dataset focused on shape-based discrimination. The introduction of the Vector-Quantized Variational Auto Encoder (VQVAE2) deep learning model signifies a substantial advancement, demonstrating exceptional proficiency in image dimensionality reduction and clustering. Furthermore, the utilization of Vector Quantization Microplastic Photoacoustic imaging (VQMPA) with a proxy task before decoding enhances feature extraction, enabling simultaneous microplastic analysis and discrimination. Despite inherent limitations, this study lays a robust foundation for future research, suggesting avenues for enhancing microplastic identification precision through expanded sample sizes and complementary methodologies like spectroscopy. In conclusion, this innovative approach not only advances microplastic monitoring but also provides valuable insights for future environmental investigations, highlighting the potential of photoacoustic imaging and deep learning in bolstering sustainable environmental monitoring efforts.
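The clustering at the heart of a vector-quantized model like VQVAE2 reduces to nearest-codebook assignment: each embedding is mapped to its closest learned code vector. The minimal sketch below shows only that lookup (with a fixed codebook; the real model learns the codebook end to end, and the names here are illustrative).

```python
def vq_assign(vec, codebook):
    # Vector quantisation: return the index of the nearest codebook entry
    # (squared Euclidean distance; ties resolve to the lowest index).
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda k: d2(vec, codebook[k]))
```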
Affiliation(s)
- Mengyuan Huang
  - Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
- Kaitai Han
  - Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
- Wu Liu
  - Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
- Zijun Wang
  - Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
- Xi Liu
  - Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
- Qianjin Guo
  - Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
  - School of Mechanical Engineering & Hydrogen Energy Research Centre, Beijing Institute of Petrochemical Technology, Beijing 102617, China
15
Zheng J, Xiong Y, Zheng Y, Zhang H, Wu R. Evaluating the Stroke Risk of Patients using Machine Learning: A New Perspective from Sichuan and Chongqing. Eval Rev 2024; 48:346-369. [PMID: 37533403 DOI: 10.1177/0193841x231193468]
Abstract
Stroke is the leading cause of death and disability in China, imposing heavy burdens on patients, their families, and society. Accurate prediction of stroke risk has important implications for early intervention and treatment. In light of recent advances in machine learning, applications of this technique to stroke prediction have achieved many promising results. This study aimed to detect relationships between potential factors and stroke risk and to examine which machine learning methods can significantly enhance prediction accuracy. We employed six machine learning methods (logistic regression, naive Bayes, decision tree, random forest, K-nearest neighbor, and support vector machine) to model and predict stroke risk. Participants were 233 patients from Sichuan and Chongqing. Four indicators (accuracy, precision, recall, and the F1 metric) were used to evaluate the predictive performance of the different models. The empirical results indicate that random forest yields the best accuracy, recall, and F1 in predicting stroke risk, with an accuracy of .7548, precision of .7805, recall of .7619, and F1 of .7711. The findings also show that age, cerebral infarction, PM 8 (an anti-atrial fibrillation drug), and drinking are independent risk factors for stroke. Further studies should adopt a broader assortment of machine learning methods, from which better accuracy can be expected; in particular, random forest can successfully enhance forecasting accuracy for stroke.
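The four evaluation indicators used above follow directly from the binary confusion matrix. The sketch below is a generic computation, not the authors' code; names are illustrative.

```python
def binary_metrics(y_true, y_pred):
    # Confusion-matrix counts for the positive class (label 1)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```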
Affiliation(s)
- Jin Zheng
  - Institute of Traditional Chinese Medicine, Sichuan Academy of Chinese Medicine Sciences, Chengdu, China
- Yao Xiong
  - Department of Neurology, The Third People's Hospital of Chengdu & The Affiliated Hospital of Southwest Jiaotong University, Chengdu, China
- Yimei Zheng
  - School of Mathematics, Southwest Jiaotong University, Chengdu, China
- Haitao Zhang
  - Department of Neurology, The Third People's Hospital of Chengdu & The Affiliated Hospital of Southwest Jiaotong University, Chengdu, China
- Rui Wu
  - School of Mathematics, Southwest Jiaotong University, Chengdu, China
16
Rajinikanth V, Biju R, Mittal N, Mittal V, Askar S, Abouhawwash M. COVID-19 detection in lung CT slices using Brownian-butterfly-algorithm optimized lightweight deep features. Heliyon 2024; 10:e27509. [PMID: 38468955 PMCID: PMC10926136 DOI: 10.1016/j.heliyon.2024.e27509]
Abstract
Several deep-learning-assisted disease assessment schemes (DAS) have been proposed to enhance accurate detection of COVID-19, a critical medical emergency, through the analysis of clinical data. Lung imaging, particularly CT, plays a pivotal role in identifying and assessing the severity of COVID-19 infections, and existing automated methods leveraging deep learning contribute significantly to reducing the associated diagnostic burden. This research aims to develop a simple DAS for COVID-19 detection by applying pre-trained lightweight deep learning methods (LDMs) to lung CT slices; the use of LDMs yields a less complex yet highly accurate detection system. The key stages of the developed DAS include image collection and initial processing using Shannon's thresholding, deep-feature mining supported by the LDMs, feature optimization using the Brownian Butterfly Algorithm (BBA), and binary classification with three-fold cross-validation. Performance evaluation considers individual, fused, and ensemble features. The investigation reveals that the developed DAS achieves a detection accuracy of 93.80% with individual features, 96% with fused features, and an impressive 99.10% with ensemble features, affirming the effectiveness of the proposed scheme in enhancing COVID-19 detection accuracy on the chosen lung CT database.
Affiliation(s)
- Venkatesan Rajinikanth
  - Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, 602105, Tamil Nadu, India
- Roshima Biju
  - Department of Computer Science Engineering, Parul University, Vadodara, 391760, Gujarat, India
- Nitin Mittal
  - Skill Faculty of Engineering and Technology, Shri Vishwakarma Skill University, Palwal, 121102, Haryana, India
- Vikas Mittal
  - Department of Electronics and Communication Engineering, Chandigarh University, Mohali, 140413, India
- S.S. Askar
  - Department of Statistics and Operations Research, College of Science, King Saud University, P.O. Box 2455, Riyadh, 11451, Saudi Arabia
- Mohamed Abouhawwash
  - Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
17
Abdulahi AT, Ogundokun RO, Adenike AR, Shah MA, Ahmed YK. PulmoNet: a novel deep learning based pulmonary diseases detection model. BMC Med Imaging 2024; 24:51. [PMID: 38418987 PMCID: PMC10903074 DOI: 10.1186/s12880-024-01227-2]
Abstract
Pulmonary diseases are pathological conditions affecting the respiratory tissues and organs that make gas exchange during breathing difficult. They range from mild, self-limiting illnesses such as the common cold and catarrh to life-threatening ones such as viral pneumonia (VP), bacterial pneumonia (BP), and tuberculosis, as well as severe acute respiratory syndromes such as coronavirus disease 2019 (COVID-19). The cost of diagnosing and treating pulmonary infections is high, especially in developing countries. Since radiographic images (X-ray and computed tomography (CT) scan images) have proven beneficial in detecting various pulmonary infections, many machine learning (ML) models and image processing procedures have been developed to identify them, and timely, accurate detection can be lifesaving, especially during a pandemic. This paper therefore proposes a deep convolutional neural network (DCNN)-based image detection model, optimized with an image augmentation technique, to detect three pulmonary diseases (COVID-19, bacterial pneumonia, and viral pneumonia). A dataset containing four classes (healthy (10,325), COVID-19 (3,749), BP (883), and VP (1,478)) was used as training/testing data. The model shows high potential in detecting the three classes of pulmonary disease, recording average detection accuracies of 94%, 95.4%, 99.4%, and 98.30% across the classes, with a training/detection time of about 60/50 s. This result indicates the proficiency of the suggested approach compared with traditional texture-descriptor techniques for pulmonary disease recognition from X-ray and CT scan images. The model's accuracy and efficiency promise significant advances in medical diagnostics, particularly in developing countries, given its potential to surpass traditional diagnostic methods.
Affiliation(s)
- AbdulRahman Tosho Abdulahi
  - Department of Computer Science, Institute of Information and Communication Technology, Kwara State Polytechnic, Ilorin, Nigeria
- Roseline Oluwaseun Ogundokun
  - Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
  - Department of Computer Science, Landmark University Omu Aran, Omu Aran, Nigeria
- Ajiboye Raimot Adenike
  - Department of Statistics, Institute of Applied Sciences, Kwara State Polytechnic, Ilorin, Nigeria
- Mohd Asif Shah
  - Department of Economics, Kebri Dehar University, Kebri Dehar, 250, Somali, Ethiopia
  - Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
  - Chitkara Centre for Research and Development, Chitkara University, Baddi, Himachal Pradesh, 174103, India
- Yusuf Kola Ahmed
  - Department of Biomedical Engineering, University of Ilorin, Ilorin, Nigeria
  - Department of Occupational Therapy, University of Alberta, Edmonton, Canada
18
Vaikunta Pai T, Maithili K, Arun Kumar R, Nagaraju D, Anuradha D, Kumar S, Ravuri A, Sunilkumar Reddy T, Sivaram M, Vidhya RG. DKCNN: Improving deep kernel convolutional neural network-based COVID-19 identification from CT images of the chest. J Xray Sci Technol 2024; 32:913-930. [PMID: 38820059 DOI: 10.3233/xst-230424]
Abstract
BACKGROUND An efficient deep convolutional neural network (Deep CNN) is proposed in this article for the classification of COVID-19 disease. OBJECTIVE A novel structural unit, the pointwise-temporal-pointwise convolution unit, is developed, incorporating varying-kernel depthwise temporal convolutions before and after the pointwise convolution operations. METHODS The outcome is optimized by the Salp Swarm Algorithm (SSA). The proposed Deep CNN is composed of depthwise temporal convolutions and performs end-to-end automatic detection of the disease. First, the SARS-COV-2 Ct-Scan Dataset and the CT scan COVID Prediction dataset are preprocessed using the min-max approach, and features are extracted for further processing. RESULTS Experimental comparison between the proposed model and several state-of-the-art works shows that the proposed work classifies the disease more effectively than the other approaches. CONCLUSION The proposed structural unit is used to design the Deep CNN with increasing kernel sizes. Classification is improved by the inclusion of depthwise temporal convolutions along with the kernel variation, and computational complexity is reduced by introducing stride convolutions in the residual linkages among adjacent structural units.
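The min-max preprocessing mentioned in the METHODS can be sketched in a few lines. This is a generic rescaling, not the authors' exact pipeline, and the function name is illustrative.

```python
def min_max_scale(values, lo=0.0, hi=1.0):
    # Min-max normalisation to [lo, hi], as commonly used to rescale
    # CT intensities before feeding them to a network.
    v_min, v_max = min(values), max(values)
    if v_max == v_min:
        return [lo for _ in values]  # constant input: map to the floor
    return [lo + (hi - lo) * (v - v_min) / (v_max - v_min) for v in values]
```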
Affiliation(s)
- T Vaikunta Pai
  - Department of Information Science and Engineering, NMAM Institute of Technology-Affiliated to NITTE (Deemed to be University), Bangalore, Karnataka, India
- K Maithili
  - Department of Computer Science and Engineering (AI & ML), KG Reddy College of Engineering and Technology, Hyderabad, Telangana, India
- Ravula Arun Kumar
  - Department of Computer Science and Engineering, Vardhaman College of Engineering, Hyderabad, Telangana, India
- D Nagaraju
  - Department of Computer Science and Engineering, Sri Venkatesa Perumal College of Engineering and Technology, Puttur, Andhra Pradesh, India
- D Anuradha
  - Department of Computer Science and Business Systems, Panimalar Engineering College, Chennai, India
- Shailendra Kumar
  - Department of Electronics and Communication Engineering, Integral University Lucknow, Uttar Pradesh, India
- T Sunilkumar Reddy
  - Department of Computer Science and Engineering, Sri Venkatesa Perumal College of Engineering and Technology, Puttur, Andhra Pradesh, India
- M Sivaram
  - Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Saveetha Nagar, Thandalam, Tamil Nadu, India
- R G Vidhya
  - Department of ECE, HKBKCE, Bangalore, India
19
Vinothini R, Niranjana G, Yakub F. A Novel Classification Model Using Optimal Long Short-Term Memory for Classification of COVID-19 from CT Images. J Digit Imaging 2023; 36:2480-2493. [PMID: 37491543 PMCID: PMC10584759 DOI: 10.1007/s10278-023-00852-7]
Abstract
The human respiratory system is affected when an individual is infected with COVID-19, which became a global pandemic in 2020 and affected millions of people worldwide. However, accurate diagnosis of COVID-19 can be challenging due to the small differences between typical pneumonia and COVID-19 pneumonia, as well as the complexity of classifying infection regions. Various deep learning (DL)-based methods have been introduced for the automatic detection of COVID-19 from computed tomography (CT) scan images. In this paper, we propose the pelican optimization algorithm-based long short-term memory (POA-LSTM) method for classifying coronavirus from CT scan images. A data preprocessing step converts raw image data into a format suitable for the subsequent stages. We apply the no-new-U-Net (nnU-Net) framework for region of interest (ROI) segmentation in the medical images, using a set of heuristic guidelines derived from the domain to systematically optimize the segmentation task around the dataset's key properties. High-resolution net (HRNet) is then used for feature extraction: having weighed the two options, it follows the top-down strategy rather than the bottom-up one, first detecting the subject, generating a bounding box around it, and then estimating the relevant features. The POA is used to minimize the subjective influence of manually selected parameters and to tune the LSTM's parameters. The resulting POA-LSTM classifier achieves high performance on every metric, with accuracy, sensitivity, F1-score, precision, and specificity of 99%, 98.67%, 98.88%, 98.72%, and 98.43%, respectively.
Affiliation(s)
- R Vinothini
  - Department of Computing Technologies, SRM Institute of Science and Technology, Kattankulathur, India
- G Niranjana
  - SRM Institute of Science and Technology, Kattankulathur, India
- Fitri Yakub
  - Electronic System Engineering Department, Malaysia-Japan International Institute of Technology, Kuala Lumpur, Malaysia
20
Ozaltin O, Yeniay O, Subasi A. OzNet: A New Deep Learning Approach for Automated Classification of COVID-19 Computed Tomography Scans. Big Data 2023; 11:420-436. [PMID: 36927081 DOI: 10.1089/big.2022.0042]
Abstract
Coronavirus disease 2019 (COVID-19) spread rapidly around the world, and classification of computed tomography (CT) scans can alleviate the workload of experts, which increased considerably during the pandemic. Convolutional neural network (CNN) architectures are successful for the classification of medical images. In this study, we developed a new deep CNN architecture called OzNet and compared it with the pretrained architectures AlexNet, DenseNet201, GoogleNet, NASNetMobile, ResNet-50, SqueezeNet, and VGG-16. We also compared the classification performance of three preprocessing methods against raw CT scans: discrete wavelet transform (DWT), intensity adjustment, and grayscale-to-RGB image conversion. We found that an architecture's performance increases when the DWT preprocessing method is used rather than the raw dataset, and the results with CNNs on DWT-processed COVID-19 CT scans are extremely promising: the proposed DWT-OzNet achieved a classification performance above 98.8% on every calculated metric.
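One level of the simplest wavelet transform (Haar) illustrates the kind of DWT preprocessing compared here: the signal is split into smoothed approximation coefficients and edge-like detail coefficients. The sketch is generic and not tied to the wavelet family or library the authors used.

```python
def haar_dwt(signal):
    # One level of the Haar wavelet transform: pairwise sums give the
    # approximation band, pairwise differences give the detail band,
    # both normalised by sqrt(2) to preserve energy.
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail
```

For images the same split is applied along rows and then columns, producing the low/high-frequency sub-bands a CNN can consume in place of the raw scan.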
Affiliation(s)
- Oznur Ozaltin
  - Department of Statistics, Institute of Science, Hacettepe University, Ankara, Turkey
- Ozgur Yeniay
  - Department of Statistics, Institute of Science, Hacettepe University, Ankara, Turkey
- Abdulhamit Subasi
  - Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
  - Department of Computer Science, College of Engineering, Effat University, Jeddah, Saudi Arabia
21
Utku A. Deep learning based hybrid prediction model for predicting the spread of COVID-19 in the world's most populous countries. Expert Syst Appl 2023; 231:120769. [PMID: 37334273 PMCID: PMC10260264 DOI: 10.1016/j.eswa.2023.120769]
Abstract
COVID-19 is not only a disease and health phenomenon; it also has adverse sociological and economic effects. Accurate prediction of the spread of the epidemic helps in planning health management and in developing economic and sociological action plans. The literature contains many studies analysing and predicting the spread of COVID-19 in individual cities and countries, but none predicting and analysing its cross-country spread in the world's most populous countries. This study aimed to predict the spread of the COVID-19 epidemic, motivated by the goals of reducing the workload of health workers, taking preventive measures, and optimizing health processes. A hybrid deep learning model was developed to predict and analyse the cross-country spread of COVID-19, with a case study of the world's most populous countries. The model was tested extensively using RMSE, MAE, and R2. The experimental results showed that the developed model was more successful than LR, RF, SVM, MLP, CNN, GRU, LSTM, and a base CNN-GRU. In the developed model, the CNN performs convolution and pooling operations to extract spatial features from the input data, and the GRU learns the long-term, non-linear relationships inferred by the CNN. The hybrid model outperformed the others compared because it combines the effective features of the CNN and GRU models. The prediction and analysis of the cross-country spread of COVID-19 in the world's most populous countries is the novelty of this study.
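The three test metrics (RMSE, MAE, and R2) can be computed as below. This is a generic sketch of the standard definitions, not the authors' evaluation code.

```python
def regression_scores(y_true, y_pred):
    # RMSE: root of mean squared error; MAE: mean absolute error;
    # R2: 1 - (residual sum of squares / total sum of squares).
    n = len(y_true)
    errs = [t - p for t, p in zip(y_true, y_pred)]
    rmse = (sum(e * e for e in errs) / n) ** 0.5
    mae = sum(abs(e) for e in errs) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot if ss_tot else 0.0
    return rmse, mae, r2
```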
Affiliation(s)
- Anil Utku
  - Department of Computer Engineering, Faculty of Engineering, Munzur University, 62100 Tunceli, Turkey
22
Nur-A-Alam M, Nasir MK, Ahsan M, Based MA, Haider J, Kowalski M. Ensemble classification of integrated CT scan datasets in detecting COVID-19 using feature fusion from contourlet transform and CNN. Sci Rep 2023; 13:20063. [PMID: 37973820 PMCID: PMC10654719 DOI: 10.1038/s41598-023-47183-9]
Abstract
The COVID-19 disease caused by coronavirus is constantly changing due to the emergence of different variants, and thousands of people are dying every day worldwide. Early detection of this form of pulmonary disease can reduce the mortality rate. In this paper, an automated method based on machine learning (ML) and deep learning (DL) is developed to detect COVID-19 using computed tomography (CT) scan images extracted from three publicly available datasets (11,407 images in total: 7397 COVID-19 and 4010 normal). An unsupervised clustering approach, a modified region-based clustering technique, is proposed for segmenting the COVID-19 CT scan images. Furthermore, contourlet transform and a convolutional neural network (CNN) are employed to extract features individually from the segmented CT scan images and to fuse them into one feature vector. A binary differential evolution (BDE) approach is employed as a feature optimization technique to obtain comprehensible features from the fused feature vector. Finally, an ML/DL-based ensemble classifier using the bagging technique is employed to detect COVID-19 from the CT images. Fivefold and generalization cross-validation techniques were used for validation. Classification experiments were also conducted with several pre-trained models (AlexNet, ResNet50, GoogleNet, VGG16, VGG19), and the ensemble classifier with fused features provided state-of-the-art performance with an accuracy of 99.98%.
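The final bagging-style ensemble decision reduces to majority voting over the base classifiers. A minimal sketch follows; the names are illustrative, and the tie-break toward the positive class is an assumption of this sketch, not a detail taken from the paper.

```python
def bagged_vote(classifiers, x):
    # Bagging-style ensemble: each base classifier votes 0 or 1 on the
    # sample; the majority label wins (ties go to the positive class).
    votes = [clf(x) for clf in classifiers]
    ones = sum(votes)
    return 1 if ones * 2 >= len(votes) else 0
```

In a real bagging setup each base classifier would be trained on its own bootstrap resample of the training set, which is what makes the vote more stable than any single model.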
Affiliation(s)
- Md Nur-A-Alam
- Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Mostofa Kamal Nasir
- Department of Computer Science & Engineering, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, York, YO10 5GH, UK
- Md Abdul Based
- Department of Computer Science & Engineering, Dhaka International University, Dhaka, 1205, Bangladesh
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester, M1 5GD, UK
- Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, Warsaw, Poland
23
Murphy K, Muhairwe J, Schalekamp S, van Ginneken B, Ayakaka I, Mashaete K, Katende B, van Heerden A, Bosman S, Madonsela T, Gonzalez Fernandez L, Signorell A, Bresser M, Reither K, Glass TR. COVID-19 screening in low resource settings using artificial intelligence for chest radiographs and point-of-care blood tests. Sci Rep 2023; 13:19692. [PMID: 37952026 PMCID: PMC10640556 DOI: 10.1038/s41598-023-46461-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 11/01/2023] [Indexed: 11/14/2023] Open
Abstract
Artificial intelligence (AI) systems for detection of COVID-19 using chest X-ray (CXR) imaging and point-of-care blood tests were applied to data from four low-resource African settings. The performance of these systems in detecting COVID-19 from various input data was analysed and compared with antigen-based rapid diagnostic tests (RDTs). Participants were tested using the gold standard RT-PCR test (nasopharyngeal swab) to determine whether they were infected with SARS-CoV-2. A total of 3737 participants (260 RT-PCR positive) were included. In our cohort, AI for CXR images was a poor predictor of COVID-19 (AUC = 0.60), since the majority of positive cases had mild symptoms and no visible pneumonia in the lungs. AI systems using differential white blood cell counts (WBC), or a combination of WBC and C-reactive protein (CRP), both achieved an AUC of 0.74, with a suggested optimal cut-off point at 83% sensitivity and 63% specificity. The antigen-RDT tests in this trial obtained 65% sensitivity at 98% specificity. This study is the first to validate AI tools for COVID-19 detection in an African setting. It demonstrates that screening for COVID-19 using AI with point-of-care blood tests is feasible and can operate at a higher sensitivity level than antigen testing.
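AUC figures and sensitivity/specificity cut-offs like those quoted above can be computed directly from raw classifier scores. A minimal sketch (function names are illustrative, not from the study):

```python
def auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    (ties count 1/2). Equivalent to the area under the ROC curve."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))

def sensitivity_specificity(threshold, pos_scores, neg_scores):
    """Operating point for a given cut-off (score >= threshold => positive)."""
    sens = sum(s >= threshold for s in pos_scores) / len(pos_scores)
    spec = sum(s < threshold for s in neg_scores) / len(neg_scores)
    return sens, spec
```

Sweeping `threshold` over the observed scores traces the ROC curve; the "optimal" cut-off reported in such studies is one chosen point on that curve.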
Affiliation(s)
- Keelin Murphy
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Steven Schalekamp
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Bram van Ginneken
- Radboud University Medical Center, 6525 GA, Nijmegen, The Netherlands
- Irene Ayakaka
- SolidarMed, Partnerships for Health, Maseru, Lesotho
- Alastair van Heerden
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- SAMRC/WITS Developmental Pathways for Health Research Unit, Department of Paediatrics, School of Clinical Medicine, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa
- Shannon Bosman
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Thandanani Madonsela
- Centre for Community Based Research, Human Sciences Research Council, Pietermaritzburg, South Africa
- Lucia Gonzalez Fernandez
- Department of Infectious Diseases and Hospital Epidemiology, University Hospital Basel, Basel, Switzerland
- SolidarMed, Partnerships for Health, Lucerne, Switzerland
- Aita Signorell
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Moniek Bresser
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Klaus Reither
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
- Tracy R Glass
- Swiss Tropical and Public Health Institute, Allschwil, Switzerland
- University of Basel, Basel, Switzerland
24
Al-Sheikh MH, Al Dandan O, Al-Shamayleh AS, Jalab HA, Ibrahim RW. Multi-class deep learning architecture for classifying lung diseases from chest X-Ray and CT images. Sci Rep 2023; 13:19373. [PMID: 37938631 PMCID: PMC10632494 DOI: 10.1038/s41598-023-46147-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 10/27/2023] [Indexed: 11/09/2023] Open
Abstract
Medical imaging is considered a suitable alternative testing method for the detection of lung diseases. Many researchers have been working to develop various detection methods that have aided in the prevention of lung diseases. To better understand the condition of a lung disease infection, chest X-ray and CT scans are utilized to check the disease's spread throughout the lungs. This study proposes an automated system for the detection of multiple lung diseases in X-ray and CT scans. A customized convolutional neural network (CNN) and two pre-trained deep learning models with a new image enhancement model are proposed for image classification. The proposed lung disease detection comprises two main steps: pre-processing and deep learning classification. The new image enhancement algorithm is developed in the pre-processing step using a k-symbol Lerch transcendent function model, which enhances images based on image pixel probability. In the classification step, the customized CNN architecture and two pre-trained CNN models, AlexNet and VGG16Net, are developed. The proposed approach was tested on publicly available image datasets (CT and X-ray), and the results showed classification accuracy, sensitivity, and specificity of 98.60%, 98.40%, and 98.50% for the X-ray dataset, respectively, and 98.80%, 98.50%, and 98.40% for the CT scans dataset, respectively. Overall, the obtained results highlight the advantages of the image enhancement model as a first step in processing.
Affiliation(s)
- Mona Hmoud Al-Sheikh
- Physiology Department, College of Medicine, Imam Abdulrahman Bin Faisal University, 34212, Dammam, Saudi Arabia
- Omran Al Dandan
- Department of Radiology, College of Medicine, Imam Abdulrahman Bin Faisal University, 34212, Dammam, Saudi Arabia
- Ahmad Sami Al-Shamayleh
- Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Al-Ahliyya Amman University, Al-Salt, Amman, 19328, Jordan
- Hamid A Jalab
- Information and Communication Technology Research Group, Scientific Research Center, Al-Ayen University, Nile Street, 64001, Thi-Qar, Iraq
- Rabha W Ibrahim
- Information and Communication Technology Research Group, Scientific Research Center, Al-Ayen University, Nile Street, 64001, Thi-Qar, Iraq
- Department of Mathematics, Mathematics Research Center, Near East University, Near East Boulevard, PC: 99138, Nicosia/Mersin 10, Turkey
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, 1102 2801, Lebanon
25
Jurado-Ruiz F, Rousseau D, Botía JA, Aranzana MJ. GenoDrawing: An Autoencoder Framework for Image Prediction from SNP Markers. PLANT PHENOMICS (WASHINGTON, D.C.) 2023; 5:0113. [PMID: 38239740 PMCID: PMC10795539 DOI: 10.34133/plantphenomics.0113] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Accepted: 10/23/2023] [Indexed: 01/22/2024]
Abstract
Advancements in genome sequencing have facilitated whole-genome characterization of numerous plant species, providing an abundance of genotypic data for genomic analysis. Genomic selection and neural networks (NNs), particularly deep learning, have been developed to predict complex traits from dense genotypic data. Autoencoders, NN models that extract features from images in an unsupervised manner, have proven useful for plant phenotyping. This study introduces an autoencoder framework, GenoDrawing, for predicting and retrieving apple images from a low-depth single-nucleotide polymorphism (SNP) array, potentially useful in predicting traits that are difficult to define. GenoDrawing demonstrates proficiency in its task using a small dataset of shape-related SNPs. Results indicate that the use of SNPs associated with visual traits has a substantial impact on the generated images, consistent with biological interpretation. While using relevant SNPs is crucial, incorporating additional, unrelated SNPs results in performance degradation for simple NN architectures that cannot easily identify the most important inputs. The proposed GenoDrawing method is a practical framework for exploring genomic prediction in fruit tree phenotyping, particularly beneficial for small to medium breeding companies seeking to predict economically important heritable traits. Although GenoDrawing has limitations, it sets the groundwork for future research in image prediction from genomic markers. Future studies should focus on using stronger models for image reproduction, SNP information extraction, and dataset balance in terms of phenotypes for more precise outcomes.
Affiliation(s)
- Federico Jurado-Ruiz
- Center for Research in Agricultural Genomics (CRAG), 08193 Barcelona, Cerdanyola, Spain
- David Rousseau
- Université d’Angers, LARIS, INRAe UMR IRHS, 49000 Angers, France
- Juan A. Botía
- Department of Information and Communication Engineering, University of Murcia, 30071 Murcia, Spain
- Maria José Aranzana
- Center for Research in Agricultural Genomics (CRAG), 08193 Barcelona, Cerdanyola, Spain
- IRTA (Institut de Recerca i Tecnologia Agroalimentàries), Barcelona, Spain
26
Liang H, Wang M, Wen Y, Du F, Jiang L, Geng X, Tang L, Yan H. Predicting acute pancreatitis severity with enhanced computed tomography scans using convolutional neural networks. Sci Rep 2023; 13:17514. [PMID: 37845380 PMCID: PMC10579320 DOI: 10.1038/s41598-023-44828-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2023] [Accepted: 10/12/2023] [Indexed: 10/18/2023] Open
Abstract
This study aimed to evaluate acute pancreatitis (AP) severity using convolutional neural network (CNN) models with enhanced computed tomography (CT) scans. Three-dimensional DenseNet CNN models were developed and trained using enhanced CT scans labeled with two severity assessment methods: the computed tomography severity index (CTSI) and the Atlanta classification. Each labeling method was used independently for model training and validation. Model performance was evaluated using confusion matrices, areas under the receiver operating characteristic curve (AUC-ROC), accuracy, precision, recall, F1 score, and the respective macro-average metrics. A total of 1798 enhanced CT scans that met the inclusion criteria were included in this study. The dataset was randomly divided into a training dataset (n = 1618) and a test dataset (n = 180) at a ratio of 9:1. The DenseNet model demonstrated promising predictions for both CTSI- and Atlanta-classification-labeled CT scans, with accuracy greater than 0.7 and AUC-ROC greater than 0.8. Specifically, when trained with CT scans labeled using the CTSI, the DenseNet model achieved good performance, with a macro-average F1 score of 0.835 and a macro-average AUC-ROC of 0.980. The findings of this study affirm the feasibility of employing CNN models to predict the severity of AP using enhanced CT scans.
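The macro-average F1 reported above is the unweighted mean of per-class F1 scores, which weights rare severity grades the same as common ones. A small stand-alone computation (names are illustrative):

```python
def f1_per_class(y_true, y_pred, cls):
    """F1 for one class, treating it as the positive label."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)

def macro_f1(y_true, y_pred):
    """Macro-averaging: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true))
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)
```

A micro-averaged F1, by contrast, would pool all counts first and so be dominated by the majority severity grade.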
Affiliation(s)
- Hongyin Liang
- Department of General Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Sichuan Provincial Key Laboratory of Pancreatic Injury and Repair, Chengdu, 610083, China
- Meng Wang
- Department of Traditional Chinese Medicine, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Yi Wen
- Department of General Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Sichuan Provincial Key Laboratory of Pancreatic Injury and Repair, Chengdu, 610083, China
- Feizhou Du
- Department of Radiology, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Li Jiang
- Department of Cardiac Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Xuelong Geng
- Department of Radiology, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Lijun Tang
- Department of General Surgery, The General Hospital of Western Theater Command (Chengdu Military General Hospital), Chengdu, 610083, China
- Sichuan Provincial Key Laboratory of Pancreatic Injury and Repair, Chengdu, 610083, China
- Hongtao Yan
- Department of Liver Transplantation and Hepato-biliary-pancreatic Surgery, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, 610016, China
27
Xu W, Nie L, Chen B, Ding W. Dual-stream EfficientNet with adversarial sample augmentation for COVID-19 computer aided diagnosis. Comput Biol Med 2023; 165:107451. [PMID: 37696184 DOI: 10.1016/j.compbiomed.2023.107451] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Revised: 08/17/2023] [Accepted: 09/04/2023] [Indexed: 09/13/2023]
Abstract
Though a series of computer-aided measures have been taken for the rapid and definite diagnosis of 2019 coronavirus disease (COVID-19), they generally fail to achieve sufficiently high accuracy, including the recently popular deep learning-based methods. The main reasons are that: (a) they generally focus on improving model structures while ignoring important information contained in the medical image itself; (b) the existing small-scale datasets have difficulty meeting the training requirements of deep learning. In this paper, a dual-stream network based on EfficientNet is proposed for COVID-19 diagnosis from CT scans. The dual-stream network takes into account the important information in both the spatial and frequency domains of CT scans. Besides, Adversarial Propagation (AdvProp) technology is used to address the insufficient training data usually faced by deep learning-based computer-aided diagnosis, as well as the overfitting issue. A Feature Pyramid Network (FPN) is utilized to fuse the dual-stream features. Experimental results on the public dataset COVIDx CT-2A demonstrate that the proposed method outperforms 12 existing deep learning-based methods for COVID-19 diagnosis, achieving an accuracy of 0.9870 for multi-class classification and 0.9958 for binary classification. The source code is available at https://github.com/imagecbj/covid-efficientnet.
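The dual-stream idea, pairing spatial-domain pixel values with a frequency-domain representation of the same input before fusion, can be illustrated with a toy 1-D discrete Fourier transform. The real network operates on 2-D images with FPN-based fusion, so this is only a sketch with invented names:

```python
import cmath

def dft_magnitudes(signal):
    """Magnitude spectrum of a 1-D signal via the discrete Fourier transform
    (stand-in for the frequency-domain stream; 2-D transforms are used on images)."""
    n = len(signal)
    out = []
    for k in range(n):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n) for i, x in enumerate(signal))
        out.append(abs(s))
    return out

def dual_stream_features(pixels):
    """Concatenate spatial-domain values with their frequency-domain magnitudes."""
    return list(pixels) + dft_magnitudes(pixels)
```

The point of the frequency stream is that texture periodicities, which are diffuse in pixel space, become concentrated coefficients in the spectrum.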
Affiliation(s)
- Weijie Xu
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Lina Nie
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Beijing Chen
- Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Jiangsu Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET), Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Weiping Ding
- School of Information Science and Technology, Nantong University, Nantong, 226019, China
28
Wang L, Wang J, Zhu L, Fu H, Li P, Cheng G, Feng Z, Li S, Heng PA. Dual Multiscale Mean Teacher Network for Semi-Supervised Infection Segmentation in Chest CT Volume for COVID-19. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:6363-6375. [PMID: 37015538 DOI: 10.1109/tcyb.2022.3223528] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating coronavirus disease 2019 (COVID-19). However, several challenges remain in developing such an AI system: 1) most current COVID-19 infection segmentation methods rely mainly on 2-D CT images, which lack a 3-D sequential constraint; 2) existing 3-D CT segmentation methods focus on single-scale representations, which do not achieve multiple receptive field sizes on the 3-D volume; and 3) the emergent outbreak of COVID-19 makes it hard to annotate sufficient CT volumes for training a deep model. To address these issues, we first build a multiple dimensional-attention convolutional neural network (MDA-CNN) to aggregate multiscale information along different dimensions of input feature maps and impose supervision on multiple predictions from different CNN layers. Second, we use this MDA-CNN as the basic network in a novel dual multiscale mean teacher network (DM²T-Net) for semi-supervised COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data and exploring multiscale information. Our DM²T-Net encourages multiple predictions at different CNN layers from the student and teacher networks to be consistent, computing a multiscale consistency loss on unlabeled data, which is then added to the supervised loss on labeled data from the multiple predictions of the MDA-CNN. Third, we collect two COVID-19 segmentation datasets to evaluate our method. The experimental results show that our network consistently outperforms the compared state-of-the-art methods.
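The mean-teacher ingredients described above, an exponential-moving-average (EMA) teacher plus a consistency loss between student and teacher predictions on unlabeled data, reduce to a few lines in the scalar case. A hedged sketch only; the paper's multiscale, volumetric version is far richer:

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean teacher: teacher weights are an exponential moving average of
    student weights, updated after every student optimization step."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

def consistency_loss(student_preds, teacher_preds):
    """Mean squared error between student and teacher predictions on
    unlabeled data; summing this over several prediction scales gives a
    multiscale consistency loss."""
    return sum((s - t) ** 2 for s, t in zip(student_preds, teacher_preds)) / len(student_preds)
```

On labeled data the student also minimizes an ordinary supervised loss; the consistency term is what lets unlabeled volumes contribute gradients.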
29
Verma P, Gupta A, Kumar M, Gill SS. FCMCPS-COVID: AI propelled fog-cloud inspired scalable medical cyber-physical system, specific to coronavirus disease. INTERNET OF THINGS (AMSTERDAM, NETHERLANDS) 2023; 23:100828. [PMID: 37274449 PMCID: PMC10214767 DOI: 10.1016/j.iot.2023.100828] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Revised: 05/11/2023] [Accepted: 05/20/2023] [Indexed: 06/06/2023]
Abstract
Medical cyber-physical systems (MCPS) tightly integrate a network of medical objects. These systems are highly efficacious and have been progressively used in Healthcare 4.0 to achieve continuous high-quality services. Healthcare 4.0 encompasses numerous emerging technologies, and their applications have been realized in the monitoring of a variety of virus outbreaks. As a growing healthcare trend, coronavirus disease (COVID-19) can be treated and its spread prevented using MCPS. This virus spreads from human to human and can have devastating consequences. Moreover, with the alarmingly rising death rate and new cases across the world, there is an urgent need for continuous identification and screening of infected patients to mitigate the spread. Motivated by these facts, we propose a framework for early detection, prevention, and control of the COVID-19 outbreak using novel Industry 5.0 technologies. The proposed framework uses a dimensionality reduction technique in the fog layer, allowing high-quality data to be used for classification purposes. The fog layer also uses an ensemble learning-based data classification technique for the detection of COVID-19 patients based on the symptomatic dataset. In addition, in the cloud layer, social network analysis (SNA) has been performed to control the spread of COVID-19. The experimental results reveal that, compared with state-of-the-art methods, the proposed framework achieves better results in terms of accuracy (82.28%), specificity (91.42%), sensitivity (90%), and stability, with an effective response time. Furthermore, the utilization of CVI-based alert generation at the fog layer improves the novelty aspects of the proposed system.
Affiliation(s)
- Prabal Verma
- Department of Information Technology, National Institute of Technology, Srinagar, India
- Aditya Gupta
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, India
- Mohit Kumar
- Department of Information Technology, National Institute of Technology, Jalandhar, India
- Sukhpal Singh Gill
- School of Electronic Engineering and Computer Science, Queen Mary University of London, UK
30
Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422 PMCID: PMC10486542 DOI: 10.3390/healthcare11172388] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 08/16/2023] [Accepted: 08/22/2023] [Indexed: 09/10/2023] Open
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly in the context of necessitating cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
31
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023; 47:301-320. [PMID: 38625639 DOI: 10.1080/03091902.2024.2321846] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 02/18/2024] [Indexed: 04/17/2024]
Abstract
Researchers and scientists can use computational-based models to turn linked data into useful information, aiding in disease diagnosis, examination, and viral containment due to recent artificial intelligence and machine learning breakthroughs. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. In this regard, we examine a large number of critical studies conducted by various academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions in investigating different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can impact this context, including diagnosis (using chest X-ray imaging and CT imaging), severity, tracking, treatment, and the drug industry. Furthermore, we analyse the dilemma's limits, restrictions, and hazards.
Affiliation(s)
- Naser Zaeri
- Faculty of Computer Studies, Arab Open University, Kuwait
32
Khan MA, Akram T, Zhang Y, Alhaisoni M, Al Hejaili A, Shaban KA, Tariq U, Zayyan MH. SkinNet‐ENDO: Multiclass skin lesion recognition using deep neural network and Entropy‐Normal distribution optimization algorithm with ELM. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2023; 33:1275-1292. [DOI: 10.1002/ima.22863] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 01/31/2023] [Indexed: 08/25/2024]
Abstract
The early diagnosis of skin cancer through clinical methods reduces the human mortality rate. The manual screening of dermoscopic images is not an efficient procedure; therefore, researchers working in the domain of computer vision have employed several algorithms to classify skin lesions. The existing computerized methods have a few drawbacks, such as low accuracy and high computational time. Therefore, in this work, we propose a novel deep learning and Entropy-Normal Distribution Optimization Algorithm with extreme learning machine (NDOELM)-based architecture for multiclass skin lesion classification. The proposed architecture consists of five fundamental steps. In the first step, two contrast enhancement techniques, including a hybridization of mathematical formulation and a convolutional neural network, are implemented prior to data augmentation. In the second step, two pre-trained deep learning models, EfficientNetB0 and DarkNet19, are fine-tuned and retrained through transfer learning. In the third step, features are extracted from the fine-tuned models, and the most discriminant features are then selected based on the novel Entropy-NDOELM algorithm. The selected features are finally fused using a parallel correlation technique in the fourth step to generate the resultant feature vectors. Finally, the resultant features are again down-sampled using the proposed algorithm and passed to the extreme learning machine (ELM) for the final classification. The simulations are conducted on three publicly available datasets, HAM10000, ISIC2018, and ISIC2019, achieving accuracies of 95.7%, 96.3%, and 94.8%, respectively.
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Department of Informatics, University of Leicester, Leicester, UK
- Tallha Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester, UK
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abdullah Al Hejaili
- Faculty of Computers & Information Technology, Computer Science Department, University of Tabuk, Tabuk, Saudi Arabia
- Khalid Adel Shaban
- Computer Science Department, College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
- Usman Tariq
- Department of Management Information Systems, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Muhammad H. Zayyan
- Computer Science Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura, Egypt
33
Application of a novel deep learning technique using CT images for COVID-19 diagnosis on embedded systems. ALEXANDRIA ENGINEERING JOURNAL 2023; 74:345-358. [PMCID: PMC10183629 DOI: 10.1016/j.aej.2023.05.036] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2023] [Revised: 04/24/2023] [Accepted: 05/08/2023] [Indexed: 11/04/2023]
Abstract
Problem: A novel coronavirus (COVID-19) has created a worldwide pneumonia epidemic, and it is important to develop a computer-aided way for doctors to use computed tomography (CT) images to identify people with COVID-19 as soon as possible. Aim: A fully automated, novel deep-learning method for diagnosis and prognostic analysis of COVID-19 on embedded systems is presented. Methods: In this study, CT scans are utilized to classify individuals as COVID-19, pneumonia, or normal. Classification is performed with two pre-trained CNN models, ResNet50 and MobileNetv2, which are commonly used for image classification tasks. Additionally, a novel CNN architecture called CovidxNet-CT, designed specifically for COVID-19 diagnosis, is introduced for the three classes of CT scans. To evaluate the effectiveness of the proposed method, k-fold cross-validation is employed, a common approach to estimating the performance of deep learning models. The proposed method is also evaluated on two embedded system platforms, Jetson Nano and TX2, to demonstrate its feasibility for deployment in resource-constrained environments. Results: With an average accuracy of 98.83% and an AUC of 0.988, the system is trained and verified using a 4-fold cross-validation approach. Conclusion: The optimistic outcomes of the investigation suggest that CovidxNet-CT has the capacity to support radiologists and contribute to the efforts to combat COVID-19. This study proposes a fully automated, deep-learning-based method for COVID-19 diagnosis and prognostic analysis that is specifically designed for use on embedded systems.
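The k-fold cross-validation used for evaluation partitions the data so that every scan is held out exactly once. A minimal index-splitting sketch (not the authors' code):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k folds; each fold serves once as the
    held-out test set while the remaining folds form the training set."""
    folds = [list(range(i, n, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((sorted(train), sorted(test)))
    return splits
```

Averaging the per-fold accuracies over the k splits yields the "average accuracy" figure that such studies report.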
34
Adhikari M, Hazra A, Nandy S. Deep Transfer Learning for Communicable Disease Detection and Recommendation in Edge Networks. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:2468-2479. [PMID: 35671308 DOI: 10.1109/tcbb.2022.3180393] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Considering the increasing number of communicable disease cases such as COVID-19 worldwide, early detection of the disease can prevent and limit the outbreak. Besides that, PCR test kits are not available in most parts of the world, and there is genuine concern about their performance and reliability. To overcome this, in this paper we develop a novel edge-centric healthcare framework integrating wearable sensors and an advanced machine learning (ML) model for timely decisions with minimum delay. Through wearable sensors, a set of features is collected and further preprocessed to prepare a useful dataset. However, due to limited resource capacity, analyzing the features on resource-constrained edge devices is challenging. Motivated by this, we introduce an advanced ML technique for data analysis at edge networks, namely Deep Transfer Learning (DTL). DTL transfers the knowledge from a well-trained model to a new lightweight ML model that can accommodate the resource-constrained nature of distributed edge devices. We consider a benchmark COVID-19 dataset for validation purposes, consisting of 11 features and 2 million sensor readings. The extensive simulation results demonstrate the efficiency of the proposed DTL technique over existing ones, achieving 99.8% accuracy in disease prediction.
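One common way to transfer knowledge from a well-trained model to a lightweight edge model is to train the small model against the large model's softened output distribution rather than hard labels. The paper does not publish its exact DTL procedure, so the snippet below is only an illustrative stand-in with invented names:

```python
import math

def softened_targets(teacher_logits, temperature=2.0):
    """Soft labels from a well-trained (teacher) model: a temperature-scaled
    softmax over its logits. A lightweight student on the edge device can
    then be trained against these soft targets."""
    scaled = [z / temperature for z in teacher_logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Higher temperatures flatten the distribution, exposing the teacher's relative confidence between classes, which is the extra signal the small model learns from.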
|
35
|
Xie P, Zhao X, He X. Improve the performance of CT-based pneumonia classification via source data reweighting. Sci Rep 2023; 13:9401. [PMID: 37296239 PMCID: PMC10251339 DOI: 10.1038/s41598-023-35938-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 05/26/2023] [Indexed: 06/12/2023] Open
Abstract
Pneumonia is a life-threatening disease. Computed tomography (CT) imaging is broadly used for diagnosing pneumonia. To assist radiologists in accurately and efficiently detecting pneumonia from CT scans, many deep learning methods have been developed. These methods require large amounts of annotated CT scans, which are difficult to obtain due to privacy concerns and high annotation costs. To address this problem, we develop a three-level optimization based method that leverages CT data from a source domain to mitigate the lack of labeled CT scans in a target domain. Our method automatically identifies and downweights low-quality source CT examples that are noisy or have large domain discrepancy with the target data, by minimizing the validation loss of a target model trained on the reweighted source data. On a target dataset with 2218 CT scans and a source dataset with 349 CT images, our method achieves an F1 score of 91.8% in detecting pneumonia and an F1 score of 92.4% in detecting other types of pneumonia, significantly better than state-of-the-art baseline methods.
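The core reweighting idea — giving less weight to source examples whose loss signals noise or domain discrepancy — can be caricatured in a few lines of NumPy. This is a deliberately simplified sketch (softmax over negative per-example loss), not the authors' three-level optimization, and the loss values are invented:

```python
import numpy as np

def reweight_source(per_example_loss, temperature=1.0):
    """Downweight source examples with high loss (a proxy for noise or
    domain discrepancy). Softmax over the negative losses: low-loss
    examples get higher weight; weights sum to 1."""
    logits = -np.asarray(per_example_loss, dtype=float) / temperature
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()

losses = np.array([0.2, 0.3, 5.0])  # third example is noisy/off-domain
weights = reweight_source(losses)
```

In the paper this weighting is learned by minimizing validation loss rather than set by a fixed rule, but the effect is the same: the off-domain example contributes less to training.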
Affiliation(s)
- Pengtao Xie
- Department of Electrical and Computer Engineering, University of California San Diego, San Diego, USA.
- Xingchen Zhao
- Department of Electrical and Computer Engineering, Northeastern University, Boston, USA
- Xuehai He
- Department of Computer Science and Engineering, University of California Santa Cruz, Santa Cruz, USA
|
36
|
A novel ensemble CNN model for COVID-19 classification in computerized tomography scans. RESULTS IN CONTROL AND OPTIMIZATION 2023; 11:100215. [PMCID: PMC9936787 DOI: 10.1016/j.rico.2023.100215] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 12/26/2022] [Accepted: 02/10/2023] [Indexed: 11/30/2024]
Abstract
COVID-19 is a rapidly spreading infectious disease caused by severe acute respiratory syndrome coronavirus 2 that can lead to death in just a few days. Thus, early detection can provide more time for successful treatment or action, even though no efficient treatment is known so far. In this context, this work proposes and investigates four ensemble CNNs using transfer learning and compares them with state-of-the-art CNN architectures. To select which models to use, 11 state-of-the-art CNN architectures were tested: DenseNet121, DenseNet169, DenseNet201, VGG16, VGG19, Xception, ResNet50, ResNet50v2, InceptionV3, MobileNet, and MobileNetv2, on a public dataset comprising 2477 computed tomography images divided into two classes: patients diagnosed with COVID-19 and patients with a negative diagnosis. Three architectures were then selected: DenseNet169, VGG16, and Xception. Finally, the ensemble models were tested in all possible combinations. The results show that the ensemble models tend to give the best results. Moreover, the best ensemble CNN, called EnsenbleDVX, comprising all three CNNs, achieves an average accuracy of 97.7%, an average precision of 97.7%, an average recall of 97.8%, and an average F1 score of 97.7%.
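A common way to combine ensemble members like those described above is to average the class probabilities produced by each network and take the argmax. A minimal sketch with invented probabilities (not the trained CNNs' outputs):

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the probability outputs of several models and take argmax.
    prob_list: list of (n_samples, n_classes) probability arrays."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1), avg

# Toy outputs from three models for two CT scans (class 0 = COVID-19, 1 = negative)
p1 = np.array([[0.9, 0.1], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7]])
p3 = np.array([[0.7, 0.3], [0.6, 0.4]])
labels, avg = ensemble_predict([p1, p2, p3])
```

Averaging smooths out individual models' mistakes, which is why ensembles like the one in this study often beat their best single member.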
|
37
|
Sun H, Xie W, Huang Y, Mo J, Dong H, Chen X, Zhang Z, Shang J. Paper microfluidics with deep learning for portable intelligent nucleic acid amplification tests. Talanta 2023; 258:124470. [PMID: 36958098 PMCID: PMC10027307 DOI: 10.1016/j.talanta.2023.124470] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2023] [Revised: 03/01/2023] [Accepted: 03/17/2023] [Indexed: 03/22/2023]
Abstract
During global outbreaks such as COVID-19, regular nucleic acid amplification tests (NAATs) have placed an unprecedented burden on hospital resources. Data from traditional NAATs are analyzed manually after the assay. Integrating artificial intelligence (AI) with on-chip assays gives rise to novel analytical platforms via data-driven models. Here, we combined paper microfluidics and a portable optoelectronic system with deep learning for SARS-CoV-2 detection. The system is streamlined, with low power dissipation. Pixel-by-pixel signals reflecting amplification of synthesized SARS-CoV-2 templates (containing the ORF1ab, N, and E genes) can be processed in real time, and the data are synchronously fed to neural networks for early prediction analysis. Instead of quantification cycle (Cq) based analytics, the reaction dynamics hidden in the early stage of the amplification curve are used by the neural networks to predict the subsequent data. Qualitative and quantitative analysis of 40-cycle NAATs can be achieved by the end of the 22nd cycle, reducing time cost by 45%. In particular, the attention-based deep learning model trained on microfluidics-generated data can be seamlessly adapted to multiple clinical datasets, including readouts of SARS-CoV-2 detection. Accuracy, sensitivity, and specificity of the prediction reach up to 98.1%, 97.6%, and 98.6%, respectively. The approach is compatible with the most advanced sensing technologies and AI algorithms and can inspire ample innovations in fundamental research and clinical settings.
Affiliation(s)
- Hao Sun
- School of Mechanical Engineering and Automation, Fuzhou University, 350108, China; Fujian Provincial Collaborative Innovation Centre of High-End Equipment Manufacturing, 350108, China.
- Wantao Xie
- School of Mechanical Engineering and Automation, Fuzhou University, 350108, China; Fujian Provincial Collaborative Innovation Centre of High-End Equipment Manufacturing, 350108, China
- Yi Huang
- Centre for Experimental Research in Clinical Medicine, Fujian Provincial Hospital, 350001, China
- Jin Mo
- School of Mechanical Engineering and Automation, Fuzhou University, 350108, China; Fujian Provincial Collaborative Innovation Centre of High-End Equipment Manufacturing, 350108, China
- Hui Dong
- School of Mechanical Engineering and Automation, Fuzhou University, 350108, China; Fujian Provincial Collaborative Innovation Centre of High-End Equipment Manufacturing, 350108, China.
- Xinkai Chen
- Star-Net Ruijie Science & Technology Co., Ltd., 350108, China
- Zhixing Zhang
- Sino-German College of Intelligent Manufacturing, Shenzhen Technology University, 518118, China.
- Junyi Shang
- School of Automation, Beijing Institute of Technology, 100081, China.
|
38
|
Subramanian M, Sathishkumar VE, Cho J, Shanmugavadivel K. Learning without forgetting by leveraging transfer learning for detecting COVID-19 infection from CT images. Sci Rep 2023; 13:8516. [PMID: 37231044 DOI: 10.1038/s41598-023-34908-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2023] [Accepted: 05/09/2023] [Indexed: 05/27/2023] Open
Abstract
COVID-19, a global pandemic, has killed thousands in the last three years. Pathogenic laboratory testing is the gold standard but has a high false-negative rate, making alternative diagnostic procedures necessary. Computed tomography (CT) scans help diagnose and monitor COVID-19, especially in severe cases, but visual inspection of CT images takes time and effort. In this study, we employ convolutional neural networks (CNNs) to detect coronavirus infection from CT images. The proposed study applies transfer learning to three pre-trained deep CNN models, namely VGG-16, ResNet, and wide ResNet, to diagnose and detect COVID-19 infection from CT images. However, when pre-trained models are retrained, they lose the generalization capability to categorize the data in the original datasets. The novel aspect of this work is the integration of deep CNN architectures with Learning without Forgetting (LwF) to enhance the model's generalization capabilities on both trained and new data samples. LwF lets the network use its learning capabilities in training on the new dataset while preserving its original competencies. The deep CNN models with LwF are evaluated on original images and on CT scans of individuals infected with the Delta variant of SARS-CoV-2. The experimental results show that, of the three fine-tuned CNN models with the LwF method, the wide ResNet model performs best, classifying the original and Delta-variant datasets with accuracies of 93.08% and 92.32%, respectively.
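Learning without Forgetting preserves old competencies by adding a distillation term that keeps the retrained network's outputs close to the original network's outputs. A minimal NumPy rendering of the combined loss (illustrative only — the logits, temperature, and weighting are invented, and this is not the paper's implementation):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def lwf_loss(new_logits, labels, old_logits, lam=1.0, T=2.0):
    """Cross-entropy on the new task plus a distillation penalty that
    matches the temperature-softened outputs of the original model."""
    p_new = softmax(new_logits)
    ce = -np.mean(np.log(p_new[np.arange(len(labels)), labels] + 1e-12))
    # Distillation: KL divergence between old and new softened outputs
    q_old = softmax(old_logits, T)
    q_new = softmax(new_logits, T)
    kd = np.mean(np.sum(q_old * (np.log(q_old + 1e-12)
                                 - np.log(q_new + 1e-12)), axis=-1))
    return ce + lam * kd

loss = lwf_loss(new_logits=np.array([[2.0, 0.5], [0.1, 1.5]]),
                labels=np.array([0, 1]),
                old_logits=np.array([[1.8, 0.6], [0.2, 1.4]]))
```

When the retrained network's outputs match the old network's (the KL term is zero), only the new-task cross-entropy remains, which is how old competencies are retained while the new dataset is learned.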
Affiliation(s)
- Malliga Subramanian
- Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode, Tamil Nadu, India
- Jaehyuk Cho
- Department of Software Engineering, Jeonbuk National University, Jeongu-si, Republic of Korea.
- Kogilavani Shanmugavadivel
- Department of Computer Science and Engineering, Kongu Engineering College, Perundurai, Erode, Tamil Nadu, India
|
39
|
Abbasi Habashi S, Koyuncu M, Alizadehsani R. A Survey of COVID-19 Diagnosis Using Routine Blood Tests with the Aid of Artificial Intelligence Techniques. Diagnostics (Basel) 2023; 13:1749. [PMID: 37238232 PMCID: PMC10217633 DOI: 10.3390/diagnostics13101749] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 04/19/2023] [Accepted: 04/29/2023] [Indexed: 05/28/2023] Open
Abstract
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), which causes COVID-19, produces an acute respiratory syndrome that has considerably affected the global economy and healthcare system. The virus is conventionally diagnosed with the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test. However, RT-PCR frequently yields false-negative and incorrect results. Current work indicates that COVID-19 can also be diagnosed using CT scans, X-rays, and blood tests. Nevertheless, X-rays and CT scans cannot always be used for patient screening because of high costs, radiation doses, and an insufficient number of devices. There is therefore a need for a less expensive and faster diagnostic model to recognize positive and negative cases of COVID-19. Blood tests are easy to perform and cost less than RT-PCR and imaging tests. Since biochemical parameters in routine blood tests vary during COVID-19 infection, they may give physicians precise information for diagnosing COVID-19. This study reviewed newly emerging artificial intelligence (AI)-based methods for diagnosing COVID-19 from routine blood tests. We gathered information about research resources and inspected 92 articles carefully chosen from a variety of publishers, such as IEEE, Springer, Elsevier, and MDPI. These 92 studies are classified into two tables covering articles that use machine learning and deep learning models, respectively, to diagnose COVID-19 from routine blood test datasets. In these studies, Random Forest and logistic regression are the most widely used machine learning methods, and the most widely used performance metrics are accuracy, sensitivity, specificity, and AUC. Finally, we discuss and analyze these studies. This survey can serve as a starting point for a novice researcher working on COVID-19 classification.
Affiliation(s)
- Murat Koyuncu
- Department of Information Systems Engineering, Atilim University, 06830 Ankara, Turkey
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Waurn Ponds, Geelong, VIC 3216, Australia
|
40
|
Alwazzeh MJ, Subbarayalu AV, Abu Ali BM, Alabdulqader R, Alhajri M, Alwarthan SM, AlShehail BM, Raman V, Almuhanna FA. Performance of CURB-65 and ISARIC 4C mortality scores for hospitalized patients with confirmed COVID-19 infection in Saudi Arabia. INFORMATICS IN MEDICINE UNLOCKED 2023; 39:101269. [PMID: 37193544 PMCID: PMC10167802 DOI: 10.1016/j.imu.2023.101269] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Revised: 05/06/2023] [Accepted: 05/07/2023] [Indexed: 05/18/2023] Open
Abstract
Background: The COVID-19 pandemic continues in waves that may persist with the arrival of new SARS-CoV-2 variants, so the availability of validated and effective triage tools is the cornerstone of proper clinical management. This study aimed to assess the validity of the ISARIC-4C score as a triage tool for hospitalized COVID-19 patients in Saudi Arabia and to compare its performance with the CURB-65 score. Material and methods: This retrospective observational cohort study was conducted between March 2020 and May 2021 at KFHU, Saudi Arabia, using data from 542 confirmed COVID-19 patients on the variables relevant to the ISARIC-4C mortality score and the CURB-65 score. Chi-square and t-tests were employed to assess the significance of the CURB-65 and ISARIC-4C variables with respect to ICU requirements and mortality of hospitalized COVID-19 patients, and logistic regression was used to identify variables related to COVID-19 mortality. The diagnostic accuracy of both scores was validated by calculating sensitivities, specificities, positive and negative predictive values, and Youden's J indices (YJI). Results: ROC analysis showed an AUC of 0.834 (95% CI 0.800-0.865) for the CURB-65 score and 0.809 (95% CI 0.773-0.841) for the ISARIC-4C score. Sensitivity was 75% for CURB-65 and 85.71% for ISARIC-4C, while specificity was 82.31% and 62.66%, respectively. The difference between AUCs was 0.025 (95% CI -0.0203 to 0.0704, p = 0.2795). Conclusion: The results support external validation of the ISARIC-4C score for predicting mortality risk in hospitalized COVID-19 patients in Saudi Arabia. In addition, the CURB-65 and ISARIC-4C scores showed comparable performance with consistently good discrimination and are suitable as triage tools for hospitalized COVID-19 patients.
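For reference, the CURB-65 score assigns one point for each of: confusion, blood urea > 7 mmol/L, respiratory rate ≥ 30/min, low blood pressure (systolic < 90 mmHg or diastolic ≤ 60 mmHg), and age ≥ 65 years. A direct transcription of that rule (the example patient values are invented):

```python
def curb65(confusion, urea_mmol_l, resp_rate, sys_bp, dia_bp, age):
    """CURB-65 severity score (0-5): one point per criterion met."""
    return (int(bool(confusion))          # C: new-onset confusion
            + int(urea_mmol_l > 7.0)      # U: urea > 7 mmol/L
            + int(resp_rate >= 30)        # R: respiratory rate >= 30/min
            + int(sys_bp < 90 or dia_bp <= 60)  # B: low blood pressure
            + int(age >= 65))             # 65: age >= 65 years

# Hypothetical patient: elevated urea, tachypnea, age 70
score = curb65(confusion=False, urea_mmol_l=8.2, resp_rate=32,
               sys_bp=100, dia_bp=70, age=70)   # -> 3
```

Higher scores indicate higher 30-day mortality risk; the study above evaluates how well this score (and ISARIC-4C) discriminates mortality in its COVID-19 cohort.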
Affiliation(s)
- Marwan Jabr Alwazzeh
- Infectious Disease Division, Department of Internal Medicine, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- King Fahad Hospital of the University, Al-Khobar, Saudi Arabia
- Arun Vijay Subbarayalu
- Quality Studies and Research Unit, Vice Deanship for Quality, Deanship of Quality and Academic Accreditation, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Mashael Alhajri
- Infectious Disease Division, Department of Internal Medicine, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- King Fahad Hospital of the University, Al-Khobar, Saudi Arabia
- Sara M Alwarthan
- Infectious Disease Division, Department of Internal Medicine, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- King Fahad Hospital of the University, Al-Khobar, Saudi Arabia
- Bashayer M AlShehail
- Pharmacy Practice Department, College of Clinical Pharmacy, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Vinoth Raman
- Statistics Unit, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Fahd Abdulaziz Almuhanna
- Nephrology Division, Department of Internal Medicine, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- King Fahad Hospital of the University, Al-Khobar, Saudi Arabia
|
41
|
Hu J, Mougiakakou S, Xue S, Afshar-Oromieh A, Hautz W, Christe A, Sznitman R, Rominger A, Ebner L, Shi K. Artificial intelligence for reducing the radiation burden of medical imaging for the diagnosis of coronavirus disease. EUROPEAN PHYSICAL JOURNAL PLUS 2023; 138:391. [PMID: 37192839 PMCID: PMC10165296 DOI: 10.1140/epjp/s13360-023-03745-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 01/25/2023] [Indexed: 05/18/2023]
Abstract
Medical imaging has been intensively employed in screening, diagnosis, and monitoring during the COVID-19 pandemic. With improvements in RT-PCR and rapid-inspection technologies, the diagnostic references have shifted, and current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, the efficient and complementary value of medical imaging was recognized at the beginning of the pandemic, when facing an unknown infectious disease and a lack of sufficient diagnostic tools. Optimizing medical imaging for pandemics may still have encouraging implications for future public health, especially for theranostics of long-lasting post-COVID-19 syndrome. A critical concern for the application of medical imaging is the increased radiation burden, particularly when imaging is used for screening and rapid containment. Emerging artificial intelligence (AI) technology provides an opportunity to reduce the radiation burden while maintaining diagnostic quality. This review summarizes current AI research on dose reduction for medical imaging; the retrospective identification of its potential in COVID-19 may still have positive implications for future public health.
Affiliation(s)
- Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Stavroula Mougiakakou
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Song Xue
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Wolf Hautz
- Department of University Emergency Center of Inselspital, University of Bern, Freiburgstrasse 15, 3010 Bern, Switzerland
- Andreas Christe
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Lukas Ebner
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
|
42
|
Liu Y, Chen B, Zhang Z, Yu H, Ru S, Chen X, Lu G. Self-paced Multi-view Learning for CT-based severity assessment of COVID-19. Biomed Signal Process Control 2023; 83:104672. [PMID: 36777556 PMCID: PMC9905104 DOI: 10.1016/j.bspc.2023.104672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 01/30/2023] [Accepted: 02/04/2023] [Indexed: 02/11/2023]
Abstract
Prior studies on severity assessment of COVID-19 (SA-COVID) usually suffer from domain-specific cognitive deficits: they focus on visual cues derived from a single cognitive function but fail to reconcile valuable information from alternative views. Inspired by the cognitive process of radiologists, this paper shifts naturally from single-symptom measurements to a multi-view analysis and proposes a novel Self-paced Multi-view Learning (SPML) framework for automated SA-COVID. Specifically, the proposed SPML framework first comprehensively aggregates multi-view contexts of lung infection with different measurement paradigms, i.e., a Global Feature Branch, a Texture Feature Branch, and a Volume Feature Branch. In this way, multiple-perspective clues are taken into account to reflect the most essential pathological manifestations on CT images. To alleviate small-sample learning problems, we also introduce a self-paced learning strategy that cognitively increases the characterization capabilities of training samples by learning from simple to complex. In contrast to traditional batch-wise learning, a purely self-paced approach further guarantees the efficiency and accuracy of SPML when dealing with small and biased samples. Furthermore, we construct a well-established SA-COVID dataset containing 300 CT images with fine annotations. Extensive experiments on this dataset demonstrate that SPML consistently outperforms state-of-the-art baselines. The SA-COVID dataset is publicly released at https://github.com/YishuLiu/SA-COVID.
Affiliation(s)
- Yishu Liu
- Harbin Institute of Technology, Shenzhen, 518055, China
- Bingzhi Chen
- South China Normal University, Guangzhou, 510631, China
- Zheng Zhang
- Harbin Institute of Technology, Shenzhen, 518055, China
- Hongbing Yu
- Nanshan District Chronic Disease Prevention and Control Hospital, Shenzhen, 518055, China
- Shouhang Ru
- Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Xiaosheng Chen
- Shenzhen Second People's Hospital, Shenzhen, 518000, China
- Guangming Lu
- Harbin Institute of Technology, Shenzhen, 518055, China
|
43
|
Rehman A, Xing H, Adnan Khan M, Hussain M, Hussain A, Gulzar N. Emerging technologies for COVID (ET-CoV) detection and diagnosis: Recent advancements, applications, challenges, and future perspectives. Biomed Signal Process Control 2023; 83:104642. [PMID: 36818992 PMCID: PMC9917176 DOI: 10.1016/j.bspc.2023.104642] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Revised: 11/29/2022] [Accepted: 01/25/2023] [Indexed: 02/12/2023]
Abstract
In light of the constantly changing terrain of the COVID-19 outbreak, medical specialists have implemented proactive schemes for vaccine production. Despite remarkable COVID-19 vaccine development, the virus has mutated into new variants, including Delta and Omicron. The situation remains critical in many parts of the world, and precautions are being taken to stop the virus from spreading and mutating. Early identification and diagnosis of COVID-19 are the main challenges faced by emerging technologies during the outbreak, yet in these circumstances emerging technologies have proven promising for detecting, classifying, monitoring, and locating COVID-19: artificial intelligence (AI), big data, the internet of medical things (IoMT), robotics, blockchain technology, telemedicine, smart applications, and additive manufacturing. This research therefore surveys these COVID-19-defeating technologies, focusing on their strengths and limitations. A CiteSpace-based bibliometric analysis of the emerging technologies was established, and the most impactful keywords and ongoing research frontiers were compiled. Emerging technologies were found to be unstable due to data inconsistency, redundant and noisy datasets, and the inability to aggregate data held in disparate formats. Moreover, the privacy and confidentiality of patient medical records are not guaranteed. Hence, significant data analysis is required to develop an intelligent computational model for effective and quick clinical diagnosis of COVID-19. Remarkably, this article outlines how emerging technology has been used to counteract the virus disaster, offers ongoing research frontiers that direct readers to the real challenges, and thereby facilitates additional exploration to amplify emerging technologies.
Affiliation(s)
- Amir Rehman
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
- Huanlai Xing
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
- Muhammad Adnan Khan
- Pattern Recognition and Machine Learning, Department of Software, Gachon University, Seongnam 13557, Republic of Korea
- Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University, Lahore Campus, Lahore 54000, Pakistan
- Mehboob Hussain
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
- Abid Hussain
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 611756, China
- Nighat Gulzar
- School of Geosciences and Environmental Engineering, Southwest Jiaotong University, Chengdu, 611756, China
|
44
|
Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. YOLO-Based Deep Learning Model for Pressure Ulcer Detection and Classification. Healthcare (Basel) 2023; 11:healthcare11091222. [PMID: 37174764 PMCID: PMC10178524 DOI: 10.3390/healthcare11091222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2023] [Revised: 04/15/2023] [Accepted: 04/22/2023] [Indexed: 05/15/2023] Open
Abstract
Pressure ulcers are significant healthcare concerns affecting millions of people worldwide, particularly those with limited mobility. Early detection and classification of pressure ulcers are crucial in preventing their progression and reducing associated morbidity and mortality. In this work, we present a novel approach that uses YOLOv5, an advanced and robust object detection model, to detect and classify pressure ulcers into four stages and non-pressure ulcers. We also utilize data augmentation techniques to expand our dataset and strengthen the resilience of our model. Our approach shows promising results, achieving an overall mean average precision of 76.9% and class-specific mAP50 values ranging from 66% to 99.5%. Compared to previous studies that primarily utilize CNN-based algorithms, our approach provides a more efficient and accurate solution for the detection and classification of pressure ulcers. The successful implementation of our approach has the potential to improve the early detection and treatment of pressure ulcers, resulting in better patient outcomes and reduced healthcare costs.
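Detection metrics such as the mAP50 values reported above rest on intersection-over-union (IoU) between predicted and ground-truth boxes: at mAP50, a prediction counts as a true positive when IoU ≥ 0.5. A minimal IoU helper, using the common (x1, y1, x2, y2) corner convention (this is illustrative, not code from the study):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes are disjoint)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two boxes overlapping by half their width: IoU = 50 / 150 = 1/3 < 0.5,
# so this prediction would not count as a match at the mAP50 threshold
match = iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.5
```

Averaging precision over recall levels and then over classes at this threshold yields the per-class mAP50 figures quoted in the abstract.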
Affiliation(s)
- Bader Aldughayfiq
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- N Z Jhanjhi
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
|
45
|
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679 PMCID: PMC10135995 DOI: 10.3390/bioengineering10040492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 04/12/2023] [Accepted: 04/18/2023] [Indexed: 04/29/2023] Open
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Affiliation(s)
- Efrat Shimron
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
|
46
|
Kebaili A, Lapuyade-Lahorgue J, Ruan S. Deep Learning Approaches for Data Augmentation in Medical Imaging: A Review. J Imaging 2023; 9:81. [PMID: 37103232 PMCID: PMC10144738 DOI: 10.3390/jimaging9040081] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 03/31/2023] [Accepted: 04/07/2023] [Indexed: 04/28/2023] Open
Abstract
Deep learning has become a popular tool for medical image analysis, but the limited availability of training data remains a major challenge, particularly in the medical field where data acquisition can be costly and subject to privacy regulations. Data augmentation techniques offer a solution by artificially increasing the number of training samples, but these techniques often produce limited and unconvincing results. To address this issue, a growing number of studies have proposed the use of deep generative models to generate more realistic and diverse data that conform to the true distribution of the data. In this review, we focus on three types of deep generative models for medical image augmentation: variational autoencoders, generative adversarial networks, and diffusion models. We provide an overview of the current state of the art in each of these models and discuss their potential for use in different downstream tasks in medical imaging, including classification, segmentation, and cross-modal translation. We also evaluate the strengths and limitations of each model and suggest directions for future research in this field. Our goal is to provide a comprehensive review about the use of deep generative models for medical image augmentation and to highlight the potential of these models for improving the performance of deep learning algorithms in medical image analysis.
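The classical augmentation techniques that these generative models aim to surpass — flips, rotations, light noise — take only a few lines of NumPy. A toy sketch on a random array standing in for a medical image (illustrative; real pipelines would apply label-preserving transforms chosen per modality):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Yield simple label-preserving variants of a 2-D image array."""
    yield np.fliplr(image)                            # horizontal flip
    yield np.rot90(image)                             # 90-degree rotation
    yield image + rng.normal(0.0, 0.01, image.shape)  # light Gaussian noise

image = rng.random((64, 64))  # stand-in for a grayscale scan
variants = list(augment(image))
```

Such transforms only re-render existing samples; the generative models surveyed here (VAEs, GANs, diffusion models) instead aim to draw genuinely new samples from the data distribution.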
Affiliation(s)
- Su Ruan
- Université Rouen Normandie, INSA Rouen Normandie, Université Le Havre Normandie, Normandie Univ, LITIS UR 4108, F-76000 Rouen, France
|
47
|
Khattab R, Abdelmaksoud IR, Abdelrazek S. Deep Convolutional Neural Networks for Detecting COVID-19 Using Medical Images: A Survey. NEW GENERATION COMPUTING 2023; 41:343-400. [PMID: 37229176 PMCID: PMC10071474 DOI: 10.1007/s00354-023-00213-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Accepted: 02/23/2023] [Indexed: 05/27/2023]
Abstract
Coronavirus Disease 2019 (COVID-19), which is caused by Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), surprised the world in December 2019 and has threatened the lives of millions of people. Countries all over the world closed places of worship and shops, prevented gatherings, and implemented curfews to stand against the spread of COVID-19. Deep learning (DL) and artificial intelligence (AI) can play a great role in detecting and fighting this disease. Deep learning can be used to detect COVID-19 symptoms and signs from different imaging modalities, such as X-ray, computed tomography (CT), and ultrasound (US) images, which can help identify COVID-19 cases as a first step toward curing them. In this paper, we reviewed the research studies conducted from January 2020 to September 2022 on deep learning models used in COVID-19 detection. This paper clarifies the three most common imaging modalities (X-ray, CT, and US) in addition to the DL approaches used in this detection, compares these approaches, and provides future directions for this field in fighting COVID-19.
Affiliation(s)
- Rana Khattab
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Islam R. Abdelmaksoud
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Samir Abdelrazek
- Information Systems Department, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
48
Ren K, Hong G, Chen X, Wang Z. A COVID-19 medical image classification algorithm based on Transformer. Sci Rep 2023; 13:5359. [PMID: 37005476 PMCID: PMC10067012 DOI: 10.1038/s41598-023-32462-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 03/28/2023] [Indexed: 04/04/2023] Open
Abstract
Coronavirus 2019 (COVID-19) is a new acute respiratory disease that has spread rapidly throughout the world. This paper proposes a novel deep learning network named RMT-Net, based on ResNet-50 merged with a Transformer. On the ResNet-50 backbone, it uses the Transformer to capture long-distance feature information and adopts convolutional neural networks and depth-wise convolution to obtain local features, reducing the computational cost and accelerating the detection process. RMT-Net comprises four stage blocks to realize feature extraction at different receptive fields. In the first three stages, the global self-attention method is adopted to capture important feature information and construct the relationships between tokens. In the fourth stage, residual blocks are used to extract detailed features. Finally, a global average pooling layer and a fully connected layer perform the classification task. Training, validation, and testing are carried out on self-built datasets. The RMT-Net model is compared with ResNet-50, VGGNet-16, i-CapsNet, and MGMADS-3. The experimental results show that RMT-Net achieves a test accuracy of 97.65% on the X-ray image dataset and 99.12% on the CT image dataset, both higher than those of the other four models. The RMT-Net model is only 38.5 M in size, and its detection speed is 5.46 ms per X-ray image and 4.12 ms per CT image. These results demonstrate that the model can detect and classify COVID-19 with high accuracy and efficiency.
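The "global self-attention" that RMT-Net's stages rely on reduces to scaled dot-product attention over a token sequence. The following is a minimal NumPy sketch of that operation only, not the paper's implementation; for clarity the query/key/value projections are left as identity, whereas a real Transformer block learns separate weight matrices for each, plus multiple heads.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # Scaled dot-product self-attention: every token attends to all tokens,
    # which is how a Transformer stage captures long-distance dependencies.
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)  # (n, n) pairwise affinities
    weights = softmax(scores)                # each row sums to 1
    return weights @ tokens                  # weighted mix over all tokens

tokens = np.arange(12, dtype=float).reshape(4, 3)  # 4 tokens, 3 features
out = self_attention(tokens)
print(out.shape)  # (4, 3)
```

In a hybrid design like the one described, the tokens would be flattened CNN feature-map positions, so each spatial location can aggregate information from the entire image rather than a local receptive field.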
Affiliation(s)
- Keying Ren
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, 300222, China
- Geng Hong
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, 300222, China
- Xiaoyan Chen
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, 300222, China
- Zichen Wang
- College of Electronic Information and Automation, Tianjin University of Science and Technology, Tianjin, 300222, China
49
Challenges, opportunities, and advances related to COVID-19 classification based on deep learning. DATA SCIENCE AND MANAGEMENT 2023. [PMCID: PMC10063459 DOI: 10.1016/j.dsm.2023.03.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/03/2023]
Abstract
The novel coronavirus disease, or COVID-19, is a hazardous disease that endangers the lives of people in more than two hundred countries and directly affects the lungs. In general, two main imaging modalities, computed tomography (CT) and chest X-ray (CXR), are used to achieve a speedy and reliable medical diagnosis. Identifying the coronavirus in medical images is exceedingly difficult for diagnosis, assessment, and treatment: it is demanding, time-consuming, and subject to human error. In biological disciplines, excellent performance can be achieved by employing artificial intelligence (AI) models. As a subfield of AI, deep learning (DL) networks have drawn considerably more attention than standard machine learning (ML) methods, since DL models automatically carry out all the steps of feature extraction, feature selection, and classification. This study performs a comprehensive analysis of coronavirus classification from CXR and CT imaging modalities using DL architectures and additionally discusses how transfer learning is helpful in this regard. Finally, the problem of designing and implementing a computer-aided diagnosis (CAD) system for finding COVID-19 using DL approaches is highlighted as a future research possibility.
50
Gürsoy E, Kaya Y. An overview of deep learning techniques for COVID-19 detection: methods, challenges, and future works. MULTIMEDIA SYSTEMS 2023; 29:1603-1627. [PMID: 37261262 PMCID: PMC10039775 DOI: 10.1007/s00530-023-01083-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Accepted: 03/20/2023] [Indexed: 06/02/2023]
Abstract
The World Health Organization (WHO) declared a pandemic in response to the coronavirus COVID-19 in 2020, which resulted in numerous deaths worldwide. Although the disease appears to have lost its impact, millions of people have been affected by this virus, and new infections still occur. Identifying COVID-19 requires a reverse transcription-polymerase chain reaction (RT-PCR) test or analysis of medical data. Due to the high cost and time required to scan and analyze medical data, researchers are focusing on automated computer-aided methods. This review examines the applications of deep learning (DL) and machine learning (ML) in detecting COVID-19 from medical data such as CT scans, X-rays, cough sounds, MRIs, ultrasound, and clinical markers. First, the data preprocessing, the features used, and the current COVID-19 detection methods are divided into two subsections, and the studies are discussed. Second, the reported publicly available datasets, their characteristics, and the potential comparison materials mentioned in the literature are presented. Third, a comprehensive comparison is made by contrasting the similar and different aspects of the studies. Finally, the results, gaps, and limitations are summarized to stimulate the improvement of COVID-19 detection methods, and the study concludes by listing some future research directions for COVID-19 classification.
Collapse
Affiliation(s)
- Ercan Gürsoy
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey
- Yasin Kaya
- Department of Computer Engineering, Adana Alparslan Turkes Science and Technology University, 01250 Adana, Turkey