1
Mozaffari J, Amirkhani A, Shokouhi SB. A survey on deep learning models for detection of COVID-19. Neural Comput Appl 2023; 35:1-29. [PMID: 37362568] [PMCID: PMC10224665] [DOI: 10.1007/s00521-023-08683-x]
Abstract
The spread of COVID-19 began in 2019, and so far more than 4 million people around the world have lost their lives to this deadly virus and its variants. In view of the high transmissibility of the coronavirus, which has turned this disease into a global pandemic, artificial intelligence can be employed as an effective tool for earlier detection and treatment of this illness. In this review paper, we evaluate the performance of deep learning models in processing the X-ray and CT-scan images of the lungs of COVID-19 patients and describe the changes made to these models to enhance their detection accuracy. To this end, we introduce well-known deep learning models such as VGGNet, GoogLeNet, and ResNet, and, after reviewing the research works in which these models have been used for the detection of COVID-19, we compare the performances of newer models such as DenseNet, CapsNet, MobileNet, and EfficientNet. We then present the deep learning techniques of GANs, transfer learning, and data augmentation, and examine the statistics on the use of these techniques. We also describe the datasets introduced since the onset of COVID-19, which contain lung images of COVID-19 patients, healthy individuals, and patients with non-COVID pulmonary diseases. Lastly, we elaborate on the existing challenges in the use of artificial intelligence for COVID-19 detection and the prospective trends of using this method in similar situations and conditions. Supplementary Information: The online version contains supplementary material available at 10.1007/s00521-023-08683-x.
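The data augmentation techniques surveyed here enlarge small COVID-19 image datasets with label-preserving transforms. A minimal NumPy sketch of the idea (a toy stand-in, not the survey's pipeline; real pipelines use libraries such as torchvision or Albumentations and richer transforms):

```python
import numpy as np

def augment(image, seed=0):
    """Return simple augmented variants of a 2-D grayscale scan.

    Toy stand-in for the augmentation pipelines discussed in the
    survey (flips, a 90-degree rotation, and mild intensity noise).
    """
    rng = np.random.default_rng(seed)
    variants = [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree counterclockwise rotation
    ]
    # mild Gaussian intensity noise as a fourth variant
    variants.append(image + rng.normal(0.0, 0.01, image.shape))
    return variants

scan = np.arange(16.0).reshape(4, 4)   # stand-in for an X-ray patch
aug = augment(scan)
```

Each variant keeps the original label, so a dataset of N scans yields 5N training samples at no labeling cost.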
Affiliation(s)
- Javad Mozaffari
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114 Iran
- Abdollah Amirkhani
- School of Automotive Engineering, Iran University of Science and Technology, Tehran, 16846-13114 Iran
- Shahriar B. Shokouhi
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, 16846-13114 Iran
2
Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:3446. [PMID: 37240552] [DOI: 10.3390/jcm12103446]
Abstract
SARS-CoV-2 is a novel virus that has affected the global population by spreading rapidly and causing severe complications that require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could be an important and useful aid, and radiologists and clinicians could potentially rely on interpretable AI technologies for the diagnosis and monitoring of COVID-19 patients. This paper provides a comprehensive analysis of state-of-the-art deep learning techniques for COVID-19 classification. Previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers present a variety of CNN models and architectures developed to provide accurate and quick automatic tools for diagnosing COVID-19 from CT scans or X-ray images. In this systematic review, we focus on the critical components of the deep learning approach, such as network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies published since the virus began to spread, and we summarize their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics, so that current AI studies can be safely implemented in medical practice.
Affiliation(s)
- Min-Ho Lee
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman
- School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan
3
Liu J, Feng Q, Miao Y, He W, Shi W, Jiang Z. COVID-19 disease identification network based on weakly supervised feature selection. Math Biosci Eng 2023; 20:9327-9348. [PMID: 37161245] [DOI: 10.3934/mbe.2023409]
Abstract
The coronavirus disease 2019 (COVID-19) outbreak has resulted in countless infections and deaths worldwide, posing increasing challenges for the health care system. The use of artificial intelligence to assist in diagnosis not only achieves high accuracy but also saves time and effort during a sudden outbreak, when doctors and medical equipment are in short supply. This study proposes a weakly supervised COVID-19 classification network (W-COVNet), divided into three main modules: a weakly supervised feature selection module (W-FS), a deep learning bilinear feature fusion module (DBFF), and a Grad-CAM++-based network visualization module (Grad-V). The first module, W-FS, removes redundant background features from computed tomography (CT) images, performs feature selection, and retains core feature regions. The second module, DBFF, uses two symmetric networks to extract different features and thus obtain rich complementary features. The third module, Grad-V, allows the visualization of lesions in unlabeled images. A fivefold cross-validation experiment showed an average classification accuracy of 85.3%, and a comparison with seven advanced classification models showed that the proposed network performs better.
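The fivefold cross-validation protocol used above partitions the data so that every sample is validated exactly once. A minimal index-splitting sketch (without the shuffling or stratification a real experimental setup would add):

```python
import numpy as np

def kfold_indices(n_samples, k=5):
    """Split sample indices into k near-equal folds; each fold serves
    once as the validation set while the rest form the training set."""
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, val_idx

splits = list(kfold_indices(100, k=5))  # 5 (train, val) index pairs
```

The reported accuracy is then the mean of the per-fold accuracies, which gives a more stable estimate than a single train/test split on a small medical dataset.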
Affiliation(s)
- Jingyao Liu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- School of Computer and Information Engineering, Chuzhou University, Chuzhou 239000, China
- Qinghe Feng
- School of Intelligent Engineering, Henan Institute of Technology, Xinxiang 453003, China
- Yu Miao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Wei He
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Weili Shi
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
- Zhengang Jiang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, Jilin 130022, China
- Zhongshan Institute of Changchun University of Science and Technology, Zhongshan 528437, China
4
Montalbo FJP. Automating mosquito taxonomy by compressing and enhancing a feature fused EfficientNet with knowledge distillation and a novel residual skip block. MethodsX 2023; 10:102072. [PMID: 36851980] [PMCID: PMC9958064] [DOI: 10.1016/j.mex.2023.102072]
Abstract
Identifying lethal vector and non-vector mosquitoes can be difficult for a layperson, and sometimes even for experts, given their visual similarities. Recently, deep learning (DL) became a solution to assist in differentiating the two mosquito types, to reduce infections and enhance actions against them. However, the existing methods used to develop a DL model for such a task tend to require massive amounts of computing resources and steps, making them impractical. Most researchers rely on training pre-trained state-of-the-art (SOTA) deep convolutional neural networks (DCNNs), which usually require about a million parameters. Hence, this method proposes an approach to craft a model with a far lower computing cost while attaining similar or even significantly better performance than pre-existing models in automating the taxonomy of several mosquitoes. It combines layer-wise compression and feature fusion with enhanced residual learning that consists of a self-normalizing activation and depthwise convolutions.
- The proposed method yielded a model that outperformed the most recent and classic state-of-the-art deep convolutional neural network models.
- With the help of the modified residual block and knowledge distillation, the proposed method significantly reduced the fused model's cost while maintaining competitive performance.
- Unlike other methods, the proposed method had the best performance-to-cost ratio.
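Knowledge distillation, used above to compress the fused model, trains the compact student on the teacher's temperature-softened outputs. A minimal sketch of the soft-target cross-entropy term (the temperature and logits below are illustrative, not values from the paper):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()                    # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """Cross-entropy between temperature-softened teacher and student
    distributions (the soft-target term of the distillation loss)."""
    p = softmax(teacher_logits, T)     # teacher soft targets
    q = softmax(student_logits, T)
    return -np.sum(p * np.log(q + 1e-12))

loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

A higher temperature flattens the teacher distribution, exposing the "dark knowledge" in the relative probabilities of wrong classes; in practice this term is mixed with the ordinary hard-label loss.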
Affiliation(s)
- Francis Jesmar P Montalbo
- College of Informatics and Computing Sciences, Batangas State University, Batangas City, Batangas, Philippines
5
Gan F, Chen WY, Liu H, Zhong YL. Application of artificial intelligence models for detecting the pterygium that requires surgical treatment based on anterior segment images. Front Neurosci 2022; 16:1084118. [PMID: 36605553] [PMCID: PMC9808075] [DOI: 10.3389/fnins.2022.1084118]
Abstract
Background and aim: A pterygium is a common ocular surface disease, which not only affects facial appearance but can also grow into the tissue layer, causing astigmatism and vision loss. In this study, an artificial intelligence model was developed for detecting pterygia that require surgical treatment. The model was designed using ensemble deep learning (DL). Methods: A total of 172 anterior segment images of pterygia were obtained from the Jiangxi Provincial People's Hospital (China) between 2017 and 2022. They were divided by a senior ophthalmologist into a non-surgery group and a surgery group. An artificial intelligence model was then developed based on ensemble DL, integrating four benchmark models (ResNet18, AlexNet, GoogLeNet, and VGG11) for detecting pterygia that require surgical treatment, and Grad-CAM was used to visualize the DL process. Finally, the performance of the ensemble DL model was compared with the individual ResNet18, AlexNet, GoogLeNet, and VGG11 models. Results: The accuracy and area under the curve (AUC) of the ensemble DL model were higher than those of all the other models. In the training set, the accuracy and AUC of the ensemble model were 94.20% and 0.978, respectively; in the testing set, they were 94.12% and 0.980. Conclusion: This ensemble DL model, coupled with the anterior segment images in our study, might be an automated and cost-saving alternative for detecting pterygia that require surgery.
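An ensemble of the four benchmark networks can be sketched as soft voting over per-model class probabilities. The probability vectors below are made up for illustration, and the paper's exact fusion rule is not specified here, so this is only one plausible reading:

```python
import numpy as np

def soft_vote(prob_list):
    """Average per-model class-probability vectors and pick the argmax."""
    avg = np.mean(np.asarray(prob_list, dtype=float), axis=0)
    return avg, int(np.argmax(avg))

# Hypothetical (no-surgery, surgery) outputs from four base models
probs = [
    [0.30, 0.70],   # e.g. ResNet18
    [0.40, 0.60],   # e.g. AlexNet
    [0.55, 0.45],   # e.g. GoogLeNet
    [0.25, 0.75],   # e.g. VGG11
]
avg, label = soft_vote(probs)   # fused probabilities and class index
```

Soft voting lets a confident majority outweigh a single dissenting model, which is why ensembles often beat each constituent network on accuracy and AUC.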
Affiliation(s)
- Fan Gan
- Medical College of Nanchang University, Nanchang, China
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Wan-Yun Chen
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Hui Liu
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Yu-Lin Zhong
- Department of Ophthalmology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
6
Wang W, Liu S, Xu H, Deng L. COVIDX-LwNet: A Lightweight Network Ensemble Model for the Detection of COVID-19 Based on Chest X-ray Images. Sensors (Basel) 2022; 22:8578. [PMID: 36366277] [PMCID: PMC9655773] [DOI: 10.3390/s22218578]
Abstract
Recently, the COVID-19 pandemic has put a lot of pressure on health systems around the world. One of the most common ways to detect COVID-19 is to use chest X-ray images, which have the advantage of being cheap and fast. However, in the early days of the COVID-19 outbreak, most studies applied pretrained convolutional neural network (CNN) models, and the features produced by the last convolutional layer were passed directly into the classification head. In this study, the proposed ensemble model consists of three lightweight networks, Xception, MobileNetV2, and NasNetMobile, as the original feature extractors; three base classifiers are then obtained by adding a coordinated attention module, an LSTM, and a new classification head to each extractor. The classification results from the three base classifiers are fused by a confidence fusion method. Three publicly available chest X-ray datasets for COVID-19 testing were considered. On the first two datasets, ternary (COVID-19, normal, and other pneumonia) and quaternary (COVID-19, normal, bacterial pneumonia, and viral pneumonia) classification achieved high accuracy rates of 95.56% and 91.20%, respectively. The third dataset was used to compare the model against other models and to assess its generalization across datasets. We performed a thorough ablation study on the first dataset to understand the impact of each proposed component, and we also produced visualizations. These saliency maps not only explain key prediction decisions of the model but also help radiologists locate areas of infection. Through extensive experiments, the results obtained by the proposed method were found to be comparable to state-of-the-art methods.
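One plausible reading of the confidence fusion step is to weight each base classifier's probability vector by its own confidence (its maximum probability). The sketch below assumes that rule and made-up probabilities; the paper's exact formula may differ:

```python
import numpy as np

def confidence_fusion(prob_list):
    """Fuse base-classifier outputs by weighting each probability vector
    with that classifier's own confidence (its max probability).
    One plausible confidence-fusion rule, sketched for illustration."""
    P = np.asarray(prob_list, dtype=float)
    w = P.max(axis=1)                                # per-model confidence
    fused = (w[:, None] * P).sum(axis=0) / w.sum()   # weighted average
    return fused, int(np.argmax(fused))

# Hypothetical (COVID-19, normal, other pneumonia) outputs from three models
probs = [
    [0.80, 0.15, 0.05],   # confident base classifier
    [0.40, 0.35, 0.25],   # uncertain base classifier
    [0.10, 0.70, 0.20],
]
fused, label = confidence_fusion(probs)
```

Unlike plain averaging, this down-weights an uncertain model's vote, so a confident classifier dominates the fused decision.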
Affiliation(s)
- Shuxian Liu
- School of Information Science and Engineering, Xinjiang University, Urumqi 830017, China
7
Smadi AA, Abugabah A, Al-Smadi AM, Almotairi S. SEL-COVIDNET: An intelligent application for the diagnosis of COVID-19 from chest X-rays and CT-scans. Inform Med Unlocked 2022; 32:101059. [PMID: 36033909] [PMCID: PMC9398554] [DOI: 10.1016/j.imu.2022.101059]
Abstract
COVID-19 detection from medical imaging is a difficult challenge that has piqued the interest of experts worldwide. Chest X-rays and computed tomography (CT) scans are the essential imaging modalities for diagnosing COVID-19, and researchers have focused their efforts on developing viable methods and rapid treatment procedures for this pandemic. Fast and accurate automated detection approaches have been devised to alleviate the need for medical professionals, and deep learning (DL) technologies have successfully recognized COVID-19 cases. This paper proposes a set of nine deep learning models for diagnosing COVID-19 based on transfer learning, implemented in a novel architecture (SEL-COVIDNET) that includes a global average pooling layer, flattening, and two fully connected dense layers. The model’s effectiveness is evaluated using balanced and unbalanced COVID-19 radiography datasets, and its performance is analyzed using six evaluation measures: accuracy, sensitivity, specificity, precision, F1-score, and the Matthews correlation coefficient (MCC). Experiments demonstrated that the proposed SEL-COVIDNET with tuned DenseNet121, InceptionResNetV2, and MobileNetV3Large models outperformed comparative SOTA results for multi-class classification (COVID-19 vs. no-finding vs. pneumonia) in terms of accuracy (98.52%), specificity (98.5%), sensitivity (98.5%), precision (98.7%), F1-score (98.7%), and MCC (97.5%). For the COVID-19 vs. no-finding classification, our method had an accuracy of 99.77%, a specificity of 99.85%, a sensitivity of 99.85%, a precision of 99.55%, an F1-score of 99.7%, and an MCC of 99.4%. The proposed model offers an accurate approach for detecting COVID-19 patients, which aids in the containment of the COVID-19 pandemic.
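All six evaluation measures named above derive from confusion-matrix counts; a minimal sketch for the binary case, with illustrative counts:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, precision, F1, and MCC
    computed from binary confusion-matrix counts."""
    acc  = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    prec = tp / (tp + fp)            # precision
    f1   = 2 * prec * sens / (prec + sens)
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "precision": prec, "f1": f1, "mcc": mcc}

m = binary_metrics(tp=90, fp=5, tn=95, fn=10)   # illustrative counts
```

MCC is the most informative single number here because, unlike accuracy, it stays low when a model merely exploits class imbalance.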
Affiliation(s)
- Ahmad Al Smadi
- School of Artificial Intelligence, Xidian University, No. 2 South Taibai Road, Xian, 710071, China
- College of Technological Innovation, Zayed University, Abu Dhabi Campus, UAE
- Ahed Abugabah
- College of Technological Innovation, Zayed University, Abu Dhabi Campus, UAE
- Ahmad Mohammad Al-Smadi
- Department of Computer Science, Al-Balqa Applied University, Ajloun University College, Jordan
- Sultan Almotairi
- Faculty of Community College, Majmaah University, Al Majma'ah, Saudi Arabia
8
Liu X, Hu Y, Zhou G, Cai W, He M, Zhan J, Hu Y, Li L. DS-MENet for the classification of citrus disease. Front Plant Sci 2022; 13:884464. [PMID: 35937334] [PMCID: PMC9355402] [DOI: 10.3389/fpls.2022.884464]
Abstract
Affected by various environmental factors, citrus frequently suffers from diseases during the growth process, which has created major obstacles for agriculture. This paper proposes a new method for identifying and classifying citrus diseases. First, it designs an image enhancement method based on the MSRCR algorithm and a homomorphic filtering algorithm optimized by a Laplacian (HFLF-MS) to highlight the disease characteristics of citrus. Second, we designed a new neural network, DS-MENet, based on the DenseNet-121 backbone. In DS-MENet, the regular convolutions in each Dense Block are replaced with depthwise separable convolutions, which reduces the network parameters. The ReMish activation function is used to alleviate the neuron-death problem caused by the ReLU function and to improve the robustness of the model. To further enhance attention to citrus disease information and the ability to extract feature information, a multi-channel fusion backbone enhancement method (MCF) was designed to process the Dense Blocks. Using 10-fold cross-validation, the average classification accuracy of DS-MENet on the dataset after adding noise reaches 95.02%. This shows that the method performs well and is feasible for classifying citrus diseases in real life.
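The parameter saving from swapping regular convolutions for depthwise separable ones, as done in DS-MENet's Dense Blocks, can be checked by counting weights (bias terms ignored; channel counts below are illustrative):

```python
def conv_params(c_in, c_out, k):
    """Weights in a regular k x k convolution: one k x k filter
    per (input channel, output channel) pair."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

regular   = conv_params(128, 128, 3)                  # 147,456 weights
separable = depthwise_separable_params(128, 128, 3)   # 17,536 weights
```

For a 3 x 3 kernel with 128 channels in and out, the separable form needs roughly 8x fewer weights, which is where the parameter reduction claimed above comes from.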
Affiliation(s)
- Xuyao Liu
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, China
- Yaowen Hu
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, China
- Guoxiong Zhou
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, China
- Weiwei Cai
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, China
- Mingfang He
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, China
- Jialei Zhan
- College of Computer & Information Engineering, Central South University of Forestry and Technology, Changsha, China
- Yahui Hu
- Plant Protection Research Institute, Hunan Academy of Agricultural Sciences, Changsha, China
- Liujun Li
- Department of Civil, Architectural and Environmental Engineering, University of Missouri-Rolla, Rolla, MO, United States
9
Deep feature fusion classification network (DFFCNet): Towards accurate diagnosis of COVID-19 using chest X-rays images. Biomed Signal Process Control 2022; 76:103677. [PMID: 35432578] [PMCID: PMC9005442] [DOI: 10.1016/j.bspc.2022.103677]
10
Diagnosing gastrointestinal diseases from endoscopy images through a multi-fused CNN with auxiliary layers, alpha dropouts, and a fusion residual block. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103683]
11
COVLIAS 2.0-cXAI: Cloud-Based Explainable Deep Learning System for COVID-19 Lesion Localization in Computed Tomography Scans. Diagnostics (Basel) 2022; 12:1482. [PMID: 35741292] [PMCID: PMC9221733] [DOI: 10.3390/diagnostics12061482]
Abstract
Background: The previous COVID-19 lung diagnosis system lacks both scientific validation and the role of explainable artificial intelligence (AI) for understanding lesion localization. This study presents a cloud-based explainable AI, the “COVLIAS 2.0-cXAI” system using four kinds of class activation maps (CAM) models. Methodology: Our cohort consisted of ~6000 CT slices from two sources (Croatia, 80 COVID-19 patients and Italy, 15 control patients). COVLIAS 2.0-cXAI design consisted of three stages: (i) automated lung segmentation using hybrid deep learning ResNet-UNet model by automatic adjustment of Hounsfield units, hyperparameter optimization, and parallel and distributed training, (ii) classification using three kinds of DenseNet (DN) models (DN-121, DN-169, DN-201), and (iii) validation using four kinds of CAM visualization techniques: gradient-weighted class activation mapping (Grad-CAM), Grad-CAM++, score-weighted CAM (Score-CAM), and FasterScore-CAM. The COVLIAS 2.0-cXAI was validated by three trained senior radiologists for its stability and reliability. The Friedman test was also performed on the scores of the three radiologists. Results: The ResNet-UNet segmentation model resulted in dice similarity of 0.96, Jaccard index of 0.93, a correlation coefficient of 0.99, with a figure-of-merit of 95.99%, while the classifier accuracies for the three DN nets (DN-121, DN-169, and DN-201) were 98%, 98%, and 99% with a loss of ~0.003, ~0.0025, and ~0.002 using 50 epochs, respectively. The mean AUC for all three DN models was 0.99 (p < 0.0001). The COVLIAS 2.0-cXAI showed 80% scans for mean alignment index (MAI) between heatmaps and gold standard, a score of four out of five, establishing the system for clinical settings. Conclusions: The COVLIAS 2.0-cXAI successfully showed a cloud-based explainable AI system for lesion localization in lung CT scans.
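Grad-CAM, the first of the CAM variants used above, weights each feature map of the target layer by the spatial mean of its gradients, sums the weighted maps, and applies a ReLU. A minimal NumPy sketch on synthetic activations and gradients (a real system extracts both from a trained CNN):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (C, H, W) arrays from the target layer.
    Returns an (H, W) class-activation heatmap in [0, 1]."""
    weights = gradients.mean(axis=(1, 2))              # alpha_c: GAP of grads
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over C
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positives
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts  = rng.random((8, 7, 7))    # synthetic feature maps
grads = rng.random((8, 7, 7))    # synthetic gradients
heatmap = grad_cam(acts, grads)
```

Upsampled to the input resolution and overlaid on the CT slice, this heatmap is what the radiologists scored against the gold-standard lesion annotations.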
12
Study on the Grading Model of Hepatic Steatosis Based on Improved DenseNet. J Healthc Eng 2022; 2022:9601470. [PMID: 35340251] [PMCID: PMC8947877] [DOI: 10.1155/2022/9601470]
Abstract
To achieve intelligent grading of hepatic steatosis, a deep learning-based grading method was proposed by introducing transfer learning into the DenseNet model, and its effectiveness was verified by applying it to the practice of grading hepatic steatosis. The results show that introducing transfer learning into the DenseNet model significantly reduces the number of training iterations and improves convergence speed and prediction accuracy, with an accuracy of more than 85%, a sensitivity of more than 94%, a specificity of about 80%, and good prediction performance on both the training and test sets. The method can also detect grade 1 hepatic steatosis more accurately and reliably, achieving automated and more accurate grading with practical application value.
13
Montalbo FJ. Truncating fined-tuned vision-based models to lightweight deployable diagnostic tools for SARS-CoV-2 infected chest X-rays and CT-scans. Multimed Tools Appl 2022; 81:16411-16439. [PMID: 35261555] [PMCID: PMC8893243] [DOI: 10.1007/s11042-022-12484-0]
Abstract
In a brief period, the recent coronavirus (COVID-19) infected large populations worldwide. Diagnosing an infected individual requires a real-time polymerase chain reaction (RT-PCR) test, which can be expensive and limited in most developing countries, making them rely on alternatives like chest X-rays (CXR) or computerized tomography (CT) scans. However, results from these imaging approaches can confuse medical experts due to their similarities with other diseases like pneumonia. Solutions based on deep convolutional neural networks (DCNN) have recently improved and automated the diagnosis of COVID-19 from CXRs and CT scans. However, most proposed studies focused primarily on accuracy rather than deployment and reproduction, which may make them difficult to reproduce and implement in locations with inadequate computing resources. Therefore, instead of focusing only on accuracy, this work investigated the effects of parameter reduction through a proposed truncation method and analyzed its effects. Various DCNNs had their architectures truncated to retain only their initial core block, reducing their parameter counts to under 1 M. Once trained and validated, findings show that a DCNN with robust layer aggregations, such as InceptionResNetV2, is less vulnerable to the adverse effects of the proposed truncation: from its full-length size of 55 M parameters with 98.67% accuracy, truncation reduced it to only 441 K parameters while still attaining 97.41% accuracy, outperforming other studies on a size-to-performance basis.
Affiliation(s)
- Francis Jesmar Montalbo
- College of Informatics and Computing Sciences, Batangas State University, Rizal Avenue Extension, Batangas, Batangas City, Philippines
14
A Novel COVID-19 Diagnosis Support System Using the Stacking Approach and Transfer Learning Technique on Chest X-Ray Images. J Healthc Eng 2021; 2021:9437538. [PMID: 34777739] [PMCID: PMC8589496] [DOI: 10.1155/2021/9437538]
Abstract
COVID-19 is an infectious disease causing flu-like respiratory problems with various symptoms such as cough or fever, which in severe cases can cause pneumonia. The aim of this paper is to develop a rapid and accurate medical diagnosis support system to detect COVID-19 in chest X-ray images, using a stacking approach that combines transfer learning techniques with a KNN algorithm for selecting the best model. Deep learning offers multiple approaches for building a classification system for analyzing radiographic images; in this work, we used transfer learning, which makes it possible to store and reuse the knowledge acquired by a pretrained convolutional neural network to solve a new problem. To ensure the robustness of the proposed system for diagnosing patients with COVID-19 from X-ray images, we used a machine learning method called stacking to combine the performances of the many transfer learning-based models. The generated model was trained on a dataset containing four classes, namely COVID-19, tuberculosis, viral pneumonia, and normal cases, collected from six sources of X-ray images. To evaluate the performance of the proposed system, we used several common evaluation measures. Our proposed system achieves an extremely good accuracy of 99.23%, exceeding many previous related studies.
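The stacking approach described above feeds the base models' outputs to a KNN meta-classifier. A minimal sketch with made-up meta-features from three hypothetical base CNNs (the paper's actual base models, feature layout, and k are not reproduced here):

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour majority vote in meta-feature space."""
    d = np.linalg.norm(train_X - x, axis=1)   # distances to all rows
    nearest = train_y[np.argsort(d)[:k]]      # labels of k closest rows
    return int(np.bincount(nearest).argmax()) # majority label

# Meta-features: each row concatenates the COVID-probability outputs of
# (hypothetically) three base CNNs for one image; label 1 = COVID-19.
meta_X = np.array([
    [0.9, 0.8, 0.9], [0.8, 0.9, 0.7], [0.9, 0.9, 0.8],   # COVID-like rows
    [0.1, 0.2, 0.1], [0.2, 0.1, 0.3], [0.1, 0.1, 0.2],   # normal-like rows
])
meta_y = np.array([1, 1, 1, 0, 0, 0])

pred = knn_predict(meta_X, meta_y, np.array([0.85, 0.8, 0.9]))
```

The meta-classifier thus learns, from the base models' agreement patterns, when to trust which model, which is the core idea behind stacking.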
15
Chauhan T, Palivela H, Tiwari S. Optimization and fine-tuning of DenseNet model for classification of COVID-19 cases in medical imaging. Int J Inf Manag Data Insights 2021. [PMCID: PMC8189817] [DOI: 10.1016/j.jjimei.2021.100020]
Abstract
For more than a year, the entire world has been fighting the COVID-19 pandemic. Starting in the city of Wuhan in China, COVID-19 conquered the entire world with its rapid progression. Given the human cost, it has become essential to build an automated model that can easily diagnose COVID-19 with little computational time. Because the disease spread so quickly, there was initially not enough data to build an accurate COVID-19 prediction model, but effective medical-imaging techniques based on artificial intelligence have emerged to assist in this time of need. Detecting COVID-19 in humans at an early stage is essential to prevent it from becoming more infectious, and neural networks have shown promising results in medical imaging. In this research, a deep learning-based approach is used for image classification to detect COVID-19 from chest X-ray (CXR) images. A CNN classifier has been used to separate normal healthy images from COVID-19 images, using transfer learning, and early stopping is used to enhance the accuracy of the proposed DenseNet model. The results are evaluated using accuracy, precision, recall, and F1-score metrics. An automated comparative analysis among multiple optimizers, LR schedulers, and loss functions was performed to find the combination with the highest accuracy for the proposed system. The Adamax optimizer with cross-entropy loss and the StepLR scheduler performed best, with 98.45% accuracy for normal healthy CXR images and 98.32% accuracy for COVID-19 images.
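Early stopping, used above to enhance accuracy, halts training once the validation loss stops improving for a set number of epochs (the patience). A minimal sketch over a made-up validation-loss trace:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the epoch at which training stops: when the best
    validation loss has not improved for `patience` epochs."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch   # new best: reset the counter
        elif epoch - best_epoch >= patience:
            return epoch                     # patience exhausted: stop
    return len(val_losses) - 1               # ran to completion

trace = [0.90, 0.70, 0.60, 0.62, 0.61, 0.63, 0.64]  # made-up losses
stop = early_stop_epoch(trace, patience=3)
```

In practice the weights from the best epoch (here, epoch 2) are restored, which is how early stopping curbs overfitting on small CXR datasets.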
|
16
|
Classification of Covid-19 patients using efficient fine-tuned deep learning DenseNet model. GLOBAL TRANSITIONS PROCEEDINGS 2021. [PMCID: PMC8361010 DOI: 10.1016/j.gltp.2021.08.003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Although more than a year has passed since the COVID-19 pandemic upended everyday life, things are still not back on track. It is important to diagnose COVID-19 patients early and provide prompt treatment. The Convolutional Neural Network (CNN) is a deep neural network that specializes in image processing and image classification. In this paper, a fine-tuned DenseNet201 model is proposed to classify chest X-ray images. First, the DenseNet121, DenseNet169, and DenseNet201 models are trained and tested on the same dataset; the experiments show that DenseNet201 performs best among the dense models. Furthermore, DenseNet201 is evaluated with different optimizers, and RMSprop, Adagrad, and Adamax are observed to perform better. The proposed model achieves an accuracy of 95.2%, higher than the other models. We experimentally determine that the RMSprop optimizer with DenseNet201 produces results comparable to the widely used Adam and Adamax optimizers.
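The optimizer comparison described here can be illustrated with a short PyTorch loop. A tiny linear classifier stands in for DenseNet201 so the sketch runs without pretrained weights or X-ray data; the learning rate, epoch count, and dummy tensors are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-in model: a tiny linear classifier instead of DenseNet201,
# re-seeded so every optimizer starts from identical weights.
def make_model():
    torch.manual_seed(0)
    return nn.Linear(8, 2)

x = torch.randn(16, 8)              # dummy "images" (16 samples, 8 features)
y = torch.randint(0, 2, (16,))      # dummy binary labels
loss_fn = nn.CrossEntropyLoss()

final_losses = {}
for name, opt_cls in [("RMSprop", torch.optim.RMSprop),
                      ("Adagrad", torch.optim.Adagrad),
                      ("Adamax", torch.optim.Adamax)]:
    model = make_model()
    opt = opt_cls(model.parameters(), lr=0.01)
    for _ in range(50):             # short training run per optimizer
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    final_losses[name] = loss.item()
```

Swapping in a real DenseNet201 and a CXR data loader, and comparing validation accuracy rather than training loss, would reproduce the kind of experiment the abstract reports.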
|
17
|
Baltazar LR, Manzanillo MG, Gaudillo J, Viray ED, Domingo M, Tiangco B, Albia J. Artificial intelligence on COVID-19 pneumonia detection using chest xray images. PLoS One 2021; 16:e0257884. [PMID: 34648509 PMCID: PMC8516252 DOI: 10.1371/journal.pone.0257884] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Accepted: 09/13/2021] [Indexed: 12/24/2022] Open
Abstract
Recent studies show the potential of artificial intelligence (AI) as a screening tool to detect COVID-19 pneumonia based on chest x-ray (CXR) images. However, issues with the datasets and study designs from medical and technical perspectives, as well as questions about the vulnerability and robustness of AI algorithms, have emerged. In this study, we address these issues with a more realistic development of AI-driven COVID-19 pneumonia detection models by generating our own data through a retrospective clinical study to augment the dataset aggregated from external sources. We optimized five deep learning architectures, implemented development strategies by manipulating data distribution to quantitatively compare study designs, and introduced several detection scenarios to evaluate the robustness and diagnostic performance of the models. At the current level of data availability, the performance of the detection model depends on the hyperparameter tuning and has less dependency on the quantity of data. InceptionV3 attained the highest performance in distinguishing pneumonia from normal CXR in the two-class detection scenario, with sensitivity (Sn), specificity (Sp), and positive predictive value (PPV) of 96%. The models attained higher general performance of 91-96% Sn, 94-98% Sp, and 90-96% PPV in the three-class than in the four-class detection scenario. InceptionV3 had the highest general performance, with accuracy, F1-score, and g-mean of 96% in the three-class detection scenario. For COVID-19 pneumonia detection, InceptionV3 attained the highest performance with 86% Sn, 99% Sp, and 91% PPV, with an AUC of 0.99 in distinguishing pneumonia from normal CXR. Its capability of differentiating COVID-19 pneumonia from normal and non-COVID-19 pneumonia attained 0.98 AUC and a micro-average of 0.99 for other classes.
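The Sn/Sp/PPV metrics this abstract reports are straightforward to compute from a binary confusion matrix; the counts below are made up for illustration and do not come from the study.

```python
# Sensitivity (Sn), specificity (Sp), and positive predictive value (PPV)
# from the four cells of a binary confusion matrix.

def sn_sp_ppv(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    ppv = tp / (tp + fp)           # precision
    return sensitivity, specificity, ppv

# Hypothetical counts: 100 true COVID-19 pneumonia cases, 999 negatives.
sn, sp, ppv = sn_sp_ppv(tp=86, fp=9, tn=990, fn=14)
# sn = 0.86, sp ≈ 0.991, ppv ≈ 0.905
```

In a multi-class setting like the three- and four-class scenarios above, these metrics are computed one-vs-rest per class and then averaged (e.g. the micro-average the abstract cites).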
Affiliation(s)
- Lei Rigi Baltazar
- Data-Driven Research Laboratory (DARE Lab), Institute of Mathematical Sciences and Physics, University of the Philippines Los Baños, Los Baños, Philippines
- Domingo Artificial Intelligence Research Center (DARC Labs), Pasig City, Philippines
- Computational Interdisciplinary Research Laboratories (CINTERLabs), University of the Philippines Los Baños, Los Baños, Philippines
- Mojhune Gabriel Manzanillo
- Data-Driven Research Laboratory (DARE Lab), Institute of Mathematical Sciences and Physics, University of the Philippines Los Baños, Los Baños, Philippines
- Domingo Artificial Intelligence Research Center (DARC Labs), Pasig City, Philippines
- Computational Interdisciplinary Research Laboratories (CINTERLabs), University of the Philippines Los Baños, Los Baños, Philippines
- Joverlyn Gaudillo
- Data-Driven Research Laboratory (DARE Lab), Institute of Mathematical Sciences and Physics, University of the Philippines Los Baños, Los Baños, Philippines
- Domingo Artificial Intelligence Research Center (DARC Labs), Pasig City, Philippines
- Computational Interdisciplinary Research Laboratories (CINTERLabs), University of the Philippines Los Baños, Los Baños, Philippines
- Mario Domingo
- Domingo Artificial Intelligence Research Center (DARC Labs), Pasig City, Philippines
- Beatrice Tiangco
- National Institute of Health, College of Medicine, University of the Philippines, Manila, Philippines
- Division of Medicine, The Medical City, Pasig City, Philippines
- Jason Albia
- Data-Driven Research Laboratory (DARE Lab), Institute of Mathematical Sciences and Physics, University of the Philippines Los Baños, Los Baños, Philippines
- Domingo Artificial Intelligence Research Center (DARC Labs), Pasig City, Philippines
- Computational Interdisciplinary Research Laboratories (CINTERLabs), University of the Philippines Los Baños, Los Baños, Philippines
|
18
|
Montalbo FJP. Truncating a densely connected convolutional neural network with partial layer freezing and feature fusion for diagnosing COVID-19 from chest X-rays. MethodsX 2021; 8:101408. [PMID: 34109106 PMCID: PMC8178958 DOI: 10.1016/j.mex.2021.101408] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 06/04/2021] [Indexed: 01/16/2023] Open
Abstract
Deep learning and computer vision have enabled a new way to automate medical image diagnosis. However, to achieve reliable and state-of-the-art performance, vision-based models require high computing costs and robust datasets. Moreover, even with conventional training methods, large vision-based models still involve lengthy epochs and costly disk consumption that can make deployment difficult in the absence of high-end infrastructure. Therefore, this method modified the training approach of a vision-based model through layer truncation, partial layer freezing, and feature fusion. The proposed method was applied to a Densely Connected Convolutional Neural Network (CNN), the DenseNet model, to diagnose whether a chest X-ray (CXR) is healthy, shows pneumonia, or shows COVID-19. From the results, the performance-to-parameter-size ratio highlighted this method's effectiveness in training a DenseNet model with fewer parameters than traditionally trained state-of-the-art Deep CNN (DCNN) models, while still yielding promising results.
- This novel method significantly reduced the model's parameter size without sacrificing much of its classification performance.
- The proposed method had better performance than some state-of-the-art Deep Convolutional Neural Network (DCNN) models that diagnosed samples of CXRs with COVID-19.
- The proposed method delivered a conveniently scalable, reproducible, and deployable DCNN model for most low-end devices.
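The partial layer freezing described here can be sketched in a few lines of PyTorch; a small Sequential network stands in for the truncated DenseNet, and the layer sizes are arbitrary choices for illustration.

```python
import torch.nn as nn

# "Partial layer freezing": gradients are disabled for the early block
# of a (stand-in) network, so only the later layers fine-tune.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.ReLU(),   # early block: will be frozen
    nn.Conv2d(8, 16, 3), nn.ReLU(),  # later block: stays trainable
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),                # 3 classes: healthy / pneumonia / COVID-19
)

# Freeze the first two modules (the first conv and its activation).
for layer in list(model.children())[:2]:
    for p in layer.parameters():
        p.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
```

With a real DenseNet, the same `requires_grad = False` loop would be applied to the early dense blocks after truncation, and only the trainable parameters would be handed to the optimizer.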
|