26. Aljohani A, Alharbe N. A Novel Master-Slave Architecture to Detect COVID-19 in Chest X-ray Image Sequences Using Transfer-Learning Techniques. Healthcare (Basel) 2022; 10(12):2443. PMID: 36553967; PMCID: PMC9778261; DOI: 10.3390/healthcare10122443. Received 10/10/2022; revised 11/15/2022; accepted 12/01/2022.
Abstract
Coronavirus disease, frequently referred to as COVID-19, is a contagious and transmissible disease caused by the SARS-CoV-2 virus. The key to tackling this virus and reducing its spread is early diagnosis. Pathogenic laboratory tests such as the polymerase chain reaction (PCR) are time-consuming and regularly produce incorrect results, yet they remain the critical standard for detecting the virus. Hence, there is a pressing need to develop computer-assisted diagnosis systems capable of providing quick and low-cost testing in areas where traditional testing procedures are not feasible. This study focuses on COVID-19 detection from X-ray images. The prime objective is to introduce a computer-assisted diagnosis (CAD) system that differentiates COVID-19 from healthy and pneumonia cases using X-ray image sequences. This work utilizes standard transfer-learning techniques for COVID-19 detection and proposes a master-slave architecture built on the state-of-the-art DenseNet201 and SqueezeNet1_0 networks for classifying COVID-19 in chest X-ray image sequences. The proposed models are compared with other standard transfer-learning approaches for COVID-19, and the performance metrics demonstrate that the proposed approach outperforms them. This research also fine-tunes hyperparameters and predicts the optimized learning rate to achieve the highest accuracy. After fine-tuning the learning rate, the DenseNet201 model achieves an accuracy of 83.33%, while the fastest model, SqueezeNet1_0, achieves an accuracy of 80%.
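The learning-rate fine-tuning step described above can be illustrated with a small sweep. The sketch below is illustrative only: a logistic-regression head stands in for the transfer-learning classifier, and Gaussian blobs stand in for frozen backbone (e.g., DenseNet201) embeddings; none of the data or hyperparameters come from the paper.

```python
import numpy as np

def train_head(X, y, lr, epochs=200):
    """Full-batch gradient descent on a logistic-regression head
    sitting on top of frozen backbone features."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30, 30)
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    preds = (X @ w + b) > 0
    return np.mean(preds == y)                # accuracy on the same data

# Synthetic stand-ins for embeddings of two classes (NOT real features).
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(1.5, 1.0, (50, 8))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Sweep candidate learning rates and keep the best-performing one.
candidates = [0.001, 0.01, 0.1, 1.0]
accs = {lr: train_head(X, y, lr) for lr in candidates}
best_lr = max(accs, key=accs.get)
```

In practice the sweep would be scored on a held-out validation split rather than the training data.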
27. Shen J, Ghatti S, Levkov NR, Shen H, Sen T, Rheuban K, Enfield K, Facteau NR, Engel G, Dowdell K. A survey of COVID-19 detection and prediction approaches using mobile devices, AI, and telemedicine. Front Artif Intell 2022; 5:1034732. PMID: 36530356; PMCID: PMC9755752; DOI: 10.3389/frai.2022.1034732. Received 09/02/2022; accepted 11/02/2022.
Abstract
Since 2019, the COVID-19 pandemic has had an extremely high impact on all facets of society and will potentially have a lasting impact for years to come. In response, there have been a significant number of research efforts over the past years exploring approaches to combat COVID-19. In this paper, we present a survey of current research on using mobile Internet of Things (IoT) devices, Artificial Intelligence (AI), and telemedicine for COVID-19 detection and prediction. We first present the background and then the current research in this field: COVID-19 monitoring and detection, contact tracing, machine learning based approaches, telemedicine, and security. We finally discuss the challenges and future work that lie ahead in this field before concluding the paper.
28. Dual_Pachi: Attention-based dual path framework with intermediate second order-pooling for Covid-19 detection from chest X-ray images. Comput Biol Med 2022; 151:106324. PMID: 36423531; PMCID: PMC9671873; DOI: 10.1016/j.compbiomed.2022.106324. Received 09/07/2022; revised 10/27/2022; accepted 11/14/2022.
Abstract
Numerous machine learning and image processing algorithms, most recently deep learning, allow the recognition and classification of COVID-19 disease in medical images. However, feature extraction, or the semantic gap between the low-level visual information collected by imaging modalities and high-level semantics, is the fundamental shortcoming of these techniques. In addition, several techniques rely only on first-order feature extraction from the chest X-ray, making the employed models less accurate and robust. This study presents Dual_Pachi: an attention-based dual path framework with intermediate second-order pooling for more accurate and robust chest X-ray feature extraction for COVID-19 detection. Dual_Pachi consists of four main building blocks. Block one converts the received chest X-ray image to CIE LAB coordinates (the L and AB channels, which are separated at the first three layers of a modified Inception V3 architecture). Block two further exploits the global features extracted from block one via global second-order pooling, while block three focuses on the low-level visual information and the high-level semantics of chest X-ray image features using a multi-head self-attention and an MLP layer without sacrificing performance. Finally, the fourth block performs classification using fully connected layers and softmax activation. Dual_Pachi is designed and trained in an end-to-end manner. According to the results, Dual_Pachi outperforms traditional deep learning models and other state-of-the-art approaches described in the literature, with an accuracy of 0.96656 (Data_A) and 0.97867 (Data_B) for the full Dual_Pachi approach and 0.95987 (Data_A) and 0.968 (Data_B) for Dual_Pachi without the attention block. A Grad-CAM-based visualization is also built to highlight where the applied attention mechanism is concentrated.
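The global second-order pooling of block two can be sketched as the covariance of channel activations taken over all spatial positions. This is a minimal NumPy illustration under that standard formulation, not the paper's implementation; the feature map here is random.

```python
import numpy as np

def second_order_pool(feature_map):
    """Global second-order pooling: the C x C covariance of channel
    activations taken over all H*W spatial positions."""
    H, W, C = feature_map.shape
    X = feature_map.reshape(H * W, C)
    X = X - X.mean(axis=0, keepdims=True)   # center each channel
    return (X.T @ X) / (H * W - 1)

# A random (H, W, C) tensor stands in for a real activation map.
fmap = np.random.default_rng(0).normal(size=(7, 7, 16))
cov = second_order_pool(fmap)   # shape (16, 16), symmetric, PSD
```

Unlike first-order pooling (mean or max per channel), the output captures pairwise channel correlations.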
29. Ukwuoma CC, Qin Z, Agbesi VK, Ejiyi CJ, Bamisile O, Chikwendu IA, Tienin BW, Hossin MA. LCSB-inception: Reliable and effective light-chroma separated branches for Covid-19 detection from chest X-ray images. Comput Biol Med 2022; 150:106195. PMID: 37859288; PMCID: PMC9561436; DOI: 10.1016/j.compbiomed.2022.106195. Received 07/11/2022; revised 09/03/2022; accepted 10/09/2022.
Abstract
According to the World Health Organization, an estimated five million-plus infections and 355,000 deaths have been recorded worldwide since the emergence of the coronavirus disease (COVID-19). Various researchers have developed interesting and effective deep learning frameworks to tackle this disease. However, poor feature extraction from chest X-ray images and the high computational cost of the available models hinder accurate and fast COVID-19 detection. Thus, the major purpose of this study is to offer an approach for extracting COVID-19 features from chest X-rays that is both accurate and less computationally expensive than earlier research. To achieve this goal, we build on the Inception V3 deep artificial neural network. This study proposes LCSB-Inception: a two-path (L and AB channel) Inception V3 network split along the first three convolutional layers. The RGB input image is first transformed to CIE LAB coordinates (the L channel, aimed at learning the textural and edge features of the chest X-ray, and the AB channels, aimed at learning its color variations). The filters are divided evenly between the achromatic L branch and the AB branch (50% L, 50% AB), which saves between one-third and one-half of the parameters in the divided branches. We further introduce global second-order pooling at the last two convolutional blocks for image feature extraction that is more robust than conventional max-pooling. The detection accuracy of LCSB-Inception is further improved by applying the Contrast Limited Adaptive Histogram Equalization (CLAHE) image enhancement technique to the input images before feeding them to the network.
The proposed LCSB-Inception network is evaluated using two loss functions (categorical smooth loss and categorical cross-entropy) and two learning rates, with Accuracy, Precision, Sensitivity, Specificity, F1-Score, and AUC Score as evaluation metrics on the chestX-ray-15k (Data_1) and COVID-19 Radiography (Data_2) datasets. According to the experimental findings, the proposed models produce an acceptable outcome, with an accuracy of 0.97867 (Data_1) and 0.98199 (Data_2). Based on these results, the suggested models outperform conventional deep learning models and other state-of-the-art techniques presented in the literature for COVID-19 identification.
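A "categorical smooth loss" is commonly implemented as cross-entropy against a label-smoothed target. A minimal sketch assuming standard label smoothing (the paper's exact variant may differ):

```python
import numpy as np

def smoothed_cross_entropy(probs, label, n_classes, eps=0.1):
    """Cross-entropy against a label-smoothed target: eps probability
    mass is spread uniformly over all classes; the rest stays on the
    true label."""
    target = np.full(n_classes, eps / n_classes)
    target[label] += 1.0 - eps
    return -np.sum(target * np.log(probs))

probs = np.array([0.9, 0.05, 0.05])                     # confident, correct
plain = smoothed_cross_entropy(probs, 0, 3, eps=0.0)    # ordinary cross-entropy
smooth = smoothed_cross_entropy(probs, 0, 3, eps=0.1)   # penalizes overconfidence
```

With eps > 0 the loss on a confident correct prediction is strictly larger, which discourages overconfident logits during training.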
30. Zhang Z. Genomic Biomarker Heterogeneities between SARS-CoV-2 and COVID-19. Vaccines (Basel) 2022; 10(10):1657. PMID: 36298522; PMCID: PMC9608907; DOI: 10.3390/vaccines10101657. Received 09/05/2022; revised 09/27/2022; accepted 09/29/2022.
Abstract
Genes functionally associated with SARS-CoV-2 infection and genes functionally related to the COVID-19 disease can be different, and distinguishing them is the first essential step toward successfully fighting the COVID-19 pandemic. Unfortunately, this first step has not been completed across biological and medical research. Using a newly developed max-competing logistic classifier, two genes, ATP6V1B2 and IFI27, stand out as critical in the transcriptional response to SARS-CoV-2 infection, with differential expressions derived from NP/OP swab PCR. This finding is evidenced by combining these two genes with another gene to predict disease status with better accuracy than existing classifiers using the same number of genes. In addition, combining these two genes with three other genes to form a five-gene classifier outperforms existing classifiers with ten or more genes. With their exceptional predictive accuracy, these two genes can become a new focus and direction in fighting the COVID-19 pandemic. Comparing the functional effects of these genes with a five-gene classifier with 100% accuracy identified and tested from blood samples in our earlier work, the genes and their transcriptional response and functional effects on SARS-CoV-2 infection, and the genes and their functional signature patterns on COVID-19 antibodies, are significantly different. A total of fourteen cohort studies (including breakthrough infections and omicron variants) with 1481 samples are used to justify our results. Such findings can help explore the causal and pathological links between SARS-CoV-2 infection and the COVID-19 disease, and fight the disease with more targeted genes, vaccines, antiviral drugs, and therapies.
31. Wu G, Duan J. BLCov: A novel collaborative-competitive broad learning system for COVID-19 detection from radiology images. Engineering Applications of Artificial Intelligence 2022; 115:105323. PMID: 35992036; PMCID: PMC9376349; DOI: 10.1016/j.engappai.2022.105323. Received 05/13/2022; revised 07/25/2022; accepted 08/08/2022.
Abstract
With the global outbreak of COVID-19, there is an urgent need to develop an effective and automated detection approach as a faster diagnostic alternative that helps avoid the spread of COVID-19. Recently, the broad learning system (BLS) has been viewed as an alternative to deep learning and has been applied in many areas. Nevertheless, the sparse autoencoder in the classical BLS considers only the representations needed to reconstruct the input data and ignores the relationships among the extracted features. In this paper, inspired by the effectiveness of the collaborative-competitive representation (CCR) mechanism, a novel collaborative-competitive representation-based autoencoder (CCRAE) is first proposed, and then a collaborative-competitive broad learning system (CCBLS) is built on CCRAE to effectively address the issues mentioned above. Moreover, an automated CCBLS-based approach is proposed for COVID-19 detection from radiology images such as CT scans and chest X-ray images. In the proposed approach, a feature extraction module extracts features from CT scans or chest X-ray images, which are then used for COVID-19 detection with CCBLS. The experimental results demonstrate that the proposed approach achieves superior or comparable performance compared with ten other state-of-the-art methods.
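The classical BLS that CCBLS extends maps inputs through random feature nodes and enhancement nodes and solves the output weights in closed form. A toy NumPy sketch of that classical baseline (not the proposed CCRAE/CCBLS), with synthetic two-class data:

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_features(X, n_feat=20, n_enh=40):
    """Classical BLS expansion: random (here linear) feature nodes
    plus tanh enhancement nodes, concatenated."""
    Z = X @ rng.normal(size=(X.shape[1], n_feat))      # feature nodes
    H = np.tanh(Z @ rng.normal(size=(n_feat, n_enh)))  # enhancement nodes
    return np.hstack([Z, H])

# Synthetic two-class data with labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 0.5, (40, 4)), rng.normal(1.0, 0.5, (40, 4))])
y = np.concatenate([-np.ones(40), np.ones(40)])

A = bls_features(X)
# Output weights in closed form via a ridge-regularized pseudoinverse.
lam = 1e-3
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
acc = np.mean(np.sign(A @ W_out) == y)
```

The closed-form ridge solve is what makes BLS training fast compared with backpropagation-based deep models.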
32. Siddiquee MMR, Shah J, Wu T, Chong C, Schwedt T, Li B. HealthyGAN: Learning from Unannotated Medical Images to Detect Anomalies Associated with Human Disease. Simulation and Synthesis in Medical Imaging: ... International Workshop, SASHIMI ..., held in conjunction with MICCAI ..., Proceedings 2022; 13570:43-54. PMID: 38694707; PMCID: PMC11062325; DOI: 10.1007/978-3-031-16980-9_5.
Abstract
Automated anomaly detection from medical images, such as MRIs and X-rays, can significantly reduce human effort in disease diagnosis. Owing to the complexity of modeling anomalies and the high cost of manual annotation by domain experts (e.g., radiologists), a typical technique in the current medical imaging literature has focused on deriving diagnostic models from healthy subjects only, assuming the model will detect the images from patients as outliers. However, in many real-world scenarios, unannotated datasets with a mix of both healthy and diseased individuals are abundant. Therefore, this paper poses the research question of how to improve unsupervised anomaly detection by utilizing (1) an unannotated set of mixed images, in addition to (2) the set of healthy images as being used in the literature. To answer the question, we propose HealthyGAN, a novel one-directional image-to-image translation method, which learns to translate the images from the mixed dataset to only healthy images. Being one-directional, HealthyGAN relaxes the requirement of cycle-consistency of existing unpaired image-to-image translation methods, which is unattainable with mixed unannotated data. Once the translation is learned, we generate a difference map for any given image by subtracting its translated output. Regions of significant responses in the difference map correspond to potential anomalies (if any). Our HealthyGAN outperforms the conventional state-of-the-art methods by significant margins on two publicly available datasets: COVID-19 and NIH ChestX-ray14, and one institutional dataset collected from Mayo Clinic. The implementation is publicly available at https://github.com/mahfuzmohammad/HealthyGAN.
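The difference-map step described above (subtract the translated "healthy" output from the input, then look for large responses) can be sketched directly; the toy image and threshold below are invented for illustration:

```python
import numpy as np

def difference_map(image, translated, threshold=0.2):
    """Anomaly localization by one-directional translation: subtract
    the model's 'healthy' version from the input; large absolute
    responses flag potential anomalies."""
    diff = np.abs(image - translated)
    return diff, diff > threshold

image = np.zeros((8, 8))
image[2:4, 2:4] = 1.0            # a bright 'lesion' patch
translated = np.zeros((8, 8))    # pretend the GAN removed the lesion
diff, mask = difference_map(image, translated)
```

For a healthy input the translation is close to the identity, so the difference map stays near zero everywhere.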
33. Sarv Ahrabi S, Momenzadeh A, Baccarelli E, Scarpiniti M, Piazzo L. How much BiGAN and CycleGAN-learned hidden features are effective for COVID-19 detection from CT images? A comparative study. The Journal of Supercomputing 2022; 79:2850-2881. PMID: 36042937; PMCID: PMC9411851; DOI: 10.1007/s11227-022-04775-y. Accepted 08/10/2022.
Abstract
Bidirectional generative adversarial networks (BiGANs) and cycle generative adversarial networks (CycleGANs) are two emerging machine learning models that, up to now, have been used as generative models, i.e., to generate output data sampled from a target probability distribution. However, these models are also equipped with encoding modules which, after weakly supervised training, could in principle be exploited for the extraction of hidden features from the input data. At present, how these extracted features could be effectively exploited for classification tasks is still an unexplored field. Motivated by this consideration, in this paper we develop and numerically test a novel inference engine that relies on BiGAN- and CycleGAN-learned hidden features to distinguish COVID-19 from other lung diseases in computed tomography (CT) scans. The main contributions of the paper are twofold. First, we develop a kernel density estimation (KDE)-based inference method which, in the training phase, leverages the hidden features extracted by BiGANs and CycleGANs to estimate the (a priori unknown) probability density function (PDF) of the CT scans of COVID-19 patients and then, in the inference phase, uses it as a target COVID-PDF for the detection of COVID-19. As a second major contribution, we numerically evaluate and compare the classification accuracies of the implemented BiGAN and CycleGAN models against those of some state-of-the-art methods that rely on the unsupervised training of convolutional autoencoders (CAEs) for feature extraction. The performance comparisons are carried out over a spectrum of different training loss functions and distance metrics. The classification accuracies of the proposed CycleGAN-based (resp., BiGAN-based) models outperform those of the considered benchmark CAE-based models by about 16% (resp., 14%).
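The KDE-based inference idea can be illustrated in one dimension: fit a Gaussian KDE to each class's hidden features and assign a query to the class with the higher estimated density. The feature values below are synthetic stand-ins for GAN-learned features, and the bandwidth is invented:

```python
import numpy as np

def kde_logpdf(x, samples, bandwidth=0.5):
    """Log of a 1-D Gaussian kernel density estimate evaluated at x."""
    z = (x - samples[:, None]) / bandwidth
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return np.log(k.mean(axis=0) / bandwidth)

rng = np.random.default_rng(1)
covid_feats = rng.normal(0.0, 1.0, 300)   # stand-in: COVID-class features
other_feats = rng.normal(4.0, 1.0, 300)   # stand-in: other lung diseases

def classify(x):
    """Pick the class whose estimated density is higher at x."""
    return np.where(kde_logpdf(x, covid_feats) > kde_logpdf(x, other_feats),
                    "covid", "other")

preds = classify(np.array([-0.2, 0.1, 3.9, 4.2]))
```

The same density comparison extends to multivariate features with a product or multivariate Gaussian kernel.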
34. Elshennawy NM, Ibrahim DM, Sarhan AM, Arafa M. Deep-Risk: Deep Learning-Based Mortality Risk Predictive Models for COVID-19. Diagnostics (Basel) 2022; 12(8):1847. PMID: 36010198; PMCID: PMC9406405; DOI: 10.3390/diagnostics12081847. Received 06/10/2022; revised 07/22/2022; accepted 07/26/2022.
Abstract
The SARS-CoV-2 virus has proliferated around the world and caused widespread alarm as it claimed many lives. Since COVID-19 is highly contagious and spreads quickly, an early diagnosis is essential, and identifying COVID-19 patients' mortality risk factors is essential for reducing this risk among infected individuals. For the timely examination of large datasets, new computing approaches must be created. Many machine learning (ML) techniques have been developed to predict mortality risk factors and severity for COVID-19 patients. Contrary to expectations, however, deep learning approaches as well as ML algorithms have not been widely applied to predicting COVID-19 mortality and severity, and the accuracy achieved by ML algorithms has been less than anticipated. In this work, three supervised deep learning predictive models are used to predict mortality risk and severity for COVID-19 patients. The first, which we refer to as CV-CNN, is built using a convolutional neural network (CNN); it is trained on a clinical dataset of 12,020 patients using the 10-fold cross-validation (CV) approach for training and validation. The second predictive model, which we refer to as CV-LSTM + CNN, combines the long short-term memory (LSTM) approach with a CNN model and is also trained on the clinical dataset with 10-fold CV. The first two predictive models use the clinical dataset in its original CSV form. The last one, which we refer to as IMG-CNN, is a CNN model trained instead on images converted from the clinical dataset, where each image corresponds to a data row of the original. The experimental results revealed that the IMG-CNN predictive model outperforms the other two with an average accuracy of 94.14%, a precision of 100%, a recall of 91.0%, a specificity of 100%, an F1-score of 95.3%, an AUC of 93.6%, and a loss of 0.22.
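The metrics reported above follow from confusion-matrix counts in the standard way. The counts below are illustrative only, chosen to roughly reproduce the reported precision (100%) and recall (91%); they are not the paper's actual counts:

```python
def binary_metrics(tp, fp, tn, fn):
    """The reported metrics, computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)               # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Illustrative counts: no false positives, 9 missed positives.
acc, prec, rec, spec, f1 = binary_metrics(tp=91, fp=0, tn=100, fn=9)
```

Note that with zero false positives, precision and specificity are both exactly 1, matching the reported 100% values.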
35. Rjoub G, Wahab OA, Bentahar J, Cohen R, Bataineh AS. Trust-Augmented Deep Reinforcement Learning for Federated Learning Client Selection. Information Systems Frontiers 2022:1-18. PMID: 35875592; PMCID: PMC9294770; DOI: 10.1007/s10796-022-10307-z. Accepted 06/08/2022.
Abstract
In the context of distributed machine learning, the concept of federated learning (FL) has emerged as a solution to the privacy concerns that users have about sharing their own data with a third-party server. FL allows a group of users (often referred to as clients) to locally train a single machine learning model on their devices without sharing their raw data. One of the main challenges in FL is how to select the most appropriate clients to participate in the training of a certain task. In this paper, we address this challenge and propose a trust-based deep reinforcement learning approach to select the most adequate clients in terms of resource consumption and training time. On top of the client selection mechanism, we embed a transfer learning approach to handle the scarcity of data in some regions and compensate for a potential lack of learning at some servers. We apply our solution in the healthcare domain, in a COVID-19 detection scenario over IoT devices: edge servers collaborate with IoT devices to train a COVID-19 detection model using FL without sharing any raw confidential data. Experiments conducted on a real-world COVID-19 dataset reveal that our solution achieves a good trade-off between detection accuracy and model execution time compared to existing approaches.
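The aggregation step underlying such FL systems is typically FedAvg: a dataset-size-weighted average of the selected clients' model weights. A minimal sketch of that standard step (the paper's trust-based selection logic is not reproduced here):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg: aggregate client model weights, weighted by the size of
    each client's local dataset."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

w_a = np.array([1.0, 2.0])   # model from client A (100 local samples)
w_b = np.array([3.0, 4.0])   # model from client B (300 local samples)
global_w = fedavg([w_a, w_b], [100, 300])   # -> [2.5, 3.5]
```

A client-selection policy such as the paper's would decide which weight vectors enter this average each round.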
36. Habib M, Ramzan M, Khan SA. A Deep Learning and Handcrafted Based Computationally Intelligent Technique for Effective COVID-19 Detection from X-ray/CT-scan Imaging. Journal of Grid Computing 2022; 20:23. PMID: 35874855; PMCID: PMC9294765; DOI: 10.1007/s10723-022-09615-0. Received 10/28/2021; accepted 06/27/2022.
Abstract
The world has witnessed dramatic changes since the advent of COVID-19 in the last few days of 2019. Over the more than two years since, COVID-19 has badly affected the world in diverse ways, harming not only human health and the mortality rate but also the economic condition on a global scale. There is an urgent need to cope with this pandemic and its diverse effects. Medical imaging has revolutionized the treatment of various diseases during the last four decades, and automated detection and classification systems have proven to be of great assistance to doctors and the scientific community. In this paper, a novel framework for an efficient COVID-19 classification system is proposed based on a hybrid feature extraction approach. After preprocessing the image data, two types of features, deep learning and handcrafted, are extracted. For deep learning features, two pre-trained models, ResNet101 and DenseNet201, are used. Handcrafted features are extracted using the Weber Local Descriptor (WLD): the excitation component of WLD is utilized, and the features are reduced using the discrete cosine transform (DCT). The deep features from both models and the handcrafted features are fused, and significant features are selected using entropy. A comprehensive set of experiments has been performed, and the results are compared with existing well-known methods. The proposed technique performs better in terms of accuracy and time.
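One common form of entropy-based feature selection scores each fused feature by its histogram-estimated Shannon entropy and keeps the top-k. This is a generic illustration of that idea, not the paper's exact criterion; the data columns are synthetic:

```python
import numpy as np

def feature_entropy(col, bins=10):
    """Shannon entropy of one feature, estimated from its histogram."""
    counts, _ = np.histogram(col, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_by_entropy(X, k):
    """Keep the k features whose values are most spread out."""
    scores = np.array([feature_entropy(X[:, j]) for j in range(X.shape[1])])
    keep = np.argsort(scores)[::-1][:k]
    return X[:, keep], keep

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0, 1, 200),   # high entropy
                     np.full(200, 0.5),        # constant: zero entropy
                     rng.normal(0, 1, 200)])   # moderate entropy
X_sel, kept = select_by_entropy(X, k=2)
```

The constant column carries no information and is the one dropped.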
37. Sobahi N, Atila O, Deniz E, Sengur A, Acharya UR. Explainable COVID-19 detection using fractal dimension and vision transformer with Grad-CAM on cough sounds. Biocybern Biomed Eng 2022; 42:1066-1080. PMID: 36092540; PMCID: PMC9444505; DOI: 10.1016/j.bbe.2022.08.005. Received 05/23/2022; revised 08/28/2022; accepted 08/29/2022.
Abstract
The polymerase chain reaction (PCR) test is not only time-intensive but also a contact method that puts healthcare personnel at risk, so contactless and fast detection tests are more valuable. Cough sound is an important indicator of COVID-19, and in this paper a novel explainable scheme is developed for cough sound-based COVID-19 detection. In the presented work, the input audio, which may contain other sounds, is initially segmented into overlapping parts, and each segment is labeled using the deep Yet Another Mobile Network (YAMNet) model. After labeling, the segments labeled as cough are cropped and concatenated to reconstruct the pure cough sounds. Then, four fractal dimension (FD) calculation methods are employed to acquire the FD coefficients on the cough sound with an overlapped sliding window, forming a matrix. The constructed matrices are then used to form fractal dimension images. Finally, a pretrained vision transformer (ViT) model classifies the constructed images into COVID-19, healthy, and symptomatic classes. We demonstrate the performance of the ViT on cough sound-based COVID-19 detection and provide a visual explanation of the inner workings of the ViT model. Three publicly available cough sound datasets, namely COUGHVID, VIRUFY, and COSWARA, are used in this study, on which we obtained 98.45%, 98.15%, and 97.59% accuracy, respectively. Our model obtained the highest performance compared to the state-of-the-art methods and is ready to be tested in real-world applications.
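One common fractal dimension estimator that could fill the sliding-window FD step above is the Katz FD; the abstract does not specify which four methods the paper uses, so this is only an illustrative choice, and the window sizes are invented:

```python
import numpy as np

def katz_fd(signal):
    """Katz fractal dimension of a 1-D waveform."""
    pts = np.column_stack([np.arange(len(signal)), signal])
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    L = steps.sum()                                   # total curve length
    d = np.linalg.norm(pts - pts[0], axis=1).max()    # max span from start
    n = len(steps)
    return np.log10(n) / (np.log10(d / L) + np.log10(n))

def fd_sequence(signal, win=64, hop=32):
    """Sliding-window FD values, one row of an 'FD image'."""
    return np.array([katz_fd(signal[i:i + win])
                     for i in range(0, len(signal) - win + 1, hop)])

line = np.arange(256, dtype=float)                 # straight line: FD ~ 1
fd_line = katz_fd(line)
noise = np.random.default_rng(0).normal(size=256)  # rougher signal: FD > 1
fd_noise = katz_fd(noise)
```

Stacking such rows from several FD estimators yields a matrix that can be rendered as the "fractal dimension image" fed to the ViT.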
38. Meraihi Y, Gabis AB, Mirjalili S, Ramdane-Cherif A, Alsaadi FE. Machine Learning-Based Research for COVID-19 Detection, Diagnosis, and Prediction: A Survey. SN Computer Science 2022; 3:286. PMID: 35578678; PMCID: PMC9096341; DOI: 10.1007/s42979-022-01184-z. Received 01/09/2022; accepted 04/30/2022.
Abstract
The year 2020 experienced an unprecedented pandemic called COVID-19, which impacted the whole world. The absence of treatment has motivated research in all fields to deal with it. In Computer Science, contributions mainly include the development of methods for the diagnosis, detection, and prediction of COVID-19 cases. Data science and Machine Learning (ML) are the most widely used techniques in this area. This paper presents an overview of more than 160 ML-based approaches developed to combat COVID-19, drawn from sources such as Elsevier, Springer, ArXiv, MedRxiv, and IEEE Xplore. They are analyzed and classified into two categories: Supervised Learning-based approaches and Deep Learning-based ones. In each category, the employed ML algorithm is specified and the parameters used are given, gathered in different tables: the type of the addressed problem (detection, diagnosis, or prediction), the type of the analyzed data (text data, X-ray images, CT images, time series, clinical data, ...), and the evaluation metrics (accuracy, precision, sensitivity, specificity, F1-Score, and AUC). The study discusses the collected information and provides a number of statistics painting a picture of the state of the art. Results show that Deep Learning is used in 79% of cases, of which 65% are based on the Convolutional Neural Network (CNN) and 17% use specialized CNNs. For its part, supervised learning is found in only 16% of the reviewed approaches, employing only Random Forest, Support Vector Machine (SVM), and regression algorithms.
39. Ho TT, Tran KD, Huang Y. FedSGDCOVID: Federated SGD COVID-19 Detection under Local Differential Privacy Using Chest X-ray Images and Symptom Information. Sensors 2022; 22(10):3728. PMID: 35632136; PMCID: PMC9147951; DOI: 10.3390/s22103728. Received 04/19/2022; revised 05/09/2022; accepted 05/10/2022.
Abstract
Coronavirus (COVID-19) has created an unprecedented global crisis because of its detrimental effect on the global economy and health. COVID-19 cases have been rapidly increasing, with no sign of stopping; as a result, test kits and accurate detection models are in short supply. Early identification of COVID-19 patients will help decrease the infection rate, so developing an automatic algorithm that enables the early detection of COVID-19 is essential. Moreover, patient data are sensitive and must be protected to prevent malicious attackers from revealing information through model updates and reconstruction. In this study, we present a privacy-preserving federated learning system for COVID-19 detection that does not share data among data owners. First, we construct a federated learning system using chest X-ray images and symptom information, the purpose being to develop a decentralized model across multiple hospitals without sharing data. We find that adding spatial pyramid pooling to a 2D convolutional neural network improves the accuracy on chest X-ray images. Second, we observe that the accuracy of federated learning for COVID-19 identification is reduced significantly for non-independent and identically distributed (Non-IID) data, and we propose a strategy to improve the model's accuracy on Non-IID data by increasing the total number of clients, the parallelism (client fraction), and the computation per client. Finally, we apply differentially private stochastic gradient descent (DP-SGD) to improve the privacy of patient data, and we propose a strategy to maintain the robustness of federated learning and ensure the security and accuracy of the model.
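The DP-SGD step mentioned above clips each per-example gradient and adds Gaussian noise before aggregation. A minimal sketch of that standard mechanism, with invented gradients and hyperparameters (the paper's settings are not given here):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD aggregation: clip each per-example gradient to
    clip_norm, sum, add Gaussian noise scaled to the clipping bound,
    then average over the batch."""
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    total = np.sum(clipped, axis=0)
    total = total + rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return total / len(per_example_grads), clipped

grads = [np.array([3.0, 4.0]),    # norm 5.0 -> clipped to norm 1.0
         np.array([0.3, 0.4])]    # norm 0.5 -> left untouched
avg_grad, clipped = dp_sgd_step(grads)
```

Clipping bounds each example's influence, which is what lets the added Gaussian noise translate into a formal privacy guarantee.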
40. Basu A, Sheikh KH, Cuevas E, Sarkar R. COVID-19 detection from CT scans using a two-stage framework. Expert Systems with Applications 2022; 193:116377. PMID: 35002099; PMCID: PMC8720180; DOI: 10.1016/j.eswa.2021.116377. Received 03/16/2021; revised 11/09/2021; accepted 12/04/2021.
Abstract
Coronavirus disease 2019 (COVID-19) is a contagious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). It may cause serious ailments in infected individuals, and complications may lead to death. X-rays and Computed Tomography (CT) scans can be used for the diagnosis of the disease, and various methods have been proposed for the detection of COVID-19 from radiological images. In this work, we propose an end-to-end framework consisting of deep feature extraction followed by feature selection (FS) for the detection of COVID-19 from CT scan images. For feature extraction, we utilize three deep learning based Convolutional Neural Networks (CNNs). For FS, we use a meta-heuristic optimization algorithm, Harmony Search (HS), combined with a local search method, Adaptive β-Hill Climbing (AβHC), for better performance. We evaluate the proposed approach on the SARS-COV-2 CT-Scan Dataset consisting of 2482 CT scan images and an updated version of that dataset containing 2926 CT scan images. For comparison, we use a few state-of-the-art optimization algorithms. The best accuracy scores obtained by the present approach are 97.30% and 98.87%, respectively, on the said datasets, which are better than many of the algorithms used for comparison and on par with some recent works using the same datasets. The codes for the FS algorithms are available at: https://github.com/khalid0007/Metaheuristic-Algorithms.
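The local-search half of the FS stage can be sketched as hill climbing over binary feature masks: flip a bit, keep the flip if the objective improves. This toy version omits Harmony Search and the adaptive β mechanism entirely, and the per-feature scores and penalty are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, scores, penalty=0.01):
    """Toy FS objective: summed usefulness of the selected features,
    minus a small cost per feature kept."""
    return scores[mask].sum() - penalty * mask.sum()

def hill_climb_fs(scores, iters=200):
    """Flip one random bit of the feature mask per iteration and keep
    the flip only if the objective improves."""
    mask = rng.random(len(scores)) < 0.5
    best = fitness(mask, scores)
    for _ in range(iters):
        cand = mask.copy()
        j = rng.integers(len(scores))
        cand[j] = not cand[j]
        f = fitness(cand, scores)
        if f > best:
            mask, best = cand, f
    return mask, best

scores = np.array([0.9, -0.2, 0.5, -0.7, 0.3])  # invented per-feature scores
mask, best = hill_climb_fs(scores)              # keeps features 0, 2, 4
```

In the paper's two-stage framework, a real classifier's accuracy would play the role of this toy fitness function.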
|
41
|
Aggarwal P, Mishra NK, Fatimah B, Singh P, Gupta A, Joshi SD. COVID-19 image classification using deep learning: Advances, challenges and opportunities. Comput Biol Med 2022; 144:105350. [PMID: 35305501 PMCID: PMC8890789 DOI: 10.1016/j.compbiomed.2022.105350] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Received: 11/15/2021] [Revised: 02/10/2022] [Accepted: 02/22/2022] [Indexed: 12/16/2022]
Abstract
Corona Virus Disease-2019 (COVID-19), caused by Severe Acute Respiratory Syndrome-Corona Virus-2 (SARS-CoV-2), is a highly contagious disease that has affected the lives of millions around the world. Chest X-Ray (CXR) and Computed Tomography (CT) imaging modalities are widely used to obtain a fast and accurate diagnosis of COVID-19. However, manual identification of the infection from radiographic images is extremely challenging because it is time-consuming and highly prone to human error. Artificial Intelligence (AI) techniques have shown potential and are being exploited further in the development of automated and accurate solutions for COVID-19 detection. Among AI methodologies, Deep Learning (DL) algorithms, particularly Convolutional Neural Networks (CNNs), have gained significant popularity for the classification of COVID-19. This paper summarizes and reviews a number of significant research publications on the DL-based classification of COVID-19 from CXR and CT images. We also present an outline of the current state-of-the-art advances and a critical discussion of open challenges. We conclude our study by enumerating some future directions of research in COVID-19 imaging classification.
|
42
|
Muhammad U, Hoque MZ, Oussalah M, Keskinarkaus A, Seppänen T, Sarder P. SAM: Self-augmentation mechanism for COVID-19 detection using chest X-ray images. Knowl Based Syst 2022; 241:108207. [PMID: 35068707 PMCID: PMC8762871 DOI: 10.1016/j.knosys.2022.108207] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Received: 08/12/2021] [Revised: 01/07/2022] [Accepted: 01/08/2022] [Indexed: 12/20/2022]
Abstract
COVID-19 is a rapidly spreading viral disease that has affected over 100 countries worldwide. The numbers of casualties and cases of infection have escalated, particularly in countries with weakened healthcare systems. Reverse transcription-polymerase chain reaction (RT-PCR) is currently the test of choice for diagnosing COVID-19. However, current evidence suggests that COVID-19-infected patients mostly develop a lung infection after coming into contact with the virus. Therefore, chest X-ray (i.e., radiography) and chest CT can be a surrogate in countries where PCR is not readily available. This has prompted the scientific community to detect COVID-19 infection from X-ray images, and recently proposed machine learning methods offer great promise for fast and accurate detection. Deep learning with convolutional neural networks (CNNs) has been successfully applied to radiological imaging to improve the accuracy of diagnosis. However, performance remains limited due to the lack of representative X-ray images in public benchmark datasets. To alleviate this issue, we propose a self-augmentation mechanism that performs data augmentation in the feature space, rather than in the data space, using reconstruction independent component analysis (RICA). Specifically, a unified architecture is proposed that contains a deep convolutional neural network (CNN), a feature augmentation mechanism, and a bidirectional LSTM (BiLSTM). The CNN provides the high-level features extracted at the pooling layer, from which the augmentation mechanism chooses the most relevant features and generates low-dimensional augmented features. Finally, the BiLSTM classifies the processed sequential information. We conducted experiments on three publicly available databases to show that the proposed approach achieves state-of-the-art results with accuracies of 97%, 84% and 98%. Explainability analysis was carried out using feature visualization through PCA projection and t-SNE plots.
|
43
|
Subramanian N, Elharrouss O, Al-Maadeed S, Chowdhury M. A review of deep learning-based detection methods for COVID-19. Comput Biol Med 2022; 143:105233. [PMID: 35180499 PMCID: PMC8798789 DOI: 10.1016/j.compbiomed.2022.105233] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Received: 06/06/2021] [Revised: 01/10/2022] [Accepted: 01/10/2022] [Indexed: 12/16/2022]
Abstract
COVID-19 is a fast-spreading pandemic, and early detection is crucial for stopping the spread of infection. Lung images are used in the detection of coronavirus infection: both chest X-ray (CXR) and computed tomography (CT) images are available for the detection of COVID-19. Deep learning methods have proven efficient and better performing in many computer vision and medical imaging applications, and with the rise of the COVID pandemic, researchers are using them to detect coronavirus infection in lung images. In this paper, the currently available deep learning methods used to detect coronavirus infection in lung images are surveyed. The available methodologies, the public datasets, the datasets used by each method, and the evaluation metrics are summarized to help future researchers, and the evaluation metrics used by the methods are comprehensively compared.
|
44
|
Sobahi N, Sengur A, Tan RS, Acharya UR. Attention-based 3D CNN with residual connections for efficient ECG-based COVID-19 detection. Comput Biol Med 2022; 143:105335. [PMID: 35219186 PMCID: PMC8858432 DOI: 10.1016/j.compbiomed.2022.105335] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Received: 12/09/2021] [Revised: 02/17/2022] [Accepted: 02/17/2022] [Indexed: 02/09/2023]
Abstract
BACKGROUND The world has been suffering from the COVID-19 pandemic since 2019, and more than 5 million people have died. Pneumonia caused by the COVID-19 virus can be diagnosed using chest X-ray and computed tomography (CT) scans. COVID-19 also causes clinical and subclinical cardiovascular injury that may be detected on electrocardiography (ECG), which is easily accessible. METHOD For ECG-based COVID-19 detection, we developed a novel attention-based 3D convolutional neural network (CNN) model with residual connections (RC). The deep learning (DL) approach was developed using 12-lead ECG printouts obtained from 250 normal subjects, 250 patients with COVID-19 and 250 patients with abnormal heartbeat. For binary classification, the COVID-19 and normal classes were considered; for multiclass classification, all classes were used. The ECGs were preprocessed into standard ECG lead segments that were channeled into 12-dimensional volumes as input to the network. The developed model comprises 19 layers: three 3D convolutional, three batch normalization, three rectified linear unit, two dropout, two addition (for the residual connections), one attention, and one fully connected layer. The residual connections were used to improve gradient flow through the network, and the attention layer connects the second residual connection to the fully connected layer through the batch normalization layer. RESULTS A publicly available dataset was used in this work. We obtained average accuracies of 99.0% and 92.0% for binary and multiclass classification, respectively, using ten-fold cross-validation. Our proposed model is ready to be tested with a large ECG database.
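The residual wiring central to the model above, a block whose output is activation(F(x) + x), can be illustrated with a tiny NumPy stand-in. The "convolution" here is deliberately simplified to a channel-mixing linear map so the skip connection is the focus; this sketches the residual-connection idea only, not the paper's 19-layer attention architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def channel_mix(x, w):
    """Stand-in for a 3D convolution: a linear map over the channel axis,
    kept simple so the residual wiring below is easy to follow."""
    return np.tensordot(x, w, axes=([-1], [0]))

def residual_block(x, w1, w2):
    """y = ReLU(F(x) + x): the additive shortcut lets gradients bypass F,
    which is why residual connections improve gradient flow."""
    h = relu(channel_mix(x, w1))
    h = channel_mix(h, w2)
    return relu(h + x)       # the residual (skip) connection
```

With identity weights the block doubles any positive input, which makes the shortcut's contribution easy to verify by hand.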
|
45
|
Rana A, Singh H, Mavuduru R, Pattanaik S, Rana PS. Quantifying prognosis severity of COVID-19 patients from deep learning based analysis of CT chest images. Multimedia Tools and Applications 2022; 81:18129-18153. [PMID: 35282403 PMCID: PMC8901869 DOI: 10.1007/s11042-022-12214-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 02/17/2021] [Revised: 01/04/2022] [Accepted: 01/10/2022] [Indexed: 05/28/2023]
Abstract
The COVID-19 pandemic has affected all the countries in the world through its droplet mode of spread. The colossal number of cases has strained healthcare systems because of the serious nature of the infection, especially for people with comorbidities. The very high-specificity Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) test is the principal technique in use for diagnosing COVID-19 patients. CT scans have also helped medical professionals in estimating patient severity and tracking the progression of COVID-19. In this study, we present our own extensible COVID-19 viral infection tracking prognosis technique. It uses an annotated dataset of CT chest scan slice images created with the help of medical professionals. The annotated dataset contains bounding-box coordinates of different features for COVID-19 detection, such as ground-glass opacities, crazy-paving pattern, consolidations and lesions. We qualitatively identify the severity of the patient for later prognosis stages to assist medical staff in patient prioritization. First, we detected COVID-19-positive patients with a pre-trained Siamese Neural Network (SNN), which obtained 87.6% accuracy, 87.1% F1-score and 95.1% AUC. These metrics were achieved after removal of 40% quantitatively highly similar images from the COVID-CT dataset. This reduced dataset was further medically annotated with COVID-19 features for bounding-box detection. We then assigned severity scores to the detected COVID-19 features and calculated a cumulative severity score for each patient. For qualitative patient prioritization with prognosis clinical assistance information, we finally converted this score into a multi-class classification problem, which obtained a 47% weighted-average F1-score.
|
46
|
Mary Shyni H, Chitra E. A comparative study of X-ray and CT images in COVID-19 detection using image processing and deep learning techniques. Computer Methods and Programs in Biomedicine Update 2022; 2:100054. [PMID: 35281724 PMCID: PMC8898857 DOI: 10.1016/j.cmpbup.2022.100054] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 05/03/2023]
Abstract
The deadly coronavirus has not just devastated the lives of millions but has put the entire healthcare system under tremendous pressure. Early diagnosis of COVID-19 plays a significant role in isolating positive cases and preventing the further spread of the disease. Medical images combined with deep learning models have provided faster and more accurate results in the detection of COVID-19. This article extensively reviews recent deep learning techniques for COVID-19 diagnosis. The research articles discussed reveal that the Convolutional Neural Network (CNN) is the most popular deep learning algorithm for detecting COVID-19 from medical images. The article summarizes the necessity of pre-processing the medical images, transfer learning and data augmentation techniques to deal with data scarcity, the use of pre-trained models to save time, and the role of medical images in the automatic detection of COVID-19. It also provides a sensible outlook for young researchers aiming to develop highly effective CNN models coupled with medical images for early detection of the disease.
|
47
|
Hassan H, Ren Z, Zhao H, Huang S, Li D, Xiang S, Kang Y, Chen S, Huang B. Review and classification of AI-enabled COVID-19 CT imaging models based on computer vision tasks. Comput Biol Med 2022; 141:105123. [PMID: 34953356 PMCID: PMC8684223 DOI: 10.1016/j.compbiomed.2021.105123] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Received: 09/29/2021] [Revised: 12/03/2021] [Accepted: 12/03/2021] [Indexed: 01/12/2023]
Abstract
This article presents a systematic overview of artificial intelligence (AI) and computer vision strategies for diagnosing the coronavirus disease of 2019 (COVID-19) using computerized tomography (CT) medical images. We analyzed the previous review works and found that all of them ignored classifying and categorizing COVID-19 literature based on computer vision tasks, such as classification, segmentation, and detection. Most of the COVID-19 CT diagnosis methods comprehensively use segmentation and classification tasks. Moreover, most of the review articles are diverse and cover CT as well as X-ray images. Therefore, we focused on the COVID-19 diagnostic methods based on CT images. Well-known search engines and databases such as Google, Google Scholar, Kaggle, Baidu, IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus were utilized to collect relevant studies. After deep analysis, we collected 114 studies and reported highly enriched information for each selected research. According to our analysis, AI and computer vision have substantial potential for rapid COVID-19 diagnosis as they could significantly assist in automating the diagnosis process. Accurate and efficient models will have real-time clinical implications, though further research is still required. Categorization of literature based on computer vision tasks could be helpful for future research; therefore, this review article will provide a good foundation for conducting such research.
|
48
|
Kumar A, Tripathi AR, Satapathy SC, Zhang YD. SARS-Net: COVID-19 detection from chest x-rays by combining graph convolutional network and convolutional neural network. Pattern Recognition 2022; 122:108255. [PMID: 34456369 PMCID: PMC8386119 DOI: 10.1016/j.patcog.2021.108255] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Received: 04/10/2021] [Revised: 08/05/2021] [Accepted: 08/12/2021] [Indexed: 05/19/2023]
Abstract
COVID-19 has emerged as one of the deadliest pandemics that has ever crept up on humanity. Screening tests are currently the most reliable and accurate steps in detecting severe acute respiratory syndrome coronavirus in a patient, and the most widely used is RT-PCR testing. Various researchers and early studies implied that visual indicators (abnormalities) in a patient's Chest X-Ray (CXR) or computed tomography (CT) imaging are a valuable characteristic of a COVID-19 patient that can be leveraged to find the virus in a vast population. Motivated by various contributions from the open-source community to tackle the COVID-19 pandemic, we introduce SARS-Net, a CADx system combining Graph Convolutional Networks and Convolutional Neural Networks for detecting abnormalities in a patient's CXR images that indicate the presence of COVID-19 infection. In this paper, we introduce and evaluate the performance of this custom-made deep learning architecture for classifying chest X-ray images for COVID-19 diagnosis. Quantitative analysis shows that the proposed model achieves higher accuracy than previously reported state-of-the-art methods: it achieved an accuracy of 97.60% and a sensitivity of 92.90% on the validation set.
|
49
|
SARS-CoV-2 Detection Using Optical Fiber Based Sensor Method. Sensors 2022; 22:s22030751. [PMID: 35161497 PMCID: PMC8839674 DOI: 10.3390/s22030751] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Received: 12/19/2021] [Revised: 01/18/2022] [Accepted: 01/18/2022] [Indexed: 01/27/2023]
Abstract
The SARS-CoV-2 coronavirus disease, also known as the COVID-19 pandemic, has posed the biggest challenge to human life over the last two years. With the rapid increase in the spread of the Omicron variant across the world, and to contain the spread of COVID-19 in general, it is crucial to identify this viral infection rapidly and with minimal logistics. To achieve this, a novel plastic optical fiber (POF) U-shaped probe sensing method is presented for accurate detection of SARS-CoV-2, commonly known as the COVID-19 virus, with the capability to detect new variants such as Omicron. The sample under test is taken from the oropharynx or nasopharynx via a specific POF U-shaped probe, one end of which is fed by a laser source while the other end is connected to a photodetector that receives the response and post-processes it for decision-making. The study includes a detection comparison between two types of POF with diameters of 200 and 500 µm; results show that detection is better with the smaller-diameter POF. It is also seen that the proposed test bed and its envisaged prototype can detect COVID-19 variants within 15 min of the test. The proposed approach will make clinical diagnosis faster, cheaper and applicable to patients in remote areas that lack hospitals or clinical laboratories due to poverty, geographic obstacles or other factors.
|
50
|
Paul A, Basu A, Mahmud M, Kaiser MS, Sarkar R. Inverted bell-curve-based ensemble of deep learning models for detection of COVID-19 from chest X-rays. Neural Comput Appl 2022; 35:1-15. [PMID: 35013650 PMCID: PMC8729326 DOI: 10.1007/s00521-021-06737-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 03/08/2021] [Accepted: 09/21/2021] [Indexed: 12/20/2022]
Abstract
Novel Coronavirus 2019 disease, or COVID-19, is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The use of chest X-rays (CXRs) has become an important practice to assist in the diagnosis of COVID-19, as they can be used to detect the abnormalities that develop in infected patients' lungs. With the fast spread of the disease, many researchers across the world are striving to use deep learning-based systems to identify COVID-19 from such CXR images. To this end, we propose an inverted bell-curve-based ensemble of deep learning models for the detection of COVID-19 from CXR images. We first take a selection of models pretrained on the ImageNet dataset and use the concept of transfer learning to retrain them on CXR datasets. The trained models are then combined with the proposed inverted bell-curve weighted ensemble method, in which the output of each classifier is assigned a weight and the final prediction is a weighted average of those outputs. We evaluate the proposed method on two publicly available datasets: the COVID-19 Radiography Database and the IEEE COVID Chest X-ray Dataset. The accuracy, F1 score and AUC-ROC achieved by the proposed method are 99.66%, 99.75% and 99.99%, respectively, on the first dataset, and 99.84%, 99.81% and 99.99%, respectively, on the other. The experimental results confirm that the use of transfer-learning-based models and their combination using the proposed ensemble method result in improved predictions of COVID-19 in CXRs.
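The weighted soft-voting step this entry describes can be sketched as follows. The exact inverted-bell weighting function is the paper's contribution and is not reproduced here; the form below (weight growing as an upside-down Gaussian of validation accuracy around an assumed midpoint `mu`, meaningful for accuracies above `mu`) is only a plausible stand-in, and `mu` and `sigma` are placeholder values:

```python
import numpy as np

def inverted_bell_weights(accs, mu=0.5, sigma=0.2):
    """Assumed weighting: w = exp((a - mu)^2 / (2 sigma^2)), an inverted
    (upside-down) Gaussian of validation accuracy, so weight rises steeply
    as accuracy moves from mu toward 1.0. Weights are normalized to sum 1."""
    a = np.asarray(accs, dtype=float)
    w = np.exp((a - mu) ** 2 / (2 * sigma ** 2))
    return w / w.sum()

def weighted_ensemble_predict(probs, weights):
    """Fuse per-model class-probability arrays by weighted averaging,
    then predict the argmax class for each sample."""
    probs = np.asarray(probs, dtype=float)   # (n_models, n_samples, n_classes)
    fused = np.tensordot(weights, probs, axes=1)
    return fused.argmax(axis=-1)
```

Compared with plain soft voting, the steep weighting sharply discounts weaker models instead of averaging them in on equal terms.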
|