1
Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. [PMID: 37976760] [DOI: 10.1016/j.ejrad.2023.111159]
Abstract
PURPOSE To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, bioRxiv, medRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, COVID-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The most frequently provided explanations were local (78.1 %), 5.7 % were global, and 16.2 % combined both local and global approaches. Post-hoc approaches were predominantly employed. The terminology used varied, sometimes employing interchangeably explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3). CONCLUSION The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably. Future XAI development should consider user needs and perspectives.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland.
- Henning Müller
- Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais) Sierre, CH, Switzerland; Medical faculty, University of Geneva, CH, Switzerland.
- John O Prior
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, CH, Switzerland.
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland.
2
Ghafoori M, Hamidi M, Modegh RG, Aziz-Ahari A, Heydari N, Tavafizadeh Z, Pournik O, Emdadi S, Samimi S, Mohseni A, Khaleghi M, Dashti H, Rabiee HR. Predicting survival of Iranian COVID-19 patients infected by various variants including omicron from CT Scan images and clinical data using deep neural networks. Heliyon 2023; 9:e21965. [PMID: 38058649] [PMCID: PMC10696006] [DOI: 10.1016/j.heliyon.2023.e21965]
Abstract
Purpose: The rapid spread of the COVID-19 omicron variant has overloaded hospitals around the globe. As a result, many patients are deprived of hospital facilities, increasing mortality rates. Mortality rates can therefore be reduced by efficiently assigning facilities to higher-risk patients, so it is crucial to estimate patients' survival probability from their condition at the time of admission: the minimum required facilities can then be provided, leaving more available for those who need them. Although radiologic findings in chest computerized tomography scans show various patterns, considering individual risk factors and other underlying diseases, it is difficult to predict patient prognosis through routine clinical or statistical analysis. Method: In this study, a deep neural network model is proposed for predicting survival based on simple clinical features, blood tests, axial computerized tomography scan images of the lungs, and the patients' planned treatment. The model's architecture combines a Convolutional Neural Network and a Long Short-Term Memory network. The model was trained on 390 survivors and 108 deceased patients from the Rasoul Akram Hospital and evaluated on 109 surviving and 36 deceased patients infected by the omicron variant. Results: The proposed model reached an accuracy of 87.5% on the test data, indicating that survival prediction is possible. The accuracy was significantly higher than that achieved by classical machine learning methods without computerized tomography scan images (p-value ≤ 4e-5). The images were also replaced with hand-crafted features, related to the ratio of infected lung lobes, in classical machine-learning models. The highest-performing such model reached an accuracy of 84.5%, considerably higher than models trained on clinical information alone (p-value ≤ 0.006), but still significantly below the deep model (p-value ≤ 0.016). Conclusion: The proposed deep model achieved a higher accuracy than classical machine learning methods trained on features other than computerized tomography scan images, indicating that the images contain additional information. Meanwhile, Artificial Intelligence methods with multimodal inputs can be more reliable and accurate than computerized tomography severity scores.
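The comparison drawn in this abstract, models on clinical features alone versus clinical features plus a hand-crafted CT feature such as the infected-lobe ratio, can be sketched with a classical learner on synthetic data (an illustration of the experimental design only, not the authors' model or data; all feature names and coefficients are made up):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
clinical = rng.normal(size=(n, 6))            # stand-ins for age, SpO2, blood tests, ...
lobe_ratio = rng.uniform(0, 1, size=(n, 1))   # hand-crafted CT feature: infected-lobe ratio
# Synthetic outcome dominated by the imaging feature
y = (0.8 * lobe_ratio[:, 0] + 0.3 * clinical[:, 0]
     + rng.normal(0, 0.5, n) > 0.7).astype(int)

clin_only = cross_val_score(GradientBoostingClassifier(random_state=0),
                            clinical, y, cv=5).mean()
combined = cross_val_score(GradientBoostingClassifier(random_state=0),
                           np.hstack([clinical, lobe_ratio]), y, cv=5).mean()
```

On data generated this way, the combined feature set recovers more of the outcome signal than the clinical features alone, mirroring the study's finding that the CT-derived feature adds information.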
Affiliation(s)
- Mahyar Ghafoori
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Mehrab Hamidi
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Rassa Ghavami Modegh
- Data science and Machine learning Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Alireza Aziz-Ahari
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Neda Heydari
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Zeynab Tavafizadeh
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Omid Pournik
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Sasan Emdadi
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Saeed Samimi
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Amir Mohseni
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Mohammadreza Khaleghi
- Radiology Department, Hazrat Rasoul Akram Hospital, School of Medicine, Iran University of Medical Sciences, Hemmat, Tehran, 14535, Iran
- Hamed Dashti
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- Hamid R. Rabiee
- Data science and Machine learning Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- BCB Lab, Department of Computer Engineering, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
- AI-Med Group, AI Innovation Center, Sharif University of Technology, Azadi, Tehran, 11365-8639, Iran
3
Aboutalebi H, Pavlova M, Shafiee MJ, Florea A, Hryniowski A, Wong A. COVID-Net Biochem: an explainability-driven framework to building machine learning models for predicting survival and kidney injury of COVID-19 patients from clinical and biochemistry data. Sci Rep 2023; 13:17001. [PMID: 37813920] [PMCID: PMC10562395] [DOI: 10.1038/s41598-023-42203-0]
Abstract
Since the World Health Organization declared COVID-19 a pandemic in 2020, the global community has faced ongoing challenges in controlling and mitigating the transmission of the SARS-CoV-2 virus, as well as its evolving subvariants and recombinants. A significant challenge during the pandemic has been not only the accurate detection of positive cases but also the efficient prediction of risks associated with complications and patient survival probabilities. These tasks entail considerable clinical resource allocation and attention. In this study, we introduce COVID-Net Biochem, a versatile and explainable framework for constructing machine learning models. We apply this framework to predict COVID-19 patient survival and the likelihood of developing Acute Kidney Injury during hospitalization, utilizing clinical and biochemical data in a transparent, systematic approach. The proposed approach advances machine learning model design by seamlessly integrating domain expertise with explainability tools, enabling model decisions to be based on key biomarkers. This fosters a more transparent and interpretable machine decision-making process for medical applications. More specifically, the framework comprises two phases. In the first phase, referred to as the "clinician-guided design" phase, the dataset is preprocessed using explainable AI and domain expert input. To demonstrate this phase, we prepared a benchmark dataset of carefully curated clinical and biochemical markers, based on clinician assessments, for survival and kidney injury prediction in COVID-19 patients. This dataset was selected from a patient cohort of 1366 individuals at Stony Brook University. Moreover, we designed and trained a diverse collection of machine learning models, encompassing gradient-boosted tree architectures and deep transformer architectures, specifically for survival and kidney injury prediction based on the selected markers.
In the second phase, called the "explainability-driven design refinement" phase, the proposed framework employs explainability methods to not only gain a deeper understanding of each model's decision-making process but also to identify the overall impact of individual clinical and biochemical markers for bias identification. In this context, we used the models constructed in the previous phase for the prediction task and analyzed the explainability outcomes alongside a clinician with over 8 years of experience to gain a deeper understanding of the clinical validity of the decisions made. The explainability-driven insights obtained, in conjunction with the associated clinical feedback, are then utilized to guide and refine the training policies and architectural design iteratively. This process aims to enhance not only the prediction performance but also the clinical validity and trustworthiness of the final machine learning models. Employing the proposed explainability-driven framework, we attained 93.55% accuracy in survival prediction and 88.05% accuracy in predicting kidney injury complications. The models have been made available through an open-source platform. Although not a production-ready solution, this study aims to serve as a catalyst for clinical scientists, machine learning researchers, and citizen scientists to develop innovative and trustworthy clinical decision support solutions, ultimately assisting clinicians worldwide in managing pandemic outcomes.
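The "explainability-driven design refinement" idea of ranking individual clinical and biochemical markers by their overall impact can be approximated with permutation importance, shown here on synthetic data (the marker names and the estimator are hypothetical stand-ins, not the paper's models or cohort):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
markers = ["creatinine", "crp", "d_dimer", "age", "wbc"]   # hypothetical marker names
X = rng.normal(size=(400, len(markers)))
# Synthetic outcome: the first two markers carry signal, the rest are noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 400) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(markers, result.importances_mean), key=lambda t: -t[1])
```

Shuffling a marker's column and measuring the drop in model score gives a model-agnostic estimate of its impact, the same kind of per-marker evidence a clinician could inspect for bias identification.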
Affiliation(s)
- Hossein Aboutalebi
- Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada.
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada.
- Maya Pavlova
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Mohammad Javad Shafiee
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada
- DarwinAI Corp., Waterloo, Canada
- Adrian Florea
- Department of Emergency Medicine, McGill University, Montreal, Canada
- Andrew Hryniowski
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- DarwinAI Corp., Waterloo, Canada
- Alexander Wong
- Cheriton School of Computer Science, University of Waterloo, Waterloo, Canada
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada
- DarwinAI Corp., Waterloo, Canada
4
Haennah JHJ, Christopher CS, King GRG. Prediction of the COVID disease using lung CT images by Deep Learning algorithm: DETS-optimized Resnet 101 classifier. Front Med (Lausanne) 2023; 10:1157000. [PMID: 37746067] [PMCID: PMC10513469] [DOI: 10.3389/fmed.2023.1157000]
Abstract
As a result of SARS-CoV-2, the COVID-19 (coronavirus) disease has become a pandemic and spread across the globe. Because of the rising number of cases each day, laboratory test results take time to evaluate, so there are restrictions in terms of both therapy and findings. A clinical decision-making system with predictive algorithms, built on Deep Learning (DL), is needed to alleviate the pressure on healthcare systems. Using DL and chest scans, this research intends to identify COVID-19 patients by utilizing a Transfer Learning (TL)-based Generative Adversarial Network (Pix2Pix-GAN). The COVID-19 images are then classified as either positive or negative using a Duffing Equation Tuna Swarm (DETS)-optimized ResNet 101 classifier trained on synthetic and real images from the Kaggle lung CT Covid dataset. Implementation of the proposed technique is done using MATLAB simulations, and it is evaluated via accuracy, precision, F1-score, recall, and AUC. Experimental findings show that the proposed prediction model identifies COVID-19 patients with 97.2% accuracy, a recall of 95.9%, and a specificity of 95.5%, which suggests the model can be utilized by medical specialists for clinical prediction research and can be beneficial to them.
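The evaluation metrics this abstract reports (accuracy, precision, recall/sensitivity, specificity, F1-score, AUC) can all be computed from a set of predicted scores; a minimal sketch on toy labels and scores, not the study's data:

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])           # ground-truth labels
y_score = np.array([0.9, 0.8, 0.75, 0.3, 0.2, 0.1, 0.4, 0.35, 0.85, 0.6])
y_pred = (y_score >= 0.5).astype(int)                        # threshold the scores

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()    # order: tn, fp, fn, tp
metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),        # sensitivity
    "specificity": tn / (tn + fp),                 # no direct sklearn helper
    "f1": f1_score(y_true, y_pred),
    "auc": roc_auc_score(y_true, y_score),         # uses the raw scores, not y_pred
}
```

On these toy values, accuracy, precision, recall, specificity, and F1 all come to 0.8, and the AUC is 0.88; note that AUC is threshold-free while the others depend on the 0.5 cutoff.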
Affiliation(s)
- J. H. Jensha Haennah
- St. Xavier’s Catholic College of Engineering, Affiliated to Anna University Chennai, Tamil Nadu, India
- G. R. Gnana King
- Sahrdaya College of Engineering and Technology, Thrissur, Kerala, India
5
Ghassemi N, Shoeibi A, Khodatars M, Heras J, Rahimi A, Zare A, Zhang YD, Pachori RB, Gorriz JM. Automatic diagnosis of COVID-19 from CT images using CycleGAN and transfer learning. Appl Soft Comput 2023; 144:110511. [PMID: 37346824] [PMCID: PMC10263244] [DOI: 10.1016/j.asoc.2023.110511]
Abstract
The outbreak of the coronavirus disease (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, its correct diagnosis, in order to quarantine patients, is of the utmost importance in fighting this pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented which, by taking advantage of a cyclic generative adversarial network (CycleGAN) model for data augmentation, has reached state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. Also, to evaluate the method, a dataset containing 3163 images from 189 patients has been collected and labeled by physicians. Unlike prior datasets, the normal data have been collected from people suspected of having COVID-19 rather than from patients with other diseases, and this database is made publicly available. Moreover, the method's reliability is further evaluated with calibration metrics, and its decisions are interpreted by Grad-CAM, which also highlights suspicious regions as an additional output, making the decisions trustworthy and explainable.
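The calibration check mentioned above can be illustrated with a reliability curve and an expected calibration error (ECE) on synthetic, well-calibrated scores (the 10-uniform-bin ECE below is one common formulation, assumed here, not necessarily the paper's exact metric):

```python
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
prob = rng.uniform(0, 1, 2000)                      # model's predicted P(COVID), synthetic
y = (rng.uniform(0, 1, 2000) < prob).astype(int)    # labels drawn so scores are calibrated

# Reliability curve: fraction of positives vs. mean predicted probability per bin
frac_pos, mean_pred = calibration_curve(y, prob, n_bins=10)

# Expected Calibration Error over 10 uniform bins:
# bin-weighted gap between observed positive rate and mean confidence
bin_ids = np.digitize(prob, np.linspace(0, 1, 11)[1:-1])
ece = sum(np.mean(bin_ids == b) * abs(y[bin_ids == b].mean() - prob[bin_ids == b].mean())
          for b in range(10) if np.any(bin_ids == b))
```

For a well-calibrated model the reliability curve hugs the diagonal and the ECE stays near zero; a model that is accurate but overconfident would still show a large ECE.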
Affiliation(s)
- Navid Ghassemi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Afshin Shoeibi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Marjane Khodatars
- Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Jonathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain
- Alireza Rahimi
- Computer Engineering department, Ferdowsi University of Mashhad, Mashhad, Iran
- Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, UK
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India
- J Manuel Gorriz
- Department of Signal Theory, Networking and Communications, Universidad de Granada, Spain
- Department of Psychiatry, University of Cambridge, UK
6
Zhang J, Liu Y, Lei B, Sun D, Wang S, Zhou C, Ding X, Chen Y, Chen F, Wang T, Huang R, Chen K. GIONet: Global information optimized network for multi-center COVID-19 diagnosis via COVID-GAN and domain adversarial strategy. Comput Biol Med 2023; 163:107113. [PMID: 37307643] [PMCID: PMC10242645] [DOI: 10.1016/j.compbiomed.2023.107113]
Abstract
The outbreak of coronavirus disease (COVID-19) in 2019 has highlighted the need for automatic diagnosis of the disease, which can develop rapidly into a severe condition. Nevertheless, distinguishing between COVID-19 pneumonia and community-acquired pneumonia (CAP) through computed tomography scans can be challenging due to their similar characteristics. Existing methods often perform poorly on the 3-class classification task of healthy, CAP, and COVID-19 pneumonia, and they handle the heterogeneity of multi-center data poorly. To address these challenges, we design a COVID-19 classification model using a global information optimized network (GIONet) and a cross-center domain adversarial learning strategy. Our approach proposes a 3D convolutional neural network with a graph-enhanced aggregation unit and a multi-scale self-attention fusion unit to improve global feature extraction capability. We also verified that domain adversarial training can effectively reduce the feature distance between different centers, addressing the heterogeneity of multi-center data, and used specialized generative adversarial networks to balance the data distribution and improve diagnostic performance. Our experiments demonstrate satisfactory diagnostic results, with a mixed-dataset accuracy of 99.17% and cross-center task accuracies of 86.73% and 89.61%.
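The notion of "feature distance between different centers" can be made concrete with a Maximum Mean Discrepancy (MMD) estimate; the sketch below uses a crude first-moment alignment in place of adversarial training (the kernel bandwidth, the synthetic site shift, and the alignment step are illustrative assumptions, not GIONet's method):

```python
import numpy as np

def rbf_mmd2(x, y, sigma=4.0):
    """Biased estimate of squared Maximum Mean Discrepancy with an RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
center_a = rng.normal(0.0, 1.0, size=(200, 16))   # feature vectors from one imaging center
center_b = rng.normal(0.8, 1.0, size=(200, 16))   # second center, shifted: a domain gap
# Crude stand-in for adversarial alignment: match the first moments
center_b_aligned = center_b - center_b.mean(0) + center_a.mean(0)

gap_before = rbf_mmd2(center_a, center_b)
gap_after = rbf_mmd2(center_a, center_b_aligned)
```

Domain adversarial training pursues the same goal implicitly: a domain discriminator that cannot tell centers apart corresponds to a small discrepancy between their feature distributions.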
Affiliation(s)
- Jing Zhang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Yiyao Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518000, China
- Baiying Lei
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518000, China
- Dandan Sun
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Siqi Wang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Changning Zhou
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Xing Ding
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Yang Chen
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Fen Chen
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Tianfu Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518000, China
- Ruidong Huang
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China
- Kuntao Chen
- Department of Radiology, The Fifth Affiliated Hospital of Zunyi Medical University, Zhuhai, 518000, China.
7
Santosh KC, GhoshRoy D, Nakarmi S. A Systematic Review on Deep Structured Learning for COVID-19 Screening Using Chest CT from 2020 to 2022. Healthcare (Basel) 2023; 11:2388. [PMID: 37685422] [PMCID: PMC10486542] [DOI: 10.3390/healthcare11172388]
Abstract
The emergence of the COVID-19 pandemic in Wuhan in 2019 led to the discovery of a novel coronavirus. The World Health Organization (WHO) designated it as a global pandemic on 11 March 2020 due to its rapid and widespread transmission. Its impact has had profound implications, particularly in the realm of public health. Extensive scientific endeavors have been directed towards devising effective treatment strategies and vaccines. Within the healthcare and medical imaging domain, the application of artificial intelligence (AI) has brought significant advantages. This study delves into peer-reviewed research articles spanning the years 2020 to 2022, focusing on AI-driven methodologies for the analysis and screening of COVID-19 through chest CT scan data. We assess the efficacy of deep learning algorithms in facilitating decision-making processes. Our exploration encompasses various facets, including data collection, systematic contributions, emerging techniques, and encountered challenges. However, the comparison of outcomes between 2020 and 2022 proves intricate due to shifts in dataset magnitudes over time. The initiatives aimed at developing AI-powered tools for the detection, localization, and segmentation of COVID-19 cases are primarily centered on educational and training contexts. We deliberate on their merits and constraints, particularly regarding the need for cross-population train/test models. Our analysis encompassed a review of 231 research publications, bolstered by a meta-analysis employing the search keywords (COVID-19 OR Coronavirus) AND chest CT AND (deep learning OR artificial intelligence OR medical imaging) on both the PubMed Central Repository and Web of Science platforms.
Affiliation(s)
- KC Santosh
- 2AI: Applied Artificial Intelligence Research Lab, Vermillion, SD 57069, USA
- Debasmita GhoshRoy
- School of Automation, Banasthali Vidyapith, Tonk 304022, Rajasthan, India
- Suprim Nakarmi
- Department of Computer Science, University of South Dakota, Vermillion, SD 57069, USA
8
Choukhan CF, Lasri I, El Hatimi R, Lemnaouar MR, Esghir M. SARS-CoV-2 Prediction Strategy Based on Classification Algorithms from a Full Blood Examination. ScientificWorldJournal 2023; 2023:3248192. [PMID: 37649715] [PMCID: PMC10465262] [DOI: 10.1155/2023/3248192]
Abstract
A fast and efficient diagnosis of serious infectious diseases, such as the recent SARS-CoV-2, is necessary in order to curb both the spread of existing variants and the emergence of new ones. In this regard, and recognizing the shortcomings of the reverse transcription-polymerase chain reaction (RT-PCR) and rapid diagnostic test (RDT), strategic planning in the public health system is required. In particular, researchers need more accurate diagnostic means to distinguish symptomatic COVID-19 patients from those with other common infections. The aim of this study was to train and optimize support vector machine (SVM) and K-nearest neighbors (KNN) classifiers to rapidly identify SARS-CoV-2 (positive/negative) patients through a simple complete blood test, without any prior knowledge of the patient's health state or symptoms. After applying both models to a sample of patients at Hospital Israelita Albert Einstein in São Paulo, Brazil (solely for two examined groups of patients' data: "regular ward" and "not admitted to the hospital"), it was found that both provided early and accurate detection based only on a blood profile selected via a statistical test of dependence (ANOVA). The best performance was achieved by the improved SVM technique on nonhospitalized patients, with precision, recall, accuracy, and AUC values reaching 94%, 96%, 95%, and 99%, respectively, which supports the potential of this strategy to significantly improve initial screening.
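The pipeline this study describes, ANOVA-based selection over blood analytes followed by an SVM, corresponds to a standard scikit-learn pattern; a sketch on synthetic data (the number of selected features and the SVM settings are assumptions, not the paper's tuned values):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a complete-blood-count table: 20 analytes, 5 informative
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=8),   # keep only the analytes the ANOVA F-test flags
    SVC(kernel="rbf", C=1.0),
)
acc = cross_val_score(pipe, X, y, cv=5).mean()
```

Wrapping the selection step inside the pipeline matters: it is refit on each training fold, so the ANOVA test never sees the held-out data and the cross-validated accuracy is not optimistically biased.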
Affiliation(s)
- C. F. Choukhan
- Laboratory of Mathematics, Computing and Applications, Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco
- I. Lasri
- Laboratory of Conception and Systems (Electronics, Signals and Informatics), Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco
- R. El Hatimi
- Laboratory of Mathematics, Computing and Applications, Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco
- M. R. Lemnaouar
- LASTIMI, Mohammed V University in Rabat, Superior School of Technology, Sale, Rabat, Morocco
- M. Esghir
- Laboratory of Mathematics, Computing and Applications, Mohammed V University in Rabat, Faculty of Sciences, Rabat, Morocco
9
Zhang J, Liu Y, Lei B, Sun D, Huang R, Wang T, Chen S, Chen K. Graph Convolution and Self-attention Enhanced CNN with Domain Adaptation for Multi-site COVID-19 Diagnosis. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083611] [DOI: 10.1109/embc40787.2023.10340851]
Abstract
Coronavirus disease (COVID-19), which emerged in 2019, is an acute disease that can rapidly develop into a very serious state. Therefore, it is of great significance to realize automatic COVID-19 diagnosis. However, due to the small difference in computed tomography (CT) characteristics between community-acquired pneumonia (CP) and COVID-19, existing models are unsuitable for the three-class classification of healthy control, CP, and COVID-19, and they rarely optimize for data from multiple centers. Therefore, we propose a diagnosis model for COVID-19 patients based on a graph-enhanced 3D convolutional neural network (CNN) and cross-center domain feature adaptation. Specifically, we first design a 3D CNN with a graph convolution module to enhance the global feature extraction capability of the CNN. Meanwhile, we use a domain-adaptive feature alignment method to optimize the feature distance between different centers, which can effectively realize multi-center COVID-19 diagnosis. Our experimental results achieve quite promising COVID-19 diagnosis performance: the accuracy on the mixed dataset is 98.05%, and the accuracies on cross-center tasks are 85.29% and 87.53%.
10
Mozaffari J, Amirkhani A, Shokouhi SB. A survey on deep learning models for detection of COVID-19. Neural Comput Appl 2023; 35:1-29. [PMID: 37362568] [PMCID: PMC10224665] [DOI: 10.1007/s00521-023-08683-x]
Abstract
The spread of COVID-19 started back in 2019, and so far more than 4 million people around the world have lost their lives to this deadly virus and its variants. In view of the high transmissibility of the coronavirus, which has turned this disease into a global pandemic, artificial intelligence can be employed as an effective tool for earlier detection and treatment of this illness. In this review paper, we evaluate the performance of deep learning models in processing the X-ray and CT-scan images of COVID-19 patients' lungs and describe the changes made to these models to enhance their detection accuracy. To this end, we introduce famous deep learning models such as VGGNet, GoogLeNet, and ResNet, and after reviewing the research works in which these models have been used for the detection of COVID-19, we compare the performance of newer models such as DenseNet, CapsNet, MobileNet, and EfficientNet. We then present the deep learning techniques of GANs, transfer learning, and data augmentation and examine the statistics of their use. Here, we also describe the datasets introduced since the onset of COVID-19; these datasets contain the lung images of COVID-19 patients, healthy individuals, and patients with non-COVID pulmonary diseases. Lastly, we elaborate on the existing challenges in the use of artificial intelligence for COVID-19 detection and the prospective trends of using this method in similar situations and conditions. Supplementary Information The online version contains supplementary material available at 10.1007/s00521-023-08683-x.
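Of the techniques this survey covers, data augmentation is the simplest to sketch; below are basic geometric and intensity augmentations on a synthetic image array (illustrative only — real chest-imaging pipelines choose transformations with care, since, for example, horizontal flips mirror anatomy):

```python
import numpy as np

def augment(img, rng):
    """Basic augmentations: random flip, 90-degree rotation, intensity jitter."""
    if rng.random() < 0.5:
        img = np.fliplr(img)                       # mirrors anatomy on real chest images
    img = np.rot90(img, k=int(rng.integers(0, 4))) # random multiple of 90 degrees
    img = np.clip(img * rng.uniform(0.9, 1.1), 0.0, 1.0)  # mild brightness scaling
    return img

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, size=(64, 64))   # stand-in for one lung image slice
batch = np.stack([augment(image, rng) for _ in range(8)])
```

Each call yields a differently transformed copy of the same source image, which is how augmentation multiplies the effective size of the small labeled datasets typical of COVID-19 imaging studies.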
Affiliation(s)
- Javad Mozaffari: School of Electrical Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
- Abdollah Amirkhani: School of Automotive Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran
- Shahriar B. Shokouhi: School of Electrical Engineering, Iran University of Science and Technology, Tehran 16846-13114, Iran

11
Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:jcm12103446. [PMID: 37240552 DOI: 10.3390/jcm12103446] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 04/25/2023] [Accepted: 05/06/2023] [Indexed: 05/28/2023] Open
Abstract
SARS-CoV-2 is a novel virus that has been affecting the global population by spreading rapidly and causing severe complications, which require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could potentially be an important and useful aid. Radiologists and clinicians could potentially rely on interpretable AI technologies to address the diagnosis and monitoring of COVID-19 patients. This paper aims to provide a comprehensive analysis of the state-of-the-art deep learning techniques for COVID-19 classification. The previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers have presented a variety of CNN models and architectures that were developed to provide an accurate and quick automatic tool to diagnose the COVID-19 virus based on presented CT scan or X-ray images. In this systematic review, we focused on the critical components of the deep learning approach, such as network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies published over the course of the virus's spread, and we summarize their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics to safely implement current AI studies in medical practice.
Affiliation(s)
- Min-Ho Lee: School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov: School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova: School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman: School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan

12
Lou L, Liang H, Wang Z. Deep-Learning-Based COVID-19 Diagnosis and Implementation in Embedded Edge-Computing Device. Diagnostics (Basel) 2023; 13:diagnostics13071329. [PMID: 37046553 PMCID: PMC10093656 DOI: 10.3390/diagnostics13071329] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 03/24/2023] [Accepted: 03/28/2023] [Indexed: 04/07/2023] Open
Abstract
The rapid spread of coronavirus disease 2019 (COVID-19) has posed enormous challenges to the global public health system. To deal with the COVID-19 pandemic crisis, more accurate and convenient diagnostic methods need to be developed. This paper proposes a deep-learning-based COVID-19 detection method and evaluates its performance on embedded edge-computing devices. By adding an attention module and a mixed loss to the original VGG19 model, the method can effectively reduce the parameters of the model and increase the classification accuracy. The improved model was first trained and tested on the PC X86 GPU platform using a large dataset (COVIDx CT-2A) and a medium dataset (integrated CT scan); the weight parameters of the model were reduced by around six times compared to the original model, but it still achieved approximately 98.80% and 97.84% accuracy, outperforming most existing methods. The trained model was subsequently transferred to embedded NVIDIA Jetson devices (TX2, Nano), where it achieved 97% accuracy at a 0.6–1 FPS inference speed using the NVIDIA TensorRT engine. The experimental results demonstrate that the proposed method is practicable and convenient; it can be used on a low-cost medical edge-computing terminal. The source code is available on GitHub for researchers.
Affiliation(s)
- Lu Lou: School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Hong Liang: School of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Zhengxia Wang: School of Computer Science and Technology, Hainan University, Haikou 570100, China

13
On The Potential of Image Moments for Medical Diagnosis. J Imaging 2023; 9:jimaging9030070. [PMID: 36976121 PMCID: PMC10056731 DOI: 10.3390/jimaging9030070] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 02/24/2023] [Accepted: 03/11/2023] [Indexed: 03/22/2023] Open
Abstract
Medical imaging is widely used for diagnosis and postoperative or post-therapy monitoring. The ever-increasing number of images produced has encouraged the introduction of automated methods to assist doctors or pathologists. In recent years, especially after the advent of convolutional neural networks, many researchers have focused on this approach, considering it to be the only method for diagnosis since it can perform a direct classification of images. However, many diagnostic systems still rely on handcrafted features to improve interpretability and limit resource consumption. In this work, we focused our efforts on orthogonal moments, first by providing an overview and taxonomy of their macrocategories and then by analysing their classification performance on very different medical tasks represented by four public benchmark data sets. The results confirmed that convolutional neural networks achieved excellent performance on all tasks. Despite being composed of much fewer features than those extracted by the networks, orthogonal moments proved to be competitive with them, showing comparable and, in some cases, better performance. In addition, Cartesian and harmonic categories provided a very low standard deviation, proving their robustness in medical diagnostic tasks. We strongly believe that the integration of the studied orthogonal moments can lead to more robust and reliable diagnostic systems, considering the performance obtained and the low variation of the results. Finally, since they have been shown to be effective on both magnetic resonance and computed tomography images, they can be easily extended to other imaging techniques.
14
Feng Y, Luo Y, Yang J. Cross-platform privacy-preserving CT image COVID-19 diagnosis based on source-free domain adaptation. Knowl Based Syst 2023; 264:110324. [PMID: 36713615 PMCID: PMC9869622 DOI: 10.1016/j.knosys.2023.110324] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Revised: 01/05/2023] [Accepted: 01/17/2023] [Indexed: 01/24/2023]
Abstract
In the wake of the Coronavirus disease (COVID-19) pandemic, chest computed tomography (CT) has become an invaluable component in the rapid and accurate detection of COVID-19. CT scans traditionally require manual inspections from medical professionals, which is expensive and tedious. With advancements in machine learning, deep neural networks have been applied to classify CT scans for efficient diagnosis. However, three challenges hinder this application of deep learning: (1) Domain shift across CT platforms and human subjects impedes the performance of neural networks in different hospitals. (2) Unsupervised Domain Adaptation (UDA), the traditional method to overcome domain shift, typically requires access to both source and target data. This is not realistic in COVID-19 diagnosis due to the sensitivity of medical data. The privacy of patients must be protected. (3) Data imbalance may exist between easy/hard samples and between data classes which can overwhelm the training of deep networks, causing degenerate models. To overcome these challenges, we propose a Cross-Platform Privacy-Preserving COVID-19 diagnosis network (CP3Net) that integrates domain adaptation, self-supervised learning, imbalanced label learning, and rotation classifier training into one synergistic framework. We also create a new CT benchmark by combining real-world datasets from multiple medical platforms to facilitate the cross-domain evaluation of our method. Through extensive experiments, we demonstrate that CP3Net outperforms many popular UDA methods and achieves state-of-the-art results in diagnosing COVID-19 using CT scans.
Affiliation(s)
- Yuemei Luo: School of Artificial Intelligence, Nanjing University of Information Science & Technology, China
- Jianfei Yang: School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore (corresponding author)

15
Kolarik M, Sarnovsky M, Paralic J, Babic F. Explainability of deep learning models in medical video analysis: a survey. PeerJ Comput Sci 2023; 9:e1253. [PMID: 37346619 PMCID: PMC10280416 DOI: 10.7717/peerj-cs.1253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 01/20/2023] [Indexed: 06/23/2023]
Abstract
Deep learning methods have proven to be effective for multiple diagnostic tasks in medicine and have performed significantly better than other traditional machine learning methods. However, the black-box nature of deep neural networks has restricted their use in real-world applications, especially in healthcare. Therefore, the explainability of machine learning models, which focuses on providing comprehensible explanations of model outputs, may affect the possibility of adopting such models in clinical use. There are various studies reviewing approaches to explainability in multiple domains. This article provides a review of the current approaches and applications of explainable deep learning for a specific area of medical data analysis: medical video processing tasks. The article introduces the field of explainable AI and summarizes the most important requirements for explainability in medical applications. Subsequently, we provide an overview of existing methods and evaluation metrics, focusing on those that can be applied to analytical tasks involving the processing of video data in the medical domain. Finally, we identify some of the open research issues in the analysed area.
Affiliation(s)
- Michal Kolarik: Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia
- Martin Sarnovsky: Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia
- Jan Paralic: Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia
- Frantisek Babic: Department of Cybernetics and Artificial Intelligence, Technical University in Kosice, Kosice, Slovakia

16
Afif M, Ayachi R, Said Y, Atri M. Deep learning-based technique for lesions segmentation in CT scan images for COVID-19 prediction. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 82:1-15. [PMID: 37362746 PMCID: PMC9986667 DOI: 10.1007/s11042-023-14941-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 09/29/2022] [Accepted: 02/22/2023] [Indexed: 06/28/2023]
Abstract
Since 2019, the COVID-19 disease has caused significant damage and has become a serious health issue worldwide. The number of infected and confirmed cases is increasing day by day. To this day, many hospitals and countries around the world are not equipped well enough to treat these cases and stop the pandemic's evolution. Lung and chest X-ray images (e.g., radiography images) and chest CT images are the most effective imaging techniques for analyzing and diagnosing COVID-19-related problems. Deep learning-based techniques have recently shown good performance in the computer vision and healthcare fields. In this work, we propose developing a new deep learning-based application for COVID-19 segmentation and analysis. The proposed system is developed based on the context aggregation neural network. This network consists of three main modules: the context fuse model (CFM), the attention mix module (AMM) and a residual convolutional module (RCM). The developed system can detect two main COVID-19-related regions in CT images: ground-glass opacity and consolidation areas. Generally, these lesions are associated with common pneumonia as well as COVID-19 cases. Training and testing experiments have been conducted using the COVIDx-CT dataset. Based on the obtained results, the developed system demonstrated better and more competitive results compared to state-of-the-art performances. The numerical findings demonstrate the effectiveness of the proposed work, which outperforms other works with an accuracy of over 96.23%.
Affiliation(s)
- Mouna Afif: Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Monastir, Tunisia
- Riadh Ayachi: Laboratory of Electronics and Microelectronics (EμE), Faculty of Sciences of Monastir, University of Monastir, Monastir, Tunisia
- Yahia Said: Electrical Engineering Department, College of Engineering, Northern Border University, Arar, Saudi Arabia
- Mohamed Atri: College of Computer Science, King Khalid University, Abha, Saudi Arabia

17
Liu Y, Zhang M, Zhong Z, Zeng X. A novel adaptive cubic quasi-Newton optimizer for deep learning based medical image analysis tasks, validated on detection of COVID-19 and segmentation for COVID-19 lung infection, liver tumor, and optic disc/cup. Med Phys 2023; 50:1528-1538. [PMID: 36057788 PMCID: PMC9538560 DOI: 10.1002/mp.15969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 08/25/2022] [Accepted: 08/25/2022] [Indexed: 11/14/2022] Open
Abstract
BACKGROUND Most existing deep learning research in medical image analysis focuses on networks with stronger performance. These networks have achieved success, but their architectures are complex and may contain massive numbers of parameters, ranging from thousands to millions. The high-dimensional, nonconvex nature of the problem makes it easy to train a suboptimal model with the popular stochastic first-order optimizers, which use only gradient information. PURPOSE Our purpose is to design an adaptive cubic quasi-Newton optimizer that could help escape from suboptimal solutions and improve the performance of deep neural networks on four medical image analysis tasks: detection of COVID-19, COVID-19 lung infection segmentation, liver tumor segmentation, and optic disc/cup segmentation. METHODS In this work, we introduce a novel adaptive cubic quasi-Newton optimizer with a high-order moment (termed ACQN-H) for medical image analysis. The optimizer dynamically captures the curvature of the loss function through a diagonally approximated Hessian and the norm of the difference between the previous two estimates, which helps to escape from saddle points more efficiently. In addition, to reduce the variance introduced by the stochastic nature of the problem, ACQN-H employs a high-order moment through an exponential moving average over the iteratively calculated approximated Hessian matrix. Extensive experiments are performed to assess the performance of ACQN-H.
These include detection of COVID-19 using COVID-Net on dataset COVID-chestxray, which contains 16 565 training samples and 1841 test samples; COVID-19 lung infection segmentation using Inf-Net on COVID-CT, which contains 45, 5, and 5 computer tomography (CT) images for training, validation, and testing, respectively; liver tumor segmentation using ResUNet on LiTS2017, which consists of 50 622 abdominal scan images for training and 26 608 images for testing; optic disc/cup segmentation using MRNet on RIGA, which has 655 color fundus images for training and 95 for testing. The results are compared with commonly used stochastic first-order optimizers such as Adam, SGD, and AdaBound, and recently proposed stochastic quasi-Newton optimizer Apollo. In task detection of COVID-19, we use classification accuracy as the evaluation metric. For the other three medical image segmentation tasks, seven commonly used evaluation metrics are utilized, that is, Dice, structure measure, enhanced-alignment measure (EM), mean absolute error (MAE), intersection over union (IoU), true positive rate (TPR), and true negative rate. RESULTS Experiments on four tasks show that ACQN-H achieves improvements over other stochastic optimizers: (1) comparing with AdaBound, ACQN-H achieves 0.49%, 0.11%, and 0.70% higher accuracy on the COVID-chestxray dataset using network COVID-Net with VGG16, ResNet50 and DenseNet121 as backbones, respectively; (2) ACQN-H has the best scores in terms of evaluation metrics Dice, TPR, EM, and MAE on COVID-CT dataset using network Inf-Net. Particularly, ACQN-H achieves 1.0% better Dice as compared to Apollo; (3) ACQN-H achieves the best results on LiTS2017 dataset using network ResUNet, and outperforms Adam in terms of Dice by 2.3%; (4) ACQN-H improves the performance of network MRNet on RIGA dataset, and achieves 0.5% and 1.0% better scores on cup segmentation for Dice and IoU, respectively, compared with SGD. 
We also present fivefold validation results for the four tasks. The results on detection of COVID-19, liver tumor segmentation, and optic disc/cup segmentation achieve high performance with low variance. For COVID-19 lung infection segmentation, the variance on the test set is much larger than on the validation set, which may be due to the small size of the dataset. CONCLUSIONS The proposed optimizer ACQN-H has been validated on four medical image analysis tasks: detection of COVID-19 using COVID-Net on COVID-chestxray, COVID-19 lung infection segmentation using Inf-Net on COVID-CT, liver tumor segmentation using ResUNet on LiTS2017, and optic disc/cup segmentation using MRNet on RIGA. Experiments show that ACQN-H can achieve some performance improvement. Moreover, the work is expected to boost the performance of existing deep learning networks in medical image analysis.
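The segmentation metrics named above (Dice, IoU) can be computed directly from binary masks; a minimal numpy sketch with toy 2x3 masks (not the paper's data or evaluation code):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def iou(pred, target):
    """Intersection over union: |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

pred   = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
target = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
# intersection = 2 pixels, |pred| = |target| = 3, union = 4
assert abs(dice(pred, target) - 2 / 3) < 1e-9   # 2*2 / (3+3)
assert iou(pred, target) == 0.5                 # 2 / 4
```

Dice weights the overlap against the two mask sizes, while IoU weights it against their union, so Dice is always at least as large as IoU for a non-empty overlap.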
Affiliation(s)
- Yan Liu: College of Systems Engineering, National University of Defense Technology, Changsha, China
- Maojun Zhang: College of Systems Engineering, National University of Defense Technology, Changsha, China
- Zhiwei Zhong: College of Systems Engineering, National University of Defense Technology, Changsha, China
- Xiangrong Zeng: College of Systems Engineering, National University of Defense Technology, Changsha, China

18
Song J, Ebadi A, Florea A, Xi P, Tremblay S, Wong A. COVID-Net USPro: An Explainable Few-Shot Deep Prototypical Network for COVID-19 Screening Using Point-of-Care Ultrasound. SENSORS (BASEL, SWITZERLAND) 2023; 23:2621. [PMID: 36904833 PMCID: PMC10007046 DOI: 10.3390/s23052621] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 02/20/2023] [Accepted: 02/21/2023] [Indexed: 06/18/2023]
Abstract
As the Coronavirus Disease 2019 (COVID-19) continues to impact many aspects of life and the global healthcare systems, the adoption of rapid and effective screening methods to prevent the further spread of the virus and lessen the burden on healthcare providers is a necessity. As a cheap and widely accessible medical image modality, point-of-care ultrasound (POCUS) imaging allows radiologists to identify symptoms and assess severity through visual inspection of chest ultrasound images. Combined with recent advancements in computer science, applications of deep learning techniques in medical image analysis have shown promising results, demonstrating that artificial intelligence-based solutions can accelerate the diagnosis of COVID-19 and lower the burden on healthcare professionals. However, the lack of large, well-annotated datasets poses a challenge in developing effective deep neural networks, especially in the case of rare diseases and new pandemics. To address this issue, we present COVID-Net USPro, an explainable few-shot deep prototypical network designed to detect COVID-19 cases from very few ultrasound images. Through intensive quantitative and qualitative assessments, the network not only demonstrates high performance in identifying COVID-19-positive cases but, through an explainability component, is also shown to make decisions based on the actual representative patterns of the disease. Specifically, COVID-Net USPro achieves 99.55% overall accuracy, 99.93% recall, and 99.83% precision for COVID-19-positive cases when trained with only five shots. In addition to the quantitative performance assessment, our contributing clinician with extensive experience in POCUS interpretation verified the analytic pipeline and results, ensuring that the network's decisions are based on clinically relevant image patterns integral to COVID-19 diagnosis.
We believe that network explainability and clinical validation are integral components for the successful adoption of deep learning in the medical field. As part of the COVID-Net initiative, and to promote reproducibility and foster further innovation, the network is open-sourced and available to the public.
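The prototypical few-shot classification step described above (class prototypes formed as the mean of support embeddings; queries assigned to the nearest prototype) can be sketched in numpy. The random 8-dimensional embeddings below are toy stand-ins for an encoder's ultrasound features, not COVID-Net USPro's actual representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2 classes, 5 "shots" each, 8-dim embeddings (toy stand-ins for encoder outputs)
support = {c: rng.normal(loc=c * 3.0, size=(5, 8)) for c in (0, 1)}

# Each class prototype is the mean of its support embeddings
prototypes = {c: emb.mean(axis=0) for c, emb in support.items()}

def classify(query, prototypes):
    """Assign a query embedding to the class with the nearest (Euclidean) prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

query = rng.normal(loc=3.0, size=8)  # drawn near class 1's mean
assert classify(query, prototypes) == 1
```

Because only the prototypes need to be recomputed for a new class, this scheme adapts to novel diseases from a handful of labeled examples, which is the appeal of few-shot designs for new pandemics.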
Affiliation(s)
- Jessy Song: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Ashkan Ebadi: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada; Digital Technologies Research Centre, National Research Council Canada, Toronto, ON M5T 3J1, Canada
- Adrian Florea: Department of Emergency Medicine, McGill University, Montreal, QC H4A 3J1, Canada
- Pengcheng Xi: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada; Digital Technologies Research Centre, National Research Council Canada, Ottawa, ON K1A 0R6, Canada
- Stéphane Tremblay: Digital Technologies Research Centre, National Research Council Canada, Ottawa, ON K1A 0R6, Canada
- Alexander Wong: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada; Waterloo Artificial Intelligence Institute, Waterloo, ON N2L 3G1, Canada

19
Kurt Z, Işık Ş, Kaya Z, Anagün Y, Koca N, Çiçek S. Evaluation of EfficientNet models for COVID-19 detection using lung parenchyma. Neural Comput Appl 2023; 35:12121-12132. [PMID: 36843903 PMCID: PMC9940669 DOI: 10.1007/s00521-023-08344-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Accepted: 01/25/2023] [Indexed: 02/23/2023]
Abstract
When the COVID-19 pandemic broke out at the beginning of 2020, it became crucial to enhance early diagnosis with efficient means to reduce dangers and the future spread of the virus as soon as possible. Finding effective treatments and lowering mortality rates is now more important than ever. Scanning with a computed tomography (CT) scanner is a helpful method for detecting COVID-19 in this regard. The present paper, as such, is an attempt to contribute to this process by generating an open-source, CT-based image dataset. This dataset contains the CT scans of lung parenchyma regions of 180 COVID-19-positive and 86 COVID-19-negative patients taken at the Bursa Yuksek Ihtisas Training and Research Hospital. The experimental studies show that the modified EfficientNet-ap-nish method uses this dataset effectively for diagnostic purposes. Firstly, a smart segmentation mechanism based on the k-means algorithm is applied to this dataset as a preprocessing stage. Then, the performance of pretrained models is analyzed using different CNN architectures with our Nish activation function. Statistical rates are obtained for the various EfficientNet models, and the highest detection score is obtained with the EfficientNet-B4-ap-nish version, which provides a 97.93% accuracy rate and a 97.33% F1-score. The implications of the proposed method are immense both for present-day applications and future developments.
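The k-means preprocessing step mentioned above (separating lung parenchyma from background by pixel intensity) can be sketched as a plain two-cluster k-means over scalar intensities. The flattened toy "slice", the cluster count, and the helper name `kmeans_1d` are illustrative assumptions, not the paper's segmentation mechanism:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on scalar pixel intensities; returns labels and centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute centers
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Toy flattened "CT slice": dark background (~0.1) and brighter tissue (~0.8)
img = np.array([0.05, 0.10, 0.12, 0.75, 0.80, 0.85])
labels, centers = kmeans_1d(img, k=2)
# Pixels from the same intensity mode end up in the same cluster
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]
```

The resulting label map acts as a rough mask from which the parenchyma region can be cropped before classification.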
Affiliation(s)
- Zuhal Kurt: Department of Computer Engineering, Atilim University, Ankara, Turkey
- Şahin Işık: Department of Computer Engineering, Eskisehir Osmangazi University Meselik Campus, Eskisehir, Turkey
- Zeynep Kaya: Department of Electrical and Energy, Osmaneli Vocational School, Bilecik Seyh Edebali University, Bilecik, Turkey
- Yıldıray Anagün: Department of Computer Engineering, Eskisehir Osmangazi University Meselik Campus, Eskisehir, Turkey
- Nizameddin Koca: Department of Internal Medicine, University of Health Sciences, Bursa Yuksek Ihtisas Training and Research Hospital, Bursa, Turkey
- Sümeyye Çiçek: Department of Internal Medicine, University of Health Sciences, Bursa Yuksek Ihtisas Training and Research Hospital, Bursa, Turkey

20
Aslani S, Jacob J. Utilisation of deep learning for COVID-19 diagnosis. Clin Radiol 2023; 78:150-157. [PMID: 36639173 PMCID: PMC9831845 DOI: 10.1016/j.crad.2022.11.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 11/21/2022] [Accepted: 11/22/2022] [Indexed: 01/12/2023]
Abstract
The COVID-19 pandemic that began in 2019 has resulted in millions of deaths worldwide. Over this period, the economic and healthcare consequences of COVID-19 infection in survivors of acute COVID-19 infection have become apparent. During the course of the pandemic, computer analysis of medical images and data have been widely used by the medical research community. In particular, deep-learning methods, which are artificial intelligence (AI)-based approaches, have been frequently employed. This paper provides a review of deep-learning-based AI techniques for COVID-19 diagnosis using chest radiography and computed tomography. Thirty papers published from February 2020 to March 2022 that used two-dimensional (2D)/three-dimensional (3D) deep convolutional neural networks combined with transfer learning for COVID-19 detection were reviewed. The review describes how deep-learning methods detect COVID-19, and several limitations of the proposed methods are highlighted.
Affiliation(s)
- S Aslani: Centre for Medical Image Computing and Department of Respiratory Medicine, University College London, London, UK
- J Jacob: Centre for Medical Image Computing and Department of Respiratory Medicine, University College London, London, UK

21
A survey of machine learning-based methods for COVID-19 medical image analysis. Med Biol Eng Comput 2023; 61:1257-1297. [PMID: 36707488 PMCID: PMC9883138 DOI: 10.1007/s11517-022-02758-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 12/22/2022] [Indexed: 01/29/2023]
Abstract
The ongoing COVID-19 pandemic caused by the SARS-CoV-2 virus has already resulted in 6.6 million deaths, with more than 637 million people infected only 30 months after the first occurrences of the disease in December 2019. Hence, rapid and accurate detection and diagnosis of the disease is the first priority all over the world. Researchers have been working on various methods for COVID-19 detection, and as the disease infects the lungs, lung image analysis has become a popular research area for detecting the presence of the disease. Medical images from chest X-rays (CXR), computed tomography (CT) images, and lung ultrasound images have been used by automated image analysis systems in artificial intelligence (AI)- and machine learning (ML)-based approaches. Various existing and novel ML, deep learning (DL), transfer learning (TL), and hybrid models have been applied for detecting and classifying COVID-19, segmenting infected regions, assessing severity, and tracking patient progress from medical images of COVID-19 patients. In this paper, a comprehensive review of some recent approaches to COVID-19-based image analysis is provided, surveying the contributions of existing research efforts, the available image datasets, and the performance metrics used in recent works. The challenges and future research scopes to address the progress of the fight against COVID-19 from the AI perspective are also discussed. The main objective of this paper is therefore to summarize the research done on COVID-19 detection and analysis from medical image datasets using ML, DL, and TL models, analyzing their novelty and efficiency, while pointing to other COVID-19-based reviews and surveys for a brief overview of the existing research.
22
Detection of COVID-19 Case from Chest CT Images Using Deformable Deep Convolutional Neural Network. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:4301745. [PMID: 36844950 PMCID: PMC9949952 DOI: 10.1155/2023/4301745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 06/14/2022] [Accepted: 01/24/2023] [Indexed: 02/18/2023]
Abstract
The infectious coronavirus disease (COVID-19) has become a great threat to global human health. Timely and rapid detection of COVID-19 cases is very crucial to control its spread through isolation measures as well as for proper treatment. Though the real-time reverse transcription-polymerase chain reaction (RT-PCR) test is a widely used technique for detecting COVID-19 infection, recent research suggests chest computed tomography (CT)-based screening as an effective substitute in cases of time and availability limitations of RT-PCR. In consequence, deep learning-based COVID-19 detection from chest CT images is gaining momentum. Furthermore, visual analysis of data has enhanced the opportunities of maximizing the prediction performance in this big data and deep learning realm. In this article, we propose two separate deformable deep networks, converted from the conventional convolutional neural network (CNN) and the state-of-the-art ResNet-50, to detect COVID-19 cases from chest CT images. The impact of the deformable concept has been observed through a comparative performance analysis between the deformable and normal models, and it is found that the deformable models yield better prediction results than their normal form. Furthermore, the proposed deformable ResNet-50 model shows better performance than the proposed deformable CNN model. The gradient-weighted class activation mapping (Grad-CAM) technique has been used to visualize and check the targeted regions' localization at the final convolutional layer, and the localization was found to be excellent. A total of 2481 chest CT images were used to evaluate the performance of the proposed models, with a random train-valid-test split ratio of 80:10:10. The proposed deformable ResNet-50 model achieved a training accuracy of 99.5% and a test accuracy of 97.6%, with a specificity of 98.5% and a sensitivity of 96.5%, which are satisfactory compared with related works.
The comprehensive discussion demonstrates that the proposed deformable ResNet-50 model-based COVID-19 detection technique can be useful for clinical applications.
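The Grad-CAM step the authors use for localization reduces to a small computation: channel weights are the global-average-pooled gradients, and the heatmap is the ReLU of the weighted sum of activation maps. A minimal NumPy sketch with synthetic stand-in arrays (the shapes and values are illustrative, not taken from the paper's network):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap: channel weights are the global-average-pooled
    gradients; the map is the ReLU of the weighted activation sum,
    normalised to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Synthetic stand-ins for a 4-channel, 7x7 final conv layer
rng = np.random.default_rng(0)
acts = rng.random((4, 7, 7))
grads = rng.random((4, 7, 7))
heatmap = grad_cam(acts, grads)
```

In a real pipeline the activations and gradients would come from a backward pass through the trained network; the heatmap is then upsampled to the CT image size for visual inspection.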
23
Weighted ensemble model for image classification. INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY : AN OFFICIAL JOURNAL OF BHARATI VIDYAPEETH'S INSTITUTE OF COMPUTER APPLICATIONS AND MANAGEMENT 2023; 15:557-564. [PMID: 36714094 PMCID: PMC9867993 DOI: 10.1007/s41870-022-01149-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Accepted: 09/20/2022] [Indexed: 01/24/2023]
Abstract
Deep Convolutional Neural Network (DCNN) classification models are used tremendously across many research fields, including medical science, for image classification. The accuracy of a model and the reliability of its results are the key attributes that determine whether a particular model should be used for a specific application. A highly accurate model is always desirable for all applications of machine learning as well as deep learning. This paper presents a DCNN-based heterogeneous ensemble approach in which all DCNN models are trained on a single dataset and each model contributes towards the final output of the ensemble. The contribution of each model is weighted according to its individual accuracy on the given dataset: models with higher accuracy have a higher contribution to the final ensemble output, whereas models with lower accuracy have a lower contribution. This approach, when tested on two different X-ray image datasets of COVID-19, confirmed a significant increase in 3-class accuracy compared with the models in the literature.
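The accuracy-weighted combination described above can be sketched in a few lines; the model probabilities and accuracies below are hypothetical, not the paper's:

```python
import numpy as np

def weighted_ensemble(probs, accuracies):
    """Average per-model class probabilities, weighting each model's
    vote by its individual validation accuracy."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                           # normalised weights
    return np.tensordot(w, np.asarray(probs, dtype=float), axes=1)

# Two hypothetical 3-class models; the more accurate one dominates
probs = [[0.7, 0.2, 0.1],   # model A, accuracy 0.90
         [0.2, 0.6, 0.2]]   # model B, accuracy 0.60
combined = weighted_ensemble(probs, [0.90, 0.60])
pred = int(np.argmax(combined))
```

Because the weights are normalised, the combined vector remains a valid probability distribution over the classes.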
24
Xu Y, Lam HK, Jia G, Jiang J, Liao J, Bao X. Improving COVID-19 CT classification of CNNs by learning parameter-efficient representation. Comput Biol Med 2023; 152:106417. [PMID: 36543003 PMCID: PMC9750504 DOI: 10.1016/j.compbiomed.2022.106417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 11/22/2022] [Accepted: 12/04/2022] [Indexed: 12/23/2022]
Abstract
The COVID-19 pandemic continues to spread rapidly over the world and has caused a tremendous crisis in global human health and the economy. Early detection and diagnosis are crucial for controlling its further spread. Many deep learning-based methods have been proposed to assist clinicians in automatic COVID-19 diagnosis based on computed tomography imaging. However, challenges remain, including low data diversity in existing datasets and unsatisfactory detection performance resulting from the insufficient accuracy and sensitivity of deep learning models. To enhance data diversity, we design augmentation techniques of incremental levels and apply them to the largest open-access benchmark dataset, COVIDx CT-2A. Meanwhile, similarity regularization (SR) derived from contrastive learning is proposed in this study to enable CNNs to learn more parameter-efficient representations, thus improving their accuracy and sensitivity. The results on seven commonly used CNNs demonstrate that CNN performance can be improved stably by applying the designed augmentation and SR techniques. In particular, DenseNet121 with SR achieves an average test accuracy of 99.44% in three trials for three-category classification, including normal, non-COVID-19 pneumonia, and COVID-19 pneumonia. The achieved precision, sensitivity, and specificity for the COVID-19 pneumonia category are 98.40%, 99.59%, and 99.50%, respectively. These statistics suggest that our method has surpassed the existing state-of-the-art methods on the COVIDx CT-2A dataset. Source code is available at https://github.com/YujiaKCL/COVID-CT-Similarity-Regularization.
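The abstract does not give the exact form of the similarity regularization, so the following is only one plausible reading of a contrastive-style term: a penalty that rewards agreement between the representations of two augmented views of the same scan, scaled by a coefficient `lam` (a name introduced here for illustration):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sr_penalty(z1, z2, lam=0.1):
    """Hypothetical similarity-regularization term: penalise
    1 - cosine similarity between the representations of two
    augmented views of the same scan, scaled by lam."""
    return lam * (1.0 - cosine_sim(z1, z2))

# Identical views incur no penalty; orthogonal views incur lam
same = sr_penalty(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
orth = sr_penalty(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
```

In training, such a term would be added to the cross-entropy loss; the paper's actual formulation may differ.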
Affiliation(s)
- Yujia Xu
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom.
- Hak-Keung Lam
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom.
- Guangyu Jia
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom.
- Jian Jiang
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom.
- Junkai Liao
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom.
- Xinqi Bao
- Department of Engineering, King's College London, Strand, London, WC2R 2LS, United Kingdom.
25
Bhatele KR, Jha A, Tiwari D, Bhatele M, Sharma S, Mithora MR, Singhal S. COVID-19 Detection: A Systematic Review of Machine and Deep Learning-Based Approaches Utilizing Chest X-Rays and CT Scans. Cognit Comput 2022:1-38. [PMID: 36593991 PMCID: PMC9797382 DOI: 10.1007/s12559-022-10076-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2022] [Accepted: 11/15/2022] [Indexed: 12/30/2022]
Abstract
This review study presents the state-of-the-art machine and deep learning-based COVID-19 detection approaches utilizing chest X-rays or computed tomography (CT) scans. It aims to systematically scrutinize, and to discuss the challenges and limitations of, the existing state-of-the-art research published in this domain from March 2020 to August 2021. The study also presents a comparative analysis of the performance of four widely used deep transfer learning (DTL) models (VGG16, VGG19, ResNet50, and DenseNet) over a local COVID-19 CT-scan dataset and a global chest X-ray dataset. A brief illustration of the chest X-ray and CT-scan datasets of COVID-19 patients most commonly utilized in state-of-the-art COVID-19 detection approaches is also presented for future research. Research databases such as IEEE Xplore, PubMed, and Web of Science were searched exhaustively to carry out this survey. For the comparative analysis, the four deep transfer learning models were fine-tuned and trained using the augmented local CT scans and the global chest X-ray dataset in order to observe their performance. The review summarizes major findings (AI technique employed, type of classification performed, datasets used, results in terms of accuracy, specificity, sensitivity, F1 score, etc.), along with limitations and future work for COVID-19 detection, in tabular form for conciseness. The performance analysis of the four models affirms that the Visual Geometry Group 19 (VGG19) model delivered the best performance over both the local COVID-19 CT-scan dataset and the global chest X-ray dataset.
Affiliation(s)
- Anand Jha
- RJIT BSF Academy, Tekanpur, Gwalior, India
26
Interactive framework for Covid-19 detection and segmentation with feedback facility for dynamically improved accuracy and trust. PLoS One 2022; 17:e0278487. [PMID: 36548288 PMCID: PMC9778629 DOI: 10.1371/journal.pone.0278487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2022] [Accepted: 11/17/2022] [Indexed: 12/24/2022] Open
Abstract
Due to the severity and speed of spread of the ongoing COVID-19 pandemic, fast but accurate diagnosis of COVID-19 patients has become a crucial task. Achievements in this respect might enlighten future efforts for the containment of other possible pandemics. Researchers from various fields have been trying to provide novel ideas for models or systems to identify COVID-19 patients from different medical and non-medical data. AI researchers have also contributed to this area, mostly by providing novel automated approaches using convolutional neural networks (CNNs) and deep neural networks (DNNs) for COVID-19 detection and diagnosis. Owing to the efficiency of deep learning (DL) and transfer learning (TL) models in classification and segmentation tasks, most recent AI-based research has proposed various DL and TL models for COVID-19 detection and infected-region segmentation from chest medical images such as X-rays or CT images. This paper describes a web-based application framework for COVID-19 lung-infection detection and segmentation. The proposed framework is characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated, and tested using CT images of COVID-19 patients collected from two different sources. The web application provides a simple, user-friendly interface for processing CT images from various sources using the chosen models, thresholds, and other parameters to generate decisions on detection and segmentation. The models achieve high performance scores for Dice similarity, Jaccard similarity, accuracy, loss, and precision. The U-Net model outperformed the other models with more than 98% accuracy.
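The Dice and Jaccard scores reported above are overlap ratios between the predicted and ground-truth segmentation masks; a minimal sketch on flat binary masks:

```python
def dice_jaccard(pred, truth):
    """Dice and Jaccard similarity for flat binary masks (0/1 lists)."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    # Empty masks on both sides count as perfect agreement
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc

# Toy 5-pixel masks with 2 overlapping foreground pixels
dice, jacc = dice_jaccard([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

For 2-D CT masks the arrays would simply be flattened first; the definitions are unchanged.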
27
Hasija S, Akash P, Bhargav Hemanth M, Kumar A, Sharma S. A novel approach for detection of COVID-19 and Pneumonia using only binary classification from chest CT-scans. NEUROSCIENCE INFORMATICS 2022; 2:100069. [PMID: 36741276 PMCID: PMC8958781 DOI: 10.1016/j.neuri.2022.100069] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Accepted: 03/22/2022] [Indexed: 02/07/2023]
Abstract
The novel coronavirus, Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), spread all over the world, causing a dramatic shift in circumstances that resulted in a massive pandemic affecting the world's well-being and stability. It is an RNA virus that can infect both humans and animals. Diagnosing the virus as soon as possible could contain and avoid a serious COVID-19 outbreak. Current diagnostic methods such as the Reverse Transcription-Polymerase Chain Reaction (RT-PCR) and serology tests are time-consuming, expensive, and require a well-equipped laboratory for analysis, making them restrictive and inaccessible to everyone. Deep learning has grown in popularity in recent years and now plays a crucial role in image classification, including medical imaging. Using chest CT scans, this study explores the automation of differentiating COVID-19-infected individuals from healthy individuals. Convolutional Neural Networks (CNNs) can be trained to detect patterns in computed tomography (CT) scans. Hence, different CNN models were used in the current study to identify variations in chest CT scans, with accuracies ranging from 91% to 98%. These architectures were built using the multiclass classification method. The study also proposes a new approach for classifying CT images that combines two binary classifications working together, achieving 98.38% accuracy. The performances of all these architectures are compared using different classification metrics.
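One plausible arrangement of the two combined binary classifiers is a cascade; the abstract does not spell out the exact routing, so the stage order and the 0.5 threshold here are assumptions:

```python
def cascade_classify(p_abnormal, p_covid_given_abnormal, thresh=0.5):
    """Hypothetical cascade of two binary classifiers: stage 1 separates
    normal from abnormal scans; stage 2 separates COVID-19 from other
    pneumonia among the abnormal ones."""
    if p_abnormal < thresh:
        return "normal"
    return "covid" if p_covid_given_abnormal >= thresh else "pneumonia"
```

Each stage only ever solves a two-class problem, which is the appeal of the approach: the binary decision boundaries are easier to learn than a single three-way one.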
28
Dubey AK, Mohbey KK. Combined Cloud-Based Inference System for the Classification of COVID-19 in CT-Scan and X-Ray Images. NEW GENERATION COMPUTING 2022; 41:61-84. [PMID: 36439302 PMCID: PMC9676871 DOI: 10.1007/s00354-022-00195-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Accepted: 11/09/2022] [Indexed: 06/16/2023]
Abstract
In the past few years, most work has been done on the classification of COVID-19 using different images such as CT scans, X-rays, and ultrasound. However, none of it is capable of dealing with each of these image types on a single common platform while identifying whether a person is suffering from COVID-19 or not. We therefore realized there should be a platform that identifies COVID-19 in CT-scan and X-ray images on the fly. To fulfil this need, we propose an AI model that first distinguishes CT-scan from X-ray images and then uses this inference to classify them as COVID-positive or COVID-negative. The proposed model uses the Inception architecture under the hood and is trained on the open-source extended COVID-19 dataset. The dataset consists of plenty of images for both image types and is 4 GB in size. We achieved an accuracy of 100%, average macro-precision of 100%, average macro-recall of 100%, average macro F1-score of 100%, and an AUC score of 99.6%. Furthermore, this work proposes a cloud-based architecture to scale massively and load-balance as the number of user requests rises. As a result, it delivers a service with minimal latency to all users.
Affiliation(s)
- Ankit Kumar Dubey
- Department of Computer Science, Central University of Rajasthan, Ajmer, India
29
Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107161. [PMID: 36228495 DOI: 10.1016/j.cmpb.2022.107161] [Citation(s) in RCA: 88] [Impact Index Per Article: 44.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 09/16/2022] [Accepted: 09/25/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in a model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to identify areas of healthcare that require more attention from the XAI research community. METHODS Multiple journal databases were thoroughly searched following the PRISMA 2020 guidelines. Studies that did not appear in highly credible Q1 journals were excluded. RESULTS In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. CONCLUSION We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
Affiliation(s)
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Silvia Seoni
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- Prabal Datta Barua
- Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Filippo Molinari
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- U Rajendra Acharya
- School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia; School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan.
30
Dron L, Kalatharan V, Gupta A, Haggstrom J, Zariffa N, Morris AD, Arora P, Park J. Data capture and sharing in the COVID-19 pandemic: a cause for concern. Lancet Digit Health 2022; 4:e748-e756. [PMID: 36150783 PMCID: PMC9489064 DOI: 10.1016/s2589-7500(22)00147-9] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 07/08/2022] [Accepted: 07/13/2022] [Indexed: 12/25/2022]
Abstract
Routine health care and research have been profoundly influenced by digital-health technologies. These technologies range from primary data collection in electronic health records (EHRs) and administrative claims to web-based artificial-intelligence-driven analyses. The use of such health technologies has increased during the COVID-19 pandemic, driven in part by the availability of these data. In some cases, this has resulted in profound and potentially long-lasting positive effects on medical research and routine health-care delivery. In other cases, high-profile shortcomings have been evident, potentially attenuating the effect of, or representing a decreased appetite for, digital-health transformation. In this Series paper, we provide an overview of how facets of health technologies in routinely collected medical data (including EHRs and digital data sharing) have been used for COVID-19 research and tracking, and how these technologies might influence future pandemics and health-care research. We explore the strengths and weaknesses of digital-health research during the COVID-19 pandemic and discuss how learnings from COVID-19 might translate into new approaches in a post-pandemic era.
Affiliation(s)
- Louis Dron
- Real World & Advanced Analytics, Cytel Health, Vancouver, BC, Canada. Correspondence to: Mr Louis Dron, Real World & Advanced Analytics, Cytel Health, Vancouver, BC V5Z 4J7, Canada
- Vinusha Kalatharan
- Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
- Alind Gupta
- Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Jonas Haggstrom
- Real World & Advanced Analytics, Cytel Health, Vancouver, BC, Canada; The International COVID-19 Data Alliance (ICODA), Health Data Research UK, London, UK
- Nevine Zariffa
- The International COVID-19 Data Alliance (ICODA), Health Data Research UK, London, UK; NMD Group, LLC, Bala Cynwyd, PA, USA
- Andrew D Morris
- The International COVID-19 Data Alliance (ICODA), Health Data Research UK, London, UK
- Paul Arora
- Real World & Advanced Analytics, Cytel Health, Vancouver, BC, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Jay Park
- Department of Experimental Medicine, Department of Medicine, University of British Columbia, Vancouver, BC, Canada; Department of Health Research Methods, Evidence and Impact, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
31
A semi-supervised learning approach for COVID-19 detection from chest CT scans. Neurocomputing 2022; 503:314-324. [PMID: 35765410 PMCID: PMC9221925 DOI: 10.1016/j.neucom.2022.06.076] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 05/11/2022] [Accepted: 06/18/2022] [Indexed: 01/17/2023]
Abstract
COVID-19 has spread rapidly all over the world and has infected more than 200 countries and regions. Early screening of suspected infected patients is essential for preventing and combating COVID-19. Computed tomography (CT) is a fast and efficient tool that can quickly provide chest scan results. To reduce the burden on doctors of reading CTs, this article designs a high-precision COVID-19 diagnosis algorithm from chest CTs for intelligent diagnosis. A semi-supervised learning approach is developed to address the setting where only a small amount of labelled data is available. While following the MixMatch rules to conduct sophisticated data augmentation, we introduce a model-training technique to reduce the risk of over-fitting. At the same time, a new data-enhancement method is proposed to modify the regularization term in MixMatch. To further enhance the generalization of the model, a convolutional neural network based on an attention mechanism is developed that can extract multi-scale features from CT scans. The proposed algorithm is evaluated on an independent chest CT dataset of COVID-19 and achieves an area under the receiver operating characteristic curve (AUC) of 0.932, accuracy of 90.1%, sensitivity of 91.4%, specificity of 88.9%, and F1-score of 89.9%. The results show that the proposed algorithm can accurately diagnose whether a chest CT indicates COVID-19, and can help doctors diagnose rapidly in the early stages of a COVID-19 outbreak.
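Two of the MixMatch ingredients mentioned above, label sharpening and mixup, can be sketched directly; the temperature T=0.5 and the toy inputs are illustrative, not values from the paper:

```python
import numpy as np

def sharpen(p, T=0.5):
    """MixMatch-style temperature sharpening: raise each guessed
    class probability to 1/T and renormalise."""
    p = np.asarray(p, dtype=float) ** (1.0 / T)
    return p / p.sum()

def mixup(x1, x2, y1, y2, lam=0.5):
    """Convex combination of two examples and their labels."""
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

q = sharpen([0.6, 0.3, 0.1])        # T=0.5 squares then renormalises
xm, ym = mixup(1.0, 3.0, 0.0, 1.0)  # midpoint of two scalar toy examples
```

Sharpening pushes the guessed label distribution for unlabelled scans towards its dominant class; mixup then interpolates labelled and unlabelled examples to regularise training.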
32
Sarv Ahrabi S, Momenzadeh A, Baccarelli E, Scarpiniti M, Piazzo L. How much BiGAN and CycleGAN-learned hidden features are effective for COVID-19 detection from CT images? A comparative study. THE JOURNAL OF SUPERCOMPUTING 2022; 79:2850-2881. [PMID: 36042937 PMCID: PMC9411851 DOI: 10.1007/s11227-022-04775-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 08/10/2022] [Indexed: 06/15/2023]
Abstract
Bidirectional generative adversarial networks (BiGANs) and cycle generative adversarial networks (CycleGANs) are two emerging machine learning models that, up to now, have been used as generative models, i.e., to generate output data sampled from a target probability distribution. However, these models are also equipped with encoding modules, which, after weakly supervised training, could be, in principle, exploited for the extraction of hidden features from the input data. At the present time, how these extracted features could be effectively exploited for classification tasks is still an unexplored field. Hence, motivated by this consideration, in this paper, we develop and numerically test the performance of a novel inference engine that relies on the exploitation of BiGAN and CycleGAN-learned hidden features for the detection of COVID-19 disease from other lung diseases in computer tomography (CT) scans. In this respect, the main contributions of the paper are twofold. First, we develop a kernel density estimation (KDE)-based inference method, which, in the training phase, leverages the hidden features extracted by BiGANs and CycleGANs for estimating the (a priori unknown) probability density function (PDF) of the CT scans of COVID-19 patients and, then, in the inference phase, uses it as a target COVID-PDF for the detection of COVID diseases. As a second major contribution, we numerically evaluate and compare the classification accuracies of the implemented BiGAN and CycleGAN models against the ones of some state-of-the-art methods, which rely on the unsupervised training of convolutional autoencoders (CAEs) for attaining feature extraction. The performance comparisons are carried out by considering a spectrum of different training loss functions and distance metrics. 
The classification accuracies obtained by the proposed CycleGAN-based (resp., BiGAN-based) models outperform those of the considered benchmark CAE-based models by about 16% (resp., 14%).
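The KDE-based inference step can be illustrated in one dimension: estimate a "COVID-PDF" from the hidden features of known COVID-19 scans, then score a new scan by its density under it. Real hidden features are high-dimensional; the feature values below are hypothetical:

```python
import numpy as np

def gaussian_kde_at(samples, x, bandwidth=0.5):
    """1-D Gaussian kernel density estimate evaluated at point x."""
    s = np.asarray(samples, dtype=float)
    z = (x - s) / bandwidth
    return float(np.mean(np.exp(-0.5 * z ** 2) /
                         (bandwidth * np.sqrt(2.0 * np.pi))))

# Hypothetical hidden-feature values of known COVID-19 scans
covid_feats = [1.0, 1.2, 0.9, 1.1]
# A new scan is scored by its density under this 'COVID-PDF'
d_near = gaussian_kde_at(covid_feats, 1.05)  # feature near the cluster
d_far = gaussian_kde_at(covid_feats, 5.0)    # feature far away
```

A scan whose feature lands in a high-density region of the estimated COVID-PDF would be flagged as COVID-19; the bandwidth controls how smooth the estimated density is.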
Affiliation(s)
- Sima Sarv Ahrabi
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana, 18, 00184 Roma, Italy
- Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana, 18, 00184 Roma, Italy
- Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana, 18, 00184 Roma, Italy
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana, 18, 00184 Roma, Italy
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana, 18, 00184 Roma, Italy
33
Ortiz-Vilchis P, Ramirez-Arellano A. An Entropy-Based Measure of Complexity: An Application in Lung-Damage. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1119. [PMID: 36010783 PMCID: PMC9407132 DOI: 10.3390/e24081119] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/18/2022] [Revised: 07/23/2022] [Accepted: 08/12/2022] [Indexed: 06/15/2023]
Abstract
Chest computed tomography (CT) is a tool for diagnostic tests and the early evaluation of lung infections, pulmonary interstitial damage, and complications caused by common pneumonia and COVID-19. Additionally, computer-aided diagnostic systems and methods based on entropy, fractality, and deep learning have been implemented to analyse lung CT images. This article introduces an Entropy-based Measure of Complexity (EMC). In addition, a Lung Damage Measure (LDM) derived from the EMC is introduced to show a medical application. CT scans of 486 healthy subjects, 263 subjects diagnosed with COVID-19, and 329 with pneumonia were analysed using the LDM. The statistical analysis shows a significant difference in LDM between healthy subjects and those suffering from COVID-19 and common pneumonia. The LDM of common pneumonia was the highest, followed by COVID-19 and healthy subjects. Furthermore, the LDM increased with clinical classification and CO-RADS scores; thus, the LDM is a measure that could be used to determine or confirm the scored severity. On the other hand, the d-summable information model best fits the information obtained by the covering of the CT; it can therefore be the cornerstone for formulating a fractional LDM.
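An entropy-based complexity measure of this kind builds on first-order quantities such as the Shannon entropy of the grey-level histogram; a minimal sketch (the paper's EMC itself is more elaborate):

```python
import math
from collections import Counter

def shannon_entropy(pixels):
    """Shannon entropy (bits) of a grey-level histogram: higher values
    indicate a more heterogeneous, 'complex' region."""
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(pixels).values())

h_uniform = shannon_entropy([0, 1, 2, 3])  # four equiprobable levels
h_flat = shannon_entropy([7, 7, 7, 7])     # constant region
```

A damaged, texturally heterogeneous lung region tends to have a broader grey-level distribution and hence a higher entropy than healthy parenchyma.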
34
Lee JRH, Pavlova M, Famouri M, Wong A. Cancer-Net SCa: tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med Imaging 2022; 22:143. [PMID: 35945505 PMCID: PMC9364616 DOI: 10.1186/s12880-022-00871-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Accepted: 07/26/2022] [Indexed: 11/25/2022] Open
Abstract
Background Skin cancer continues to be the most frequently diagnosed form of cancer in the U.S., with not only significant effects on health and well-being but also significant economic costs associated with treatment. A crucial step to the treatment and management of skin cancer is effective early detection with key screening approaches such as dermoscopy examinations, leading to stronger recovery prognoses. Motivated by the advances of deep learning and inspired by the open source initiatives in the research community, in this study we introduce Cancer-Net SCa, a suite of deep neural network designs tailored for the detection of skin cancer from dermoscopy images that is open source and available to the general public. To the best of the authors’ knowledge, Cancer-Net SCa comprises the first machine-driven design of deep neural network architectures tailored specifically for skin cancer detection, one of which leverages attention condensers for an efficient self-attention design. Results We investigate and audit the behaviour of Cancer-Net SCa in a responsible and transparent manner through explainability-driven performance validation. All the proposed designs achieved improved accuracy when compared to the ResNet-50 architecture while also achieving significantly reduced architectural and computational complexity. In addition, when evaluating the decision making process of the networks, it can be seen that diagnostically relevant critical factors are leveraged rather than irrelevant visual indicators and imaging artifacts. Conclusion The proposed Cancer-Net SCa designs achieve strong skin cancer detection performance on the International Skin Imaging Collaboration (ISIC) dataset, while providing a strong balance between computation and architectural efficiency and accuracy. 
While Cancer-Net SCa is not a production-ready screening solution, the hope is that the release of Cancer-Net SCa in open source, open access form will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
Affiliation(s)
- James Ren Hou Lee
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada.
- Maya Pavlova
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada; DarwinAI Corp, Waterloo, Canada
- Alexander Wong
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada; DarwinAI Corp, Waterloo, Canada
35
Alshayeji MH, ChandraBhasi Sindhu S, Abed S. CAD systems for COVID-19 diagnosis and disease stage classification by segmentation of infected regions from CT images. BMC Bioinformatics 2022; 23:264. [PMID: 35794537 PMCID: PMC9261058 DOI: 10.1186/s12859-022-04818-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 06/30/2022] [Indexed: 11/19/2022] Open
Abstract
Background We propose a computer-aided diagnosis (CAD) system to differentiate COVID-19 (the coronavirus disease of 2019) patients from normal cases, as well as to perform infection region segmentation along with infection severity estimation using computed tomography (CT) images. The developed system facilitates timely administration of appropriate treatment by identifying the disease stage without reliance on medical professionals. To date, this model provides the most accurate fully automatic real-time COVID-19 CAD framework. Results The CT image dataset of COVID-19 and non-COVID-19 individuals was subjected to conventional ML stages to perform binary classification. In the feature extraction stage, the SIFT, SURF, and ORB image descriptors and the bag-of-features technique were implemented to differentiate chest CT regions affected by COVID-19 from normal cases. This is the first work to introduce this concept for a COVID-19 diagnosis application. The diverse database and selected features, which are invariant to scale, rotation, distortion, noise, etc., make this framework applicable in real time. This fully automatic approach is also faster than existing models, which helps incorporate it into CAD systems. The severity score was measured based on the infected regions along the lung field. Infected regions were segmented through a three-class semantic segmentation of the lung CT image. Using the severity score, the disease stages were classified as mild if the lesion area covers less than 25% of the lung area, moderate if 25–50%, and severe if greater than 50%. Our proposed model resulted in a classification accuracy of 99.7% with a PNN classifier, along with an area under the curve (AUC) of 0.9988, 99.6% sensitivity, 99.9% specificity, and a misclassification rate of 0.0027.
The developed infected region segmentation model gave 99.47% global accuracy, 94.04% mean accuracy, 0.8968 mean IoU (intersection over union), 0.9899 weighted IoU, and a mean Boundary F1 (BF) contour matching score of 0.9453, using DeepLabv3+ with its weights initialized using ResNet-50. Conclusions The developed CAD system is able to perform fully automatic and accurate diagnosis of COVID-19 along with infected region extraction and disease stage identification. The ORB image descriptor with the bag-of-features technique and a PNN classifier achieved the superior classification performance.
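The staging rule reported in this abstract (mild below 25% lesion coverage of the lung area, moderate at 25–50%, severe above 50%) can be sketched directly; the function below is an illustrative reconstruction from the stated thresholds, not the authors' code.

```python
def covid_stage(lesion_area: float, lung_area: float) -> str:
    """Map the fraction of lung area covered by lesions to a disease stage,
    using the thresholds given in the abstract (<25% mild, 25-50% moderate,
    >50% severe)."""
    if lung_area <= 0:
        raise ValueError("lung_area must be positive")
    severity = lesion_area / lung_area
    if severity < 0.25:
        return "mild"
    if severity <= 0.50:
        return "moderate"
    return "severe"

print(covid_stage(10.0, 100.0))  # mild
print(covid_stage(40.0, 100.0))  # moderate
print(covid_stage(60.0, 100.0))  # severe
```

In practice the two areas would come from pixel counts of the lesion and lung masks produced by the three-class semantic segmentation.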
Affiliation(s)
- Mohammad H Alshayeji
- Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, P.O. Box 5969, 13060, Safat, Kuwait City, Kuwait.
- Sa'ed Abed
- Computer Engineering Department, College of Engineering and Petroleum, Kuwait University, P.O. Box 5969, 13060, Safat, Kuwait City, Kuwait
36
PCA-Based Incremental Extreme Learning Machine (PCA-IELM) for COVID-19 Patient Diagnosis Using Chest X-Ray Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9107430. [PMID: 35800685 PMCID: PMC9253873 DOI: 10.1155/2022/9107430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Accepted: 04/29/2022] [Indexed: 11/24/2022]
Abstract
Novel coronavirus 2019, first reported in December 2019, has created a pandemic. It has had very adverse consequences for people's daily life, healthcare, and the world's economy. According to the World Health Organization's most recent statistics, COVID-19 has become a worldwide pandemic, and the number of infected persons and fatalities is growing at an alarming rate. An effective system for early detection of COVID-19 patients is needed to curb further spread of the virus from affected persons. Therefore, to identify positive cases early and to support radiologists in the automatic diagnosis of COVID-19 from X-ray images, a novel method, PCA-IELM, is proposed based on principal component analysis (PCA) and an incremental extreme learning machine. The suggested method's key contribution is that it combines the benefits of PCA and the incremental extreme learning machine. PCA-IELM reduces the input dimension by extracting the most important information from an image, which effectively improves COVID-19 patient prediction performance. In addition, PCA-IELM has a faster training speed than a multi-layer neural network. The proposed approach was tested on a chest X-ray image dataset of COVID-19 patients. The experimental results indicate that PCA-IELM outperforms PCA-SVM and PCA-ELM in terms of accuracy (98.11%), precision (96.11%), recall (97.50%), F1-score (98.50%), and training speed.
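As a rough illustration of the PCA-plus-extreme-learning-machine idea, the sketch below pairs a PCA projection with a basic (non-incremental) ELM on synthetic data. The incremental hidden-node growth that gives IELM its name is omitted, and all data, dimensions, and thresholds are stand-ins rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened chest X-ray images (n_samples x n_pixels)
# and binary COVID / non-COVID labels; real data is assumed elsewhere.
X = rng.normal(size=(200, 64))
y = (X[:, :4].sum(axis=1) > 0).astype(float)

# --- PCA: project onto the top-k principal components ---
k = 10
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                        # reduced representation

# --- ELM: random hidden layer, output weights by least squares ---
n_hidden = 50
W = rng.normal(size=(k, n_hidden))       # random input weights (never trained)
b = rng.normal(size=n_hidden)
H = np.tanh(Z @ W + b)                   # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights

pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```

The closed-form least-squares solve for the output weights is what makes ELM training fast compared with backpropagating through a multi-layer network.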
37
Goel K, Sindhgatta R, Kalra S, Goel R, Mutreja P. The effect of machine learning explanations on user trust for automated diagnosis of COVID-19. Comput Biol Med 2022; 146:105587. [PMID: 35551007 PMCID: PMC9080676 DOI: 10.1016/j.compbiomed.2022.105587] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 05/01/2022] [Accepted: 05/02/2022] [Indexed: 12/16/2022]
Abstract
Recent years have seen deep neural networks (DNNs) gain widespread acceptance for a range of computer vision tasks, including medical imaging. Motivated by their performance, multiple studies have focused on designing deep convolutional neural network architectures tailored to detect COVID-19 cases from chest computerized tomography (CT) images. However, a fundamental challenge of DNN models is their inability to explain the reasoning for a diagnosis. Explainability is essential for medical diagnosis, where understanding the reason for a decision is as important as the decision itself. A variety of algorithms have been proposed that generate explanations and strive to enhance users' trust in DNN models. Yet, the influence of the generated machine learning explanations on clinicians' trust for complex decision tasks in healthcare is not well understood. This study evaluates the quality of explanations generated for a deep learning model that detects COVID-19 based on CT images and examines the influence of the quality of these explanations on clinicians' trust. First, we collect radiologist-annotated explanations of the CT images for the diagnosis of COVID-19 to create the ground truth. We then compare ground truth explanations with machine learning explanations. Our evaluation shows that the explanations produced by different algorithms were often correct (high precision) when compared to the radiologist-annotated ground truth, but a significant number of explanations were missed (significantly lower recall). We further conduct a controlled experiment to study the influence of machine learning explanations on clinicians' trust in the diagnosis of COVID-19. Our findings show that while clinicians' trust in automated diagnosis increases with the explanations, their reliance on the diagnosis decreases, as clinicians are less likely to rely on algorithms that are not close to human judgement.
Clinicians want higher recall of the explanations for a better understanding of an automated diagnosis system.
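The comparison of machine explanations against radiologist-annotated ground truth described above reduces to pixelwise precision and recall over binary masks. Here is a minimal sketch; the masks and shapes are illustrative toys, not the study's data.

```python
import numpy as np

def precision_recall(expl: np.ndarray, truth: np.ndarray):
    """Pixelwise precision and recall of a binary saliency/explanation mask
    against a radiologist-annotated ground-truth mask."""
    expl = expl.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(expl, truth).sum()
    precision = tp / expl.sum() if expl.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return precision, recall

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1            # 16 ground-truth pixels
expl = np.zeros((8, 8), dtype=int)
expl[2:4, 2:6] = 1             # highlights 8 of them, nothing outside

p, r = precision_recall(expl, truth)
print(p, r)   # 1.0 0.5
```

This toy case mirrors the study's finding: every highlighted pixel is correct (precision 1.0), but half the annotated region is missed (recall 0.5).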
Affiliation(s)
- Kanika Goel
- School of Information Systems, Queensland University of Technology, Australia (corresponding author)
- Sumit Kalra
- Department of Computer Science, Indian Institute of Technology, Jodhpur, India
- Rohan Goel
- COVID-19 Centre at Guru Teg Bahadur (GTB) Hospital, Delhi, India
- Preeti Mutreja
- All India Institute of Medical Sciences (AIIMS), Jodhpur, India
38
Singh G. Think positive: An interpretable neural network for image recognition. Neural Netw 2022; 151:178-189. [PMID: 35439663 PMCID: PMC8978459 DOI: 10.1016/j.neunet.2022.03.034] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 03/16/2022] [Accepted: 03/28/2022] [Indexed: 11/19/2022]
Abstract
The COVID-19 pandemic is ongoing and is placing an additional burden on healthcare systems around the world. Timely and effective detection of the virus can help to reduce the spread of the disease. Although RT-PCR is still the gold standard for COVID-19 testing, deep learning models that identify the virus from medical images can also be helpful in certain circumstances, in particular when patients undergo routine X-ray and/or CT-scan tests but develop respiratory complications within a few days of such tests. Deep learning models can also be used for pre-screening prior to RT-PCR testing. However, the transparency/interpretability of the reasoning process behind the predictions made by such deep learning models is essential. In this paper, we propose an interpretable deep learning model that uses a positive reasoning process to make predictions. We trained and tested our model on a dataset of chest CT-scan images of COVID-19 patients, normal people, and pneumonia patients. Our model achieves accuracy, precision, recall, and F-score of 99.48%, 0.99, 0.99, and 0.99, respectively.
Affiliation(s)
- Gurmail Singh
- Faculty of Engineering and Applied Science, University of Regina, 3737 Wascana Pkwy, Regina, SK S4S 0A2, Canada.
39
Pavlova M, Terhljan N, Chung AG, Zhao A, Surana S, Aboutalebi H, Gunraj H, Sabri A, Alaref A, Wong A. COVID-Net CXR-2: An Enhanced Deep Convolutional Neural Network Design for Detection of COVID-19 Cases From Chest X-ray Images. Front Med (Lausanne) 2022; 9:861680. [PMID: 35755067 PMCID: PMC9226387 DOI: 10.3389/fmed.2022.861680] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 05/12/2022] [Indexed: 01/08/2023] Open
Abstract
As the COVID-19 pandemic devastates globally, the use of chest X-ray (CXR) imaging as a complementary screening strategy to RT-PCR testing continues to grow, given its routine clinical use for respiratory complaints. As part of the COVID-Net open source initiative, we introduce COVID-Net CXR-2, an enhanced deep convolutional neural network design for COVID-19 detection from CXR images built using a greater quantity and diversity of patients than the original COVID-Net. We also introduce a new benchmark dataset composed of 19,203 CXR images from a multinational cohort of 16,656 patients from at least 51 countries, making it the largest, most diverse COVID-19 CXR dataset in open access form. The COVID-Net CXR-2 network achieves sensitivity and positive predictive value of 95.5 and 97.0%, respectively, and was audited in a transparent and responsible manner. Explainability-driven performance validation was used during auditing to gain deeper insights into its decision-making behavior and to ensure clinically relevant factors are leveraged for improving trust in its usage. Radiologist validation was also conducted, where select cases were reviewed and reported on by two board-certified radiologists with over 10 and 19 years of experience, respectively, and showed that the critical factors leveraged by COVID-Net CXR-2 are consistent with radiologist interpretations.
Affiliation(s)
- Maya Pavlova
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Naomi Terhljan
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Audrey G. Chung
- Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada
- Andy Zhao
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Siddharth Surana
- Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
- Hossein Aboutalebi
- Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada
- Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada
- Hayden Gunraj
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Ali Sabri
- Department of Radiology, McMaster University, Hamilton, ON, Canada
- Niagara Health System, St. Catharines, ON, Canada
- Amer Alaref
- Department of Diagnostic Imaging, Northern Ontario School of Medicine, Thunder Bay, ON, Canada
- Department of Diagnostic Radiology, Thunder Bay Regional Health Sciences Centre, Thunder Bay, ON, Canada
- Alexander Wong
- Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Waterloo AI Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada
40
Sri Kavya N, Shilpa T, Veeranjaneyulu N, Divya Priya D. Detecting Covid19 and pneumonia from chest X-ray images using deep convolutional neural networks. MATERIALS TODAY. PROCEEDINGS 2022; 64:737-743. [PMID: 35607444 PMCID: PMC9117408 DOI: 10.1016/j.matpr.2022.05.199] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
With the current COVID-19 pandemic, clinicians must weigh human life, prosperity, and values while acknowledging that controlling case spread and mortality is a challenge. Identifying COVID-19-infected patients and isolating them to avoid further transmission is one of the most difficult tasks for clinicians. As a result, determining who is infected with COVID-19 is crucial. COVID-19 is identified using a 4-6-hour reverse transcription-polymerase chain reaction (RT-PCR) test. Another way to detect the coronavirus early in the disease process is by using chest X-rays (CXR). We extracted characteristics from chest X-ray images using the VGG16 and ResNet50 deep learning models, then classified them into three groups: viral pneumonia, normal, and COVID-19. We ran 15,153 images through the models to see how accurate they were in real-world situations. For detecting COVID-19 cases, the VGG16 model has an average accuracy of 89.34%, whereas ResNet50 has an accuracy of 91.39%. A larger dataset is nonetheless necessary when utilizing deep learning to identify COVID-19; with it, cases can be detected accurately.
Affiliation(s)
- Nallamothu Sri Kavya
- Department of IT, Vignan's Foundation for Science, Technology and Research (Deemed to be University), Vadlamudi, Guntur, Andhra Pradesh, India
- Thotapalli Shilpa
- Department of IT, Vignan's Foundation for Science, Technology and Research (Deemed to be University), Vadlamudi, Guntur, Andhra Pradesh, India
- N Veeranjaneyulu
- Department of IT, Vignan's Foundation for Science, Technology and Research (Deemed to be University), Vadlamudi, Guntur, Andhra Pradesh, India
- D Divya Priya
- Department of Computer Science and Engineering, MLR Institute of Technology, Hyderabad, India
41
Suri JS, Agarwal S, Chabert GL, Carriero A, Paschè A, Danna PSC, Saba L, Mehmedović A, Faa G, Singh IM, Turk M, Chadha PS, Johri AM, Khanna NN, Mavrogeni S, Laird JR, Pareek G, Miner M, Sobel DW, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou AD, Misra DP, Agarwal V, Kitas GD, Teji JS, Al-Maini M, Dhanjil SK, Nicolaides A, Sharma A, Rathore V, Fatemi M, Alizad A, Krishnan PR, Nagy F, Ruzsa Z, Fouda MM, Naidu S, Viskovic K, Kalra MK. COVLIAS 1.0 Lesion vs. MedSeg: An Artificial Intelligence Framework for Automated Lesion Segmentation in COVID-19 Lung Computed Tomography Scans. Diagnostics (Basel) 2022; 12:diagnostics12051283. [PMID: 35626438 PMCID: PMC9141749 DOI: 10.3390/diagnostics12051283] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 05/18/2022] [Accepted: 05/19/2022] [Indexed: 02/01/2023] Open
Abstract
Background: COVID-19 is a disease with multiple variants and is quickly spreading throughout the world. It is crucial to identify patients who are suspected of having COVID-19 early, because the vaccine is not readily available in certain parts of the world. Methodology: Lung computed tomography (CT) imaging can be used to diagnose COVID-19 as an alternative to the RT-PCR test in some cases. The occurrence of ground-glass opacities in the lung region is a characteristic of COVID-19 in chest CT scans, and these are daunting to locate and segment manually. The proposed study consists of a combination of solo deep learning (DL) and hybrid DL (HDL) models to tackle the lesion location and segmentation more quickly. One DL and four HDL models—namely, PSPNet, VGG-SegNet, ResNet-SegNet, VGG-UNet, and ResNet-UNet—were trained by an expert radiologist. The training scheme adopted a fivefold cross-validation strategy on a cohort of 3000 images selected from a set of 40 COVID-19-positive individuals. Results: The proposed variability study uses tracings from two trained radiologists as part of the validation. Five artificial intelligence (AI) models were benchmarked against MedSeg. The best AI model, ResNet-UNet, was superior to MedSeg by 9% and 15% for Dice and Jaccard, respectively, when compared against MD 1, and by 4% and 8%, respectively, when compared against MD 2. Statistical tests—namely, the Mann-Whitney test, paired t-test, and Wilcoxon test—demonstrated its stability and reliability, with p < 0.0001. The online system processed each slice in <1 s. Conclusions: The AI models reliably located and segmented COVID-19 lesions in CT scans. The COVLIAS 1.0 Lesion locator passed the inter-variability test.
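Dice and Jaccard, the overlap metrics used in the benchmark above, can be computed directly from binary lesion masks; the toy AI and radiologist (MD) masks below are illustrative, not from the study.

```python
import numpy as np

def dice_jaccard(a: np.ndarray, b: np.ndarray):
    """Dice and Jaccard overlap between two binary lesion masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum()) if (a.sum() + b.sum()) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard

# Toy AI and MD masks: two offset 6x6 squares on a 10x10 grid,
# overlapping in a 5x5 region (25 px; each mask covers 36 px).
ai = np.zeros((10, 10), dtype=int); ai[1:7, 1:7] = 1
md = np.zeros((10, 10), dtype=int); md[2:8, 2:8] = 1
d, j = dice_jaccard(ai, md)
print(round(d, 4), round(j, 4))   # 0.6944 0.5319
```

Note that the two metrics are monotonically related (Dice = 2J/(1+J)), which is why model rankings under Dice and Jaccard usually agree even though the reported percentages differ.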
Affiliation(s)
- Jasjit S. Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA;
- Correspondence: ; Tel.: +1-(916)-749-5628
- Sushant Agarwal
- Advanced Knowledge Engineering Centre, GBTI, Roseville, CA 95661, USA;
- Department of Computer Science Engineering, PSIT, Kanpur 209305, India
- Gian Luca Chabert
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
- Alessandro Carriero
- Department of Radiology, “Maggiore della Carità” Hospital, University of Piemonte Orientale (UPO), Via Solaroli 17, 28100 Novara, Italy;
- Alessio Paschè
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
- Pietro S. C. Danna
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
- Armin Mehmedović
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia; (A.M.); (K.V.)
- Gavino Faa
- Department of Pathology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy;
- Inder M. Singh
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
- Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27753 Delmenhorst, Germany;
- Paramjit S. Chadha
- Stroke Diagnostic and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; (I.M.S.); (P.S.C.)
- Amer M. Johri
- Department of Medicine, Division of Cardiology, Queen’s University, Kingston, ON K7L 3N6, Canada;
- Narendra N. Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India;
- Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Center, 17674 Athens, Greece;
- John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA 94574, USA;
- Gyan Pareek
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA; (G.P.); (D.W.S.)
- Martin Miner
- Men’s Health Center, Miriam Hospital, Providence, RI 02906, USA;
- David W. Sobel
- Minimally Invasive Urology Institute, Brown University, Providence, RI 02912, USA; (G.P.); (D.W.S.)
- Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09124 Cagliari, Italy; (G.L.C.); (A.P.); (P.S.C.D.); (L.S.); (A.B.)
- Petros P. Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 15772 Athens, Greece;
- George Tsoulfas
- Department of Surgery, Aristoteleion University of Thessaloniki, 54124 Thessaloniki, Greece;
- Athanasios D. Protogerou
- Cardiovascular Prevention and Research Unit, Department of Pathophysiology, National & Kapodistrian University of Athens, 15772 Athens, Greece;
- Durga Prasanna Misra
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India; (D.P.M.); (V.A.)
- Vikas Agarwal
- Department of Immunology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow 226014, India; (D.P.M.); (V.A.)
- George D. Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, Dudley DY1 2HQ, UK;
- Arthritis Research UK Epidemiology Unit, Manchester University, Manchester M13 9PL, UK
- Jagjit S. Teji
- Ann and Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL 60611, USA;
- Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON L4Z 4C4, Canada;
- Andrew Nicolaides
- Vascular Screening and Diagnostic Centre, University of Nicosia Medical School, Nicosia 2408, Cyprus;
- Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, VA 22908, USA;
- Vijay Rathore
- AtheroPoint LLC, Roseville, CA 95661, USA; (S.K.D.); (V.R.)
- Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA;
- Azra Alizad
- Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA;
- Ferenc Nagy
- Internal Medicine Department, University of Szeged, 6725 Szeged, Hungary;
- Zoltan Ruzsa
- Invasive Cardiology Division, University of Szeged, 6725 Szeged, Hungary;
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA;
- Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, MN 55812, USA;
- Klaudija Viskovic
- University Hospital for Infectious Diseases, 10000 Zagreb, Croatia; (A.M.); (K.V.)
- Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA;
42
Hassan H, Ren Z, Zhou C, Khan MA, Pan Y, Zhao J, Huang B. Supervised and weakly supervised deep learning models for COVID-19 CT diagnosis: A systematic review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 218:106731. [PMID: 35286874 PMCID: PMC8897838 DOI: 10.1016/j.cmpb.2022.106731] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Revised: 01/28/2022] [Accepted: 03/03/2022] [Indexed: 05/05/2023]
Abstract
Artificial intelligence (AI) and computer vision (CV) methods have become reliable for extracting features from radiological images, aiding COVID-19 diagnosis ahead of the pathogenic tests and saving critical time for disease management and control. This review article therefore surveys the numerous deep learning-based COVID-19 computerized tomography (CT) imaging diagnosis studies, providing a baseline for future research. Compared to previous review articles on the topic, this study organizes the collected literature very differently (i.e., in a multi-level arrangement). For this purpose, 71 relevant studies were found using a variety of trustworthy databases and search engines, including Google Scholar, IEEE Xplore, Web of Science, PubMed, Science Direct, and Scopus. We classify the selected literature into multi-level machine learning groups, such as supervised and weakly supervised learning. Our review reveals that weak supervision has been adopted extensively for COVID-19 CT diagnosis compared to supervised learning. Weakly supervised (conventional transfer learning) techniques can be utilized effectively for real-time clinical practice by reusing sophisticated features rather than over-parameterizing the standard models. Few-shot and self-supervised learning are the recent trends for addressing data scarcity and model efficacy. Deep learning (artificial intelligence) based models are mainly utilized for disease management and control. This review should therefore help readers comprehend the related perspectives of deep learning approaches for in-progress COVID-19 CT diagnosis research.
Affiliation(s)
- Haseeb Hassan
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China; College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
- Zhaoyu Ren
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
- Chengmin Zhou
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China
- Muazzam A Khan
- Department of Computer Sciences, Quaid-i-Azam University, Islamabad, Pakistan
- Yi Pan
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Jian Zhao
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China.
- Bingding Huang
- College of Big data and Internet, Shenzhen Technology University, Shenzhen, 518118, China.
43
Scarpiniti M, Sarv Ahrabi S, Baccarelli E, Piazzo L, Momenzadeh A. A novel unsupervised approach based on the hidden features of Deep Denoising Autoencoders for COVID-19 disease detection. EXPERT SYSTEMS WITH APPLICATIONS 2022; 192:116366. [PMID: 34937995 PMCID: PMC8675154 DOI: 10.1016/j.eswa.2021.116366] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Revised: 10/15/2021] [Accepted: 11/30/2021] [Indexed: 05/02/2023]
Abstract
Chest imaging can represent a powerful tool for detecting the Coronavirus disease 2019 (COVID-19). Among the available technologies, the chest Computed Tomography (CT) scan is an effective approach for reliable and early detection of the disease. However, it can be difficult to rapidly identify anomalous areas in CT images belonging to the COVID-19 disease by human inspection. Hence, suitable automatic algorithms are needed that can quickly and precisely identify the disease, possibly using few labeled input data, because large amounts of CT scans are not usually available for the COVID-19 disease. The method proposed in this paper is based on the exploitation of the compact and meaningful hidden representation provided by a Deep Denoising Convolutional Autoencoder (DDCAE). Specifically, the proposed DDCAE, trained on some target CT scans in an unsupervised way, is used to build up a robust statistical representation generating a target histogram. A suitable statistical distance measures how far this target histogram is from a companion histogram evaluated on an unknown test scan: if this distance is greater than a threshold, the test image is labeled as an anomaly, i.e., the scan belongs to a patient affected by the COVID-19 disease. Experimental results and comparisons with other state-of-the-art methods show the effectiveness of the proposed approach, reaching a top accuracy of 100% and similarly high values for other metrics. In conclusion, by using a statistical representation of the hidden features provided by DDCAEs, the developed architecture is able to differentiate COVID-19 from normal and pneumonia scans with high reliability and at low computational cost.
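A minimal sketch of the histogram-distance idea described above: build a normalized histogram of "target" hidden features, then flag a test scan whose histogram lies farther than a threshold. Gaussian vectors stand in for the DDCAE features, and the L1 distance and threshold value are assumptions for illustration; the paper's exact distance measure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for DDCAE hidden features: 'target' features from the training
# scans, plus one in-distribution and one shifted (anomalous) test scan.
target_feats = rng.normal(0.0, 1.0, size=20000)
test_ok      = rng.normal(0.0, 1.0, size=1000)
test_shifted = rng.normal(2.5, 1.0, size=1000)

bins = np.linspace(-6, 6, 49)

def hist(x):
    h, _ = np.histogram(x, bins=bins)
    return h / h.sum()                  # normalize to a probability histogram

def l1_distance(p, q):
    return np.abs(p - q).sum()          # one possible statistical distance

target_hist = hist(target_feats)
THRESHOLD = 0.5                         # illustrative, not from the paper

d_ok = l1_distance(target_hist, hist(test_ok))
d_shifted = l1_distance(target_hist, hist(test_shifted))
print("in-distribution scan flagged:", d_ok > THRESHOLD)
print("shifted scan flagged:", d_shifted > THRESHOLD)
```

Because only a histogram comparison is needed at test time, the per-scan decision is cheap once the encoder has produced the features, which matches the paper's low-computational-cost claim.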
Affiliation(s)
- Michele Scarpiniti
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Sima Sarv Ahrabi
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Enzo Baccarelli
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Lorenzo Piazzo
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
- Alireza Momenzadeh
- Department of Information Engineering, Electronics and Telecommunications (DIET), Sapienza University of Rome, Via Eudossiana 18, 00184 Rome, Italy
44
Wong A, Lee JRH, Rahmat-Khah H, Sabri A, Alaref A, Liu H. TB-Net: A Tailored, Self-Attention Deep Convolutional Neural Network Design for Detection of Tuberculosis Cases From Chest X-Ray Images. Front Artif Intell 2022; 5:827299. [PMID: 35464996 PMCID: PMC9022489 DOI: 10.3389/frai.2022.827299] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2022] [Accepted: 02/21/2022] [Indexed: 11/23/2022] Open
Abstract
Tuberculosis (TB) remains a global health problem, and is the leading cause of death from an infectious disease. A crucial step in the treatment of tuberculosis is screening high risk populations and the early detection of the disease, with chest x-ray (CXR) imaging being the most widely-used imaging modality. As such, there has been significant recent interest in artificial intelligence-based TB screening solutions for use in resource-limited scenarios where there is a lack of trained healthcare workers with expertise in CXR interpretation. Motivated by this pressing need and the recent recommendation by the World Health Organization (WHO) for the use of computer-aided diagnosis of TB in place of a human reader, we introduce TB-Net, a self-attention deep convolutional neural network tailored for TB case screening. We used CXR data from a multi-national patient cohort to train and test our models. A machine-driven design exploration approach leveraging generative synthesis was used to build a highly customized deep neural network architecture with attention condensers. We conducted an explainability-driven performance validation process to validate TB-Net's decision-making behavior. Experiments on CXR data from a multi-national patient cohort showed that the proposed TB-Net is able to achieve accuracy/sensitivity/specificity of 99.86/100.0/99.71%. Radiologist validation was conducted on select cases by two board-certified radiologists with over 10 and 19 years of experience, respectively, and showed consistency between radiologist interpretation and critical factors leveraged by TB-Net for TB case detection for the case where radiologists identified anomalies. The proposed TB-Net not only achieves high tuberculosis case detection performance in terms of sensitivity and specificity, but also leverages clinically relevant critical factors in its decision making process. 
While not a production-ready solution, we hope that the open-source release of TB-Net as part of the COVID-Net initiative will support researchers, clinicians, and citizen data scientists in advancing this field in the fight against this global public health crisis.
Affiliation(s)
- Alexander Wong
- Vision and Image Processing Research Group, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp, Waterloo, ON, Canada
- *Correspondence: Alexander Wong
- Ali Sabri
- Department of Radiology, Niagara Health, McMaster University, Hamilton, ON, Canada
- Amer Alaref
- Department of Diagnostic Radiology, Thunder Bay Regional Health Sciences Centre, Thunder Bay, ON, Canada
- Department of Diagnostic Imaging, Northern Ontario School of Medicine, Sudbury, ON, Canada
45
Gunraj H, Sabri A, Koff D, Wong A. COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 From Chest CT Images Through Bigger, More Diverse Learning. Front Med (Lausanne) 2022; 8:729287. [PMID: 35360446 PMCID: PMC8960961 DOI: 10.3389/fmed.2021.729287] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Accepted: 12/31/2021] [Indexed: 01/08/2023] Open
Abstract
The COVID-19 pandemic continues to rage on, with multiple waves causing substantial harm to health and economies around the world. Motivated by the use of computed tomography (CT) imaging at clinical institutes around the world as an effective complementary screening method to RT-PCR testing, we introduced COVID-Net CT, a deep neural network tailored for detection of COVID-19 cases from chest CT images, along with a large curated benchmark dataset comprising 1,489 patient cases as part of the open-source COVID-Net initiative. However, one potential limiting factor is restricted data quantity and diversity given the single nation patient cohort used in the study. To address this limitation, in this study we introduce enhanced deep neural networks for COVID-19 detection from chest CT images which are trained using a large, diverse, multinational patient cohort. We accomplish this through the introduction of two new CT benchmark datasets, the largest of which comprises a multinational cohort of 4,501 patients from at least 16 countries. To the best of our knowledge, this represents the largest, most diverse multinational cohort for COVID-19 CT images in open-access form. Additionally, we introduce a novel lightweight neural network architecture called COVID-Net CT S, which is significantly smaller and faster than the previously introduced COVID-Net CT architecture. We leverage explainability to investigate the decision-making behavior of the trained models and ensure that decisions are based on relevant indicators, with the results for select cases reviewed and reported on by two board-certified radiologists with over 10 and 30 years of experience, respectively. The best-performing deep neural network in this study achieved accuracy, COVID-19 sensitivity, positive predictive value, specificity, and negative predictive value of 99.0%/99.1%/98.0%/99.4%/99.7%, respectively. 
Moreover, explainability-driven performance validation shows consistency with radiologist interpretation by leveraging correct, clinically relevant critical factors. The results are promising and suggest the strong potential of deep neural networks as an effective tool for computer-aided COVID-19 assessment. While not a production-ready solution, we hope the open-source, open-access release of COVID-Net CT-2 and the associated benchmark datasets will continue to enable researchers, clinicians, and citizen data scientists alike to build upon them.
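The five figures reported above (accuracy, sensitivity, PPV, specificity, NPV) are all derived from a binary confusion matrix. A minimal sketch of those definitions, with purely illustrative counts that are not taken from the COVID-Net CT-2 evaluation:

```python
# Hedged sketch relating the reported screening metrics to a binary
# confusion matrix. The counts below are illustrative only.
def screening_metrics(tp, fp, tn, fn):
    """Standard binary screening metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # COVID-19 sensitivity (recall)
        "ppv": tp / (tp + fp),           # positive predictive value
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts: 98 true positives, 2 false positives,
# 397 true negatives, 1 false negative.
m = screening_metrics(tp=98, fp=2, tn=397, fn=1)
print(f"sensitivity={m['sensitivity']:.3f}, ppv={m['ppv']:.3f}")
# sensitivity=0.990, ppv=0.980
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of COVID-19 in the test cohort, which is why all five are usually reported together.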
Affiliation(s)
- Hayden Gunraj
- Vision and Image Processing Lab, University of Waterloo, Waterloo, ON, Canada
- *Correspondence: Hayden Gunraj
- Ali Sabri
- Department of Radiology, McMaster University, Hamilton, ON, Canada
- Niagara Health System, St. Catharines, ON, Canada
- David Koff
- Department of Radiology, McMaster University, Hamilton, ON, Canada
- Hamilton Health Sciences, Hamilton, ON, Canada
- Alexander Wong
- Vision and Image Processing Lab, University of Waterloo, Waterloo, ON, Canada
- Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON, Canada
- DarwinAI Corp., Waterloo, ON, Canada
46
Mary Shyni H, Chitra E. A comparative study of X-ray and CT images in COVID-19 detection using image processing and deep learning techniques. Comput Methods Programs Biomed Update 2022; 2:100054. [PMID: 35281724 PMCID: PMC8898857 DOI: 10.1016/j.cmpbup.2022.100054] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
The deadly coronavirus has not just devastated the lives of millions but has put the entire healthcare system under tremendous pressure. Early diagnosis of COVID-19 plays a significant role in isolating positive cases and preventing further spread of the disease. Medical images combined with deep learning models have provided faster and more accurate results in the detection of COVID-19. This article extensively reviews recent deep learning techniques for COVID-19 diagnosis. The research articles discussed reveal that the Convolutional Neural Network (CNN) is the most popular deep learning algorithm for detecting COVID-19 from medical images. The article summarizes the necessity of pre-processing medical images, transfer learning and data augmentation techniques to deal with data scarcity, the use of pre-trained models to save training time, and the role of medical images in the automatic detection of COVID-19. It also provides a sensible outlook for young researchers developing highly effective CNN models coupled with medical images for early detection of the disease.
Affiliation(s)
- H Mary Shyni
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India
- E Chitra
- Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamil Nadu, India
47
Montalbo FJ. Truncating fined-tuned vision-based models to lightweight deployable diagnostic tools for SARS-CoV-2 infected chest X-rays and CT-scans. Multimed Tools Appl 2022; 81:16411-16439. [PMID: 35261555 PMCID: PMC8893243 DOI: 10.1007/s11042-022-12484-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 10/05/2021] [Accepted: 01/25/2022] [Indexed: 06/14/2023]
Abstract
Within a brief period, the coronavirus (COVID-19) infected large populations worldwide. Diagnosing an infected individual requires a Real-Time Polymerase Chain Reaction (RT-PCR) test, which can be expensive and in limited supply in most developing countries, making them rely on alternatives like chest X-rays (CXR) or computerized tomography (CT) scans. However, results from these imaging approaches can confuse medical experts due to their similarity with other diseases like pneumonia. Solutions based on Deep Convolutional Neural Networks (DCNN) have recently improved and automated the diagnosis of COVID-19 from CXRs and CT scans. Upon examination, however, most proposed studies focus primarily on accuracy rather than deployment and reproduction, which may make them difficult to reproduce and implement in locations with inadequate computing resources. Therefore, instead of focusing only on accuracy, this work investigated the effects of parameter reduction through a proposed truncation method. Various DCNNs had their architectures truncated to retain only their initial core block, reducing their parameter counts to under 1 M. Once trained and validated, findings showed that a DCNN with robust layer aggregations like InceptionResNetV2 was less vulnerable to the adverse effects of the proposed truncation. The results also showed that from its full size of 55 M parameters with 98.67% accuracy, the proposed truncation reduced the model to only 441 K parameters while still attaining 97.41% accuracy, outperforming other studies in its size-to-performance ratio.
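The truncation idea can be sketched framework-agnostically: treat a DCNN as a sequence of named blocks, keep only the layers up to and including the initial core block, and re-attach a small classification head. The block names and parameter counts below are invented for illustration (chosen so the truncated total matches the reported 441 K); the paper's actual architectures are not reproduced here.

```python
# Hypothetical sketch of architecture truncation, not the author's code.
def truncate(blocks, keep_until):
    """Keep blocks up to and including `keep_until`; drop the rest."""
    kept = []
    for name, params in blocks:
        kept.append((name, params))
        if name == keep_until:
            return kept
    raise ValueError(f"block {keep_until!r} not found")

# A hypothetical full-length DCNN (~55 M parameters in total).
full_model = [
    ("stem", 300_000),
    ("core_block_1", 140_000),    # initial core block, retained
    ("core_block_2", 5_000_000),
    ("core_block_3", 21_000_000),
    ("core_block_4", 28_560_000),
]
head = [("classifier_head", 1_000)]  # small head for the target classes

truncated = truncate(full_model, "core_block_1") + head
total = sum(p for _, p in truncated)
print(f"{total:,} parameters")  # 441,000 parameters, well under 1 M
```

In a real framework this corresponds to slicing the pretrained model at the chosen layer and attaching a new dense head before fine-tuning on the CXR/CT data.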
Affiliation(s)
- Francis Jesmar Montalbo
- College of Informatics and Computing Sciences, Batangas State University, Rizal Avenue Extension, Batangas, Batangas City, Philippines
48
Nazir T, Nawaz M, Javed A, Malik KM, Saudagar AKJ, Khan MB, Abul Hasanat MH, AlTameem A, AlKathami M. COVID-DAI: A novel framework for COVID-19 detection and infection growth estimation using computed tomography images. Microsc Res Tech 2022; 85:2313-2330. [PMID: 35194866 PMCID: PMC9088346 DOI: 10.1002/jemt.24088] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Revised: 02/01/2022] [Accepted: 02/14/2022] [Indexed: 12/18/2022]
Abstract
The COVID-19 pandemic is spreading at a fast pace around the world and has a high mortality rate. There is no proper treatment for COVID-19, and its multiple variants (for example, Alpha, Beta, Gamma, and Delta), being more infectious, are affecting millions of people and further complicate detection, putting victims at risk of death. Timely and accurate diagnosis of this deadly virus can not only save patients' lives but also spare them complex treatment procedures. Accurate segmentation and classification of COVID-19 is a tedious job due to the extensive variations in its shape and its similarity to other diseases such as pneumonia. Furthermore, existing techniques have hardly focused on estimating infection growth over time, which can help doctors better analyze the condition of COVID-19-affected patients. In this work, we address these shortcomings by proposing a model capable of segmenting and classifying COVID-19 from computed tomography images and predicting its behavior over a certain period. The framework comprises four main steps: (i) data preparation, (ii) segmentation, (iii) infection growth estimation, and (iv) classification. After the pre-processing step, we introduce a DenseNet-77-based UNET approach: DenseNet-77 is used in the encoder module of the UNET model to compute deep keypoints, which are then segmented to delineate the coronavirus region. Next, the infection growth of COVID-19 per patient is estimated using blob analysis. Finally, we employ the DenseNet-77 framework as an end-to-end network to classify input images into three classes: healthy, COVID-19-affected, and pneumonia. We evaluated the proposed model on the COVID-19-20 and COVIDx CT-2A datasets for the segmentation and classification tasks, respectively.
Furthermore, unlike existing techniques, we performed a cross‐dataset evaluation to show the generalization ability of our method. The quantitative and qualitative evaluation confirms that our method is robust to both COVID‐19 segmentation and classification and can accurately predict the infection growth in a certain time frame.
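The infection-growth-estimation step can be sketched in miniature: compare binary infection masks (stand-ins for the UNET segmentation output) from two CT scans of the same patient and report the relative change in infected area. This is only an approximation of the paper's blob analysis, using total-area comparison on toy masks.

```python
# Hedged, dependency-free sketch of infection growth estimation.
def infected_area(mask):
    """Count infected pixels in a binary mask (list of 0/1 rows)."""
    return sum(sum(row) for row in mask)

def growth_rate(mask_t0, mask_t1):
    """Relative growth of the infected region between two scans."""
    a0, a1 = infected_area(mask_t0), infected_area(mask_t1)
    return (a1 - a0) / max(a0, 1)

# Toy 6x6 masks: the lesion grows from a 2x2 to a 3x3 region.
t0 = [[1 if 2 <= r < 4 and 2 <= c < 4 else 0 for c in range(6)]
      for r in range(6)]
t1 = [[1 if 2 <= r < 5 and 2 <= c < 5 else 0 for c in range(6)]
      for r in range(6)]
print(growth_rate(t0, t1))  # 1.25 -> infected area grew by 125%
```

A full blob analysis would additionally label connected components so that per-lesion growth, not just total area, can be tracked over time.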
Affiliation(s)
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Ali Javed
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Khalid Mahmood Malik
- Department of Computer Science and Engineering, Oakland University, Rochester, Michigan, USA
- Abdul Khader Jilani Saudagar
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Muhammad Badruddin Khan
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mozaherul Hoque Abul Hasanat
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Abdullah AlTameem
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mohammad AlKathami
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
49
Liu T, Siegel E, Shen D. Deep Learning and Medical Image Analysis for COVID-19 Diagnosis and Prediction. Annu Rev Biomed Eng 2022; 24:179-201. [PMID: 35316609 DOI: 10.1146/annurev-bioeng-110220-012203] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has imposed dramatic challenges on health-care organizations worldwide. To combat the global crisis, thoracic imaging has played a major role in the diagnosis, prediction, and management of COVID-19 patients with moderate to severe symptoms or with evidence of worsening respiratory status. In response, the medical image analysis community acted quickly to develop and disseminate deep learning models and tools to meet the urgent need of managing and interpreting large amounts of COVID-19 imaging data. This review aims not only to summarize existing deep learning and medical image analysis methods but also to offer in-depth discussions and recommendations for future investigations. We believe that the wide availability of high-quality, curated, and benchmarked COVID-19 imaging data sets offers the great promise of a transformative test bed to develop, validate, and disseminate novel deep learning methods at the frontiers of data science and artificial intelligence.
Affiliation(s)
- Tianming Liu
- Department of Computer Science, University of Georgia, Athens, Georgia, USA
- Eliot Siegel
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland, Baltimore, Maryland, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
50
COVID-19 Detection Systems Using Deep-Learning Algorithms Based on Speech and Image Data. Mathematics 2022. [DOI: 10.3390/math10040564] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
The global epidemic caused by COVID-19 has had a severe impact on human health. The virus has wreaked havoc since its declaration as a worldwide pandemic and has affected an expanding number of nations around the world. Recently, a substantial amount of work has been done by doctors, scientists, and many others on the frontlines to battle the effects of the spreading virus. The integration of artificial intelligence, specifically deep- and machine-learning applications, in the health sector has contributed substantially to the fight against COVID-19 by providing a modern, innovative approach for detecting, diagnosing, treating, and preventing the virus. This work focuses mainly on the role of speech signals and/or image processing in detecting the presence of COVID-19. Three types of experiments were conducted, utilizing speech-based, image-based, and combined speech-and-image-based models. Long short-term memory (LSTM) was utilized for speech classification of patients' cough, voice, and breathing, obtaining an accuracy exceeding 98%. Moreover, the CNN models VGG16, VGG19, DenseNet201, ResNet50, InceptionV3, InceptionResNetV2, and Xception were benchmarked for the classification of chest X-ray images. The VGG16 model outperformed all other CNN models, achieving an accuracy of 85.25% without fine-tuning and 89.64% after fine-tuning. Furthermore, the combined speech-image model was evaluated using the same seven models, attaining an accuracy of 82.22% with InceptionResNetV2. Accordingly, the combined speech-image model need not be employed for diagnosis, since the speech-based and image-based models each achieved higher accuracy than the combined model.
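The comparison implied above can be sketched as a small model-selection step: collect the reported per-modality accuracies and keep the best model for each modality. The accuracies come from the abstract; the selection code itself is an illustration, not the authors' implementation.

```python
# Hedged sketch of per-modality model selection from reported results.
results = [
    ("speech", "LSTM", 0.98),
    ("image", "VGG16 (no fine-tuning)", 0.8525),
    ("image", "VGG16 (fine-tuned)", 0.8964),
    ("speech+image", "InceptionResNetV2", 0.8222),
]

best = {}
for modality, model, acc in results:
    if acc > best.get(modality, ("", 0.0))[1]:
        best[modality] = (model, acc)

for modality, (model, acc) in sorted(best.items()):
    print(f"{modality:12s} {model} ({acc:.2%})")

# The combined modality trails both single modalities, matching the
# conclusion that the combined model is unnecessary for diagnosis.
```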