51. Ahmad I, Merla A, Ali F, Shah B, AlZubi AA, AlZubi MA. A deep transfer learning approach for COVID-19 detection and exploring a sense of belonging with Diabetes. Front Public Health 2023; 11:1308404. PMID: 38026271; PMCID: PMC10657998; DOI: 10.3389/fpubh.2023.1308404.
Abstract
COVID-19 is an epidemic disease that can be fatal and disproportionately affects older adults and people with chronic medical conditions. Diabetes medication and high blood glucose levels are significant predictors of COVID-19-related death and disease severity. Diabetic individuals, particularly geriatric patients or those with preexisting comorbidities, are at higher risk of COVID-19 infection, hospitalization, ICU admission, and death than those without diabetes. The COVID-19 outbreak has significantly changed everyone's lives, and identifying infected patients in a timely manner is critical to overcoming this challenge. The Real-Time Polymerase Chain Reaction (RT-PCR) assay is currently the gold standard for COVID-19 detection; however, RT-PCR is time-consuming and costly and requires a lab kit that is difficult to obtain during crises and epidemics. This work proposes CIDICXR-Net50, a ResNet-50-based Transfer Learning (TL) method for COVID-19 detection via Chest X-ray (CXR) image classification. The model is built by substituting a new classification head for the final ResNet-50 classifier layer. It is trained on 3,923 chest X-ray images: 1,360 viral pneumonia, 1,363 normal, and 1,200 COVID-19 CXR images. Its performance is evaluated against that of six other pre-trained models. CIDICXR-Net50 attained 99.11% accuracy on the provided dataset while maintaining 99.15% precision and recall. This study also explores potential relationships between COVID-19 and diabetes.
Affiliation(s)
- Ijaz Ahmad
- Digital Transition, Innovation and Health Service, Leonardo da Vinci Telematic University, Chieti, Italy
- Arcangelo Merla
- Department of Engineering and Geology (INGEO), University "G. d’Annunzio" Chieti-Pescara, Pescara, Italy
- Farman Ali
- Department of Computer Science and Engineering, School of Convergence, College of Computing and Informatics, Sungkyunkwan University, Seoul, Republic of Korea
- Babar Shah
- College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
- Ahmad Ali AlZubi
- Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia
- Mallak Ahmad AlZubi
- Faculty of Medicine, Jordan University of Science and Technology, Irbid, Jordan
52. Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. PMID: 38249785; PMCID: PMC10796150; DOI: 10.1002/widm.1510.
Abstract
Over the last decade, deep learning (DL) has driven a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as in industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated to improve our understanding of disease etiologies, enable early diagnosis, and assess disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges, including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases; the roles of imaging in translational and clinical pulmonary research; and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha
- Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242
53. Nahiduzzaman M, Goni MOF, Hassan R, Islam MR, Syfullah MK, Shahriar SM, Anower MS, Ahsan M, Haider J, Kowalski M. Parallel CNN-ELM: A multiclass classification of chest X-ray images to identify seventeen lung diseases including COVID-19. Expert Syst Appl 2023; 229:120528. PMID: 37274610; PMCID: PMC10223636; DOI: 10.1016/j.eswa.2023.120528.
Abstract
Numerous epidemic lung diseases such as COVID-19, tuberculosis (TB), and pneumonia have spread over the world, killing millions of people. Medical specialists struggle to identify these diseases correctly because of their subtle differences on chest X-ray (CXR) images. To assist medical experts, this study proposed a computer-aided lung illness identification method based on CXR images. For the first time, 17 different forms of lung disorders were considered, and the study was divided into six trials covering two, two, three, four, fourteen, and seventeen classes, respectively. The proposed framework, named CNN-ELM, combined the robust feature extraction capabilities of a lightweight parallel convolutional neural network (CNN) with the classification abilities of the extreme learning machine algorithm. A promising accuracy of 90.92% and an area under the curve (AUC) of 96.93% were achieved when all 17 classes were classified side by side. The framework also identified COVID-19 and TB with 99.37% and 99.98% accuracy, respectively, in 0.996 microseconds per image. The results further demonstrated that the framework could outperform existing state-of-the-art (SOTA) models. A secondary conclusion was that the framework retained its effectiveness over a range of real-world conditions, including balanced or unbalanced and large or small datasets, large multiclass or simple binary classification, and high- or low-resolution images. A prototype Android app was also developed to demonstrate the framework's potential for real-life deployment.
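The extreme learning machine (ELM) classifier named above has a particularly compact formulation: hidden-layer weights are random and fixed, and only the output weights are solved in closed form via a pseudo-inverse. A minimal NumPy sketch, with random vectors standing in for the CNN-extracted features (all shapes, the class count, and the activation are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the parallel CNN feature extractor: random "deep features".
X = rng.normal(size=(200, 64))        # 200 images, 64-dim feature vectors
y = rng.integers(0, 3, size=200)      # 3 of the 17 classes, for brevity
T = np.eye(3)[y]                      # one-hot targets

# ELM: a random, untrained hidden layer ...
W = rng.normal(size=(64, 128))
b = rng.normal(size=128)
H = np.tanh(X @ W + b)                # hidden activations, (200, 128)

# ... and output weights solved in one step with the Moore-Penrose inverse,
# instead of iterative backpropagation.
beta = np.linalg.pinv(H) @ T          # (128, 3)

pred = np.argmax(H @ beta, axis=1)
train_acc = np.mean(pred == y)
```

The closed-form solve is what makes ELM training fast relative to a fully backpropagated classifier head; only the CNN features need a forward pass.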
Affiliation(s)
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Omaer Faruq Goni
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Rakibul Hassan
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Khalid Syfullah
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Saleh Mohammed Shahriar
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Shamim Anower
- Department of Electrical & Electronic Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK
- Marcin Kowalski
- Institute of Optoelectronics, Military University of Technology, Gen. S. Kaliskiego 2, 00-908 Warsaw, Poland
54. Miyazaki A, Ikejima K, Nishio M, Yabuta M, Matsuo H, Onoue K, Matsunaga T, Nishioka E, Kono A, Yamada D, Oba K, Ishikura R, Murakami T. Computer-aided diagnosis of chest X-ray for COVID-19 diagnosis in external validation study by radiologists with and without deep learning system. Sci Rep 2023; 13:17533. PMID: 37845348; PMCID: PMC10579343; DOI: 10.1038/s41598-023-44818-9.
Abstract
The aim was to evaluate the diagnostic performance of our deep learning (DL) model for COVID-19 and to investigate whether referring to the model improved radiologists' diagnostic performance. Our datasets contained chest X-rays (CXRs) in three categories: normal (NORMAL), non-COVID-19 pneumonia (PNEUMONIA), and COVID-19 pneumonia (COVID). We used two public datasets and a private dataset collected from eight hospitals for the development and external validation of our DL model (26,393 CXRs). Eight radiologists performed two reading sessions: one with reference to CXRs only, and the other with reference to both CXRs and the results of the DL model. The evaluation metrics were accuracy, sensitivity, specificity, and area under the curve (AUC). The accuracy of our DL model was 0.733, and that of the eight radiologists without DL was 0.696 ± 0.031. There was a significant difference in AUC between the radiologists with and without DL for COVID versus NORMAL or PNEUMONIA (p = 0.0038). Our DL model alone showed better diagnostic performance than most radiologists, and it significantly improved radiologists' diagnostic performance for COVID versus NORMAL or PNEUMONIA.
Affiliation(s)
- Aki Miyazaki
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
- Kengo Ikejima
- Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo, 104-8560, Japan
- Mizuho Nishio
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
- Minoru Yabuta
- Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo, 104-8560, Japan
- Hidetoshi Matsuo
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
- Koji Onoue
- Department of Radiology, Kobe City Medical Center General Hospital, 2-1-1 Minatojimaminamimachi, Chuo-Ku, Kobe, 650-0047, Japan
- Department of Diagnostic Imaging and Interventional Radiology, Kyoto Katsura Hospital, 17 Yamada-Hirao, Nishikyo-Ku, Kyoto, 615-8256, Japan
- Takaaki Matsunaga
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
- Eiko Nishioka
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
- Atsushi Kono
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
- Daisuke Yamada
- Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo, 104-8560, Japan
- Ken Oba
- Department of Radiology, St. Luke's International Hospital, 9-1 Akashi-Cho, Chuo-Ku, Tokyo, 104-8560, Japan
- Reiichi Ishikura
- Department of Radiology, Kobe City Medical Center General Hospital, 2-1-1 Minatojimaminamimachi, Chuo-Ku, Kobe, 650-0047, Japan
- Takamichi Murakami
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-Cho, Chuo-Ku, Kobe, 650-0017, Japan
55. Socha M, Prażuch W, Suwalska A, Foszner P, Tobiasz J, Jaroszewicz J, Gruszczynska K, Sliwinska M, Nowak M, Gizycka B, Zapolska G, Popiela T, Przybylski G, Fiedor P, Pawlowska M, Flisiak R, Simon K, Walecki J, Cieszanowski A, Szurowska E, Marczyk M, Polanska J. Pathological changes or technical artefacts? The problem of the heterogenous databases in COVID-19 CXR image analysis. Comput Methods Programs Biomed 2023; 240:107684. PMID: 37356354; PMCID: PMC10278898; DOI: 10.1016/j.cmpb.2023.107684.
Abstract
BACKGROUND: When the COVID-19 pandemic commenced in 2020, scientists assisted medical specialists with diagnostic algorithm development. One research area related to COVID-19 diagnosis was medical imaging and its potential to support molecular tests. Unfortunately, several systems reported high accuracy in development but did not fare well in clinical application. The reason was poor generalization, a long-standing issue in AI development. Researchers identified many causes of this issue, referring to them as confounders: artefacts and methodological errors associated with a method. We aim to contribute to this effort by highlighting a previously undiscussed confounder related to image resolution. METHODS: 20,216 chest X-ray (CXR) images from centres worldwide were analyzed. The CXRs were bijectively projected into a 2D domain by performing Uniform Manifold Approximation and Projection (UMAP) embedding on radiomic features (rUMAP) or CNN-based neural features (nUMAP) from the penultimate layer of a pre-trained classification network. An additional 44,339 thorax CXRs were used for validation. A comprehensive analysis of the multimodality of the density distribution in the rUMAP/nUMAP domains, and its relation to the original image properties, was used to identify the main confounders. RESULTS: nUMAP revealed a hidden bias of neural networks towards image resolution, which the regular up-sampling procedure cannot compensate for. The issue appears regardless of the network architecture and is not observed in a high-resolution dataset. The impact of resolution heterogeneity can be partially diminished by applying advanced deep-learning-based super-resolution networks. CONCLUSIONS: rUMAP and nUMAP are effective tools for image homogeneity analysis and bias discovery, as demonstrated by applying them to COVID-19 image data; moreover, nUMAP can be applied to any type of data for which a deep neural network can be constructed. Advanced image super-resolution solutions are needed to reduce the impact of resolution diversity on the classification network's decisions.
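The embedding-based bias check described in this abstract can be illustrated with a dependency-free sketch. The paper embeds features with UMAP (rUMAP/nUMAP); here a plain PCA projection via SVD stands in for the embedding step, and synthetic vectors play the role of neural features from two acquisition sources. The point is the inspection itself: if the embedded density separates by image resolution, the features carry a resolution bias.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy neural features from two image sources with different native resolutions;
# a resolution bias shows up as separated clusters in the embedded domain.
low_res  = rng.normal(loc=0.0, size=(150, 32))
high_res = rng.normal(loc=3.0, size=(150, 32))
X = np.vstack([low_res, high_res])
resolution = np.array([0] * 150 + [1] * 150)   # 0 = low, 1 = high

# PCA-to-2D via SVD as a lightweight stand-in for the UMAP embedding step.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
emb = Xc @ Vt[:2].T                            # 2-D embedding, (300, 2)

# Bias check: do the two resolution groups separate along the embedding?
gap = abs(emb[resolution == 0, 0].mean() - emb[resolution == 1, 0].mean())
```

With real data one would color the 2-D embedding by source, resolution, or label and look for multimodal density structure, exactly the rUMAP/nUMAP inspection the abstract describes.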
Affiliation(s)
- Marek Socha
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
- Wojciech Prażuch
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
- Aleksandra Suwalska
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
- Paweł Foszner
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Department of Computer Graphics, Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland
- Joanna Tobiasz
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Department of Computer Graphics, Vision and Digital Systems, Silesian University of Technology, Gliwice, Poland
- Jerzy Jaroszewicz
- Department of Infectious Diseases and Hepatology, Medical University of Silesia, Katowice, Poland
- Katarzyna Gruszczynska
- Department of Radiology and Nuclear Medicine, Medical University of Silesia, Katowice, Poland
- Magdalena Sliwinska
- Department of Diagnostic Imaging, Voivodship Specialist Hospital, Wroclaw, Poland
- Mateusz Nowak
- Department of Radiology, Silesian Hospital, Cieszyn, Poland
- Barbara Gizycka
- Department of Imaging Diagnostics, MEGREZ Hospital, Tychy, Poland
- Tadeusz Popiela
- Department of Radiology, Jagiellonian University Medical College, Krakow, Poland
- Grzegorz Przybylski
- Department of Lung Diseases, Cancer and Tuberculosis, Kujawsko-Pomorskie Pulmonology Center, Bydgoszcz, Poland
- Piotr Fiedor
- Department of General and Transplantation Surgery, Medical University of Warsaw, Warsaw, Poland
- Malgorzata Pawlowska
- Department of Infectious Diseases and Hepatology, Collegium Medicum in Bydgoszcz, Nicolaus Copernicus University, Torun, Poland
- Robert Flisiak
- Department of Infectious Diseases and Hepatology, Medical University of Bialystok, Bialystok, Poland
- Krzysztof Simon
- Department of Infectious Diseases and Hepatology, Wroclaw Medical University, Wroclaw, Poland
- Jerzy Walecki
- Department of Radiology, Centre of Postgraduate Medical Education, Central Clinical Hospital of the Ministry of Interior in Warsaw, Poland
- Andrzej Cieszanowski
- Department of Radiology I, The Maria Sklodowska-Curie National Research Institute of Oncology, Warsaw, Poland
- Edyta Szurowska
- 2nd Department of Radiology, Medical University of Gdansk, Poland
- Michal Marczyk
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland; Yale Cancer Center, Yale School of Medicine, New Haven, CT, USA
- Joanna Polanska
- Department of Data Science and Engineering, Silesian University of Technology, Gliwice, Poland
56. Kim K, Lee JH, Oh SJ, Chung MJ. AI-based computer-aided diagnostic system of chest digital tomography synthesis: Demonstrating comparative advantage with X-ray-based AI systems. Comput Methods Programs Biomed 2023; 240:107643. PMID: 37348439; DOI: 10.1016/j.cmpb.2023.107643.
Abstract
BACKGROUND: Compared with chest X-ray (CXR) imaging, which is a single image projected from the front of the patient, chest digital tomosynthesis (CDTS) imaging can be more advantageous for lung lesion detection because it acquires multiple images projected from multiple angles. Various clinical comparative analyses have been reported to demonstrate this, but no artificial intelligence (AI)-based comparative analysis exists. Existing AI-based computer-aided detection (CAD) systems for lung lesion diagnosis have been developed mainly on CXR images; a CAD system based on CDTS, which uses multi-angle images of the patient, has not been proposed or verified against CXR-based counterparts. OBJECTIVE: This study develops and tests a CDTS-based AI CAD system for detecting lung lesions and demonstrates its performance improvement over CXR-based AI CAD. METHODS: We used five projection images as input for the CDTS-based AI model and a single projection image as input for the CXR-based AI model. The projection images were obtained by virtual projection onto the three-dimensional (3D) stack of computed tomography (CT) slices of each patient's lungs, with the bed area removed: one frontal view plus views rotated 30° and 60° to the left and right. The frontal projection served as input for the CXR-based AI model, while the CDTS-based AI model used all five projections, feeding each to one of five WideResNet-50 models and combining their outputs through an ensemble to obtain the final prediction. For training and evaluation, 500 healthy, 206 tuberculosis, and 242 pneumonia cases were used with three-fold cross-validation. RESULTS: The proposed CDTS-based AI CAD system yielded sensitivities of 0.782 and 0.785 and accuracies of 0.895 and 0.837 for (binary) detection of tuberculosis and pneumonia, respectively, against normal subjects. These results exceed the sensitivities of 0.728 and 0.698 and accuracies of 0.874 and 0.826 obtained by the CXR-based AI CAD, which uses only the single frontal projection. CDTS-based AI CAD thus improved the sensitivity for tuberculosis and pneumonia by 5.4% and 8.7%, respectively, without loss of accuracy. CONCLUSIONS: This study comparatively demonstrates that CDTS-based AI CAD can outperform CXR-based CAD, suggesting that the clinical application of CDTS can be enhanced. Our code is available at https://github.com/kskim-phd/CDTS-CAD-P.
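At inference time, the five-model ensemble described above reduces to averaging the per-view class probabilities. A minimal NumPy sketch, with random logits standing in for the outputs of the five per-angle WideResNet-50 models (batch size and class count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for the five per-angle model outputs (front, ±30°, ±60°)
# on a batch of 4 studies, with binary normal-vs-lesion logits.
per_view_logits = rng.normal(size=(5, 4, 2))

# Ensemble: average the per-view class probabilities, then take the argmax.
probs = softmax(per_view_logits)      # (5 views, 4 studies, 2 classes)
ensemble_probs = probs.mean(axis=0)   # (4, 2), still a valid distribution
prediction = ensemble_probs.argmax(axis=1)
```

Probability averaging is one common ensembling choice; majority voting or logit averaging would be drop-in alternatives.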
Affiliation(s)
- Kyungsu Kim
- Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Ju Hwan Lee
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Seong Je Oh
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Myung Jin Chung
- Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea; Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
57. Tan M, Xia J, Luo H, Meng G, Zhu Z. Applying the digital data and the bioinformatics tools in SARS-CoV-2 research. Comput Struct Biotechnol J 2023; 21:4697-4705. PMID: 37841328; PMCID: PMC10568291; DOI: 10.1016/j.csbj.2023.09.044.
Abstract
Bioinformatics has played a crucial role in the scientific effort against the pandemic of coronavirus disease 2019 (COVID-19) caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Advances in novel algorithms, big data technology, artificial intelligence, and deep learning have assisted the development of bioinformatics tools to analyze the daily increasing volume of SARS-CoV-2 data in recent years. These tools have been applied in genomic analyses, evolutionary tracking, epidemiological analyses, protein structure interpretation, studies of virus-host interaction, and clinical practice. To promote future in-silico analysis, we conducted a review summarizing the databases, web services, and software applied in SARS-CoV-2 research. These digital resources may also contribute to research on other coronaviruses and non-coronavirus viruses.
Affiliation(s)
- Meng Tan
- School of Life Sciences, Chongqing University, Chongqing, China
- Jiaxin Xia
- School of Life Sciences, Chongqing University, Chongqing, China
- Haitao Luo
- School of Life Sciences, Chongqing University, Chongqing, China
- Geng Meng
- College of Veterinary Medicine, China Agricultural University, Beijing, China
- Zhenglin Zhu
- School of Life Sciences, Chongqing University, Chongqing, China
58. Nizam NB, Siddiquee SM, Shirin M, Bhuiyan MIH, Hasan T. COVID-19 Severity Prediction from Chest X-ray Images Using an Anatomy-Aware Deep Learning Model. J Digit Imaging 2023; 36:2100-2112. PMID: 37369941; PMCID: PMC10502002; DOI: 10.1007/s10278-023-00861-6.
Abstract
The COVID-19 pandemic has adversely affected patient management systems in hospitals around the world. Radiological imaging, especially chest X-ray (CXR) and lung computed tomography (CT) scans, plays a vital role in the severity analysis of hospitalized COVID-19 patients. However, with an increasing number of patients and a shortage of skilled radiologists, automated assessment of COVID-19 severity using medical image analysis has become increasingly important. CXR imaging, the most frequently used diagnostic imaging modality in the world, plays a significant role in assessing the severity of pneumonia, especially in low-resource hospitals. Previous methods that automatically predict the severity of COVID-19 pneumonia mainly pool features from pre-trained CXR models without explicitly considering the underlying human anatomy. This paper proposes an anatomy-aware (AA) deep learning model that learns generic features from X-ray images while incorporating the underlying anatomical information. Using a pre-trained model and lung segmentation masks, the model generates a feature vector that includes disease-level features and lung involvement scores. Four open-source datasets, along with an in-house annotated test set, were used for training and evaluation. The proposed method improves the geographical extent score by 11% in terms of mean squared error (MSE) while preserving the benchmark result for the lung opacity score. These results demonstrate the effectiveness of the AA model for COVID-19 severity prediction from chest X-ray images. The algorithm can be used for COVID-19 severity prediction in hospitals in low-resource settings, especially where skilled radiologists are scarce.
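The anatomy-aware feature construction, generic deep features concatenated with explicit lung involvement scores computed under a segmentation mask, can be sketched as follows. All shapes, the per-lung split at the image midline, and the involvement statistic (mean predicted opacity over lung pixels) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy inputs: a pooled feature vector from a pre-trained CXR model, a
# per-pixel opacity score map, and a binary lung segmentation mask.
deep_features = rng.normal(size=512)        # disease-level features
opacity_map = rng.random((224, 224))        # per-pixel opacity in [0, 1)
lung_mask = np.zeros((224, 224), dtype=bool)
lung_mask[40:200, 20:100] = True            # "left" lung region
lung_mask[40:200, 124:204] = True           # "right" lung region

# Lung involvement scores: mean predicted opacity restricted to lung pixels,
# computed per lung half so anatomical extent enters the feature vector.
left = opacity_map[:, :112][lung_mask[:, :112]].mean()
right = opacity_map[:, 112:][lung_mask[:, 112:]].mean()

# Anatomy-aware vector = generic deep features + explicit involvement scores.
aa_vector = np.concatenate([deep_features, [left, right]])
```

A severity regressor trained on `aa_vector` then sees anatomical involvement explicitly rather than having to rediscover it from pooled features.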
Affiliation(s)
- Nusrat Binta Nizam
- mHealth Research Group, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Sadi Mohammad Siddiquee
- mHealth Research Group, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Mahbuba Shirin
- Department of Radiology and Imaging, Bangabandhu Sheikh Mujib Medical University, Shahbagh, Dhaka, 1000, Bangladesh
- Mohammed Imamul Hassan Bhuiyan
- Department of Electrical and Electronics Engineering (EEE), Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
- Taufiq Hasan
- mHealth Research Group, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, 1205, Bangladesh
- Center for Bioengineering Innovation and Design (CBID), Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
59. Ahmed MAO, Abbas IA, AbdelSatar Y. HDSNE a new unsupervised multiple image database fusion learning algorithm with flexible and crispy production of one database: a proof case study of lung infection diagnose in chest X-ray images. BMC Med Imaging 2023; 23:134. PMID: 37718458; PMCID: PMC10506286; DOI: 10.1186/s12880-023-01078-3.
Abstract
The continuous release of image databases with fully or partially identical inner categories dramatically hampers the production of autonomous computer-aided diagnostics (CAD) systems for truly comprehensive medical diagnosis. The first challenge is the frequent massive bulk release of medical image databases, which often suffer from two common drawbacks: image duplication and corruption. The many subsequent releases of the same data with the same classes or categories come with no clear evidence of success in concatenating those identical classes across image databases. This issue is a stumbling block for hypothesis-based experiments aiming at a single learning model that can classify all of them correctly. Removing redundant data, enhancing performance, and optimizing energy resources are among the most challenging aspects. In this article, we propose a global data aggregation scale model that incorporates six image databases selected from specific global resources. The proposed learner is trained on all the unique patterns within any given data release, thereby hypothetically creating a unique dataset. The MD5 hashing algorithm generates a unique hash value for each image, making it suitable for duplicate removal, while t-distributed stochastic neighbor embedding (t-SNE), with a tunable perplexity parameter, represents the data dimensions. Both algorithms are applied recursively, producing a balanced, uniform database containing equal samples per category: normal, pneumonia, and coronavirus disease of 2019 (COVID-19). We evaluated the performance of all proposed datasets and the new automated version using the Inception V3 pre-trained model with various evaluation metrics. The proposed scale model achieved better results than traditional data aggregation, reaching a high accuracy of 98.48% along with high precision, recall, and F1-score. Statistical t-tests confirmed the significance of these results, and the final dataset outperformed all other datasets across all metric values when diagnosing various lung infections under the same conditions.
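The MD5 duplicate-removal step has a direct standard-library sketch: hash each image's raw bytes and keep only the first occurrence of each digest. This catches byte-identical duplicates only (near-duplicates need perceptual hashing); the byte strings below are made-up stand-ins for image files.

```python
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 hex digest used as a duplicate-detection key for an image's bytes."""
    return hashlib.md5(data).hexdigest()

def deduplicate(images):
    """Keep the first occurrence of each unique byte content, as in the
    MD5-based duplicate-removal step described above."""
    seen, unique = set(), []
    for img in images:
        key = md5_of(img)
        if key not in seen:
            seen.add(key)
            unique.append(img)
    return unique

# Three "images" as raw byte strings; the second duplicates the first.
pool = [b"\x89PNG-covid-001", b"\x89PNG-covid-001", b"\x89PNG-normal-042"]
kept = deduplicate(pool)
```

In an aggregation pipeline this runs once per incoming database release, with the `seen` set persisted across releases so cross-database duplicates are also dropped.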
Affiliation(s)
- Muhammad Atta Othman Ahmed
- Department of Computer Science, Faculty of Computers and Information, Luxor University, Luxor, 85951, Egypt
- Ibrahim A Abbas
- Mathematics Department, Faculty of Science, Sohag University, Sohag, 82511, Egypt
- Yasser AbdelSatar
- Mathematics Department, Faculty of Science, Sohag University, Sohag, 82511, Egypt
60. Kordon F, Stiglmayr M, Maier A, Martín Vicario C, Pertlwieser T, Kunze H. A principled representation of elongated structures using heatmaps. Sci Rep 2023; 13:15253. PMID: 37709790; PMCID: PMC10502041; DOI: 10.1038/s41598-023-41221-2.
Abstract
The detection of elongated structures like lines or edges is an essential component of semantic image analysis. Classical approaches that rely on significant image gradients quickly reach their limits when the structure is context-dependent, amorphous, or not directly visible. This study introduces a principled mathematical description of elongated structures with various origins and shapes. Among other uses, it serves as an expressive operational description of target functions that can be well approximated by Convolutional Neural Networks. The nominal position of a curve and its positional uncertainty are encoded as a heatmap by convolving the curve distribution with a filter function. We propose a low-error approximation to the expensive numerical integration by evaluating a distance-dependent function, enabling a lightweight implementation with linear time complexity. We analyze the method's numerical approximation error and behavior for different curve types and signal-to-noise levels. Applications to surgical 2D and 3D data, semantic boundary detection, skeletonization, and other related tasks demonstrate the method's versatility at low error.
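The distance-dependent evaluation the abstract describes can be sketched as follows. The Gaussian filter function, the polyline representation of the curve, and all names below are illustrative assumptions for this sketch, not the paper's formulation: each pixel's heatmap value depends only on its distance to the curve, which approximates convolving the curve with the filter.

```python
import math


def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))


def heatmap(width, height, polyline, sigma=2.0):
    """Gaussian heatmap of a polyline curve.

    Each pixel's value is a function of its distance to the curve only,
    so no per-pixel numerical integration over the curve is needed.
    """
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            d = min(point_segment_distance((x, y), a, b)
                    for a, b in zip(polyline, polyline[1:]))
            row.append(math.exp(-d * d / (2.0 * sigma * sigma)))
        grid.append(row)
    return grid


# A horizontal segment from (1, 4) to (6, 4) on an 8x8 grid.
hm = heatmap(8, 8, [(1, 4), (6, 4)])
```

Pixels on the curve get value 1.0 and the response decays smoothly with distance, which is the property that makes such targets well suited to CNN regression.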
Affiliation(s)
- Florian Kordon
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander Universität Erlangen-Nürnberg, 91052, Erlangen, Germany
- Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
- Michael Stiglmayr
- Optimization Group, Institute of Mathematical Modelling, Analysis and Computational Mathematics, University of Wuppertal, 42119, Wuppertal, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander Universität Erlangen-Nürnberg, 91052, Erlangen, Germany
- Celia Martín Vicario
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Tobias Pertlwieser
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Holger Kunze
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
61
Gómez-Peralta JI, Bokhimi X, Quintana P. Convolutional Neural Networks to Assist the Assessment of Lattice Parameters from X-ray Powder Diffraction. J Phys Chem A 2023; 127:7655-7664. [PMID: 37647548 DOI: 10.1021/acs.jpca.3c03860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
This article presents the development of convolutional neural networks (CNNs) for the estimation of lattice parameters in organic compounds across various crystal systems. A comprehensive collection of 92,085 organic compounds was utilized to train the CNNs, encompassing crystals with unit cells containing up to 512 atoms and a maximum unit cell volume of 8000 Å³. Simulated diffraction patterns were generated for each compound, comprising four diffraction patterns with different crystal sizes. These diffraction patterns were generated within a 2θ window of 3-60°, employing a step size of 0.02051°. Two distinct CNN architectures were developed with differing input data. The first CNN, referred to as XRD-CNN, was trained solely on diffraction patterns. On the test set, XRD-CNN demonstrated a mean absolute percentage error (MAPE) of 11.04% for unit cell vectors, 7.40% for angles, and 26.83% for unit cell volume. The second CNN, XRDElem-CNN, incorporated a binary representation of the atoms within the unit cell as an additional input. XRDElem-CNN achieved improved performance, yielding MAPE values of 4.73% for unit cell vectors, 6.49% for angles, and 6.05% for unit cell volume. To validate the performance of XRDElem-CNN, real diffraction patterns obtained from a conventional laboratory diffractometer (Cu Kα wavelength) were employed. Various representations of the atoms within the unit cell were proposed, which were computationally efficient to evaluate with the CNNs. The lattice parameters assessed by XRDElem-CNN were validated using the Lp-search method, showing agreement with the reported values.
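The MAPE figures reported above follow the standard definition; this helper is a generic sketch of that metric, not the authors' evaluation code.

```python
def mape(true_values, predicted_values):
    """Mean absolute percentage error, in percent.

    Averages |prediction - truth| / |truth| over all pairs; assumes
    no true value is zero (lattice parameters never are).
    """
    errors = [abs(p - t) / abs(t) * 100.0
              for t, p in zip(true_values, predicted_values)]
    return sum(errors) / len(errors)
```

For example, predictions of 11.0 and 18.0 against true values of 10.0 and 20.0 give a MAPE of 10%.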
Affiliation(s)
- Juan Iván Gómez-Peralta
- Laboratorio Nacional de Nano y Biomateriales, CINVESTAV-IPN, Antigua Carretera a Progreso km 6, A. P. 37, 97310 Mérida, Yucatán, Mexico
- Xim Bokhimi
- Instituto de Física, Universidad Nacional Autónoma de México, A. P. 20-364, 01000 Ciudad de México, DF, Mexico
- Patricia Quintana
- Laboratorio Nacional de Nano y Biomateriales, CINVESTAV-IPN, Antigua Carretera a Progreso km 6, A. P. 37, 97310 Mérida, Yucatán, Mexico
62
Rangel G, Cuevas-Tello JC, Rivera M, Renteria O. A Deep Learning Model Based on Capsule Networks for COVID Diagnostics through X-ray Images. Diagnostics (Basel) 2023; 13:2858. [PMID: 37685396 PMCID: PMC10486517 DOI: 10.3390/diagnostics13172858] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 08/22/2023] [Accepted: 08/24/2023] [Indexed: 09/10/2023] Open
Abstract
X-ray diagnostics are widely used to detect various diseases, such as bone fractures, pneumonia, or intracranial hemorrhage. The method is simple and accessible in most hospitals but requires an expert who is sometimes unavailable. Today, some diagnoses are made with the help of deep learning algorithms based on Convolutional Neural Networks (CNNs), but these algorithms show limitations. Recently, Capsule Networks (CapsNet) have been proposed to overcome these problems. In our work, CapsNet is used to detect whether a chest X-ray image shows disease (COVID or pneumonia) or is healthy. An improved model called DRCaps is proposed, which combines the advantages of CapsNet with the dilation rate (dr) parameter to manage images at 226 × 226 resolution. We performed experiments with 16,669 chest images, on which our model achieved an accuracy of 90%. Furthermore, the model size is 11M parameters with a reconstruction stage, which helps to avoid overfitting. The experiments show how the reconstruction stage works and how the max-pooling operation can be avoided by using stride and dilation to downsample the convolutional layers. DRCaps is superior to other comparable models in terms of accuracy, parameter count, and image size handling. The main idea is to keep the model as simple as possible without data augmentation or a complex preprocessing stage.
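The role of stride and dilation in replacing max-pooling can be illustrated with the standard convolution output-size formula; `conv_output_size` is a generic helper for this arithmetic, not part of DRCaps itself.

```python
def conv_output_size(n, kernel=3, stride=1, padding=0, dilation=1):
    """Spatial output size of a 1-D/2-D convolution along one axis.

    Dilation enlarges the effective kernel to dilation*(kernel-1)+1
    without adding parameters; stride > 1 downsamples the feature map,
    which is how pooling layers can be avoided entirely.
    """
    effective_kernel = dilation * (kernel - 1) + 1
    return (n + 2 * padding - effective_kernel) // stride + 1


# With a 3x3 kernel, dilation 2 gives an effective 5x5 receptive field:
# conv_output_size(226, kernel=3, dilation=2) -> 222
# A stride-2, padding-1 convolution halves a 226-pixel axis:
# conv_output_size(226, kernel=3, stride=2, padding=1) -> 113
```

This is the usual trade-off the abstract alludes to: dilation widens the receptive field cheaply, while stride provides the downsampling that max-pooling would otherwise perform.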
Affiliation(s)
- Gabriela Rangel
- Facultad de Ingeniería, Universidad Autonoma de San Luis Potosi, San Luis Potosi 78290, Mexico
- Tecnologico Nacional de Mexico/ITSSLPC, San Luis Potosi 78421, Mexico
- Juan C. Cuevas-Tello
- Facultad de Ingeniería, Universidad Autonoma de San Luis Potosi, San Luis Potosi 78290, Mexico
- Mariano Rivera
- Centro de Investigacion en Matematicas, Guanajuato 36000, Mexico
- Octavio Renteria
- Centro de Investigacion en Matematicas, Guanajuato 36000, Mexico
63
Celik G. CovidCoughNet: A new method based on convolutional neural networks and deep feature extraction using pitch-shifting data augmentation for covid-19 detection from cough, breath, and voice signals. Comput Biol Med 2023; 163:107153. [PMID: 37321101 PMCID: PMC10249348 DOI: 10.1016/j.compbiomed.2023.107153] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 05/25/2023] [Accepted: 06/07/2023] [Indexed: 06/17/2023]
Abstract
This study proposes a new deep learning-based method that demonstrates high performance in detecting Covid-19 disease from cough, breath, and voice signals. The method, named CovidCoughNet, consists of a deep feature extraction network (InceptionFireNet) and a prediction network (DeepConvNet). The InceptionFireNet architecture, based on Inception and Fire modules, was designed to extract important feature maps. The DeepConvNet architecture, made up of convolutional neural network blocks, was developed to classify the feature vectors obtained from InceptionFireNet. The COUGHVID dataset containing cough data and the Coswara dataset containing cough, breath, and voice signals were used as the datasets. The pitch-shifting technique was used to augment the signal data, which contributed significantly to improving performance. Additionally, chroma features (CF), root mean square energy (RMSE), spectral centroid (SC), spectral bandwidth (SB), spectral rolloff (SR), zero crossing rate (ZCR), and Mel frequency cepstral coefficients (MFCC) were extracted from the voice signals. Experimental studies showed that the pitch-shifting technique improved performance by around 3% compared to raw signals. When the proposed model was used with the COUGHVID dataset (Healthy, Covid-19, and Symptomatic), a high performance of 99.19% accuracy, 0.99 precision, 0.98 recall, 0.98 F1-score, 97.77% specificity, and 98.44% AUC was achieved. Similarly, when the voice data in the Coswara dataset were used, higher performance was achieved than in the cough and breath experiments, with 99.63% accuracy, 100% precision, 0.99 recall, 0.99 F1-score, 99.24% specificity, and 99.24% AUC. Moreover, when compared with current studies in the literature, the proposed model exhibited highly successful performance.
The codes and details of the experimental studies can be accessed from the relevant Github page: (https://github.com/GaffariCelik/CovidCoughNet).
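Of the features listed above, the zero crossing rate is the simplest to sketch; this pure-Python helper is illustrative only (in practice such features, like pitch-shifting and MFCCs, are typically computed with an audio library such as librosa).

```python
def zero_crossing_rate(signal):
    """Fraction of consecutive sample pairs whose signs differ.

    A cheap time-domain proxy for spectral content, often extracted
    alongside MFCCs for cough/voice classification.
    """
    crossings = sum(1 for a, b in zip(signal, signal[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(signal) - 1)
```

A rapidly alternating signal has a ZCR near 1, while a slowly varying one stays near 0, which is why voiced and unvoiced segments separate well on this feature.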
Affiliation(s)
- Gaffari Celik
- Agri Ibrahim Cecen University, Department of Computer Technology, Agri, Turkey
64
Ghassemi N, Shoeibi A, Khodatars M, Heras J, Rahimi A, Zare A, Zhang YD, Pachori RB, Gorriz JM. Automatic diagnosis of COVID-19 from CT images using CycleGAN and transfer learning. Appl Soft Comput 2023; 144:110511. [PMID: 37346824 PMCID: PMC10263244 DOI: 10.1016/j.asoc.2023.110511] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 08/23/2022] [Accepted: 06/08/2023] [Indexed: 06/23/2023]
Abstract
The outbreak of coronavirus disease (COVID-19) has changed the lives of most people on Earth. Given the high prevalence of this disease, its correct diagnosis, needed to quarantine patients, is of the utmost importance in fighting this pandemic. Among the various modalities used for diagnosis, medical imaging, especially computed tomography (CT) imaging, has been the focus of many previous studies due to its accuracy and availability. In addition, automation of diagnostic methods can be of great help to physicians. In this paper, a method based on pre-trained deep neural networks is presented which, by taking advantage of a cycle-consistent generative adversarial network (CycleGAN) for data augmentation, reaches state-of-the-art performance for the task at hand, i.e., 99.60% accuracy. To evaluate the method, a dataset containing 3163 images from 189 patients was collected and labeled by physicians. Unlike prior datasets, the normal data were collected from people suspected of having COVID-19 rather than from patients with other diseases, and this database is made publicly available. Moreover, the method's reliability is further evaluated with calibration metrics, and its decisions are interpreted with Grad-CAM, which also highlights suspicious regions as an additional output, making the decisions trustworthy and explainable.
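Calibration can be quantified, for example, with the expected calibration error (ECE). The abstract does not name the specific calibration metric used, so this helper is a generic sketch of one common choice, with illustrative bin count and names.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error (ECE).

    Groups predictions into equal-width confidence bins and averages
    |accuracy - mean confidence| per bin, weighted by bin size.
    Lower is better calibrated; 0 means confidence matches accuracy.
    """
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece
```

An overconfident model (say, 95% confidence but only 75% accuracy) incurs an ECE of about 0.2, whereas a perfectly calibrated one scores 0.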
Affiliation(s)
- Navid Ghassemi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Afshin Shoeibi
- Faculty of Electrical Engineering, FPGA Lab, K. N. Toosi University of Technology, Tehran, Iran
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Marjane Khodatars
- Department of Medical Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
- Jonathan Heras
- Department of Mathematics and Computer Science, University of La Rioja, La Rioja, Spain
- Alireza Rahimi
- Computer Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Assef Zare
- Faculty of Electrical Engineering, Gonabad Branch, Islamic Azad University, Gonabad, Iran
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, UK
- Ram Bilas Pachori
- Department of Electrical Engineering, Indian Institute of Technology Indore, Indore 453552, India
- J Manuel Gorriz
- Department of Signal Theory, Networking and Communications, Universidad de Granada, Spain
- Department of Psychiatry, University of Cambridge, UK
65
Guo K, Chen J, Qiu T, Guo S, Luo T, Chen T, Ren S. MedGAN: An adaptive GAN approach for medical image generation. Comput Biol Med 2023; 163:107119. [PMID: 37364533 DOI: 10.1016/j.compbiomed.2023.107119] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 05/09/2023] [Accepted: 05/30/2023] [Indexed: 06/28/2023]
Abstract
Generative adversarial networks (GANs) and their variants, as effective methods for generating visually appealing images, have shown great potential in a range of medical imaging applications over the past decade. However, some issues remain insufficiently investigated: many models still suffer from mode collapse, vanishing gradients, and convergence failure. Because medical images differ from typical RGB images in complexity and dimensionality, we propose an adaptive generative adversarial network, MedGAN, to mitigate these issues. Specifically, we first use the Wasserstein loss as a convergence metric to measure the degree of convergence of the generator and the discriminator. Then, we adaptively train MedGAN based on this metric. Finally, we generate medical images with MedGAN and use them to build few-shot medical data learning models for disease classification and lesion localization. On demodicosis, blister, molluscum, and parakeratosis datasets, our experimental results verify the advantages of MedGAN in model convergence, training speed, and visual quality of generated samples. We believe this approach can be generalized to other medical applications and contribute to radiologists' efforts in disease diagnosis. The source code can be downloaded at https://github.com/geyao-c/MedGAN.
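For intuition about the Wasserstein distance that underlies the loss above: in one dimension, the empirical 1-Wasserstein distance between two equal-sized samples has a closed form as the mean absolute difference of sorted values. In the GAN itself the distance is estimated by a learned critic rather than computed this way; this sketch only illustrates the quantity being approximated.

```python
def wasserstein_1d(xs, ys):
    """Empirical 1-Wasserstein distance between two equal-sized 1-D samples.

    Equals the mean absolute difference between the sorted samples,
    i.e. the cheapest way to transport one empirical distribution
    onto the other.
    """
    assert len(xs) == len(ys), "samples must be the same size"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Unlike divergences such as KL, this distance stays finite and smooth even when the two samples do not overlap, which is why it is attractive as a convergence signal.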
Affiliation(s)
- Kehua Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Jie Chen
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Tian Qiu
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Shaojun Guo
- National Innovation of Defense Technology, Academy of Military Sciences PLA China, Fengtai District, Beijing 100071, China
- Tao Luo
- Huawei Technologies Co., Ltd, Changsha 410006, China
- Tianyu Chen
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Sheng Ren
- School of Computer Science and Engineering, Hunan University of Arts and Sciences, Changde 415000, China
66
Hadi MU, Qureshi R, Ahmed A, Iftikhar N. A lightweight CORONA-NET for COVID-19 detection in X-ray images. EXPERT SYSTEMS WITH APPLICATIONS 2023; 225:120023. [PMID: 37063778 PMCID: PMC10088342 DOI: 10.1016/j.eswa.2023.120023] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 03/28/2023] [Accepted: 03/31/2023] [Indexed: 06/19/2023]
Abstract
Since December 2019, COVID-19 has posed a most serious threat to human life. With the advancement of vaccination programs around the globe, the need to diagnose COVID-19 quickly and with minimal logistics has become all the more important. Consequently, the fastest diagnostic option to stop COVID-19 from spreading, especially among senior patients, is the development of an automated detection system. This study provides a lightweight deep learning method, called CORONA-NET, that incorporates a convolutional neural network (CNN), the discrete wavelet transform (DWT), and a long short-term memory (LSTM) network for diagnosing COVID-19 from chest X-ray images. In this system, the CNN performs deep feature extraction, the DWT reduces yet strengthens the feature vector, and the LSTM performs the final prediction. The dataset included 3000 X-rays, 1000 of which were COVID-19 images obtained locally. Within minutes of the test, the proposed test platform's prototype can accurately detect COVID-19 patients. The proposed method achieves state-of-the-art performance in comparison with existing deep learning methods. We hope that the suggested method will hasten clinical diagnosis and may be used for patients in remote areas where clinical labs are not easily accessible due to a lack of resources, location, or other factors.
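The DWT-based reduction of the feature vector can be illustrated with a single level of the Haar transform, the simplest wavelet; the abstract does not state which wavelet CORONA-NET uses, so this is an illustrative sketch of the general idea: pairwise averages keep the coarse structure at half the length, while pairwise differences keep the detail.

```python
def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail): pairwise averages and pairwise
    half-differences. Each output is half the input length, so keeping
    only the approximation halves the feature vector.
    Assumes an even-length input.
    """
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail


def haar_idwt(approx, detail):
    """Exact inverse: recover the original samples from both bands."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

Because the inverse is exact, the transform itself loses nothing; compression comes from discarding or thresholding the detail band.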
Affiliation(s)
- Muhammad Usman Hadi
- Nanotechnology and Integrated Bio-Engineering Centre (NIBEC), School of Engineering, Ulster University, BT15 1AP Belfast, UK
- Rizwan Qureshi
- Department of Imaging Physics, MD Anderson Cancer Center, The University of Texas, Houston, TX 77030, USA
- Ayesha Ahmed
- Department of Radiology, Aalborg University Hospital, Aalborg 9000, Denmark
- Nadeem Iftikhar
- University College of Northern Denmark, Aalborg 9200, Denmark
67
Chang TY, Huang CK, Weng CH, Chen JY. Feature-based deep neural network approach for predicting mortality risk in patients with COVID-19. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 2023; 124:106644. [PMID: 37366394 PMCID: PMC10277846 DOI: 10.1016/j.engappai.2023.106644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/29/2022] [Revised: 05/20/2023] [Accepted: 06/12/2023] [Indexed: 06/28/2023]
Abstract
In this study, we integrate deep neural networks (DNNs) with hybrid approaches (feature selection and instance clustering) to build models for predicting mortality risk in patients with COVID-19. We use cross-validation to evaluate these prediction models, which include a feature-based DNN, a cluster-based DNN, a plain DNN, and a neural network (multi-layer perceptron). A COVID-19 dataset with 12,020 instances and 10 cross-validation methods are used to evaluate the prediction models. The experimental results showed that the proposed feature-based DNN model, with Recall of 98.62%, F1-score of 91.99%, Accuracy of 91.41%, and a False Negative Rate of 1.38%, outperforms the original prediction model (neural network). Furthermore, the proposed approach uses only the top 5 features to build a DNN prediction model whose performance matches that of the model built with all 57 features. The novelty of this study lies in integrating feature selection, instance clustering, and DNN techniques to improve prediction performance. Moreover, the proposed approach, although built with fewer features, performs much better than the original prediction models on many metrics while still maintaining high prediction performance.
Affiliation(s)
- Thing-Yuan Chang
- Department of Information Management, National Chin-Yi University of Technology, Taichung 41130, Taiwan, Republic of China
- Cheng-Kui Huang
- Department of Business Administration, National Chung Cheng University, 168, University Rd., Min-Hsiung, Chia-Yi, Taiwan, Republic of China
- Cheng-Hsiung Weng
- Department of Information Management, National Chin-Yi University of Technology, Taichung 41130, Taiwan, Republic of China
- Department of Information Management, National Changhua University of Education, Changhua City, Changhua County, Taiwan, Republic of China
- Jing-Yuan Chen
- Department of Information Management, National Chin-Yi University of Technology, Taichung 41130, Taiwan, Republic of China
68
Bagheri M, Hallaj T, Ansari L, Pakdel FG. Detection of Coronavirus Disease 2019 (COVID-19) by TaqMan Real-Time PCR in Iran. MAEDICA 2023; 18:442-446. [PMID: 38023762 PMCID: PMC10674112 DOI: 10.26574/maedica.2023.18.3.442] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/01/2023]
Abstract
Introduction: Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a positive-sense single-strand RNA virus and causes Coronavirus disease 2019 (COVID-19). Coronaviruses significantly impact the human respiratory tract. Coronavirus disease is potentially fatal and transmissible worldwide. In this study, we evaluated the presence or absence of SARS-CoV-2 in 220 patients with unexplained pneumonia by TaqMan real-time PCR assay targeting the open reading frame (ORF1ab) and nucleocapsid (N) protein genes. Materials and methods: In total, 224 patients entered the study. Upper and lower respiratory tract secretion samples were obtained from patients during 2020. Samples comprised nose and throat swabs with viral transport medium. RNA was isolated from clinical samples with the GenePure Plus fully automatic Nucleic Acid Purification System, NPA-32+ (Hangzhou Bioer Technology Co. Ltd, Hangzhou, China). Outcomes: 72.32% of cases were positive for COVID-19. All positive cases had the most common symptoms of illness, namely fatigue, dry cough, dyspnea, headache, abdominal pain, nausea, vomiting, and myalgia. Fever was observed in 50% of positive cases. Chest computed tomography (CT) scans of all tested patients indicated two-sided chest involvement. Conclusion: Detection of COVID-19 by TaqMan real-time PCR appears to be a powerful method for the screening and detection of novel coronavirus infection.
Affiliation(s)
- Morteza Bagheri
- Cellular and Molecular Research Center, Cellular and Molecular Medicine Institute, Urmia University of Medical Sciences, Urmia, Iran
- Tooba Hallaj
- Cellular and Molecular Research Center, Cellular and Molecular Medicine Institute, Urmia University of Medical Sciences, Urmia, Iran
- Legha Ansari
- Cellular and Molecular Research Center, Cellular and Molecular Medicine Institute, Urmia University of Medical Sciences, Urmia, Iran
- Firouz Ghaderi Pakdel
- Cellular and Molecular Research Center, Cellular and Molecular Medicine Institute, Urmia University of Medical Sciences, Urmia, Iran
69
Rafique Q, Rehman A, Afghan MS, Ahmad HM, Zafar I, Fayyaz K, Ain Q, Rayan RA, Al-Aidarous KM, Rashid S, Mushtaq G, Sharma R. Reviewing methods of deep learning for diagnosing COVID-19, its variants and synergistic medicine combinations. Comput Biol Med 2023; 163:107191. [PMID: 37354819 PMCID: PMC10281043 DOI: 10.1016/j.compbiomed.2023.107191] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Revised: 05/28/2023] [Accepted: 06/19/2023] [Indexed: 06/26/2023]
Abstract
The COVID-19 pandemic has necessitated the development of reliable diagnostic methods for accurately detecting the novel coronavirus and its variants. Deep learning (DL) techniques have shown promising potential as screening tools for COVID-19 detection. In this study, we explore the realistic development of DL-driven COVID-19 detection methods and focus on fully automatic frameworks using available resources, which can effectively investigate various coronavirus variants through different modalities. We explored and compared several diagnostic techniques that are widely used and globally validated for the detection of COVID-19. Furthermore, we examine review-based studies that provide detailed information on synergistic medicine combinations for the treatment of COVID-19. We recommend DL methods that effectively reduce time, cost, and complexity, providing valuable guidance for utilizing available synergistic combinations in clinical and research settings. This study also highlights innovative diagnostic technical and instrumental strategies, explores public datasets, and investigates synergistic medicines using optimised DL rules. By summarizing these findings, we aim to assist future researchers by providing a comprehensive overview of the application of DL techniques in COVID-19 detection and treatment. Integrating DL methods with various diagnostic approaches holds great promise for improving the accuracy and efficiency of COVID-19 diagnostics, thus contributing to effective control and management of the ongoing pandemic.
Affiliation(s)
- Qandeel Rafique
- Department of Internal Medicine, Sahiwal Medical College, Sahiwal, 57040, Pakistan
- Ali Rehman
- Department of General Medicine, Govt. Eye and General Hospital, Lahore, 54000, Pakistan
- Muhammad Sher Afghan
- Department of Internal Medicine, District Headquarter Hospital, Faisalabad, 62300, Pakistan
- Hafiz Muhamad Ahmad
- Department of Internal Medicine, District Headquarter Hospital, Bahawalnagar, 62300, Pakistan
- Imran Zafar
- Department of Bioinformatics and Computational Biology, Virtual University Pakistan, 44000, Pakistan
- Kompal Fayyaz
- Department of National Centre for Bioinformatics, Quaid-I-Azam University Islamabad, 45320, Pakistan
- Quratul Ain
- Department of Chemistry, Government College Women University Faisalabad, 03822, Pakistan
- Rehab A Rayan
- Department of Epidemiology, High Institute of Public Health, Alexandria University, 21526, Egypt
- Khadija Mohammed Al-Aidarous
- Department of Computer Science, College of Science and Arts in Sharurah, Najran University, 51730, Saudi Arabia
- Summya Rashid
- Department of Pharmacology & Toxicology, College of Pharmacy, Prince Sattam Bin Abdulaziz University, P.O. Box 173, Al-Kharj, 11942, Saudi Arabia
- Gohar Mushtaq
- Center for Scientific Research, Faculty of Medicine, Idlib University, Idlib, Syria
- Rohit Sharma
- Department of Rasashastra and Bhaishajya Kalpana, Faculty of Ayurveda, Institute of Medical Sciences, Banaras Hindu University, Varanasi, India
70
Arora M, Davis CM, Gowda NR, Foster DG, Mondal A, Coopersmith CM, Kamaleswaran R. Uncertainty-Aware Convolutional Neural Network for Identifying Bilateral Opacities on Chest X-rays: A Tool to Aid Diagnosis of Acute Respiratory Distress Syndrome. Bioengineering (Basel) 2023; 10:946. [PMID: 37627831 PMCID: PMC10451804 DOI: 10.3390/bioengineering10080946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 07/26/2023] [Accepted: 08/03/2023] [Indexed: 08/27/2023] Open
Abstract
Acute Respiratory Distress Syndrome (ARDS) is a severe lung injury with high mortality, primarily characterized by bilateral pulmonary opacities on chest radiographs and hypoxemia. In this work, we trained a convolutional neural network (CNN) model that can reliably identify bilateral opacities on routine chest X-ray images of critically ill patients. We propose this model as a tool to generate predictive alerts for possible ARDS cases, enabling early diagnosis. Our team created a unique dataset of 7800 single-view chest X-ray images labeled for the presence of bilateral or unilateral pulmonary opacities, or 'equivocal' images, by three blinded clinicians. We used a novel training technique that enables the CNN to explicitly predict the 'equivocal' class using an uncertainty-aware label smoothing loss. We achieved an Area under the Receiver Operating Characteristic Curve (AUROC) of 0.82 (95% CI: 0.80, 0.85), a precision of 0.75 (95% CI: 0.73, 0.78), and a sensitivity of 0.76 (95% CI: 0.73, 0.78) on the internal test set, while achieving an AUROC of 0.84 (95% CI: 0.81, 0.86), a precision of 0.73 (95% CI: 0.63, 0.69), and a sensitivity of 0.73 (95% CI: 0.70, 0.75) on an external validation set. Further, our results show that this approach improves the model calibration and diagnostic odds ratio of the hypothesized alert tool, making it ideal for clinical decision support systems.
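The uncertainty-aware label smoothing loss builds on standard label smoothing, which can be sketched as follows. The epsilon value and helper names are illustrative assumptions; the paper's uncertainty-aware variant adds an explicit 'equivocal' class on top of this basic idea.

```python
import math


def smoothed_targets(true_class, n_classes, epsilon=0.1):
    """Label smoothing: move epsilon of the probability mass off the
    one-hot target and spread it evenly over the other classes."""
    off_value = epsilon / (n_classes - 1)
    return [1.0 - epsilon if k == true_class else off_value
            for k in range(n_classes)]


def cross_entropy(probs, targets):
    """Cross-entropy between predicted probabilities and (soft) targets."""
    return -sum(t * math.log(p) for t, p in zip(targets, probs) if t > 0)
```

Against a smoothed target, a near-certain prediction is penalized more than a moderately confident one, which discourages the overconfidence that hurts calibration.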
Affiliation(s)
- Mehak Arora
- Department of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA
- Carolyn M. Davis
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
- Niraj R. Gowda
- Division of Pulmonary, Critical Care, Allergy and Sleep Medicine, Emory University School of Medicine, Atlanta, GA 30332, USA
- Dennis G. Foster
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA
- Angana Mondal
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA
- Craig M. Coopersmith
- Department of Surgery, Emory University School of Medicine, Atlanta, GA 30332, USA
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
- Rishikesan Kamaleswaran
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA 30332, USA
- Emory Critical Care Center, Emory University School of Medicine, Atlanta, GA 30332, USA
71
Pisano F, Cannas B, Fanni A, Pasella M, Canetto B, Giglio SR, Mocci S, Chessa L, Perra A, Littera R. Decision trees for early prediction of inadequate immune response to coronavirus infections: a pilot study on COVID-19. Front Med (Lausanne) 2023; 10:1230733. [PMID: 37601789 PMCID: PMC10433226 DOI: 10.3389/fmed.2023.1230733] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2023] [Accepted: 07/19/2023] [Indexed: 08/22/2023] Open
Abstract
Introduction: Few artificial intelligence models exist to predict severe forms of COVID-19. Most rely on post-infection laboratory data, hindering early treatment for high-risk individuals. Methods: This study developed a machine learning model to predict the inherent risk of severe symptoms after contracting SARS-CoV-2. Using a Decision Tree trained on 153 Alpha variant patients, demographic, clinical, and immunogenetic markers were considered. Model performance was assessed on Alpha and Delta variant datasets. Key risk factors included age, gender, absence of the KIR2DS2 gene (alone or with HLA-C C1 group alleles), presence of the 14-bp polymorphism in the HLA-G gene, presence of the KIR2DS5 gene, and presence of the KIR telomeric region A/A. Results: The model achieved 83.01% accuracy for the Alpha variant and 78.57% for the Delta variant, with True Positive Rates of 80.82% and 77.78%, and True Negative Rates of 85.00% and 79.17%, respectively. The model showed high sensitivity in identifying individuals at risk. Discussion: The present study demonstrates the potential of AI algorithms, combined with demographic, epidemiologic, and immunogenetic data, to identify individuals at high risk of severe COVID-19 and facilitate early treatment. Further studies are required for routine clinical integration.
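Decision-tree training of the kind used above can be illustrated with the classic impurity-based split search. The Gini criterion and the toy age values below are illustrative assumptions for the sketch, not the study's fitted tree or thresholds.

```python
def gini(labels):
    """Gini impurity of a set of class labels (0 = pure node)."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())


def best_split(values, labels):
    """Best threshold on one numeric feature (e.g. age).

    Tries every observed value as a '<= threshold' split and returns
    the one minimizing the size-weighted Gini impurity of the branches.
    """
    best_threshold, best_score = None, float("inf")
    for threshold in sorted(set(values)):
        left = [l for v, l in zip(values, labels) if v <= threshold]
        right = [l for v, l in zip(values, labels) if v > threshold]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if score < best_score:
            best_threshold, best_score = threshold, score
    return best_threshold, best_score
```

A full tree simply applies this search recursively to each branch, over all candidate features, until the nodes are pure or a depth limit is reached.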
Affiliation(s)
- Fabio Pisano
- Department of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
- Barbara Cannas
- Department of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
- Alessandra Fanni
- Department of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
- Manuela Pasella
- Department of Electrical and Electronic Engineering, University of Cagliari, Cagliari, Italy
- Sabrina Rita Giglio
- Medical Genetics, Department of Medical Sciences and Public Health, University of Cagliari, Cagliari, Italy
- AART-ODV (Association for the Advancement of Research on Transplantation), Cagliari, Italy
- Medical Genetics, R. Binaghi Hospital, Local Public Health and Social Care Unit (ASSL) of Cagliari, Cagliari, Italy
- Centre for Research University Services (CeSAR, Centro Servizi di Ateneo per la Ricerca), University of Cagliari, Cagliari, Monserrato, Italy
- Stefano Mocci
- Medical Genetics, Department of Medical Sciences and Public Health, University of Cagliari, Cagliari, Italy
- Centre for Research University Services (CeSAR, Centro Servizi di Ateneo per la Ricerca), University of Cagliari, Cagliari, Monserrato, Italy
- Luchino Chessa
- AART-ODV (Association for the Advancement of Research on Transplantation), Cagliari, Italy
- Department of Medical Sciences and Public Health, University of Cagliari, Cagliari, Italy
- Liver Unit, Department of Internal Medicine, University Hospital of Cagliari, Cagliari, Italy
- Andrea Perra
- AART-ODV (Association for the Advancement of Research on Transplantation), Cagliari, Italy
- Unit of Oncology and Molecular Pathology, Department of Biomedical Sciences, University of Cagliari, Cagliari, Italy
- Roberto Littera
- AART-ODV (Association for the Advancement of Research on Transplantation), Cagliari, Italy
- Medical Genetics, R. Binaghi Hospital, Local Public Health and Social Care Unit (ASSL) of Cagliari, Cagliari, Italy
72
Saha S, Dutta S, Goswami B, Nandi D. ADU-Net: An Attention Dense U-Net based deep supervised DNN for automated lesion segmentation of COVID-19 from chest CT images. Biomed Signal Process Control 2023; 85:104974. [PMID: 37122956] [PMCID: PMC10121143] [DOI: 10.1016/j.bspc.2023.104974]
Abstract
An automatic method for qualitative and quantitative evaluation of chest Computed Tomography (CT) images is essential for diagnosing COVID-19 patients. We aim to develop an automated COVID-19 prediction framework using deep learning. We put forth a novel Deep Neural Network (DNN) composed of an attention-based dense U-Net with deep supervision for COVID-19 lung lesion segmentation from chest CT images. We incorporate a dense U-Net in which 5×5 convolution kernels are used instead of 3×3. Dense and transition blocks are introduced to implement a densely connected network at each encoder level, and an attention mechanism is applied between the encoder, skip connection, and decoder. Together these retain both high- and low-level features efficiently. The deep supervision mechanism creates secondary segmentation maps from the features and combines these maps from various resolution levels to produce a better final segmentation map. The trained DNN model takes the test data as input and generates a prediction output for COVID-19 lesion segmentation. The proposed model has been applied to the MedSeg COVID-19 chest CT segmentation dataset, with data pre-processing methods aiding training and improving performance. We compare the performance of the proposed DNN model with state-of-the-art models using well-known metrics: Dice coefficient, Jaccard coefficient, accuracy, specificity, sensitivity, and precision. The proposed model outperforms the state-of-the-art models and may be considered an efficient automated screening system for COVID-19 diagnosis, with the potential to improve patient health care and management.
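The attention applied between encoder, skip connection, and decoder in architectures of this kind is commonly an additive attention gate. The sketch below is a generic NumPy illustration of that idea with random weights and hypothetical shapes, not the authors' exact layer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, w_x, w_g, psi):
    """Additive attention gate: re-weight skip-connection features by relevance.

    skip: encoder features (H, W, C); gate: decoder (gating) features (H, W, C);
    w_x, w_g: (C, C_int) projections; psi: (C_int,) scoring vector.
    All weights here are illustrative, not trained parameters.
    """
    q = np.maximum(skip @ w_x + gate @ w_g, 0.0)  # ReLU(W_x x + W_g g)
    alpha = sigmoid(q @ psi)                      # per-pixel attention in (0, 1)
    return skip * alpha[..., None]                # suppress irrelevant regions

rng = np.random.default_rng(0)
H, W, C, C_int = 4, 4, 8, 4
skip = rng.normal(size=(H, W, C))
gate = rng.normal(size=(H, W, C))
out = attention_gate(skip, gate, rng.normal(size=(C, C_int)),
                     rng.normal(size=(C, C_int)), rng.normal(size=(C_int,)))
```

Because the attention coefficients lie strictly in (0, 1), the gate can only attenuate skip features, never amplify them.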
Affiliation(s)
- Sanjib Saha
- Department of Computer Science and Engineering, National Institute of Technology, Durgapur, 713209, West Bengal, India
- Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, 713206, West Bengal, India
- Subhadeep Dutta
- Department of Computer Science and Engineering, Dr. B. C. Roy Engineering College, Durgapur, 713206, West Bengal, India
- Biswarup Goswami
- Department of Respiratory Medicine, Health and Family Welfare, Government of West Bengal, Kolkata, 700091, West Bengal, India
- Debashis Nandi
- Department of Computer Science and Engineering, National Institute of Technology, Durgapur, 713209, West Bengal, India
73
Zaeri N. Artificial intelligence and machine learning responses to COVID-19 related inquiries. J Med Eng Technol 2023; 47:301-320. [PMID: 38625639] [DOI: 10.1080/03091902.2024.2321846]
Abstract
Thanks to recent artificial intelligence and machine learning breakthroughs, researchers and scientists can use computational models to turn linked data into useful information, aiding disease diagnosis, examination, and viral containment. In this paper, we extensively study the role of artificial intelligence and machine learning in delivering efficient responses to the COVID-19 pandemic almost four years after its start. We examine a large number of critical studies conducted by academic and research communities from multiple disciplines, as well as practical implementations of artificial intelligence algorithms that suggest potential solutions for different COVID-19 decision-making scenarios. We identify numerous areas where artificial intelligence and machine learning can have an impact in this context, including diagnosis (using chest X-ray and CT imaging), severity assessment, tracking, treatment, and the drug industry. Furthermore, we analyse the limits, restrictions, and hazards of these approaches.
Affiliation(s)
- Naser Zaeri
- Faculty of Computer Studies, Arab Open University, Kuwait
74
Gopatoti A, Vijayalakshmi P. MTMC-AUR2CNet: Multi-textural multi-class attention recurrent residual convolutional neural network for COVID-19 classification using chest X-ray images. Biomed Signal Process Control 2023; 85:104857. [PMID: 36968651] [PMCID: PMC10027978] [DOI: 10.1016/j.bspc.2023.104857]
Abstract
Coronavirus disease (COVID-19) had infected over 603 million people in confirmed cases as of September 2022, and its rapid spread has raised concerns worldwide, with more than 6.4 million fatalities reported among confirmed patients. According to reports, the COVID-19 virus causes lung damage and mutates rapidly before the patient receives any diagnosis-specific medicine. Daily increases in COVID-19 cases and the limited number of diagnostic test kits encourage the use of deep learning (DL) models to assist health care practitioners using chest X-ray (CXR) images, a low-radiation radiography tool available in hospitals to diagnose COVID-19 and combat its spread. We propose a Multi-Textural Multi-Class (MTMC) UNet-based Recurrent Residual Convolutional Neural Network (MTMC-UR2CNet) and MTMC-UR2CNet with attention mechanism (MTMC-AUR2CNet) for multi-class lung lobe segmentation of CXR images. The lung lobe segmentation outputs of MTMC-UR2CNet and MTMC-AUR2CNet are each mapped to their input CXRs to generate the region of interest (ROI). The multi-textural features extracted from the ROI of each proposed MTMC network are fused and used to train a Whale Optimization Algorithm (WOA)-based DeepCNN classifier that classifies the CXR images into normal (healthy), COVID-19, viral pneumonia, and lung opacity. Experimental results show that MTMC-AUR2CNet has superior performance in multi-class lung lobe segmentation of CXR images, with an accuracy of 99.47%, followed by MTMC-UR2CNet with an accuracy of 98.39%. MTMC-AUR2CNet also improves the multi-textural multi-class classification accuracy of the WOA-based DeepCNN classifier to 97.60%, compared with MTMC-UR2CNet.
Affiliation(s)
- Anandbabu Gopatoti
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
- Centre for Research, Anna University, Chennai, Tamil Nadu, India
- P Vijayalakshmi
- Department of Electronics and Communication Engineering, Hindusthan College of Engineering and Technology, Coimbatore, Tamil Nadu, India
75
Ikeda K, Sakabe N, Maruyama S, Ito C, Shimoyama Y, Oboshi W, Komene T, Yamaguchi Y, Sato S, Nagata K. Relationship between a deep learning model and liquid-based cytological processing techniques. Cytopathology 2023; 34:308-317. [PMID: 37051774] [DOI: 10.1111/cyt.13235]
Abstract
OBJECTIVE: Artificial intelligence (AI)-based cytopathology studies conducted using deep learning have enabled cell detection and classification. Liquid-based cytology (LBC) has facilitated the standardisation of specimen preparation; however, cytomorphology varies according to the LBC processing technique used. In this study, we elucidated the relationship between two LBC techniques and cell detection and classification using a deep learning model. METHODS: Cytological specimens were prepared using the ThinPrep and SurePath methods. The accuracy of cell detection and cell classification was examined using the one- and five-cell models, which were trained with one and five cell types, respectively. RESULTS: When the same LBC processing technique was used for both the training and detection preparations, the cell detection and classification rates were high. The model trained on ThinPrep preparations was more accurate than that trained on SurePath. When the preparation types used for training and detection differed, the accuracy of cell detection and classification was significantly reduced (P < 0.01). The model trained on both ThinPrep and SurePath preparations exhibited slightly reduced cell detection and classification rates but remained highly accurate. CONCLUSIONS: Across the two LBC processing techniques, cytomorphology varied according to cell type, and this difference affected the accuracy of cell detection and classification by deep learning. Therefore, for highly accurate cell detection and classification using AI, the same processing technique must be used for both training and detection. Our assessment also suggests that a deep learning model should be built using specimens prepared via a variety of processing techniques to yield a globally applicable AI model.
Affiliation(s)
- Katsuhide Ikeda
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Nanako Sakabe
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Sayumi Maruyama
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Chihiro Ito
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Yuka Shimoyama
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
- Wataru Oboshi
- Department of Medical Technology and Sciences, School of Health Sciences at Narita, International University of Health and Welfare, Narita, Japan
- Tetsuya Komene
- Department of Medical Technology and Sciences, School of Health Sciences at Narita, International University of Health and Welfare, Narita, Japan
- Yoshitaka Yamaguchi
- Department of Medical Technology and Sciences, School of Health Sciences at Narita, International University of Health and Welfare, Narita, Japan
- Shouichi Sato
- Clinical Engineering, Faculty of Medical Sciences, Juntendo University, Urayasu, Japan
- Kohzo Nagata
- Pathophysiology Sciences, Department of Integrated Health Sciences, Nagoya University Graduate School of Medicine, Nagoya, Japan
76
Tian M, Wang H, Sun Y, Wu S, Tang Q, Zhang M. Fine-grained attention & knowledge-based collaborative network for diabetic retinopathy grading. Heliyon 2023; 9:e17217. [PMID: 37449186] [PMCID: PMC10336422] [DOI: 10.1016/j.heliyon.2023.e17217]
Abstract
Accurate diabetic retinopathy (DR) grading is crucial for making a proper treatment plan to reduce the damage caused by vision loss. The task is challenging because DR-related lesions are often small, subtle in their visual differences, and subject to intra-class variation, and because the relationships between lesions and DR levels are complicated. Although many deep learning (DL) DR grading systems have been developed with some success, there is still room for improvement in grading accuracy. A common issue is that little medical knowledge is used in these DL DR grading systems; as a result, the grading results cannot be properly interpreted by ophthalmologists, hindering practical application. This paper proposes a novel fine-grained attention & knowledge-based collaborative network (FA+KC-Net) to address this concern. The fine-grained attention network dynamically divides the extracted feature maps into smaller patches and effectively captures small image features that become meaningful through training on a large number of retinopathy fundus images. The knowledge-based collaborative network extracts a-priori medical knowledge features, i.e., lesions such as microaneurysms (MAs), soft exudates (SEs), hard exudates (EXs), and hemorrhages (HEs). Finally, decision rules are developed to fuse the DR grading results from the fine-grained network and the knowledge-based collaborative network into the final grade. Extensive experiments are carried out on four widely used datasets, DDR, Messidor, APTOS, and EyePACS, to evaluate the efficacy of our method and compare it with other state-of-the-art (SOTA) DL models. Results show that the proposed FA+KC-Net is accurate and stable, achieving the best performance on the DDR, Messidor, and APTOS datasets.
Affiliation(s)
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Hongqiu Wang
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yingxue Sun
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Qingqing Tang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Meixia Zhang
- Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, 610041, China
77
Li H, Drukker K, Hu Q, Whitney HM, Fuhrman JD, Giger ML. Predicting intensive care need for COVID-19 patients using deep learning on chest radiography. J Med Imaging (Bellingham) 2023; 10:044504. [PMID: 37608852] [PMCID: PMC10440543] [DOI: 10.1117/1.jmi.10.4.044504]
Abstract
Purpose: Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means of addressing the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' need for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach: The dataset consisted of 8,357 CXR exams from 5,046 COVID-19-positive patients, as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus, with a training/validation/test split of 64%/16%/20% at the patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' need for intensive care within 24, 48, 72, and 96 h following the CXR exam. Classification performance was evaluated on our independent test set (CXR exams of 1,048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results: Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance, using predictions based on the AI prognostic marker derived from CXR images. Conclusions: This AI/ML prediction model for patients' need for intensive care has the potential to support both clinical decision-making and resource management.
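The AUC figure of merit used in studies like this one equals the probability that a randomly chosen positive case is ranked above a randomly chosen negative case (the Mann-Whitney formulation). A small self-contained sketch with toy scores, not the study's predictions:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC = P(random positive outranks random negative); ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy classifier scores for illustration only
auc = auc_from_scores([0.9, 0.8, 0.35], [0.7, 0.3, 0.2, 0.1])
```

The quadratic pairwise loop is fine for illustration; production code would sort the scores once and use ranks instead.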
Affiliation(s)
- Hui Li
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Karen Drukker
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Qiyuan Hu
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Heather M. Whitney
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Jordan D. Fuhrman
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
- Maryellen L. Giger
- The University of Chicago, Department of Radiology, Chicago, Illinois, United States
78
Chalacheva P, Khoo MCK. Integrating Machine Learning with Biomedical Signal Processing and Systems Analysis: An Applications-based Course. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082972] [DOI: 10.1109/embc40787.2023.10340498]
Abstract
The growing importance of data analytics in biomedicine is increasingly recognized in biomedical engineering curricula through the introduction of machine learning classes that generally run in parallel to, but separately from, more traditional engineering courses such as signal and systems analysis. We propose a new approach that systematically integrates signal processing and systems analysis with key techniques in machine learning. In the proposed course, the student obtains hands-on experience in applying algorithms to practical problems of physiological signal conditioning, analysis, and interpretation. This is achieved by exposing the student to a sequence of four applications-based modules representing different biomedical engineering problems: human activity recognition from wearable devices, epileptic seizure detection, quantification of dynamic respiratory-cardiac coupling in humans under different conditions, and detection of sleep apnea episodes from heart rate variability data. Within each module, the student gains the experience of working with the data in question "from the ground up". We also introduce a general plan for assessment of student learning and discuss the expected outcomes and limitations of this integrative approach to teaching. Clinical Relevance: The proposed course is targeted at biomedical engineering students at the senior undergraduate or first-year graduate level who are interested in learning how to analyze physiological signals. The course would also be suitable for clinician-scientists who have prior training in statistics with some exposure to engineering mathematics.
79
Liyanarachchi R, Wijekoon J, Premathilaka M, Vidhanaarachchi S. COVID-19 symptom identification using Deep Learning and hardware emulated systems. Eng Appl Artif Intell 2023; 125:106709. [PMID: 38620194] [PMCID: PMC10300286] [DOI: 10.1016/j.engappai.2023.106709]
Abstract
The COVID-19 pandemic disrupted regular global activities in every possible way. The pandemic, caused by transmission of the infectious coronavirus, is characterized by main symptoms such as fever, fatigue, cough, and loss of smell. A current key focus of the scientific community is to develop automated methods that can effectively identify COVID-19 patients and that are also adaptable to foreseeable future virus outbreaks. Classifying suspected COVID-19 cases requires contactless, automatic measurement of more than one symptom. This study explores the effectiveness of using Deep Learning combined with a hardware-emulated system to identify COVID-19 patients in Sri Lanka based on two main symptoms: cough and shortness of breath. To achieve this, a Convolutional Neural Network (CNN) based on Transfer Learning was employed to analyze and compare the features of a COVID-19 cough with those of other types of coughs. Real-time video footage was captured using a FLIR C2 thermal camera and a web camera and subsequently processed using OpenCV image processing algorithms to detect the nasal cavities in the video frames and measure breath cycles per minute, thereby identifying instances of shortness of breath. The proposed method was first tested on crowd-sourced datasets obtained online (Coswara, Coughvid, ESC-50, and a dataset from Kaggle) and then applied to and verified on a dataset obtained from local hospitals in Sri Lanka. The accuracy of the developed methodologies in diagnosing cough resemblance and recognizing shortness of breath was found to be 94% and 95%, respectively.
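Counting breath cycles per minute from a processed video signal can be sketched as a rising zero-crossing count over a mean-centred signal. The snippet below is a hedged illustration of that step (the authors' exact signal and counting rule are not specified here), exercised on a synthetic sinusoid:

```python
import math

def breaths_per_minute(signal, fps):
    """Estimate breath rate by counting rising zero-crossings of a
    mean-centred breathing signal (e.g. nasal-region intensity over time)."""
    mean = sum(signal) / len(signal)
    centred = [s - mean for s in signal]
    # A rising crossing: previous sample at or below zero, next sample above
    rises = sum(1 for a, b in zip(centred, centred[1:]) if a <= 0 < b)
    duration_min = len(signal) / fps / 60.0
    return rises / duration_min

# Synthetic 16-cycles-per-minute sinusoid, 10 frames/s for 60 s
fps, true_bpm = 10, 16
sig = [math.sin(2 * math.pi * true_bpm * t / (fps * 60) - math.pi / 2)
       for t in range(fps * 60)]
est = breaths_per_minute(sig, fps)
```

Real breathing signals are noisy, so a practical pipeline would low-pass filter the signal before counting crossings.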
80
Wang C, Xu C, Zhang Y, Lu P. Diagnosis of Chest Pneumonia with X-ray Images Based on Graph Reasoning. Diagnostics (Basel) 2023; 13:2125. [PMID: 37371018] [PMCID: PMC10297047] [DOI: 10.3390/diagnostics13122125]
Abstract
Pneumonia is an acute respiratory infection that affects the lungs and is the single largest infectious cause of death in children worldwide. According to a 2019 World Health Organization survey, pneumonia caused 740,180 deaths in children under 5 years of age, accounting for 14% of all deaths in that age group and 22% of all deaths in children aged 1 to 5 years. Early recognition of pneumonia in children is therefore particularly important. In this study, we propose a binary pneumonia classification model for chest X-ray image recognition based on a deep learning approach. We extract features using a traditional convolutional network framework to obtain features containing rich semantic information, and construct an adjacency matrix to represent the degree of relevance of each region in the image. In the final part of the model, we use graph inference to complete the global modeling and help classify pneumonia. A total of 6,189 children's X-ray films, comprising 3,319 normal cases and 2,870 pneumonia cases, were used in the experiment, with 20% selected as the test set. Eleven common models were compared using four evaluation metrics; our model's accuracy reached 89.1% and its F1-score reached 90%, the best results among the compared models.
Affiliation(s)
- Cheng Wang
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China; (C.W.); (Y.Z.)
- Chang Xu
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China; (C.W.); (Y.Z.)
- Yulai Zhang
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China; (C.W.); (Y.Z.)
- Peng Lu
- Institute of Computer Innovation Technology, Zhejiang University, Hangzhou 310023, China;
81
Krishna S, Suganthi S, Bhavsar A, Yesodharan J, Krishnamoorthy S. An interpretable decision-support model for breast cancer diagnosis using histopathology images. J Pathol Inform 2023; 14:100319. [PMID: 37416058] [PMCID: PMC10320615] [DOI: 10.1016/j.jpi.2023.100319]
Abstract
Microscopic examination of biopsy tissue slides is perceived as the gold-standard methodology for confirming the presence of cancer cells. Manual analysis of an overwhelming inflow of tissue slides is highly susceptible to misreading by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting definitive diagnosis of cancer. The Convolutional Neural Network (CNN) has turned out to be the most adaptable and effective technique for detecting abnormal pathologic histology. Despite high sensitivity and predictive power, clinical translation is constrained by a lack of intelligible insight into the predictions. A computer-aided system that can offer both a definitive diagnosis and interpretability is therefore highly desirable. A conventional visual explanatory technique, Class Activation Mapping (CAM), combined with CNN models offers interpretable decision making; however, CAM cannot be optimized to create the best visualization map, and it also decreases the performance of CNN models. To address this challenge, we introduce a novel interpretable decision-support model using a CNN with a trainable attention mechanism based on response-based feed-forward visual explanation. We introduce a variant of the DarkNet19 CNN model for the classification of histopathology images. To achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with the DarkNet19 network, forming an Attention Branch Network (ABN). The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap identifying the region of interest; the perception branch then classifies images through a fully connected layer.
We trained and validated our model using more than 7,000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability conferred on the DarkNet19 CNN model by the attention branch, which also delivered a 3%-4% performance boost over the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist. Unifying an attention branch with the CNN model equips pathologists with augmented diagnostic interpretability of histological images with no detriment to state-of-the-art performance. The model's proficiency in pinpointing the region of interest is an added benefit that can support accurate clinical translation of deep learning models for clinical decision support.
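Producing a heatmap from a convolution layer and Global Average Pooling follows the standard Class Activation Mapping construction. A minimal NumPy sketch with synthetic feature maps and weights (not the ABN's trained parameters):

```python
import numpy as np

def class_activation_map(features, class_weights):
    """CAM: class-weighted sum of final conv feature maps.

    features: (H, W, C) conv output; class_weights: (C,) weights of the
    fully connected layer that follows Global Average Pooling for one class.
    """
    cam = features @ class_weights  # (H, W) relevance map
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()            # normalise to [0, 1] for heatmap display
    return cam

# Synthetic feature maps and weights for illustration only
rng = np.random.default_rng(1)
cam = class_activation_map(rng.random((7, 7, 32)), rng.random(32))
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid as a heatmap.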
Affiliation(s)
- Sruthi Krishna
- Center for Wireless Networks & Applications (WNA), Amrita Vishwa Vidyapeetham, Amritapuri, India
- Arnav Bhavsar
- School of Computing and Electrical Engineering, IIT Mandi, Himachal Pradesh, India
- Jyotsna Yesodharan
- Department of Pathology, Amrita Institute of Medical Science, Kochi, India
82
Das S, Ayus I, Gupta D. A comprehensive review of COVID-19 detection with machine learning and deep learning techniques. Health Technol 2023; 13:1-14. [PMID: 37363343] [PMCID: PMC10244837] [DOI: 10.1007/s12553-023-00757-z]
Abstract
Purpose The first transmission of the coronavirus to humans occurred in Wuhan, China; the resulting outbreak took the shape of a pandemic, Coronavirus Disease 2019 (COVID-19), and posed a principal threat to the entire world. Researchers have been incorporating artificial intelligence (machine learning and deep learning models) into the efficient detection of COVID-19. This review surveys the existing machine learning (ML) and deep learning (DL) models used for COVID-19 detection, which may help researchers explore new directions. Its main purpose is to present research experts with a compact overview of the application of artificial intelligence and to help them identify future scopes for improvement. Methods Researchers have used various machine learning, deep learning, and combined machine-and-deep-learning models to extract significant features and classify various health conditions in COVID-19 patients, drawing on different image modalities such as CT scans and X-rays. This study collected over 200 research papers from repositories such as Google Scholar, PubMed, and Web of Science; after several levels of scrutiny, 50 research articles were selected. Results In the selected articles, the ML/DL models showed accuracies of 99% and above when classifying COVID-19. The study also presents various clinical applications of the surveyed research and underscores the importance of machine and deep learning models in medical diagnosis and research. Conclusion ML/DL models have made significant progress in recent years, but limitations remain that need to be addressed. Overfitting is one such limitation, as it can lead to incorrect predictions and overburdening of the models. The research community must continue working to overcome these limitations and make machine and deep learning models even more effective and efficient; through ongoing research and development, even greater advances can be expected.
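The overfitting limitation raised in the conclusion is commonly mitigated by monitoring a held-out validation loss and stopping training once it stops improving. A minimal sketch of that patience-based early-stopping rule (illustrative only; not taken from any of the reviewed papers):

```python
def early_stopping(val_losses, patience=3):
    """Return the epoch index at which training should stop, or None.

    Training stops once the validation loss has failed to improve on
    its best value for `patience` consecutive epochs.
    """
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return None

# Loss improves for three epochs, then stagnates: stop at epoch 5.
print(early_stopping([0.9, 0.7, 0.6, 0.61, 0.62, 0.63]))  # → 5
```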
Affiliation(s)
- Sreeparna Das, Department of Computer Science and Engineering, National Institute of Technology Arunachal Pradesh, Jote, Arunachal Pradesh 791113, India
- Ishan Ayus, Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha 751030, India
- Deepak Gupta, Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, UP 211004, India

83
Srinivas K, Gagana Sri R, Pravallika K, Nishitha K, Polamuri SR. COVID-19 prediction based on hybrid Inception V3 with VGG16 using chest X-ray images. MULTIMEDIA TOOLS AND APPLICATIONS 2023:1-18. [PMID: 37362699 PMCID: PMC10240113 DOI: 10.1007/s11042-023-15903-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 05/12/2023] [Accepted: 05/18/2023] [Indexed: 06/28/2023]
Abstract
The coronavirus outbreak first started in Wuhan, China, in December 2019. The virus belongs to the Coronaviridae family, which can infect both animals and humans. Coronavirus disease 2019 (COVID-19) is typically diagnosed by serology, real-time reverse transcription polymerase chain reaction (RT-PCR), and antigen testing. These testing methods have limitations such as limited sensitivity, high cost, and long turnaround times, so it is necessary to develop an automatic detection system for COVID-19 prediction. Chest X-ray is a lower-cost process than chest computed tomography (CT). Deep learning, the most fruitful branch of machine learning, offers a useful means of learning from and screening large numbers of COVID-19 and normal chest X-ray images. Many deep learning methods exist for prediction, but they have limitations such as overfitting, misclassification, and false predictions on poor-quality chest X-rays. To overcome these limitations, a novel hybrid model called "Inception V3 with VGG16 (Visual Geometry Group)" is proposed for predicting COVID-19 from chest X-rays. It combines two deep learning models, Inception V3 and VGG16 (IV3-VGG). To build the hybrid model, 243 images were collected from the COVID-19 Radiography Database; of these, 121 are COVID-19-positive and 122 are normal images. The hybrid model is divided into two modules, pre-processing and IV3-VGG. In the pre-processing module, images with differing sizes and color intensities are identified and pre-processed. The IV3-VGG module consists of four blocks: the first block is the VGG-16 network, blocks 2 and 3 are Inception V3 networks, and the final block comprises four layers, namely average pooling, dropout, fully connected, and softmax. The experimental results show that the IV3-VGG model achieves the highest accuracy, 98%, compared to five prominent existing deep learning models: Inception V3, VGG16, ResNet50, DenseNet121, and MobileNet.
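The abstract does not give implementation details for combining the two backbones; one common way to hybridize two networks is late fusion, concatenating their pooled feature vectors before a shared classification head. A minimal numpy sketch of that idea, with the feature extractors stubbed out as fixed random projections (purely illustrative, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone(x, out_dim, seed):
    """Stand-in for a pre-trained feature extractor (e.g. Inception V3
    or VGG16): here just a fixed random projection followed by ReLU."""
    w = np.random.default_rng(seed).standard_normal((x.shape[-1], out_dim))
    return np.maximum(x @ w, 0.0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

x = rng.standard_normal((4, 512))          # 4 flattened input images
f1 = backbone(x, 128, seed=1)              # "Inception V3" features
f2 = backbone(x, 128, seed=2)              # "VGG16" features
fused = np.concatenate([f1, f2], axis=-1)  # late fusion: shape (4, 256)
w_head = rng.standard_normal((256, 2))     # 2 classes: COVID-19 / normal
probs = softmax(fused @ w_head)
assert probs.shape == (4, 2)
```

In a real model the head would be trained end to end; the sketch only shows how the two feature streams are joined.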
Affiliation(s)
- K. Srinivas, Department of CSE, VR Siddhartha Engineering College, Vijayawada, 520007, India
- R. Gagana Sri, Department of CSE, VR Siddhartha Engineering College, Vijayawada, 520007, India
- K. Pravallika, Department of CSE, Sir C. R. Reddy Engineering College, Eluru, 534007, India
- K. Nishitha, Department of CSE, VR Siddhartha Engineering College, Vijayawada, 520007, India
- Subba Rao Polamuri, Department of CSE, Bonam Venkata Chalamayya Engineering College (Autonomous), Odalarevu, 533210, India

84
Sailunaz K, Özyer T, Rokne J, Alhajj R. A survey of machine learning-based methods for COVID-19 medical image analysis. Med Biol Eng Comput 2023; 61:1257-1297. [PMID: 36707488 PMCID: PMC9883138 DOI: 10.1007/s11517-022-02758-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 12/22/2022] [Indexed: 01/29/2023]
Abstract
The ongoing COVID-19 pandemic caused by the SARS-CoV-2 virus has already resulted in 6.6 million deaths, with more than 637 million people infected, only 30 months after the first occurrences of the disease in December 2019. Hence, rapid and accurate detection and diagnosis of the disease is the first priority all over the world. Researchers have been working on various detection methods and, as the disease infects the lungs, lung image analysis has become a popular research area for detecting its presence. Medical images from chest X-rays (CXR), computed tomography (CT), and lung ultrasound have been used by automated image analysis systems in artificial intelligence (AI)- and machine learning (ML)-based approaches. Various existing and novel ML, deep learning (DL), transfer learning (TL), and hybrid models have been applied to detecting and classifying COVID-19, segmenting infected regions, assessing severity, and tracking patient progress from medical images of COVID-19 patients. This paper provides a comprehensive review of recent approaches to COVID-19 image analysis, surveying the contributions of existing research efforts, the available image datasets, and the performance metrics used in recent works. The challenges and future research scopes for advancing the fight against COVID-19 from the AI perspective are also discussed. The main objective is therefore to summarize the work done on COVID-19 detection and analysis from medical image datasets using ML, DL, and TL models, analyzing their novelty and efficiency, while also pointing to other COVID-19 reviews and surveys so as to deliver a brief overview of the maximum amount of information on existing COVID-19 research.
Affiliation(s)
- Kashfia Sailunaz, Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Tansel Özyer, Department of Computer Engineering, Ankara Medipol University, Ankara, Turkey
- Jon Rokne, Department of Computer Science, University of Calgary, Calgary, AB, Canada
- Reda Alhajj, Department of Computer Science, University of Calgary, Calgary, AB, Canada; Department of Computer Engineering, Istanbul Medipol University, Istanbul, Turkey; Department of Health Informatics, University of Southern Denmark, Odense, Denmark

85
Xie T, Wang Z, Li H, Wu P, Huang H, Zhang H, Alsaadi FE, Zeng N. Progressive attention integration-based multi-scale efficient network for medical imaging analysis with application to COVID-19 diagnosis. Comput Biol Med 2023; 159:106947. [PMID: 37099976 PMCID: PMC10116157 DOI: 10.1016/j.compbiomed.2023.106947] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 03/30/2023] [Accepted: 04/15/2023] [Indexed: 04/28/2023]
Abstract
In this paper, a novel deep learning-based medical imaging analysis framework is developed to deal with the insufficient feature learning caused by the imperfect properties of imaging data. Named the multi-scale efficient network (MEN), the proposed method integrates different attention mechanisms to realize sufficient extraction of both detailed features and semantic information in a progressive learning manner. In particular, a fused-attention block is designed to extract fine-grained details from the input, where the squeeze-and-excitation (SE) attention mechanism is applied to make the model focus on potential lesion areas. A multi-scale low information loss (MSLIL) attention block is proposed to compensate for potential global information loss and enhance the semantic correlations among features, where the efficient channel attention (ECA) mechanism is adopted. The proposed MEN is comprehensively evaluated on two COVID-19 diagnostic tasks. The results show that, compared with other advanced deep learning models, the proposed method is competitive in accurate COVID-19 recognition, yielding the best accuracies of 98.68% and 98.85% on the two tasks, respectively, and exhibits satisfactory generalization ability.
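The squeeze-and-excitation (SE) step used in the fused-attention block follows a standard recipe: global-average-pool each channel (squeeze), pass the result through a small bottleneck MLP ending in a sigmoid (excitation), then rescale the channels by the resulting gates. A minimal numpy sketch with randomly initialized weights (illustrative only; the paper's exact block layout is not given in the abstract):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, reduction=4, seed=0):
    """Squeeze-and-excitation over a feature map x of shape (C, H, W)."""
    c = x.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c, c // reduction)) * 0.1   # bottleneck down
    w2 = rng.standard_normal((c // reduction, c)) * 0.1   # bottleneck up
    s = x.mean(axis=(1, 2))                  # squeeze: one scalar per channel
    e = sigmoid(np.maximum(s @ w1, 0) @ w2)  # excitation: gates in (0, 1)
    return x * e[:, None, None]              # reweight each channel

x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = se_block(x)
assert y.shape == x.shape
# gates lie in (0, 1), so every value shrinks toward zero
assert np.all(np.abs(y) <= np.abs(x))
```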
Affiliation(s)
- Tingyi Xie, School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Zidong Wang, Department of Computer Science, Brunel University London, Uxbridge UB8 3PH, UK
- Han Li, Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Peishu Wu, Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China
- Huixiang Huang, School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Hongyi Zhang, School of Opto-electronic and Communication Engineering, Xiamen University of Technology, Xiamen 361024, China
- Fuad E Alsaadi, Communication Systems and Networks Research Group, Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Nianyin Zeng, Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361005, China

86
Yin M, Liang X, Wang Z, Zhou Y, He Y, Xue Y, Gao J, Lin J, Yu C, Liu L, Liu X, Xu C, Zhu J. Identification of Asymptomatic COVID-19 Patients on Chest CT Images Using Transformer-Based or Convolutional Neural Network-Based Deep Learning Models. J Digit Imaging 2023; 36:827-836. [PMID: 36596937 PMCID: PMC9810383 DOI: 10.1007/s10278-022-00754-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Revised: 11/30/2022] [Accepted: 12/07/2022] [Indexed: 01/04/2023] Open
Abstract
Novel coronavirus disease 2019 (COVID-19) has rapidly spread throughout the world; however, it is difficult for clinicians to make early diagnoses. This study evaluates the feasibility of using deep learning (DL) models to identify asymptomatic COVID-19 patients from chest CT images. In this retrospective study, six DL models (Xception, NASNet, ResNet, EfficientNet, ViT, and Swin), based on convolutional neural network (CNN) or transformer architectures, were trained to identify asymptomatic COVID-19 patients on chest CT images. Data from Yangzhou were randomly split into a training set (n = 2140) and an internal validation set (n = 360); data from Suzhou formed the external test set (n = 200). Model performance was assessed with accuracy, recall, and specificity and was compared with the assessments of two radiologists. A total of 2700 chest CT images were collected in this study. On the validation set, the Swin model achieved the highest accuracy, 0.994, followed by the EfficientNet model (0.954); the recall and precision of the Swin model were 0.989 and 1.000, respectively. On the test set, the Swin model remained the best, achieving the highest accuracy (0.980). All the DL models performed markedly better than the two experts. Finally, the experts spent far longer diagnosing the test set (42 min 17 s for the junior radiologist and 29 min 43 s for the senior) than any of the DL models (all below 2 min). This study evaluated the feasibility of multiple DL models for distinguishing asymptomatic COVID-19 patients from healthy subjects on chest CT images and found that a transformer-based model, Swin, performed best.
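The three metrics used above (accuracy, recall/sensitivity, specificity) all derive directly from the binary confusion matrix; a minimal sketch of that computation:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, recall (sensitivity), and specificity from binary labels,
    with 1 = COVID-19-positive and 0 = healthy."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "recall": tp / (tp + fn),        # fraction of positives found
        "specificity": tn / (tn + fp),   # fraction of healthy correctly cleared
    }

m = binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
assert m["accuracy"] == 0.6
assert abs(m["recall"] - 2 / 3) < 1e-9
assert m["specificity"] == 0.5
```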
Affiliation(s)
- Minyue Yin, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Xiaolong Liang, Department of Orthopedics, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Zilan Wang, Department of Neurosurgery, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Yijia Zhou, Medical School, Soochow University, Suzhou, 215006, Jiangsu, China
- Yu He, Medical School, Soochow University, Suzhou, 215006, Jiangsu, China
- Yuhan Xue, Medical School, Soochow University, Suzhou, 215006, Jiangsu, China
- Jingwen Gao, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Jiaxi Lin, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Chenyan Yu, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Lu Liu, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Xiaolin Liu, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China
- Chao Xu, Department of Radiotherapy, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China
- Jinzhou Zhu, Department of Gastroenterology, the First Affiliated Hospital of Soochow University, Suzhou, 215006, Jiangsu, China; Suzhou Clinical Center of Digestive Diseases, Suzhou, 215006, Jiangsu, China; The 23rd Ward, Yangzhou Third People's Hospital, Yangzhou, 225000, Jiangsu, China

87
Wang T, Nie Z, Wang R, Xu Q, Huang H, Xu H, Xie F, Liu XJ. PneuNet: deep learning for COVID-19 pneumonia diagnosis on chest X-ray image analysis using Vision Transformer. Med Biol Eng Comput 2023; 61:1395-1408. [PMID: 36719562 PMCID: PMC9887581 DOI: 10.1007/s11517-022-02746-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Accepted: 12/22/2022] [Indexed: 02/01/2023]
Abstract
A long-standing challenge in pneumonia diagnosis is recognizing pathological lung texture, especially the ground-glass appearance. One main difficulty lies in precisely extracting and recognizing the pathological features: patients, especially those with mild symptoms, show very little difference in lung texture, so neither conventional computer vision methods nor convolutional neural networks perform well on pneumonia diagnosis from chest X-ray (CXR) images. Meanwhile, the Coronavirus Disease 2019 (COVID-19) pandemic continues wreaking havoc around the world, where quick and accurate diagnosis backed by CXR images is in high demand. Rather than simply recognizing patterns, what the classification process needs is to extract feature maps from the original CXR image. We therefore propose a Vision Transformer (ViT)-based model called PneuNet that makes an accurate diagnosis backed by channel-based attention over lung X-ray images, where multi-head attention is applied to channel patches rather than feature patches. The techniques presented in this paper are oriented toward the medical application of deep neural networks and ViTs. Extensive experimental results show that our method reaches 94.96% accuracy on the three-category classification problem on the test set, outperforming previous deep learning models.
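Treating each channel of a feature map as a token, attention over channel patches amounts to scaled dot-product self-attention along the channel axis. A minimal single-head numpy sketch with random projection weights (illustrative only; PneuNet's actual patching scheme and head count are not specified in the abstract):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def channel_attention(x, seed=0):
    """Self-attention where the tokens are channels.

    x: (C, D) -- C channel tokens, each flattened to a D-dim vector.
    """
    c, d = x.shape
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((d, d)) * d ** -0.5 for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d))   # (C, C): channel-to-channel weights
    return attn @ v                        # each channel mixes all channels

x = np.random.default_rng(1).standard_normal((8, 64))  # 8 channels, 64 dims each
y = channel_attention(x)
assert y.shape == (8, 64)
```

A multi-head version would split D into several subspaces and run this block in parallel on each.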
Affiliation(s)
- Tianmu Wang, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, China; Beijing Key Lab of Precision/Ultra-precision Manufacturing Equipments and Control, Tsinghua University, Beijing, 100084, China
- Zhenguo Nie, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, China; Beijing Key Lab of Precision/Ultra-precision Manufacturing Equipments and Control, Tsinghua University, Beijing, 100084, China
- Ruijing Wang, School of System & Enterprises, Stevens Institute of Technology, Hoboken, NJ 07030, USA
- Qingfeng Xu, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China; National Cancer Center, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100060, China
- Hongshi Huang, Institute of Sports Medicine, Peking University Third Hospital, Beijing, 100091, China
- Handing Xu, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, China; Beijing Key Lab of Precision/Ultra-precision Manufacturing Equipments and Control, Tsinghua University, Beijing, 100084, China
- Fugui Xie, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, China; Beijing Key Lab of Precision/Ultra-precision Manufacturing Equipments and Control, Tsinghua University, Beijing, 100084, China
- Xin-Jun Liu, Department of Mechanical Engineering, Tsinghua University, Beijing, 100084, China; State Key Laboratory of Tribology in Advanced Equipment, Tsinghua University, Beijing, 100084, China; Beijing Key Lab of Precision/Ultra-precision Manufacturing Equipments and Control, Tsinghua University, Beijing, 100084, China

88
Yuan J, Wu F, Li Y, Li J, Huang G, Huang Q. DPDH-CapNet: A Novel Lightweight Capsule Network with Non-routing for COVID-19 Diagnosis Using X-ray Images. J Digit Imaging 2023; 36:988-1000. [PMID: 36813978 PMCID: PMC9946284 DOI: 10.1007/s10278-023-00791-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2022] [Revised: 01/26/2023] [Accepted: 01/29/2023] [Indexed: 02/24/2023] Open
Abstract
COVID-19 has claimed millions of lives since its outbreak in December 2019, and the damage continues, so it is urgent to develop new technologies to aid its diagnosis. However, state-of-the-art deep learning methods often rely on large-scale labeled data, limiting their clinical application to COVID-19 identification. Recently, capsule networks have achieved highly competitive performance on COVID-19 detection, but they require expensive routing computation or traditional matrix multiplication to deal with capsule dimensional entanglement. A more lightweight capsule network, DPDH-CapNet, is developed to address these problems and to advance automated diagnosis from COVID-19 chest X-ray images. It adopts depthwise convolution (D), pointwise convolution (P), and dilated convolution (D) to construct a new feature extractor, successfully capturing the local and global dependencies of COVID-19 pathological features. Simultaneously, it constructs the classification layer from homogeneous (H) vector capsules with an adaptive, non-iterative, non-routing mechanism. We conduct experiments on two publicly available combined datasets comprising normal, pneumonia, and COVID-19 images. With a limited number of samples, the parameters of the proposed model are reduced by a factor of nine compared to the state-of-the-art capsule network. Moreover, our model converges faster and generalizes better, and its accuracy, precision, recall, and F-measure are improved to 97.99%, 98.05%, 98.02%, and 98.03%, respectively. In addition, experimental results demonstrate that, in contrast to transfer learning methods, the proposed model does not require pre-training or a large number of training samples.
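Of the three convolution types in the model's name, dilated convolution is the one that captures longer-range (global) dependencies: it inserts gaps between kernel taps, enlarging the receptive field without adding parameters. A toy one-dimensional numpy sketch of that ingredient (illustrative only, not the paper's layer):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation=1):
    """'Valid' 1-D convolution with gaps of size `dilation - 1`
    between kernel taps, enlarging the receptive field for free."""
    k = len(kernel)
    span = (k - 1) * dilation + 1            # receptive field of one output
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(out_len)
    ])

x = np.arange(8, dtype=float)                # 0, 1, ..., 7
k = np.array([1.0, 1.0, 1.0])
assert np.allclose(dilated_conv1d(x, k, dilation=1), [3, 6, 9, 12, 15, 18])
# dilation=2 reads taps 2 apart: receptive field 5, same 3 parameters
assert np.allclose(dilated_conv1d(x, k, dilation=2), [6, 9, 12, 15])
```

Depthwise convolution applies one such kernel per channel, and pointwise (1x1) convolution then mixes channels, which is what keeps the extractor lightweight.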
Affiliation(s)
- Jianjun Yuan, College of Artificial Intelligence, Southwest University, Chongqing, 40075, China
- Fujun Wu, College of Artificial Intelligence, Southwest University, Chongqing, 40075, China
- Yuxi Li, College of Artificial Intelligence, Southwest University, Chongqing, 40075, China
- Jinyi Li, College of Artificial Intelligence, Southwest University, Chongqing, 40075, China
- Guojun Huang, College of Artificial Intelligence, Southwest University, Chongqing, 40075, China
- Quanyong Huang, College of Machinery and Automation, Wuhan University of Science and Technology, Heping Avenue No. 947, Wuhan, Hubei Province, 430091, China

89
Poola RG, Pl L, Y SS. COVID-19 diagnosis: A comprehensive review of pre-trained deep learning models based on feature extraction algorithm. RESULTS IN ENGINEERING 2023; 18:101020. [PMID: 36945336 PMCID: PMC10017171 DOI: 10.1016/j.rineng.2023.101020] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/01/2023] [Revised: 03/01/2023] [Accepted: 03/08/2023] [Indexed: 05/14/2023]
Abstract
Due to the rapid rise of COVID-19, clinical specialists are looking for fast, faultless diagnosis strategies that restrict the spread of the virus while keeping computational complexity low. Swift, high-precision COVID-19 diagnosis techniques can therefore offer valuable aid to clinical specialists. The RT-PCR test, the COVID-19 diagnosis technique in practice, is expensive and tedious. Medical imaging, namely chest X-ray radiography, makes it feasible to diagnose COVID-19 while sidestepping the shortcomings of RT-PCR. Through a variety of deep transfer learning models, this research investigates the potential of artificial intelligence-based early diagnosis of COVID-19 from chest X-ray radiographs. With 10,192 normal and 3,616 COVID-19 chest X-ray radiographs, the deep transfer learning models are optimized to further the accuracy of diagnosis. The chest X-ray radiographs undergo a data augmentation phase to develop a modified dataset for training the deep transfer learning models. The deep transfer learning architectures are trained using the features extracted in the feature extraction stage. During training, the classification of chest X-ray radiographs based on feature extraction algorithm values is converted into a feature label set containing the classified image data, with a feature string value representing the number of edges detected after edge detection. The feature label set is then tested with SVM, KNN, NN, Naive Bayes, and logistic regression classifiers to audit the quality metrics of the proposed model: accuracy, precision, F1 score, recall, and AUC. According to the assessment results, Inception-V3 dominates the six deep transfer learning models, with a training accuracy of 84.79% and a loss of 2.4%. Cubic SVM performed best among the SVM classifiers, with an AUC of 0.99, precision of 0.983, recall of 0.8977, accuracy of 95.8%, and F1 score of 0.9384. Cosine KNN fared best among the KNN classifiers, with an AUC of 0.95, precision of 0.974, recall of 0.777, accuracy of 90.8%, and F1 score of 0.864. Wide NN fared best among the NN classifiers, with an AUC of 0.98, precision of 0.975, recall of 0.907, accuracy of 95.5%, and F1 score of 0.939. Overall, the SVM classifiers topped the other classifiers on performance indicators such as accuracy, precision, recall, F1 score, and AUC, reporting better mean optimal scores. The performance assessment metrics show that the proposed methodology can aid in preliminary COVID-19 diagnosis.
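The pipeline above ends with classical classifiers applied to extracted feature vectors; as a toy illustration of that final stage, here is a cosine-similarity k-nearest-neighbour vote, the idea behind the "Cosine KNN" variant mentioned above (the feature values and cluster layout are made up, not from the paper):

```python
import numpy as np

def cosine_knn(train_x, train_y, query, k=3):
    """Classify `query` by majority vote of its k most cosine-similar
    training vectors."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(train_x) @ unit(query)       # cosine similarity to each sample
    nearest = np.argsort(-sims)[:k]          # indices of the k best matches
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

# toy 2-D features: class 0 points roughly along (1, 0), class 1 along (0, 1)
train_x = np.array([[1.0, 0.1], [0.9, 0.2], [1.1, 0.0],
                    [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]])
train_y = np.array([0, 0, 0, 1, 1, 1])
assert cosine_knn(train_x, train_y, np.array([1.0, 0.1])) == 0
assert cosine_knn(train_x, train_y, np.array([0.1, 1.0])) == 1
```

Cosine similarity compares directions only, so it is a natural choice when feature magnitudes (for example, raw edge counts) vary with image size.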
Affiliation(s)
- Lahari Pl, Dept. of ECE, SRM University, AP, India

90
Sun J, Pi P, Tang C, Wang SH, Zhang YD. CTMLP: Can MLPs replace CNNs or transformers for COVID-19 diagnosis? Comput Biol Med 2023; 159:106847. [PMID: 37068316 PMCID: PMC10098038 DOI: 10.1016/j.compbiomed.2023.106847] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 02/13/2023] [Accepted: 03/30/2023] [Indexed: 04/19/2023]
Abstract
BACKGROUND Convolutional neural networks (CNNs) and hybrids of CNNs and Vision Transformers (ViTs) are the recent mainstream methods for COVID-19 medical image diagnosis. However, pure CNNs lack global modeling ability, and hybrid CNN-ViT models suffer from large parameter counts and high computational complexity, making them difficult to use effectively for medical diagnosis in just-in-time applications. METHODS Therefore, a lightweight medical diagnosis network, CTMLP, based on convolutions and multi-layer perceptrons (MLPs) is proposed for the diagnosis of COVID-19. Previous self-supervised algorithms were designed for CNNs and ViTs, and their effectiveness for MLPs was not yet known; at the same time, the medical image domain lacks ImageNet-scale datasets for model pre-training. A pre-training scheme, TL-DeCo, based on transfer learning and self-supervised learning was therefore constructed. Because TL-DeCo is too tedious and resource-consuming to rebuild for each new model, a guided self-supervised pre-training scheme was also constructed for pre-training new lightweight models. RESULTS The proposed CTMLP achieves an accuracy of 97.51%, an F1 score of 97.43%, and a recall of 98.91% without pre-training, with only 48% of the parameters of ResNet50. Furthermore, the proposed guided self-supervised learning scheme improves the baseline of simple self-supervised learning by 1%-1.27%. CONCLUSION The final results show that the proposed CTMLP can replace CNNs or Transformers for more efficient diagnosis of COVID-19. In addition, the additional pre-training framework makes it more promising in clinical practice.
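The reason an MLP can stand in for attention is that one MLP can mix information across tokens (patches) while a second mixes across channels, the MLP-Mixer idea that MLP-based diagnosis models build on. A minimal numpy sketch of one such block with random weights and residual connections (purely illustrative; CTMLP's actual block is not specified in the abstract, and normalization/activation layers are omitted):

```python
import numpy as np

def mixer_block(x, seed=0):
    """One token-mixing + channel-mixing step. x: (tokens, channels)."""
    t, c = x.shape
    rng = np.random.default_rng(seed)
    w_tok = rng.standard_normal((t, t)) * t ** -0.5
    w_ch = rng.standard_normal((c, c)) * c ** -0.5
    x = x + (w_tok @ x)   # token mixing: each patch aggregates all patches
    x = x + (x @ w_ch)    # channel mixing: each channel aggregates all channels
    return x

x = np.random.default_rng(1).standard_normal((16, 32))  # 16 patches, 32 channels
y = mixer_block(x)
assert y.shape == (16, 32)
```

Unlike attention, the mixing weights here are fixed after training rather than input-dependent, which is what keeps the parameter count and compute low.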
Affiliation(s)
- Junding Sun, School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China
- Pengpeng Pi, School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China
- Chaosheng Tang, School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China
- Shui-Hua Wang, School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China; School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK; Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Yu-Dong Zhang, School of Computer Science and Technology, Henan Polytechnic University, Jiaozuo, Henan, 454000, PR China; School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK; Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia

91
Erol Doğan G, Uzbaş B. Diagnosis of COVID-19 from blood parameters using convolutional neural network. Soft comput 2023; 27:1-16. [PMID: 37362276 PMCID: PMC10225057 DOI: 10.1007/s00500-023-08508-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/10/2023] [Indexed: 06/28/2023]
Abstract
Because COVID-19 often presents asymptomatically, detecting infected individuals is complicated. Additionally, the virus mutates into many genomic variants, which increases its ability to spread. Because no specific treatment for COVID-19 can be developed in a short time, the essential goal is to reduce the virulence of the disease. Blood parameters, which contain essential clinical information about infectious diseases and are easy to access, have an important place in COVID-19 detection. The convolutional neural network (CNN) architecture, popular in image processing, produces highly successful results in COVID-19 detection models. The literature shows that COVID-19 studies with CNNs are generally done using lung images. In this study, one-dimensional (1D) blood parameter data were converted into two-dimensional (2D) image data after preprocessing, and COVID-19 detection was performed with a CNN. The t-distributed stochastic neighbor embedding (t-SNE) method was applied to transfer the feature vectors to the 2D plane, all data were framed with convex hull and minimum bounding rectangle algorithms, and the image data obtained by pixel mapping were presented to the developed 3-line CNN architecture. This study proposes an effective and successful model by combining low-cost, rapidly accessible blood parameters with a CNN architecture that processes the resulting image data with high success for COVID-19 detection. Ultimately, COVID-19 detection was achieved with a success rate of 94.85%. This study brings a new perspective to COVID-19 detection by obtaining 2D image data from 1D blood parameters and using a CNN.
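The pixel-mapping step described above, placing each embedded 2-D feature point into a fixed-size image grid, can be sketched in a few lines. The 2-D embedding itself (e.g. the t-SNE output) is assumed to be already computed; the point values below are made up, and the convex-hull framing is omitted:

```python
import numpy as np

def points_to_image(points, size=8):
    """Map 2-D points onto a size x size binary image by min-max
    scaling each axis to pixel coordinates."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    scaled = (points - lo) / (hi - lo)             # each axis into [0, 1]
    idx = np.clip((scaled * (size - 1)).round().astype(int), 0, size - 1)
    img = np.zeros((size, size))
    img[idx[:, 1], idx[:, 0]] = 1.0                # (x, y) -> pixel switched on
    return img

pts = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])  # stand-in t-SNE output
img = points_to_image(pts)
assert img.shape == (8, 8)
assert img[0, 0] == 1 and img[7, 7] == 1           # extremes land in the corners
assert img.sum() == 3
```

The resulting grid is what a standard 2-D CNN can then consume in place of a photograph.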
Affiliation(s)
- Betül Uzbaş, Computer Engineering Department, Konya Technical University, Konya, Turkey

92
Dahiya D. COVID-19 Disease Prediction Utilizing Dilated Convolution Neural Network Based Levy Flight Tunicate Swarm Optimization. WIRELESS PERSONAL COMMUNICATIONS 2023; 131:1-14. [PMID: 37360135 PMCID: PMC10224759 DOI: 10.1007/s11277-023-10505-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 05/12/2023] [Indexed: 06/28/2023]
Abstract
The worldwide COVID-19 pandemic has wreaked havoc on the health and lives of countless individuals in more than 200 countries. By October 2020, more than 44 million individuals had been afflicted, with over 1,000,000 fatalities reported. This disease, classified as a pandemic, is still being researched for diagnosis and therapy, and it is critical to diagnose the condition early in order to save lives. Deep learning-based diagnostic investigations are speeding up this process. To contribute to this field, our research proposes a deep learning-based technique for early detection of the illness. A Gaussian filter is applied to the collected CT images, and the filtered images are fed to the proposed tunicate dilated convolutional neural network, which categorizes COVID and non-COVID disease to improve the accuracy requirement. The hyperparameters of the proposed deep learning technique are optimally tuned using the proposed Levy-flight-based tunicate behaviour. To validate the proposed methodology, evaluation metrics are computed, showing the superiority of the proposed approach in COVID-19 diagnostic studies.
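Levy flights, used above to drive the hyperparameter search, are random walks whose step lengths follow a heavy-tailed distribution, so the optimizer mostly takes small local steps with occasional long jumps that escape local optima. Mantegna's algorithm is a standard way to draw such steps; a minimal numpy sketch (illustrative only; the paper's exact update rule is not given in the abstract):

```python
import numpy as np
from math import gamma, pi, sin

def levy_steps(n, beta=1.5, seed=0):
    """Draw n Levy-flight step lengths via Mantegna's algorithm."""
    rng = np.random.default_rng(seed)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, n)   # numerator: scaled Gaussian
    v = rng.normal(0, 1, n)       # denominator: standard Gaussian
    return u / np.abs(v) ** (1 / beta)

steps = levy_steps(10_000)
# heavy tail: occasional steps far larger than the typical step
assert np.abs(steps).max() > 10 * np.median(np.abs(steps))
```

In a metaheuristic such as tunicate swarm optimization, each candidate position would be perturbed by one of these steps per iteration.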
Affiliation(s)
- Deepak Dahiya
- Computer Science (Tenure Stream), School of Engineering and Computer Science, University of Pittsburgh, Johnstown, US
93
Hassan A, Elhoseny M, Kayed M. A novel and accurate deep learning-based Covid-19 diagnostic model for heart patients. SIGNAL, IMAGE AND VIDEO PROCESSING 2023; 17:1-8. [PMID: 37362230 PMCID: PMC10197036 DOI: 10.1007/s11760-023-02561-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Revised: 03/08/2023] [Accepted: 03/14/2023] [Indexed: 06/28/2023]
Abstract
Using the radiographic changes of COVID-19 in medical images, artificial intelligence techniques such as deep learning are used to extract graphical features of COVID-19 and to present a COVID-19 diagnostic tool. Differently from previous works that focus on using deep learning to analyze CT scans or X-ray images, this paper uses deep learning to scan electrocardiogram (ECG) images to diagnose COVID-19. COVID-19 patients with heart disease are the people most exposed to severe symptoms of COVID-19 and death. This indicates a special relation, still unclear, between COVID-19 and heart disease. So, as in previous works, using a general diagnostic model to detect COVID-19 in all patients based on the same rules is not accurate, as we prove later in the practical section of our paper, because the model faces dispersion in the data during the training process. So, this paper proposes a novel model that focuses on accurately diagnosing COVID-19 for heart patients only, to increase the accuracy and to reduce the waiting time of a heart patient to perform a COVID-19 diagnosis. Also, we handle the only existing dataset that contains ECGs of COVID-19 patients and produce a new version, with the help of a heart-disease expert, which consists of two classes: ECGs of heart patients with positive COVID-19 cases and ECGs of heart patients with negative COVID-19 cases. This dataset will help medical experts and data scientists to study the relation between COVID-19 and heart patients. We achieve overall accuracy, sensitivity and specificity of 99.1%, 99% and 100%, respectively. Supplementary Information The online version contains supplementary material available at 10.1007/s11760-023-02561-8.
Affiliation(s)
- Ahmed Hassan
- Faculty of Science, Beni-Suef University, Beni-Suef, 62511 Egypt
- Mohamed Elhoseny
- Faculty of Computers and Information, Mansoura University, Mansoura, 35516 Egypt
- Mohammed Kayed
- Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef, 62511 Egypt
94
Iqbal U, Imtiaz R, Saudagar AKJ, Alam KA. CRV-NET: Robust Intensity Recognition of Coronavirus in Lung Computerized Tomography Scan Images. Diagnostics (Basel) 2023; 13:diagnostics13101783. [PMID: 37238266 DOI: 10.3390/diagnostics13101783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2023] [Revised: 05/01/2023] [Accepted: 05/10/2023] [Indexed: 05/28/2023] Open
Abstract
The early diagnosis of infectious diseases is demanded by digital healthcare systems. Currently, the detection of the new coronavirus disease (COVID-19) is a major clinical requirement. Deep learning models are used for COVID-19 detection in various studies, but their robustness is still compromised. In recent years, deep learning models have increased in popularity in almost every area, particularly in medical image processing and analysis. The visualization of the human body's internal structure is critical in medical analysis; many imaging techniques are in use to perform this job. A computerized tomography (CT) scan is one of them, and it has generally been used for the non-invasive observation of the human body. The development of an automatic segmentation method for lung CT scans showing COVID-19 can save experts time and reduce human error. In this article, CRV-NET is proposed for the robust detection of COVID-19 in lung CT scan images. A public dataset (the SARS-CoV-2 CT Scan dataset) is used for the experimental work and customized according to the scenario of the proposed model. The proposed modified deep-learning-based U-Net model is trained on a custom dataset with 221 training images and their ground truth, which was labeled by an expert. The proposed model is tested on 100 test images, and the results show that the model segments COVID-19 with a satisfactory level of accuracy. Moreover, the comparison of the proposed CRV-NET with different state-of-the-art convolutional neural network (CNN) models, including the U-Net model, shows better results in terms of accuracy (96.67%) and robustness (a low epoch value in detection and the smallest training data size).
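Segmentation quality of the kind reported above is commonly scored with the Dice coefficient between a predicted mask and its expert-labeled ground truth. This is a generic helper, not the paper's evaluation code; the empty-mask convention is an assumption.

```python
# Dice similarity between two flat binary masks (lists of 0/1 pixels).

def dice(pred, truth):
    """2*|A∩B| / (|A|+|B|); 1.0 when both masks are empty (by convention)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * inter / total

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(round(dice(pred, truth), 3))  # 2 overlapping pixels out of 3+3
```

A Dice score near 1.0 indicates the predicted COVID-19 region closely matches the expert labeling.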
Affiliation(s)
- Uzair Iqbal
- Department of Artificial Intelligence and Data Science, National University of Computer and Emerging Sciences, Islamabad Campus, Islamabad 44000, Pakistan
- Romil Imtiaz
- Information and Communication Engineering, Northwestern Polytechnical University, Xi'an 710072, China
- Abdul Khader Jilani Saudagar
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Khubaib Amjad Alam
- Department of Software Engineering, National University of Computer and Emerging Sciences, Islamabad Campus, Islamabad 44000, Pakistan
95
Cheng L, Lan L, Ramalingam M, He J, Yang Y, Gao M, Shi Z. A review of current effective COVID-19 testing methods and quality control. Arch Microbiol 2023; 205:239. [PMID: 37195393 DOI: 10.1007/s00203-023-03579-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2022] [Revised: 05/04/2023] [Accepted: 05/04/2023] [Indexed: 05/18/2023]
Abstract
COVID-19 is a highly infectious disease caused by the SARS-CoV-2 virus, which primarily affects the respiratory system and can lead to severe illness. Because the virus is extremely contagious, early and accurate diagnosis of SARS-CoV-2 is crucial to contain its spread, to provide prompt treatment, and to prevent complications. Currently, the reverse transcriptase polymerase chain reaction (RT-PCR) is considered the gold standard for detecting COVID-19 in its early stages. In addition, loop-mediated isothermal amplification (LAMP), clustered regularly interspaced short palindromic repeats (CRISPR), the colloidal gold immunochromatographic assay (GICA), computed tomography (CT), and electrochemical sensors are also common tests. However, these methods vary greatly in terms of their detection efficiency, specificity, accuracy, sensitivity, cost, and throughput. Besides, most current detection methods are conducted in central hospitals and laboratories, which poses a great challenge for remote and underdeveloped areas. Therefore, it is essential to review the advantages and disadvantages of different COVID-19 detection methods, as well as the technologies that can enhance detection efficiency and improve detection quality, in greater detail.
Affiliation(s)
- Lijia Cheng
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China.
- Liang Lan
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China
- Murugan Ramalingam
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China
- Jianrong He
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China
- Yimin Yang
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China
- Min Gao
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China
- Zheng Shi
- Clinical Medical College & Affiliated Hospital, School of Basic Medical Sciences, Chengdu University, Chengdu, 610106, China.
96
DeGrave AJ, Cai ZR, Janizek JD, Daneshjou R, Lee SI. Dissection of medical AI reasoning processes via physician and generative-AI collaboration. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2023:2023.05.12.23289878. [PMID: 37292705 PMCID: PMC10246034 DOI: 10.1101/2023.05.12.23289878] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Despite the proliferation and clinical deployment of artificial intelligence (AI)-based medical software devices, most remain black boxes that are uninterpretable to key stakeholders including patients, physicians, and even the developers of the devices. Here, we present a general model auditing framework that combines insights from medical experts with a highly expressive form of explainable AI that leverages generative models, to understand the reasoning processes of AI devices. We then apply this framework to generate the first thorough, medically interpretable picture of the reasoning processes of machine-learning-based medical image AI. In our synergistic framework, a generative model first renders "counterfactual" medical images, which in essence visually represent the reasoning process of a medical AI device, and then physicians translate these counterfactual images to medically meaningful features. As our use case, we audit five high-profile AI devices in dermatology, an area of particular interest since dermatology AI devices are beginning to achieve deployment globally. We reveal how dermatology AI devices rely both on features used by human dermatologists, such as lesional pigmentation patterns, as well as multiple, previously unreported, potentially undesirable features, such as background skin texture and image color balance. Our study also sets a precedent for the rigorous application of explainable AI to understand AI in any specialized domain and provides a means for practitioners, clinicians, and regulators to uncloak AI's powerful but previously enigmatic reasoning processes in a medically understandable way.
Affiliation(s)
- Alex J DeGrave
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Medical Scientist Training Program, University of Washington
- Zhuo Ran Cai
- Program for Clinical Research and Technology, Stanford University
- Joseph D Janizek
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Medical Scientist Training Program, University of Washington
- Roxana Daneshjou
- Department of Dermatology, Stanford School of Medicine
- Department of Biomedical Data Science, Stanford School of Medicine
- Su-In Lee
- Paul G. Allen School of Computer Science and Engineering, University of Washington
97
Lee MH, Shomanov A, Kudaibergenova M, Viderman D. Deep Learning Methods for Interpretation of Pulmonary CT and X-ray Images in Patients with COVID-19-Related Lung Involvement: A Systematic Review. J Clin Med 2023; 12:jcm12103446. [PMID: 37240552 DOI: 10.3390/jcm12103446] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 04/25/2023] [Accepted: 05/06/2023] [Indexed: 05/28/2023] Open
Abstract
SARS-CoV-2 is a novel virus that has been affecting the global population by spreading rapidly and causing severe complications, which require prompt and elaborate emergency treatment. Automatic tools to diagnose COVID-19 could potentially be an important and useful aid. Radiologists and clinicians could potentially rely on interpretable AI technologies to address the diagnosis and monitoring of COVID-19 patients. This paper aims to provide a comprehensive analysis of the state-of-the-art deep learning techniques for COVID-19 classification. The previous studies are methodically evaluated, and a summary of the proposed convolutional neural network (CNN)-based classification approaches is presented. The reviewed papers have presented a variety of CNN models and architectures developed to provide an accurate and quick automatic tool to diagnose COVID-19 based on presented CT scan or X-ray images. In this systematic review, we focused on the critical components of the deep learning approach, such as network architecture, model complexity, parameter optimization, explainability, and dataset/code availability. The literature search yielded a large number of studies published over the course of the pandemic, and we summarized their efforts. State-of-the-art CNN architectures, with their strengths and weaknesses, are discussed with respect to diverse technical and clinical evaluation metrics to safely implement current AI studies in medical practice.
Affiliation(s)
- Min-Ho Lee
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Adai Shomanov
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Madina Kudaibergenova
- School of Engineering and Digital Sciences, Nazarbayev University, Kabanbay Batyr Ave. 53, Astana 010000, Kazakhstan
- Dmitriy Viderman
- School of Medicine, Nazarbayev University, 5/1 Kerey and Zhanibek Khandar Str., Astana 010000, Kazakhstan
98
Tenali N, Babu GRM. HQDCNet: Hybrid Quantum Dilated Convolution Neural Network for detecting covid-19 in the context of Big Data Analytics. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 83:1-27. [PMID: 37362720 PMCID: PMC10176300 DOI: 10.1007/s11042-023-15515-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Revised: 01/12/2023] [Accepted: 04/19/2023] [Indexed: 06/28/2023]
Abstract
Medical care services are changing to address problems with the development of big data frameworks as a result of the widespread use of big data analytics. COVID-19 has recently been one of the leading causes of death. Since then, diagnostic tools based on chest X-ray images have been developed for detecting the illness. Breakthroughs in big data technology provide a promising option for reducing the spread of the contagious COVID disease. To increase the model's confidence, it is necessary to integrate a large number of training samples; however, handling such data may be difficult. With the development of big data technology, this research presents a unique method to identify and categorise COVID illness. To manage the incoming big data, a massive volume of chest X-ray images is gathered and analysed using a distributed computing server built on the Hadoop framework. The fuzzy empowered weighted k-means algorithm is then employed to group identical regions in the input X-ray images, which in turn segments the dominating portions of an image. A hybrid quantum dilated convolution neural network is suggested to classify various kinds of COVID instances, and a Black Widow-based Moth Flame optimizer is also presented to improve the performance of the classifier. The performance analysis of COVID-19 detection makes use of the COVID-19 Radiography dataset. The suggested HQDCNet approach has an accuracy of 99.01%. The experimental results are evaluated in Python using performance metrics such as accuracy, precision, recall, F-measure, and loss.
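The fuzzy grouping idea behind "fuzzy empowered weighted k-means" can be illustrated with the standard fuzzy c-means membership update, where each point belongs to every cluster with a degree inversely related to its distance. This is a one-step sketch on 1D data with the usual fuzzifier m = 2; the paper's weighted variant and Hadoop pipeline are not reproduced, and the function name is a placeholder.

```python
# One membership-update step of fuzzy c-means:
#   u_ij = 1 / sum_k (d_ij / d_kj) ** (2 / (m - 1))

def fuzzy_memberships(points, centers, m=2.0):
    """Return per-point membership degrees for each cluster center."""
    memberships = []
    for x in points:
        dists = [abs(x - c) for c in centers]
        if 0.0 in dists:  # point sits exactly on a center
            memberships.append([1.0 if d == 0.0 else 0.0 for d in dists])
            continue
        exp = 2.0 / (m - 1.0)
        row = [1.0 / sum((d_i / d_k) ** exp for d_k in dists) for d_i in dists]
        memberships.append(row)
    return memberships

u = fuzzy_memberships([0.0, 5.0, 10.0], centers=[0.0, 10.0])
print([round(v, 2) for v in u[1]])  # the midpoint belongs equally to both
```

Each row sums to 1, so unlike hard k-means a point can straddle clusters — useful when lung regions have ambiguous boundaries.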
Affiliation(s)
- Nagamani Tenali
- Department of CSE, Y.S. Rajasekhar Reddy University College of Engineering & Technology, Acharya Nagarjuna University, Guntur, Nagarjuna Nagar India
- Gatram Rama Mohan Babu
- Computer Science and Engineering (AI&ML), RVR & JC College of Engineering, Guntur, Chowdavaram India
99
Gupta A, Mishra S, Sahu SC, Srinivasarao U, Naik KJ. Application of Convolutional Neural Networks for COVID-19 Detection in X-ray Images Using InceptionV3 and U-Net. NEW GENERATION COMPUTING 2023; 41:475-502. [PMID: 37229179 PMCID: PMC10173914 DOI: 10.1007/s00354-023-00217-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 04/25/2023] [Indexed: 05/27/2023]
Abstract
COVID-19 expanded across the globe after its initial cases were discovered in December 2019 in Wuhan, China. Because the virus has impacted people's health worldwide, its fast identification is essential for preventing disease spread and reducing mortality rates. The reverse transcription polymerase chain reaction (RT-PCR) is the primary method for detecting COVID-19 disease; it has high costs and long turnaround times. Hence, quick and easy-to-use innovative diagnostic instruments are required. According to a new study, COVID-19 is linked to findings in chest X-ray pictures. The suggested approach includes a pre-processing stage with lung segmentation, removing the surroundings that do not provide information pertinent to the task and may result in biased outcomes. The InceptionV3 and U-Net deep learning models used in this work process the X-ray images and classify them as COVID-19 negative or positive. The CNN models were trained using a transfer learning approach. Finally, the findings are analyzed and interpreted through different examples. The obtained COVID-19 detection accuracy is around 99% for the best models.
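The transfer-learning recipe described above boils down to keeping a pretrained backbone's features fixed and fitting only a new classification head. This toy sketch trains a logistic-regression "head" on frozen 2D features by plain gradient descent; the actual InceptionV3/U-Net pipeline (typically built in a framework such as Keras) is not reproduced, and all data, names, and hyperparameters here are illustrative assumptions.

```python
import math

def train_head(features, labels, lr=0.5, epochs=200):
    """Fit weights w and bias b of a sigmoid head on frozen features."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Stand-in for frozen "backbone" features of two separable classes.
feats = [[0.0, 0.1], [0.2, 0.0], [1.0, 0.9], [0.9, 1.0]]
labels = [0, 0, 1, 1]  # 0 = negative, 1 = COVID-19 positive (illustrative)
w, b = train_head(feats, labels)

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

print([predict(x) for x in feats])  # the new head separates the classes
```

Only `w` and `b` are updated; in the real setting the backbone's millions of pretrained weights stay untouched, which is what makes training feasible on small medical datasets.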
Affiliation(s)
- Aman Gupta
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh India
- Shashank Mishra
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh India
- Sourav Chandan Sahu
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh India
- Ulligaddala Srinivasarao
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh India
- K. Jairam Naik
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh India
100
Shareef AQ, Kurnaz S. Deep Learning Based COVID-19 Detection via Hard Voting Ensemble Method. WIRELESS PERSONAL COMMUNICATIONS 2023:1-12. [PMID: 37360134 PMCID: PMC10170044 DOI: 10.1007/s11277-023-10485-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 04/24/2023] [Indexed: 06/28/2023]
Abstract
Healthcare systems throughout the world are under a great deal of strain because of the continuing COVID-19 epidemic, making early and precise diagnosis critical for limiting the virus's propagation and efficiently treating patients. The utilization of medical imaging methods such as X-rays, which can offer valuable insights into the virus's presence in the lungs, can help to speed up the diagnosis procedure. We present a unique ensemble approach to identify COVID-19 using X-ray pictures (X-ray-PIC) in this paper. The suggested approach, based on hard voting, combines the outputs of three classic deep learning models: CNN, VGG16, and DenseNet. We also apply transfer learning to enhance performance on small medical image datasets. Experiments indicate that the suggested strategy outperforms current techniques with 97% accuracy, 96% precision, 100% recall, and a 98% F1-score. These results demonstrate the effectiveness of using ensemble approaches and transfer learning for COVID-19 diagnosis using X-ray-PIC, which could greatly aid early detection and reduce the burden on global health systems.
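Hard voting, as used above, is in its simplest form a majority vote over the member models' predicted labels. The sketch below shows just that voting step; the model names in the comment and the class labels are placeholders, and the CNN/VGG16/DenseNet members themselves are not reproduced.

```python
from collections import Counter

def hard_vote(predictions):
    """Return the majority class among the models' predicted labels."""
    counts = Counter(predictions)
    # most_common orders by count; ties resolve by first-seen label.
    return counts.most_common(1)[0][0]

votes = ["covid", "normal", "covid"]  # e.g. CNN, VGG16, DenseNet outputs
print(hard_vote(votes))  # -> covid
```

With an odd number of binary classifiers the vote is never tied, which is one reason three-member ensembles are a common choice.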
Affiliation(s)
- Asaad Qasim Shareef
- Department of Electrical Computer Engineering, Altinbas University, Istanbul, Turkey
- Sefer Kurnaz
- Department of Electrical Computer Engineering, Altinbas University, Istanbul, Turkey