1
Kim J, Lee H, Oh SS, Jang J, Lee H. Automated Quantification of Total Cerebral Blood Flow from Phase-Contrast MRI and Deep Learning. Journal of Imaging Informatics in Medicine 2024; 37:563-574. [PMID: 38343224] [PMCID: PMC11031545] [DOI: 10.1007/s10278-023-00948-0]
Abstract
Knowledge of the blood input to the brain, represented as total cerebral blood flow (tCBF), is important in evaluating brain health. Phase-contrast (PC) magnetic resonance imaging (MRI) enables blood velocity mapping, allowing for noninvasive measurements of tCBF. In the procedure, manual selection of the brain-feeding arteries is an essential step, but it is time-consuming and often subjective. Thus, the purpose of this work was to develop and validate a deep learning (DL)-based technique for automated tCBF quantification. To enhance DL segmentation performance on arterial blood vessels, magnitude and phase images of PC MRI were multiplied together several times in the preprocessing step. Thereafter, a U-Net was trained on 218 images for three-class segmentation. Network performance was evaluated in terms of the Dice coefficient and the intersection-over-union (IoU) on 40 test images and, additionally, on 20 externally acquired datasets. Finally, tCBF was calculated from the DL-predicted vessel segmentation maps, and its accuracy was statistically assessed with the coefficient of determination (R2), the intraclass correlation coefficient (ICC), paired t-tests, and Bland-Altman analysis, in comparison to manually derived values. Overall, the DL segmentation network provided accurate labeling of arterial blood vessels for both internal (Dice=0.92, IoU=0.86) and external (Dice=0.90, IoU=0.82) tests. Furthermore, statistical analyses of the tCBF estimates revealed good agreement between automated and manual quantifications in both the internal (R2=0.85, ICC=0.91, p=0.52) and external (R2=0.88, ICC=0.93, p=0.88) test groups. The results suggest the feasibility of a simple, automated protocol for quantifying tCBF from neck PC MRI using deep learning.
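As a concrete illustration of the evaluation and quantification steps described above, the sketch below computes Dice and IoU between a predicted and a reference vessel mask and integrates through-plane velocity over the segmented pixels to obtain a flow estimate. It is a minimal sketch only: the function names, the pixel-area argument, and the unit conventions are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Overlap metrics between two binary vessel masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

def total_cbf_ml_per_min(velocity_cm_s: np.ndarray, vessel_mask: np.ndarray,
                         pixel_area_cm2: float) -> float:
    """Integrate through-plane velocity over segmented arterial pixels.

    flow [mL/min] = sum(velocity [cm/s] * pixel area [cm^2]) * 60
    """
    v = velocity_cm_s[vessel_mask.astype(bool)]
    return float(v.sum() * pixel_area_cm2 * 60.0)
```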
Affiliation(s)
- Jinwon Kim
- School of Electronic and Electrical Engineering, Kyungpook National University, IT1-603, Daehak-ro 80, Buk-gu, Daegu, 41075, Republic of Korea
- Hyebin Lee
- Department of Radiology, College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea
- Sung Suk Oh
- Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation (K-MEDI hub), Daegu, Republic of Korea
- Jinhee Jang
- Department of Radiology, College of Medicine, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea
- Hyunyeol Lee
- School of Electronic and Electrical Engineering, Kyungpook National University, IT1-603, Daehak-ro 80, Buk-gu, Daegu, 41075, Republic of Korea.
2
Chadaga K, Prabhu S, Sampathila N, Chadaga R, Umakanth S, Bhat D, G S SK. Explainable artificial intelligence approaches for COVID-19 prognosis prediction using clinical markers. Sci Rep 2024; 14:1783. [PMID: 38245638] [PMCID: PMC10799946] [DOI: 10.1038/s41598-024-52428-2]
Abstract
COVID-19 emerged and proved to be fatal, causing millions of deaths worldwide. Vaccines were eventually developed, effectively preventing the severe symptoms caused by the disease. However, part of the population (the elderly and patients with comorbidities) is still vulnerable to severe symptoms such as breathlessness and chest pain. Identifying these patients in advance is imperative to prevent a poor prognosis. Hence, machine learning and deep learning algorithms have been used for early COVID-19 severity prediction using clinical and laboratory markers. The COVID-19 data were collected from two Manipal hospitals after ethical clearance was obtained. Multiple nature-inspired feature selection algorithms were used to choose the most crucial markers. The classifiers achieved a maximum test accuracy of 95%. The classifiers' predictions were then interpreted using five explainable artificial intelligence (XAI) techniques. According to the XAI analyses, the most important markers are C-reactive protein, basophils, lymphocytes, albumin, D-dimer, and neutrophils. The models could be deployed in various healthcare facilities to predict COVID-19 severity in advance so that appropriate treatments could be provided to mitigate a severe prognosis. The computer-aided diagnostic method can also assist healthcare professionals and ease the burden on an already strained healthcare infrastructure.
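The abstract does not name the five XAI techniques applied; the sketch below uses permutation importance purely as one standard, model-agnostic way to rank clinical markers by their contribution to a severity classifier. The classifier choice, split sizes, and parameters are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

def rank_markers(X: np.ndarray, y: np.ndarray, marker_names: list[str]):
    """Fit a classifier on clinical markers and rank them by permutation importance."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
    order = np.argsort(result.importances_mean)[::-1]          # most important first
    return [(marker_names[i], float(result.importances_mean[i])) for i in order]
```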
Affiliation(s)
- Krishnaraj Chadaga
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India.
- Srikanth Prabhu
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India.
- Niranjana Sampathila
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India.
- Rajagopala Chadaga
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Shashikiran Umakanth
- Department of Medicine, Dr. TMA Hospital, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Devadas Bhat
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Shashi Kumar G S
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
3
Eliwa EHI, El Koshiry AM, Abd El-Hafeez T, Farghaly HM. Utilizing convolutional neural networks to classify monkeypox skin lesions. Sci Rep 2023; 13:14495. [PMID: 37661211] [PMCID: PMC10475460] [DOI: 10.1038/s41598-023-41545-z]
Abstract
Monkeypox is a rare viral disease that can cause severe illness in humans, presenting with skin lesions and rashes. However, accurately diagnosing monkeypox based on visual inspection of the lesions can be challenging and time-consuming, especially in resource-limited settings where laboratory tests may not be available. In recent years, deep learning methods, particularly Convolutional Neural Networks (CNNs), have shown great potential in image recognition and classification tasks. Accordingly, this study proposes a CNN-based approach to classify monkeypox skin lesions. Additionally, the study optimized the CNN model using the Grey Wolf Optimizer (GWO) algorithm, resulting in a significant improvement in accuracy, precision, recall, F1-score, and AUC compared to the non-optimized model. The GWO optimization strategy can enhance the performance of CNN models on similar tasks. The optimized model achieved an impressive accuracy of 95.3%, indicating that GWO optimization improved the model's ability to discriminate between positive and negative classes. The proposed approach has several potential benefits for improving the accuracy and efficiency of monkeypox diagnosis and surveillance. It could enable faster and more accurate diagnosis of monkeypox skin lesions, leading to earlier detection and better patient outcomes. Furthermore, the approach could have crucial public health implications for controlling and preventing monkeypox outbreaks. Overall, this study offers a novel and highly effective approach for diagnosing monkeypox, which could have significant real-world applications.
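To make the optimization step more concrete, the sketch below implements the canonical Grey Wolf Optimizer update (candidate solutions move toward the three current best wolves, alpha, beta, and delta, with a coefficient that decays from 2 to 0) for minimizing a generic objective such as validation error over CNN hyperparameters. The population size, iteration count, bounds, and the `validate_cnn` function in the usage comment are placeholders, not the paper's settings.

```python
import numpy as np

def grey_wolf_optimizer(objective, bounds, n_wolves=10, n_iter=50, seed=0):
    """Minimize `objective` over box constraints `bounds` with the canonical GWO update."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds, dtype=float).T
    dim = len(bounds)
    wolves = rng.uniform(low, high, size=(n_wolves, dim))
    fitness = np.array([objective(w) for w in wolves])

    for t in range(n_iter):
        a = 2.0 * (1.0 - t / n_iter)                # control parameter decays from 2 to 0
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]      # three best wolves lead the pack
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):     # move toward each leader
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                new_pos += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new_pos / 3.0, low, high)
            fitness[i] = objective(wolves[i])

    best = int(np.argmin(fitness))
    return wolves[best], float(fitness[best])

# Illustrative use (placeholder): tune (learning rate, dropout) of a CNN against a
# validation-error function `validate_cnn`, which is assumed here, not provided.
# best_x, best_err = grey_wolf_optimizer(
#     lambda x: validate_cnn(lr=x[0], dropout=x[1]),
#     bounds=[(1e-4, 1e-1), (0.0, 0.5)])
```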
Affiliation(s)
- Entesar Hamed I Eliwa
- Department of Mathematics and Statistics, College of Science, King Faisal University, P.O. Box: 400, 31982, Al-Ahsa, Saudi Arabia.
- Department of Computer Science, Faculty of Science, Minia University, Minya, Egypt.
- Amr Mohamed El Koshiry
- Department of Curricula and Teaching Methods, College of Education, King Faisal University, P.O. Box: 400, 31982, Al-Ahsa, Saudi Arabia.
- Faculty of Specific Education, Minia University, Minya, Egypt.
- Tarek Abd El-Hafeez
- Department of Computer Science, Faculty of Science, Minia University, Minya, Egypt.
- Computer Science Unit, Deraya University, New Minya, Egypt.
4
Emara HM, Shoaib MR, El-Shafai W, Elwekeil M, Hemdan EED, Fouda MM, Taha TE, El-Fishawy AS, El-Rabaie ESM, El-Samie FEA. Simultaneous Super-Resolution and Classification of Lung Disease Scans. Diagnostics (Basel) 2023; 13:1319. [PMID: 37046537] [PMCID: PMC10093568] [DOI: 10.3390/diagnostics13071319]
Abstract
Acute lower respiratory infection is a leading cause of death in developing countries; hence, progress has been made toward early detection and treatment. There is still a need for improved diagnostic and therapeutic strategies, particularly in resource-limited settings. Chest X-ray and computed tomography (CT) have the potential to serve as effective screening tools for lower respiratory infections, but the use of artificial intelligence (AI) in these areas is limited. To address this gap, we present a computer-aided diagnostic system for chest X-ray and CT images of several common pulmonary diseases, including COVID-19, viral pneumonia, bacterial pneumonia, tuberculosis, lung opacity, and various types of carcinoma. The proposed system relies on super-resolution (SR) techniques to enhance image detail. Deep learning (DL) techniques are used for both SR reconstruction and classification, with the InceptionResNetV2 model used as a feature extractor in conjunction with a multi-class support vector machine (MCSVM) classifier. In this paper, we compare the proposed model's performance to that of other classification models, such as ResNet101 and InceptionV3, and evaluate the effectiveness of using both softmax and MCSVM classifiers. The proposed system was tested on three publicly available datasets of CT and X-ray images and achieved a classification accuracy of 98.028% using the combination of SR and InceptionResNetV2. Overall, our system has the potential to serve as a valuable screening tool for lower respiratory disorders and assist clinicians in interpreting chest X-ray and CT images. In resource-limited settings, it can also provide valuable diagnostic support.
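The pairing of a pretrained CNN feature extractor with an SVM head can be sketched as below: a minimal Keras/scikit-learn illustration of that design, assuming ImageNet weights, a 299x299 input, and an RBF kernel, with the super-resolution step and dataset handling omitted. It is not the authors' implementation.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Pretrained backbone used purely as a feature extractor (global-average-pooled).
backbone = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(299, 299, 3))

def extract_features(images: np.ndarray) -> np.ndarray:
    """images: float array of shape (N, 299, 299, 3), assumed already super-resolved."""
    x = tf.keras.applications.inception_resnet_v2.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)

def train_classifier(train_images: np.ndarray, train_labels: np.ndarray) -> SVC:
    """Multi-class SVM head (one-vs-rest by default in scikit-learn's SVC)."""
    feats = extract_features(train_images)
    clf = SVC(kernel="rbf", C=10.0)   # hyperparameters are placeholders
    clf.fit(feats, train_labels)
    return clf
```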
Affiliation(s)
- Heba M. Emara
- Department of Electronics and Communications Engineering, High Institute of Electronic Engineering, Ministry of Higher Education, Bilbis-Sharqiya 44621, Egypt
- Mohamed R. Shoaib
- School of Computer Science and Engineering (SCSE), Nanyang Technological University (NTU), Singapore 639798, Singapore
- Walid El-Shafai
- Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Mohamed Elwekeil
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Ezz El-Din Hemdan
- Department of Computer Science and Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Mostafa M. Fouda
- Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Taha E. Taha
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Adel S. El-Fishawy
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- El-Sayed M. El-Rabaie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Fathi E. Abd El-Samie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf 32952, Egypt
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
5
Berdahl CT, Baker L, Mann S, Osoba O, Girosi F. Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review. JMIR AI 2023; 2:e42936. [PMID: 38875587] [PMCID: PMC11041459] [DOI: 10.2196/42936]
Abstract
BACKGROUND: Emerging artificial intelligence (AI) applications have the potential to improve health, but they may also perpetuate or exacerbate inequities.
OBJECTIVE: This review aims to provide a comprehensive overview of the health equity issues related to the use of AI applications and to identify strategies proposed to address them.
METHODS: We searched PubMed, Web of Science, the IEEE (Institute of Electrical and Electronics Engineers) Xplore Digital Library, ProQuest U.S. Newsstream, Academic Search Complete, the Food and Drug Administration (FDA) website, and ClinicalTrials.gov to identify academic and gray literature related to AI and health equity published between 2014 and 2021, as well as additional literature on AI and health equity during the COVID-19 pandemic published in 2020 and 2021. Literature was eligible for inclusion in our review if it identified at least one equity issue and a corresponding strategy to address it. To organize and synthesize equity issues, we adopted a 4-step AI application framework: Background Context, Data Characteristics, Model Design, and Deployment. We then created a many-to-many mapping of the links between issues and strategies.
RESULTS: In 660 documents, we identified 18 equity issues and 15 strategies to address them. Equity issues related to Data Characteristics and Model Design were the most common. The most commonly recommended strategies for improving equity were improving the quantity and quality of data, evaluating the disparities introduced by an application, increasing model reporting and transparency, involving the broader community in AI application development, and improving governance.
CONCLUSIONS: Stakeholders should review our many-to-many mapping of equity issues and strategies when planning, developing, and implementing AI applications in health care so that they can make appropriate plans to ensure equity for populations affected by their products. AI application developers should consider adopting equity-focused checklists, and regulators such as the FDA should consider requiring them. Because our review was limited to documents published online, developers may have unpublished knowledge of additional issues and strategies that we were unable to identify.
Affiliation(s)
- Carl Thomas Berdahl
- RAND Corporation, Santa Monica, CA, United States
- Department of Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Department of Emergency Medicine, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Sean Mann
- RAND Corporation, Santa Monica, CA, United States
- Osonde Osoba
- RAND Corporation, Santa Monica, CA, United States
6
Sørensen PJ, Carlsen JF, Larsen VA, Andersen FL, Ladefoged CN, Nielsen MB, Poulsen HS, Hansen AE. Evaluation of the HD-GLIO Deep Learning Algorithm for Brain Tumour Segmentation on Postoperative MRI. Diagnostics (Basel) 2023; 13:363. [PMID: 36766468] [PMCID: PMC9914320] [DOI: 10.3390/diagnostics13030363]
Abstract
In the context of brain tumour response assessment, deep learning-based three-dimensional (3D) tumour segmentation has shown potential to enter the routine radiological workflow. The purpose of the present study was to perform an external evaluation of a state-of-the-art deep learning 3D brain tumour segmentation algorithm (HD-GLIO) on an independent cohort of consecutive, post-operative patients. For 66 consecutive magnetic resonance imaging examinations, we compared delineations of contrast-enhancing (CE) tumour lesions and non-enhancing T2/FLAIR hyperintense abnormality (NE) lesions by the HD-GLIO algorithm and radiologists using Dice similarity coefficients (Dice). Volume agreement was assessed using concordance correlation coefficients (CCCs) and Bland-Altman plots. The algorithm performed very well on the segmentation of NE volumes (median Dice = 0.79) and of CE tumour volumes larger than 1.0 cm3 (median Dice = 0.86). When all cases with CE tumour lesions were considered, performance dropped significantly (median Dice = 0.40). Volume agreement was excellent, with CCCs of 0.997 (CE tumour volumes) and 0.922 (NE volumes). The findings have implications for the application of the HD-GLIO algorithm in the routine radiological workflow, where small contrast-enhancing tumours will constitute a considerable share of follow-up cases. Our study underlines that independent validations on clinical datasets are key to establishing the robustness of deep learning algorithms.
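For readers who want to reproduce the agreement statistics reported above on their own segmentations, the sketch below computes the Dice coefficient between binary masks, Lin's concordance correlation coefficient between paired volume measurements, and the Bland-Altman bias with 95% limits of agreement. It is a generic sketch of these standard formulas, not code from the study.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary lesion masks."""
    a, b = a.astype(bool), b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))

def concordance_cc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired volume measurements."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(2.0 * cov / (vx + vy + (mx - my) ** 2))

def bland_altman(x: np.ndarray, y: np.ndarray):
    """Return the mean difference (bias) and 95% limits of agreement."""
    d = x - y
    bias, sd = d.mean(), d.std(ddof=1)
    return float(bias), (float(bias - 1.96 * sd), float(bias + 1.96 * sd))
```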
Affiliation(s)
- Peter Jagd Sørensen
- Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- The DCCC Brain Tumor Center, 2100 Copenhagen, Denmark
- Jonathan Frederik Carlsen
- Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Vibeke Andrée Larsen
- Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Flemming Littrup Andersen
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Department of Clinical Physiology and Nuclear Medicine, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Michael Bachmann Nielsen
- Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- Hans Skovgaard Poulsen
- The DCCC Brain Tumor Center, 2100 Copenhagen, Denmark
- Department of Oncology, Centre for Cancer and Organ Diseases, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Adam Espe Hansen
- Department of Radiology, Centre of Diagnostic Investigation, Copenhagen University Hospital—Rigshospitalet, 2100 Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, 2100 Copenhagen, Denmark
- The DCCC Brain Tumor Center, 2100 Copenhagen, Denmark
7
Li L, Liu L, Du X, Wang X, Zhang Z, Zhang J, Zhang P, Liu J. CGUN-2A: Deep Graph Convolutional Network via Contrastive Learning for Large-Scale Zero-Shot Image Classification. Sensors (Basel) 2022; 22:9980. [PMID: 36560351] [PMCID: PMC9782518] [DOI: 10.3390/s22249980]
Abstract
Taxonomy illustrates that natural creatures can be classified into a hierarchy. The connections between species are explicit and objective and can be organized into a knowledge graph (KG). It is a challenging task to mine features of known categories from the KG and to reason about unknown categories. The Graph Convolutional Network (GCN) has recently been viewed as a potential approach to zero-shot learning. GCN enables knowledge transfer by sharing the statistical strength of nodes in the graph. More layers of graph convolution are stacked in order to aggregate the hierarchical information in the KG. However, the Laplacian over-smoothing problem becomes severe as the number of GCN layers increases, which causes node features to become increasingly similar and degrades performance on zero-shot image classification tasks. We consider two approaches to mitigating the Laplacian over-smoothing problem, namely reducing invalid node aggregation and improving the discriminability among nodes in the deep graph network. We propose a top-k graph pooling method based on the self-attention mechanism to control specific node aggregation, and we additionally introduce a dual structural symmetric knowledge graph to enhance the representation of nodes in the latent space. Finally, we apply these new concepts to the recently widely used contrastive learning framework and propose a novel Contrastive Graph U-Net with two Attention-based graph pooling (Att-gPool) layers, CGUN-2A, which explicitly alleviates the Laplacian over-smoothing problem. To evaluate the performance of the method on complex real-world scenes, we test it on a large-scale zero-shot image classification dataset. Extensive experiments show the positive effect of allowing nodes to perform specific aggregation, as well as of homogeneous graph comparison, in our deep graph network, and show how it significantly boosts zero-shot image classification performance. The Hit@1 accuracy is 17.5% higher, in relative terms, than that of the baseline model on the ImageNet21K dataset.
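The attention-based top-k pooling idea can be illustrated with a small, framework-agnostic sketch: each node receives a learned attention score, only the k highest-scoring nodes are kept, their features are gated by the scores, and the adjacency is restricted to the retained nodes. This follows the generic Graph U-Net style gPool operation under assumed shapes and a single scoring vector; it is not the paper's Att-gPool layer.

```python
import numpy as np

def attention_topk_pool(X: np.ndarray, A: np.ndarray, w: np.ndarray, k: int):
    """Self-attention top-k graph pooling (generic sketch).

    X: (N, F) node features, A: (N, N) adjacency, w: (F,) learnable scoring vector.
    """
    scores = np.tanh(X @ w / np.linalg.norm(w))   # one attention score per node
    idx = np.argsort(scores)[-k:]                 # indices of the k highest-scoring nodes
    X_pooled = X[idx] * scores[idx, None]         # gate retained features by their scores
    A_pooled = A[np.ix_(idx, idx)]                # induced subgraph adjacency
    return X_pooled, A_pooled, idx
```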
Affiliation(s)
- Liangwei Li
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Lin Liu
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Xiaohui Du
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Xiangzhou Wang
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Ziruo Zhang
- School of Information and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Jing Zhang
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Ping Zhang
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
- Juanxiu Liu
- MOEMIL Laboratory, School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, No. 4, Section 2, North Jianshe Road, Chengdu 610054, China
8
Addo D, Zhou S, Jackson JK, Nneji GU, Monday HN, Sarpong K, Patamia RA, Ekong F, Owusu-Agyei CA. EVAE-Net: An Ensemble Variational Autoencoder Deep Learning Network for COVID-19 Classification Based on Chest X-ray Images. Diagnostics (Basel) 2022; 12:2569. [PMID: 36359413] [PMCID: PMC9689048] [DOI: 10.3390/diagnostics12112569]
Abstract
The COVID-19 pandemic has had a significant impact on many lives and the economies of many countries since late December 2019. Early detection with high accuracy is essential to help break the chain of transmission. Several radiological methodologies, such as CT scan and chest X-ray, have been employed in diagnosing and monitoring COVID-19 disease. Still, these methodologies are time-consuming and require trial and error. Several studies are currently applying machine learning techniques to deal with COVID-19. This study exploits the latent embeddings of variational autoencoders combined with ensemble techniques to propose three effective EVAE-Net models to detect COVID-19 disease. Two encoders are trained on chest X-ray images to generate two feature maps. The feature maps are concatenated and passed to either a combined or an individual reparameterization phase to generate latent embeddings by sampling from a distribution. The latent embeddings are concatenated and passed to a classification head for classification. The chest X-ray images are drawn from the COVID-19 Radiography Dataset on Kaggle. The performance of the three models is evaluated. The proposed models show satisfactory performance, with the best model achieving 99.19% and 98.66% accuracy on four classes and three classes, respectively.
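The reparameterization and fusion steps described above can be sketched in a few lines of PyTorch: the two encoders' features are concatenated, mapped to a mean and log-variance, sampled as z = mu + sigma * eps, and the latent code is passed to a classification head. The module and function names, and the use of caller-supplied projection layers and head, are illustrative assumptions rather than the EVAE-Net implementation.

```python
import torch
import torch.nn as nn

class Reparameterize(nn.Module):
    """Sample z = mu + sigma * eps with eps ~ N(0, I); sigma supplied as log-variance."""
    def forward(self, mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

def fuse_and_classify(feat_a, feat_b, to_mu, to_logvar, head,
                      reparam=Reparameterize()):
    """Concatenate two encoders' features, sample a latent code, and classify.

    `to_mu`, `to_logvar`, and `head` are assumed to be linear layers / small MLPs
    provided by the caller; they stand in for the paper's projection and classifier.
    """
    h = torch.cat([feat_a, feat_b], dim=1)     # fuse the two feature maps
    z = reparam(to_mu(h), to_logvar(h))        # stochastic latent embedding
    return head(z)                             # class logits
```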
Affiliation(s)
- Daniel Addo
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
- Shijie Zhou
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
- Jehoiada Kofi Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
- Grace Ugochi Nneji
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
- Happy Nkanta Monday
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
- Kwabena Sarpong
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
- Rutherford Agbeshi Patamia
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
- Favour Ekong
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China
9
Intelligent Facemask Coverage Detector in a World of Chaos. Processes (Basel) 2022. [DOI: 10.3390/pr10091710]
Abstract
The recent outbreak of COVID-19 around the world has caused a global health catastrophe along with economic consequences. As per the World Health Organization (WHO), this devastating crisis can be minimized and controlled if humans wear facemasks in public; however, the spread of COVID-19 can be prevented only if masks are worn properly, covering both the nose and mouth. Nonetheless, in public places or crowded, chaotic settings, manually checking whether each person is wearing a mask properly is a tedious job and can cause panic. For such conditions, an automatic mask-wearing detection system is desired. Therefore, this study analyzed several pre-trained deep learning networks and classical machine learning algorithms that can automatically detect whether a person is wearing a facemask. For this, 40,000 images were utilized to train and test 9 different models, namely, InceptionV3, EfficientNetB0, EfficientNetB2, DenseNet201, ResNet152, VGG19, a convolutional neural network (CNN), a support vector machine (SVM), and a random forest (RF), to recognize facemasks in images. Besides just detecting the mask, the trained models also detect whether the person is wearing the mask properly (covering nose and mouth), partially (mouth only), or inappropriately (not covering nose and mouth). Experimental work reveals that InceptionV3 and EfficientNetB2 outperformed all other methods, attaining an overall accuracy of around 98.40% and a precision, recall, and F1-score of 98.30%.
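A typical way to build one of the pretrained-network classifiers compared above is transfer learning: freeze an ImageNet backbone and train a small head for the three wearing-status classes. The sketch below shows this pattern with EfficientNetB2 in Keras; the input size, dropout rate, optimizer, and label scheme are assumptions for illustration, not the study's exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 3  # properly worn / partially worn / not covering nose and mouth

def build_mask_classifier(input_shape=(260, 260, 3)) -> tf.keras.Model:
    """Transfer-learning classifier: frozen ImageNet backbone plus a small dense head."""
    base = tf.keras.applications.EfficientNetB2(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    base.trainable = False                       # freeze pretrained features initially
    inputs = tf.keras.Input(shape=input_shape)
    x = base(inputs, training=False)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```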