1. Ray A, Sarkar S, Schwenker F, Sarkar R. Decoding skin cancer classification: perspectives, insights, and advances through researchers' lens. Sci Rep 2024; 14:30542. PMID: 39695157. DOI: 10.1038/s41598-024-81961-3.
Abstract
Skin cancer is a significant global health concern, with timely and accurate diagnosis playing a critical role in improving patient outcomes. In recent years, computer-aided diagnosis systems have emerged as powerful tools for automated skin cancer classification, revolutionizing the field of dermatology. This survey analyzes 107 research papers published over the last 18 years, providing a thorough evaluation of advancements in classification techniques, with a focus on the growing integration of computer vision and artificial intelligence (AI) in enhancing diagnostic accuracy and reliability. The paper begins by presenting an overview of the fundamental concepts of skin cancer, addressing underlying challenges in accurate classification, and highlighting the limitations of traditional diagnostic methods. Extensive examination is devoted to a range of datasets, including the HAM10000 and the ISIC archive, among others, commonly employed by researchers. The exploration then delves into machine learning techniques coupled with handcrafted features, emphasizing their inherent limitations. Subsequent sections provide a comprehensive investigation into deep learning-based approaches, encompassing convolutional neural networks, transfer learning, attention mechanisms, ensemble techniques, generative adversarial networks, vision transformers, and segmentation-guided classification strategies, detailing various architectures, tailored for skin lesion analysis. The survey also sheds light on the various hybrid and multimodal techniques employed for classification. By critically analyzing each approach and highlighting its limitations, this survey provides researchers with valuable insights into the latest advancements, trends, and gaps in skin cancer classification. Moreover, it offers clinicians practical knowledge on the integration of AI tools to enhance diagnostic decision-making processes. This comprehensive analysis aims to bridge the gap between research and clinical practice, serving as a guide for the AI community to further advance the state-of-the-art in skin cancer classification systems.
Affiliation(s)
- Amartya Ray
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Sujan Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
- Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081, Ulm, Germany.
- Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India

2. Raju ASN, Venkatesh K, Padmaja B, Kumar CHNS, Patnala PRM, Lasisi A, Islam S, Razak A, Khan WA. Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition. Sci Rep 2024; 14:30052. PMID: 39627293. PMCID: PMC11614869. DOI: 10.1038/s41598-024-81456-1.
Abstract
Early detection of colorectal carcinoma (CRC), one of the most prevalent forms of cancer worldwide, significantly enhances the prognosis of patients. This research presents a new method for improving CRC detection using a deep learning ensemble within a computer-aided diagnosis (CADx) framework. The method combines pre-trained convolutional neural network (CNN) models, such as ADaRDEV2I-22, DaRD-22, and ADaDR-22, with Vision Transformers (ViT) and XGBoost. The study addresses the challenges associated with imbalanced datasets and the necessity of sophisticated feature extraction in medical image analysis. Initially, the CKHK-22 dataset comprised 24 classes; it was refined to 14 classes, which improved data balance and quality and, in turn, enabled more precise feature extraction and better classification results. Two ensemble models were created: the first used Vision Transformers to capture long-range spatial relationships in the images, while the second combined CNNs with XGBoost to facilitate structured data classification. DCGAN-based augmentation was implemented to enhance the dataset's diversity. Experiments showed substantial performance improvements, with the ADaDR-22 + Vision Transformer ensemble achieving the best results: a testing accuracy of 93.4% and an AUC of 98.8%. In comparison, the ADaDR-22 + XGBoost model had an AUC of 97.8% and an accuracy of 92.2%. These findings demonstrate the efficacy of the proposed ensemble models in detecting CRC and underscore the importance of using well-balanced, high-quality datasets. The proposed method significantly enhances clinical diagnostic accuracy and the capabilities of medical image analysis for early CRC detection.
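
The paper's second ensemble pairs CNN feature extractors with XGBoost. A minimal sketch of that pattern is shown below; since the ADaRDEV2I-22/DaRD-22/ADaDR-22 backbones are not publicly available, a torchvision ResNet-18 stands in as the feature extractor, and the loader, labels, and hyperparameters are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: CNN features feeding an XGBoost classifier (the paper's second ensemble).
# ResNet-18 is a stand-in backbone; hyperparameters are illustrative only.
import torch
import torchvision.models as models
import torchvision.transforms as T
from xgboost import XGBClassifier

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pre-trained CNN with its classification head removed -> 512-d feature vectors.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Map a list of PIL images to an (N, 512) feature matrix."""
    batch = torch.stack([preprocess(im) for im in pil_images]).to(device)
    return backbone(batch).cpu().numpy()

def train_xgb(train_features, train_labels):
    """Fit the gradient-boosted classifier on the extracted CNN features."""
    clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1,
                        eval_metric="mlogloss")
    clf.fit(train_features, train_labels)
    return clf
```
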
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigul, Hyderabad, Telangana, 500043, India.
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India
- B Padmaja
- Department of Computer Science and Engineering-AI&ML, Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, India
- C H N Santhosh Kumar
- Department of Computer Science and Engineering, Anurag Engineering College, Kodada, Telangana, 508206, India
- Ayodele Lasisi
- Department of Computer Science, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Saiful Islam
- Civil Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Abdul Razak
- Department of Mechanical Engineering, P. A. College of Engineering (Affiliated to Visvesvaraya Technological University, Belagavi), Mangaluru, India
- Wahaj Ahmad Khan
- School of Civil Engineering & Architecture, Institute of Technology, Dire-Dawa University, 1362, Dire Dawa, Ethiopia.

3. Chakraborty C, Achar U, Nayek S, Achar A, Mukherjee R. CAD-PsorNet: deep transfer learning for computer-assisted diagnosis of skin psoriasis. Sci Rep 2024; 14:26557. PMID: 39489752. PMCID: PMC11532500. DOI: 10.1038/s41598-024-76852-6.
Abstract
Psoriasis, a chronic, inflammatory, lifelong skin disorder, has become a major threat to the human population. The precise and effective diagnosis of psoriasis continues to be difficult for clinicians due to its varied nature. In northern India, the prevalence of psoriasis among the adult population ranges from 0.44% to 2.8%. Chronic plaque psoriasis accounts for over 90% of cases. This study utilized a dataset of 325 raw images collected from a reputable local hospital using a digital camera under uniform lighting conditions. These images were processed to generate 496 image patches (both diseased and normal), which were then normalized and resized for model training. An automated psoriasis image recognition framework was developed using four state-of-the-art deep transfer learning models: VGG16, VGG19, MobileNetV1, and ResNet-50. The convolutional layers adopted various edge, shape, and color filters to generate the feature map for psoriasis detection. Each pre-trained model was adapted with two dense layers, one dropout layer, and one output layer to classify input images. Among these models, MobileNetV1 achieved the best performance, with 94.84% sensitivity, 89.37% specificity, and 97.24% overall accuracy. Hyper-parameter tuning was performed using grid search to optimize learning rates, batch sizes, and dropout rates. The AdaGrad (adaptive gradient) optimizer was chosen for its adaptive learning rate capabilities, facilitating quicker convergence in model performance. Consequently, the methodology's performance improved to 94.25% sensitivity, 96.42% specificity, and 99.13% overall accuracy. The model's performance was also compared with non-machine learning-based diagnostic methods, yielding a Dice coefficient of 0.98. However, the model's effectiveness is dependent upon high-quality input images, as poor image conditions may affect accuracy, and it may not generalize well across diverse demographics or psoriasis variations, highlighting the need for varied training datasets for robustness.
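
The abstract describes adapting each pre-trained backbone with two dense layers, one dropout layer, and an output layer, trained with AdaGrad. A minimal Keras sketch of that head on MobileNetV1 follows; the layer widths, dropout rate, learning rate, and two-class setup are assumptions, not the published CAD-PsorNet settings.

```python
# Sketch: a pre-trained MobileNet adapted with two dense layers, one dropout
# layer, and an output layer, trained with AdaGrad. Sizes and rates are assumed.
import tensorflow as tf

def build_psoriasis_classifier(input_shape=(224, 224, 3), num_classes=2):
    base = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                           input_shape=input_shape, pooling="avg")
    base.trainable = False  # transfer learning: freeze the convolutional feature extractor

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```
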
Affiliation(s)
- Chandan Chakraborty
- National Institute of Technical Teachers' Training & Research (Deemed to be University), Kolkata, 700106, India
- Unmesh Achar
- Kalinga Institute of Industrial Technology, Bhubaneswar, 751024, Orissa, India
- Sumit Nayek
- National Institute of Technical Teachers' Training & Research (Deemed to be University), Kolkata, 700106, India
- Arun Achar
- Nil Ratan Sircar Medical College & Hospital, Kolkata, 700014, India
- Rashmi Mukherjee
- Raja Narendra Lal Khan Women's College (Autonomous), Paschim Medinipur, 721102, India.

4. Paraddy S, Virupakshappa. Addressing Challenges in Skin Cancer Diagnosis: A Convolutional Swin Transformer Approach. Journal of Imaging Informatics in Medicine 2024. PMID: 39436477. DOI: 10.1007/s10278-024-01290-9.
Abstract
Skin cancer is one of the top three hazardous cancer types, and it is caused by the abnormal proliferation of tumor cells. Diagnosing skin cancer accurately and early is crucial for saving patients' lives. However, it is a challenging task due to various significant issues, including lesion variations in texture, shape, color, and size; artifacts (hairs); uneven lesion boundaries; and poor contrast. To solve these issues, this research proposes a novel Convolutional Swin Transformer (CSwinformer) method for segmenting and classifying skin lesions accurately. The framework involves phases such as data preprocessing, segmentation, and classification. In the first phase, Gaussian filtering, Z-score normalization, and augmentation processes are executed to remove unnecessary noise, re-organize the data, and increase data diversity. In the segmentation phase, we design a new model, "Swinformer-Net", integrating Swin Transformer and U-Net frameworks to accurately define a region of interest. In the final classification phase, the segmented outcome is input into the newly proposed module "Multi-Scale Dilated Convolutional Neural Network meets Transformer (MD-CNNFormer)," where the data samples are classified into respective classes. We use four benchmark datasets (HAM10000, ISBI 2016, PH2, and Skin Cancer ISIC) for evaluation. The results demonstrated the designed framework's superior performance compared with traditional approaches. The proposed method provided a classification accuracy of 98.72%, a pixel accuracy of 98.06%, and a Dice coefficient of 97.67%. The proposed method offers a promising solution for skin lesion segmentation and classification, supporting clinicians in accurately diagnosing skin cancer.
Affiliation(s)
- Sudha Paraddy
- Computer Science & Engineering, PDA College of Engineering, Kalaburagi, India
- Virupakshappa
- Department of Computer Science and Engineering, Sharnbasva University, Kalaburagi, Karnataka, India.

5. Yi S, Chen Z. MIDC: Medical image dataset cleaning framework based on deep learning. Heliyon 2024; 10:e38910. PMID: 39444398. PMCID: PMC11497395. DOI: 10.1016/j.heliyon.2024.e38910.
Abstract
Deep learning technology is widely used in the field of medical imaging. Among deep models, Convolutional Neural Networks (CNNs) are the most widely used, and the quality of the dataset is crucial for training CNN diagnostic models, as mislabeled data can easily affect their accuracy. However, due to medical specialization, it is difficult for non-professional physicians to judge mislabeled medical image data. In this paper, we proposed a new framework named medical image dataset cleaning (MIDC), whose main contribution is to improve the quality of public datasets by automatically cleaning up mislabeled data. The main innovations of MIDC are as follows. Firstly, the framework innovatively utilizes multiple public datasets of the same disease, relying on different CNNs to automatically recognize images and remove mislabeled data to complete the data cleaning process; this process does not rely on annotations from professional physicians and does not require additional datasets with more reliable labels. Secondly, a novel grading rule is designed to divide the datasets into high-accuracy and low-accuracy datasets, based on which the data cleaning process can be performed. Thirdly, a novel CNN-based data cleaning module is designed to identify and clean low-accuracy datasets by using high-accuracy datasets. In the experiments, the validity of the proposed framework was verified using four kinds of datasets (diabetic retinopathy, viral pneumonia, breast tumor, and skin cancer), with results showing an increase in the average diagnostic accuracy from 71.18% to 85.13%, 82.50% to 93.79%, 85.59% to 93.45%, and 84.55% to 94.21%, respectively. The proposed data cleaning framework MIDC could therefore better help physicians diagnose diseases when datasets contain mislabeled data.
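
A much-simplified sketch of the cleaning idea follows: a reference model trained on a higher-accuracy dataset flags samples in a lower-accuracy dataset whose given labels it contradicts with high confidence. The logistic-regression stand-in (working on precomputed features) and the confidence threshold are assumptions; the paper's actual grading rule and CNN-based cleaning module are more elaborate.

```python
# Simplified sketch of label cleaning: keep only samples of the low-accuracy
# dataset whose labels the reference model does not confidently contradict.
# The classifier choice and threshold are assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def clean_dataset(X_high, y_high, X_low, y_low, threshold=0.9):
    """Return indices of (X_low, y_low) to keep after removing likely mislabeled samples."""
    y_low = np.asarray(y_low)
    ref = LogisticRegression(max_iter=1000).fit(X_high, y_high)
    proba = ref.predict_proba(X_low)
    pred = ref.classes_[np.argmax(proba, axis=1)]
    confident = np.max(proba, axis=1) >= threshold
    mislabeled = confident & (pred != y_low)
    return np.where(~mislabeled)[0]
```
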
Affiliation(s)
- Sanli Yi
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Key Laboratory of Computer Technology Application of Yunnan Province, Kunming, 650500, China
- Ziyan Chen
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, 650500, China
- Key Laboratory of Computer Technology Application of Yunnan Province, Kunming, 650500, China

6. Saleh N, Hassan MA, Salaheldin AM. Skin cancer classification based on an optimized convolutional neural network and multicriteria decision-making. Sci Rep 2024; 14:17323. PMID: 39068205. PMCID: PMC11283527. DOI: 10.1038/s41598-024-67424-9.
Abstract
Skin cancer is a disease in which abnormal alterations in skin characteristics can be detected. It can be treated if it is detected early. Many artificial intelligence-based models have been developed for skin cancer detection and classification. However, developing numerous models under various scenarios and then selecting the optimum one has rarely been considered in previous works. This study aimed to develop various models for skin cancer classification and select the optimum model. Convolutional neural networks (CNNs) in the form of AlexNet, Inception V3, MobileNet V2, and ResNet 50 were used for feature extraction. Feature reduction was carried out using two algorithms of the grey wolf optimizer (GWO) in addition to using the original features. Skin cancer images were classified into four classes based on six machine learning (ML) classifiers. As a result, 51 models were developed from different combinations of the CNN algorithms, the feature sets (original or reduced by the two GWO algorithms), and the six ML classifiers. To select the optimum model, a multicriteria decision-making approach, ranking the alternatives by perimeter similarity (RAPS), was utilized. Model training and testing were conducted using the International Skin Imaging Collaboration (ISIC) 2017 dataset. Based on nine evaluation metrics and according to the RAPS method, the AlexNet algorithm with the classical GWO yielded the optimum model, achieving a classification accuracy of 94.5%. This work presents the first study on benchmarking skin cancer classification with many models. Feature reduction not only reduces the time spent on training but also improves classification accuracy. The RAPS method has proven its robustness in the problem of selecting the best model for skin cancer classification.
Affiliation(s)
- Neven Saleh
- Systems and Biomedical Engineering Department, Higher Institute of Engineering, EL Shorouk Academy, Cairo, Egypt.
- Electrical Communication and Electronic Systems Engineering Department, Engineering Faculty, October University for Modern Sciences and Arts, Giza, Egypt.
- Mohammed A Hassan
- Biomedical Engineering Department, Faculty of Engineering, Helwan University, Cairo, Egypt
- Ahmed M Salaheldin
- Systems and Biomedical Engineering Department, Higher Institute of Engineering, EL Shorouk Academy, Cairo, Egypt

7. Quishpe-Usca A, Cuenca-Dominguez S, Arias-Viñansaca A, Bosmediano-Angos K, Villalba-Meneses F, Ramírez-Cando L, Tirado-Espín A, Cadena-Morejón C, Almeida-Galárraga D, Guevara C. The effect of hair removal and filtering on melanoma detection: a comparative deep learning study with AlexNet CNN. PeerJ Comput Sci 2024; 10:e1953. PMID: 38660169. PMCID: PMC11041978. DOI: 10.7717/peerj-cs.1953.
Abstract
Melanoma is the most aggressive and prevalent form of skin cancer globally, with a higher incidence in men and individuals with fair skin. Early detection of melanoma is essential for the successful treatment and prevention of metastasis. In this context, deep learning methods have emerged, distinguished by their ability to perform automated and detailed analysis and to extract melanoma-specific features. These approaches excel in performing large-scale analysis, optimizing time, and providing accurate diagnoses, contributing to timely treatments compared to conventional diagnostic methods. The present study offers a methodology to assess the effectiveness of an AlexNet-based convolutional neural network (CNN) in identifying early-stage melanomas. The model is trained on a balanced dataset of 10,605 dermoscopic images, and on modified datasets where hair, a potential obstructive factor, was detected and removed, allowing for an assessment of how hair removal affects the model's overall performance. To perform hair removal, we propose a morphological algorithm combined with different filtering techniques for comparison: Fourier, Wavelet, average blur, and low-pass filters. The model is evaluated through 10-fold cross-validation and the metrics of accuracy, recall, precision, and the F1 score. The results demonstrate that the proposed model performs best on the dataset where both the Wavelet filter and the hair removal algorithm were applied. It achieves an accuracy of 91.30%, a recall of 87%, a precision of 95.19%, and an F1 score of 90.91%.
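
A common way to implement a morphological hair-removal step of the kind described here is black-hat filtering followed by inpainting; the sketch below shows that generic recipe with OpenCV. The kernel size, threshold, and inpainting radius are illustrative values, not the parameters used in the study, and the study's Fourier/Wavelet/blur filtering would be applied separately.

```python
# Sketch of a generic morphological hair-removal step (black-hat + inpainting).
# Kernel size, threshold, and inpainting radius are illustrative assumptions.
import cv2
import numpy as np

def remove_hair(bgr_image: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    # Black-hat morphology highlights thin dark structures such as hairs.
    kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    # Threshold the hair response into a mask, then fill hair pixels from surrounding skin.
    _, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    return cv2.inpaint(bgr_image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```
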
Affiliation(s)
- Angélica Quishpe-Usca
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Stefany Cuenca-Dominguez
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Araceli Arias-Viñansaca
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Karen Bosmediano-Angos
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Fernando Villalba-Meneses
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Lenin Ramírez-Cando
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Andrés Tirado-Espín
- School of Mathematical and Computational Sciences, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Carolina Cadena-Morejón
- School of Mathematical and Computational Sciences, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Diego Almeida-Galárraga
- School of Biological Sciences and Engineering, Yachay Tech University, San Miguel de Urcuquí, Imbabura, Ecuador
- Cesar Guevara
- Quantitative Methods Department, CUNEF Universidad, Madrid, Madrid, Spain

8. Naeem A, Anees T. DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images. PLoS One 2024; 19:e0297667. PMID: 38507348. PMCID: PMC10954125. DOI: 10.1371/journal.pone.0297667.
Abstract
Skin cancer is a common cancer affecting millions of people annually. Skin cells that grow in unusual patterns are a sign of this invasive disease. The cells then spread to other organs and tissues through the lymph nodes and destroy them. Lifestyle changes and increased solar exposure contribute to the rise in the incidence of skin cancer. Early identification and staging are essential due to the high mortality rate associated with skin cancer. In this study, we presented a deep learning-based method named DVFNet for the detection of skin cancer from dermoscopy images. To detect skin cancer, images are pre-processed using anisotropic diffusion methods to remove artifacts and noise, which enhances image quality. A combination of the VGG19 architecture and the Histogram of Oriented Gradients (HOG) is used in this research for discriminative feature extraction. SMOTE Tomek is used to resolve the problem of imbalanced images in the multiple classes of the publicly available ISIC 2019 dataset. This study utilizes segmentation to pinpoint areas of significantly damaged skin cells. A feature vector map is created by combining the features of HOG and VGG19. Multiclassification is accomplished by a CNN using the feature vector maps. DVFNet achieves an accuracy of 98.32% on the ISIC 2019 dataset. An analysis of variance (ANOVA) statistical test is used to validate the model's accuracy. The DVFNet model can support healthcare experts in detecting skin cancer at an early clinical stage.
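
A rough sketch of the fusion step (HOG descriptors concatenated with VGG19 deep features) and the SMOTE Tomek balancing is given below; the HOG parameters, pooling choice, and input size are assumptions, and the paper's anisotropic-diffusion preprocessing, segmentation, and final CNN classifier are omitted.

```python
# Sketch: DVFNet-style fusion of handcrafted HOG features with VGG19 deep
# features, followed by SMOTE Tomek balancing. Parameter choices are assumed.
import numpy as np
import tensorflow as tf
from skimage.feature import hog
from skimage.color import rgb2gray
from imblearn.combine import SMOTETomek

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet",
                                  input_shape=(224, 224, 3), pooling="avg")

def fused_features(images):
    """images: (N, 224, 224, 3) RGB array with values in [0, 255]."""
    deep = vgg.predict(tf.keras.applications.vgg19.preprocess_input(images.copy()),
                       verbose=0)                       # (N, 512) deep features
    handcrafted = np.array([hog(rgb2gray(im / 255.0),   # HOG on grayscale images
                                pixels_per_cell=(16, 16), cells_per_block=(2, 2))
                            for im in images])
    return np.hstack([deep, handcrafted])               # fused feature vector map

def balance(X, y):
    # Oversample minority classes and clean overlapping samples (SMOTE + Tomek links).
    return SMOTETomek(random_state=42).fit_resample(X, y)
```
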
Affiliation(s)
- Ahmad Naeem
- Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Tayyaba Anees
- Department of Software Engineering, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan

9. Hermosilla P, Soto R, Vega E, Suazo C, Ponce J. Skin Cancer Detection and Classification Using Neural Network Algorithms: A Systematic Review. Diagnostics (Basel) 2024; 14:454. PMID: 38396492. PMCID: PMC10888121. DOI: 10.3390/diagnostics14040454.
Abstract
In recent years, there has been growing interest in the use of computer-assisted technology for early detection of skin cancer through the analysis of dermatoscopic images. However, the accuracy reported by state-of-the-art approaches depends on several factors, such as the quality of the images and the interpretation of the results by medical experts. This systematic review aims to critically assess the efficacy and challenges of this research field in order to explain the usability and limitations and highlight potential future lines of work for the scientific and clinical community. In this study, the analysis was carried out over 45 contemporary studies extracted from databases such as Web of Science and Scopus. Several computer vision techniques related to image and video processing for early skin cancer diagnosis were identified, with the analysis focusing on the algorithms employed, the reported accuracy, and the validation metrics. The results revealed significant advancements in cancer detection using deep learning and machine learning algorithms. Lastly, this review establishes a foundation for future research, highlighting potential contributions and opportunities to improve the effectiveness of skin cancer detection through machine learning.
Affiliation(s)
- Pamela Hermosilla
- Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile

10. Foltz EA, Witkowski A, Becker AL, Latour E, Lim JY, Hamilton A, Ludzik J. Artificial Intelligence Applied to Non-Invasive Imaging Modalities in Identification of Nonmelanoma Skin Cancer: A Systematic Review. Cancers (Basel) 2024; 16:629. PMID: 38339380. PMCID: PMC10854803. DOI: 10.3390/cancers16030629.
Abstract
BACKGROUND The objective of this study is to systematically analyze the current state of the literature regarding novel artificial intelligence (AI) machine learning models utilized in non-invasive imaging for the early detection of nonmelanoma skin cancers. Furthermore, we aimed to assess their potential clinical relevance by evaluating the accuracy, sensitivity, and specificity of each algorithm and assessing for the risk of bias. METHODS Two reviewers screened the MEDLINE, Cochrane, PubMed, and Embase databases for peer-reviewed studies that focused on AI-based skin cancer classification involving nonmelanoma skin cancers and were published between 2018 and 2023. The search terms included skin neoplasms, nonmelanoma, basal-cell carcinoma, squamous-cell carcinoma, diagnostic techniques and procedures, artificial intelligence, algorithms, computer systems, dermoscopy, reflectance confocal microscopy, and optical coherence tomography. Based on the search results, only studies that directly answered the review objectives were included and the efficacy measures for each were recorded. A QUADAS-2 risk assessment for bias in included studies was then conducted. RESULTS A total of 44 studies were included in our review: 40 utilizing dermoscopy, 3 using reflectance confocal microscopy (RCM), and 1 using hyperspectral epidermal imaging (HEI). The average accuracy of AI algorithms applied to all imaging modalities combined was 86.80%, with the same average for dermoscopy. Only one of the three studies applying AI to RCM measured accuracy, with a result of 87%. Accuracy was not measured for AI-based HEI interpretation. CONCLUSION AI algorithms exhibited an overall favorable performance in the diagnosis of nonmelanoma skin cancer via noninvasive imaging techniques. Ultimately, further research is needed to isolate pooled diagnostic accuracy for nonmelanoma skin cancers as many testing datasets also include melanoma and other pigmented lesions.
Affiliation(s)
- Emilie A. Foltz
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- Elson S. Floyd College of Medicine, Washington State University, Spokane, WA 99202, USA
- Alexander Witkowski
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- Alyssa L. Becker
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- John A. Burns School of Medicine, University of Hawai’i at Manoa, Honolulu, HI 96813, USA
- Emile Latour
- Biostatistics Shared Resource, Knight Cancer Institute, Oregon Health & Science University, Portland, OR 97201, USA
- Jeong Youn Lim
- Biostatistics Shared Resource, Knight Cancer Institute, Oregon Health & Science University, Portland, OR 97201, USA
- Andrew Hamilton
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA
- Joanna Ludzik
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97201, USA

11. Abbasi EY, Deng Z, Magsi AH, Ali Q, Kumar K, Zubedi A. Optimizing Skin Cancer Survival Prediction with Ensemble Techniques. Bioengineering (Basel) 2023; 11:43. PMID: 38247920. PMCID: PMC10813432. DOI: 10.3390/bioengineering11010043.
Abstract
The advancement in cancer research using high-throughput technology and artificial intelligence (AI) is gaining momentum to improve disease diagnosis and targeted therapy. However, the complex and imbalanced data with high dimensionality pose significant challenges for computational approaches and multi-omics data analysis. This study focuses on predicting skin cancer and analyzing overall survival probability. We employ the Kaplan-Meier estimator and Cox proportional hazards regression model, utilizing high-throughput machine learning (ML)-based ensemble methods. Our proposed ML-based ensemble techniques are applied to a publicly available dataset from the ICGC Data Portal, specifically targeting skin cutaneous melanoma (SKCM). We used eight baseline classifiers, namely, random forest (RF), decision tree (DT), gradient boosting (GB), AdaBoost, Gaussian naïve Bayes (GNB), extra tree (ET), logistic regression (LR), and light gradient boosting machine (Light GBM or LGBM). The study evaluated the performance of the proposed ensemble methods and survival analysis on SKCM. The proposed methods demonstrated promising results, outperforming traditional algorithms and models in terms of accuracy. Specifically, the RF classifier exhibited outstanding precision results. Additionally, four different ensemble methods (stacking, bagging, boosting, and voting) were created and trained to achieve optimal results. Performance was evaluated and interpreted using accuracy, precision, recall, F1 score, confusion matrices, and ROC curves; the voting method achieved a promising accuracy of 99%, and the RF classifier likewise reached an outstanding accuracy of 99%, the best individual performance. We compared our proposed study with existing state-of-the-art techniques and found significant improvements in several key aspects. Our approach not only demonstrated superior performance in terms of accuracy but also showcased remarkable efficiency. Thus, this research work contributes to diagnosing SKCM with high accuracy.
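
A compact scikit-learn sketch of a voting ensemble built from several of the listed baseline classifiers follows; the hyperparameters and the feature matrix are placeholders rather than the SKCM pipeline used in the study.

```python
# Sketch: soft-voting ensemble over a few of the study's baseline classifiers
# (RF, GB, extra trees, logistic regression). Hyperparameters are illustrative.
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                              ExtraTreesClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

voting = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=300, random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
        ("et", ExtraTreesClassifier(n_estimators=300, random_state=42)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",  # average predicted probabilities across the base classifiers
)

# Example usage on a feature matrix X and survival-related labels y:
# scores = cross_val_score(voting, X, y, cv=5, scoring="accuracy")
# print(scores.mean())
```
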
Affiliation(s)
- Erum Yousef Abbasi
- State Key Laboratory of Wireless Network Positioning and Communication Engineering Integration Research, School of Electronics Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Zhongliang Deng
- State Key Laboratory of Wireless Network Positioning and Communication Engineering Integration Research, School of Electronics Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Arif Hussain Magsi
- State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Qasim Ali
- Department of Software Engineering, Mehran University of Engineering and Technology, Jamshoro 76062, Pakistan
- Kamlesh Kumar
- School of Electronics Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Asma Zubedi
- School of Economics and Management, Beijing University of Posts and Telecommunications, Beijing 100876, China

12. Riaz S, Naeem A, Malik H, Naqvi RA, Loh WK. Federated and Transfer Learning Methods for the Classification of Melanoma and Nonmelanoma Skin Cancers: A Prospective Study. Sensors (Basel) 2023; 23:8457. PMID: 37896548. PMCID: PMC10611214. DOI: 10.3390/s23208457.
Abstract
Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is a challenging and time-consuming method due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer and enabling the treatment of patients at an early stage, this systematic literature review (SLR) presents various federated learning (FL) and transfer learning (TL) techniques that have been widely applied. This study explores FL and TL classifiers by evaluating them in terms of the performance metrics reported in research studies, which include true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). This review was assembled and systematized by examining well-reputed studies published in eminent venues between January 2018 and July 2023. The existing literature was compiled through a systematic search of seven well-reputed databases. A total of 86 articles were included in this SLR. This SLR contains the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research. Consequently, future directions and opportunities are established to help interested researchers in the automated classification of melanoma and nonmelanoma skin cancers.
Affiliation(s)
- Shafia Riaz
- Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Hassaan Malik
- Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea

13. Patel RH, Foltz EA, Witkowski A, Ludzik J. Analysis of Artificial Intelligence-Based Approaches Applied to Non-Invasive Imaging for Early Detection of Melanoma: A Systematic Review. Cancers (Basel) 2023; 15:4694. PMID: 37835388. PMCID: PMC10571810. DOI: 10.3390/cancers15194694.
Abstract
BACKGROUND Melanoma, the deadliest form of skin cancer, poses a significant public health challenge worldwide. Early detection is crucial for improved patient outcomes. Non-invasive skin imaging techniques allow for improved diagnostic accuracy; however, their use is often limited due to the need for skilled practitioners trained to interpret images in a standardized fashion. Recent innovations in artificial intelligence (AI)-based techniques for skin lesion image interpretation show potential for the use of AI in the early detection of melanoma. OBJECTIVE The aim of this study was to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. We also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma. METHODS A systematic search was conducted via the Medline/PubMed, Cochrane, and Embase databases for eligible publications between 2018 and 2022. Screening methods adhered to the 2020 version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Included studies utilized AI-based algorithms for melanoma detection and directly addressed the review objectives. RESULTS We retrieved 40 papers amongst the three databases. All studies directly comparing the performance of AI-based techniques with dermatologists reported the superior or equivalent performance of AI-based techniques in improving the detection of melanoma. In studies directly comparing algorithm performance on dermoscopy images to dermatologists, AI-based algorithms achieved a higher ROC (>80%) in the detection of melanoma. In these comparative studies using dermoscopic images, the mean algorithm sensitivity was 83.01% and the mean algorithm specificity was 85.58%. Studies evaluating machine learning in conjunction with OCT boasted accuracy of 95%, while studies evaluating RCM reported a mean accuracy rate of 82.72%. CONCLUSIONS Our results demonstrate the robust potential of AI-based techniques to improve diagnostic accuracy and patient outcomes through the early identification of melanoma. Further studies are needed to assess the generalizability of these AI-based techniques across different populations and skin types, improve standardization in image processing, and further compare the performance of AI-based techniques with board-certified dermatologists to evaluate clinical applicability.
Affiliation(s)
- Raj H. Patel
- Edward Via College of Osteopathic Medicine, VCOM-Louisiana, 4408 Bon Aire Dr, Monroe, LA 71203, USA
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Emilie A. Foltz
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Elson S. Floyd College of Medicine, Washington State University, Spokane, WA 99202, USA
- Alexander Witkowski
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Joanna Ludzik
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA

14. Abbas Q, Daadaa Y, Rashid U, Ibrahim MEA. Assist-Dermo: A Lightweight Separable Vision Transformer Model for Multiclass Skin Lesion Classification. Diagnostics (Basel) 2023; 13:2531. PMID: 37568894. PMCID: PMC10417387. DOI: 10.3390/diagnostics13152531.
Abstract
A dermatologist-like automatic classification system is developed in this paper to recognize nine different classes of pigmented skin lesions (PSLs), using a separable vision transformer (SVT) technique to assist clinical experts in early skin cancer detection. In the past, researchers have developed a few systems to recognize nine classes of PSLs. However, they often require enormous computations to achieve high performance, which is burdensome to deploy on resource-constrained devices. In this paper, a new approach to designing SVT architecture is developed based on SqueezeNet and depthwise separable CNN models. The primary goal is to find a deep learning architecture with few parameters that has comparable accuracy to state-of-the-art (SOTA) architectures. This paper modifies the SqueezeNet design for improved runtime performance by utilizing depthwise separable convolutions rather than simple conventional units. To develop this Assist-Dermo system, a data augmentation technique is applied to control the PSL imbalance problem. Next, a pre-processing step is integrated to select the most dominant region and then enhance the lesion patterns in a perceptual-oriented color space. Afterwards, the Assist-Dermo system is designed to improve efficacy and performance with several layers and multiple filter sizes but fewer filters and parameters. For the training and evaluation of Assist-Dermo models, a set of PSL images is collected from different online data sources such as Ph2, ISBI-2017, HAM10000, and ISIC to recognize nine classes of PSLs. On the chosen dataset, it achieves an accuracy (ACC) of 95.6%, a sensitivity (SE) of 96.7%, a specificity (SP) of 95%, and an area under the curve (AUC) of 0.95. The experimental results show that the suggested Assist-Dermo technique outperformed SOTA algorithms when recognizing nine classes of PSLs. The Assist-Dermo system performed better than other competitive systems and can support dermatologists in the diagnosis of a wide variety of PSLs through dermoscopy. The Assist-Dermo model code is freely available on GitHub for the scientific community.
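
The parameter savings described here come from swapping standard convolutions for depthwise separable ones. A generic PyTorch block of that kind is sketched below; channel counts, kernel size, and the normalization/activation arrangement are illustrative, not the published Assist-Dermo layout.

```python
# Sketch of a depthwise separable convolution block, the kind of unit used to
# replace standard convolutions for a lighter model. Sizes are illustrative.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, stride: int = 1):
        super().__init__()
        # Depthwise step: one spatial filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        # Pointwise step: 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# The split uses far fewer parameters than a full k x k convolution over all
# channel pairs, which is how such designs keep model size small.
block = DepthwiseSeparableConv(64, 128)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 128, 56, 56])
```
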
Affiliation(s)
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Yassine Daadaa
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Umer Rashid
- Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Mostafa E. A. Ibrahim
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Department of Electrical Engineering, Benha Faculty of Engineering, Benha University, Qalubia, Benha 13518, Egypt

15. Mehmood A, Gulzar Y, Ilyas QM, Jabbari A, Ahmad M, Iqbal S. SBXception: A Shallower and Broader Xception Architecture for Efficient Classification of Skin Lesions. Cancers (Basel) 2023; 15:3604. PMID: 37509267. PMCID: PMC10377736. DOI: 10.3390/cancers15143604.
Abstract
Skin cancer is a major public health concern around the world. Skin cancer identification is critical for effective treatment and improved outcomes. Deep learning models have shown considerable promise in assisting dermatologists in skin cancer diagnosis. This study proposes SBXception: a shallower and broader variant of the Xception network. It uses Xception as the base model for skin cancer classification and increases its performance by reducing the depth and expanding the breadth of the architecture. We used the HAM10000 dataset, which contains 10,015 dermatoscopic images of skin lesions classified into seven categories, for training and testing the proposed model. Using the HAM10000 dataset, we fine-tuned the new model and reached an accuracy of 96.97% on a holdout test set. SBXception also achieved significant performance enhancement with 54.27% fewer training parameters and reduced training time compared to the base model. Our findings show that making the Xception architecture shallower and broader can greatly improve its performance in skin cancer categorization.
Affiliation(s)
- Abid Mehmood
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Yonis Gulzar
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Abdoh Jabbari
- College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia
- Muneer Ahmad
- Department of Human and Digital Interface, Woosong University, Daejeon 34606, Republic of Korea
- Sajid Iqbal
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia

16. Tahir M, Naeem A, Malik H, Tanveer J, Naqvi RA, Lee SW. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers (Basel) 2023; 15:2179. PMID: 37046840. PMCID: PMC10093058. DOI: 10.3390/cancers15072179.
Abstract
Skin cancer is one of the most lethal kinds of human illness. In the present state of the health care system, skin cancer identification is a time-consuming procedure, and if the disease is not diagnosed early, it can be life-threatening. To attain a high prospect of complete recovery, early detection of skin cancer is crucial. In the last several years, the application of deep learning (DL) algorithms for the detection of skin cancer has grown in popularity. Based on a DL model, this work aimed to build a multi-classification technique for diagnosing skin cancers such as melanoma (MEL), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanocytic nevi (MN). In this paper, we have proposed a novel model, a deep learning-based skin cancer classification network (DSCC_Net) that is based on a convolutional neural network (CNN), and evaluated it on three publicly available benchmark datasets (i.e., ISIC 2020, HAM10000, and DermIS). For the skin cancer diagnosis, the classification performance of the proposed DSCC_Net model is compared with six baseline deep networks, including ResNet-152, Vgg-16, Vgg-19, Inception-V3, EfficientNet-B0, and MobileNet. In addition, we used SMOTE Tomek to handle the minority class issue that exists in this dataset. The proposed DSCC_Net obtained a 99.43% AUC, along with an accuracy of 94.17%, a recall of 93.76%, a precision of 94.28%, and an F1-score of 93.93% in categorizing the four distinct types of skin cancer diseases. The rates of accuracy for ResNet-152, Vgg-19, MobileNet, Vgg-16, EfficientNet-B0, and Inception-V3 are 89.32%, 91.68%, 92.51%, 91.12%, 89.46% and 91.82%, respectively. The results showed that our proposed DSCC_Net model performs better than the baseline models, thus offering significant support to dermatologists and health experts to diagnose skin cancer.
Affiliation(s)
- Maryam Tahir
- Department of Computer Science, National College of Business Administration & Economics Lahore, Multan Sub Campus, Multan 60000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Hassaan Malik
- Department of Computer Science, National College of Business Administration & Economics Lahore, Multan Sub Campus, Multan 60000, Pakistan
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Jawad Tanveer
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung-Won Lee
- School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea

17. Olayah F, Senan EM, Ahmed IA, Awaji B. AI Techniques of Dermoscopy Image Analysis for the Early Detection of Skin Lesions Based on Combined CNN Features. Diagnostics (Basel) 2023; 13:1314. PMID: 37046532. PMCID: PMC10093624. DOI: 10.3390/diagnostics13071314.
Abstract
Melanoma is one of the deadliest types of skin cancer, leading to death if not diagnosed early. Many skin lesions are similar in the early stages, which causes an inaccurate diagnosis. Accurate diagnosis of the types of skin lesions helps dermatologists save patients' lives. In this paper, we propose hybrid systems based on the advantages of fused CNN models. The CNN models receive dermoscopy images of the ISIC 2019 dataset after segmenting the area of lesions and isolating them from healthy skin through the Geometric Active Contour (GAC) algorithm. An artificial neural network (ANN) and a Random Forest (RF) receive the fused CNN features and classify them with high accuracy. The first methodology involved analyzing the area of skin lesions and diagnosing their type early using the hybrid models CNN-ANN and CNN-RF. The CNN models (AlexNet, GoogLeNet and VGG16) receive the lesion area only and produce high-depth feature maps. These deep feature maps were reduced by Principal Component Analysis (PCA) and then classified by the ANN and RF networks. The second methodology involved analyzing the area of skin lesions and diagnosing their type early using the hybrid CNN-ANN and CNN-RF models based on the features of the fused CNN models. It is worth noting that the features of the CNN models were serially integrated after reducing their high dimensions by PCA. Hybrid models based on fused CNN features achieved promising results for diagnosing dermatoscopic images of the ISIC 2019 dataset and distinguishing skin cancer from other skin lesions. The AlexNet-GoogLeNet-VGG16-ANN hybrid model achieved an AUC of 94.41%, a sensitivity of 88.90%, an accuracy of 96.10%, a precision of 88.69%, and a specificity of 99.44%.
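
A minimal sketch of the second methodology's pipeline (serially fused CNN features, PCA reduction, then a Random Forest) is shown below; the per-network feature matrices are assumed to be precomputed from the segmented lesion regions, and the PCA dimensionality and forest size are illustrative.

```python
# Sketch: serial fusion of features from several CNNs, PCA reduction, and a
# Random Forest classifier. Feature matrices and dimensions are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def fused_cnn_rf(feat_alexnet, feat_googlenet, feat_vgg16, labels, n_components=200):
    """feat_*: (N, d_i) feature matrices extracted from the lesion regions."""
    fused = np.hstack([feat_alexnet, feat_googlenet, feat_vgg16])  # serial fusion
    model = make_pipeline(PCA(n_components=n_components),
                          RandomForestClassifier(n_estimators=300, random_state=42))
    model.fit(fused, labels)
    return model
```
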
Affiliation(s)
- Fekry Olayah
- Department of Information System, Faculty Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Bakri Awaji
- Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia

18. Malik H, Anees T, Naeem A, Naqvi RA, Loh WK. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering (Basel) 2023; 10:203. PMID: 36829697. PMCID: PMC9952069. DOI: 10.3390/bioengineering10020203.
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an informed and effective strategy must be employed to isolate COVID-19. When it comes to identifying COVID-19, one of the most significant obstacles that researchers must overcome is the rapid propagation of the virus, in addition to the dearth of trustworthy testing models. This problem continues to be the most difficult one for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of finding COVID-19 cases more manageable. In the real world, the difficulty of sharing data between hospitals while still honoring the privacy concerns of the organizations is a problem that has to be handled. When training a global deep learning (DL) model, it is crucial to handle fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and trains a global model using blockchain-based federated learning (FL). The data is validated through the use of blockchain technology (BCT), and FL trains the model on a global scale while maintaining the confidentiality of the organizations. The proposed framework is divided into three parts. First, we provide a method of data normalization that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Thirdly, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests employing chest CT scans were undertaken to compare the classification performance of the proposed model to that of five DL algorithms for predicting COVID-19, while protecting the privacy of the data for a variety of users. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19.
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea

19. Malik H, Naeem A, Naqvi RA, Loh WK. DMFL_Net: A Federated Learning-Based Framework for the Classification of COVID-19 from Multiple Chest Diseases Using X-rays. Sensors (Basel) 2023; 23:743. PMID: 36679541. PMCID: PMC9864925. DOI: 10.3390/s23020743.
Abstract
Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept since it allows clients to train models together without sharing their source data with anybody else. Few studies, however, focus on improving the model's accuracy and stability, whereas most existing FL-based COVID-19 detection techniques aim to maximize secondary objectives such as latency, energy usage, and privacy. In this work, we design a novel model named decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis to distinguish COVID-19 from four distinct chest disorders including LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, constructs the model using DenseNet-169, and produces accurate predictions from information that is kept secure and only released to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, i.e., VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. Additionally, the DMFL_Net model was compared with default FL configurations. The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45% and outperforms other approaches in classifying COVID-19 from four chest diseases and successfully protects the privacy of the data among diverse clients.
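
The core federated step that keeps X-rays inside each hospital is local training followed by weight aggregation. A plain FedAvg sketch with a DenseNet-169 backbone follows; the paper's decision-making logic, blockchain elements, and exact training setup are omitted, and the optimizer, class count, and unweighted averaging are assumptions.

```python
# Sketch: local training at each client followed by FedAvg weight averaging.
# DenseNet-169 matches the backbone named in the abstract; everything else
# (optimizer, epochs, 5 classes, equal client weights) is an assumption.
import copy
import torch
import torchvision.models as models

def local_update(global_model, loader, epochs=1, lr=1e-4, device="cpu"):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(global_model).to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x.to(device)), y.to(device)).backward()
            opt.step()
    return model.state_dict()

def federated_average(client_states):
    """Element-wise average of the clients' weight dictionaries (FedAvg)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        stacked = torch.stack([s[key].float() for s in client_states])
        avg[key] = stacked.mean(dim=0).to(avg[key].dtype)
    return avg

global_model = models.densenet169(weights=None, num_classes=5)  # COVID-19 + 4 chest diseases
# states = [local_update(global_model, loader) for loader in hospital_loaders]
# global_model.load_state_dict(federated_average(states))
```
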
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
Collapse
|