1
Wang R, Chen X, Wang X, Wang H, Qian C, Yao L, Zhang K. A novel approach for melanoma detection utilizing GAN synthesis and vision transformer. Comput Biol Med 2024; 176:108572. PMID: 38749327. DOI: 10.1016/j.compbiomed.2024.108572.
Abstract
BACKGROUND AND OBJECTIVE: Melanoma, a malignant form of skin cancer, is a critical health concern worldwide. Early and accurate detection plays a pivotal role in improving patient outcomes. Current diagnosis of skin cancer relies largely on visual inspection, including dermoscopy, clinical screening, and histopathological examination. However, these approaches suffer from low efficiency, high cost, and no guarantee of accuracy. Deep learning techniques have therefore emerged in melanoma detection and have successfully improved diagnostic accuracy. Still, the high similarity between benign and malignant melanomas, combined with class imbalance in skin lesion datasets, presents a significant challenge to further improvement. We propose a two-stage framework for melanoma detection to address these issues. METHODS: In the first stage, we use Style Generative Adversarial Networks with Adaptive Discriminator Augmentation to synthesize realistic and diverse melanoma images, which are then combined with the original dataset to create an augmented dataset. In the second stage, we use a Vision Transformer with BatchFormer to extract features and classify melanoma versus non-melanoma skin lesions on the augmented dataset obtained in the previous step; specifically, we employ a dual-branch training strategy in this process. RESULTS: Experimental results on the ISIC2020 dataset demonstrate the effectiveness of the proposed approach, showing a significant improvement in melanoma detection. The method achieved an accuracy of 98.43%, an AUC of 98.63%, and an F1 score of 99.01%, surpassing several existing methods. CONCLUSION: The method is feasible and efficient, enables early melanoma screening, significantly enhances detection accuracy, and can substantially assist physicians in diagnosis.
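The first stage above rebalances the dataset by topping up the melanoma class with GAN-synthesized images. The paper's generator is a StyleGAN-ADA model; the sketch below substitutes a trivial noise-jitter `synth_fn` (a hypothetical stand-in, not the authors' model) purely to illustrate the rebalancing bookkeeping:

```python
import numpy as np

def augment_minority(images, labels, synth_fn, minority=1, rng=None):
    """Top up the minority class with synthetic samples until the
    dataset is balanced, mirroring the paper's first stage at toy scale."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    n_major = int(np.sum(labels != minority))
    n_minor = int(np.sum(labels == minority))
    need = n_major - n_minor
    if need <= 0:
        return images, labels
    seeds = images[labels == minority]
    picks = seeds[rng.integers(0, len(seeds), size=need)]
    synthetic = synth_fn(picks, rng)  # stand-in for GAN sampling
    aug_images = np.concatenate([images, synthetic])
    aug_labels = np.concatenate([labels, np.full(need, minority)])
    return aug_images, aug_labels

def jitter(batch, rng):
    # Placeholder "generator": small pixel noise around real minority images.
    return np.clip(batch + rng.normal(0, 0.05, batch.shape), 0.0, 1.0)
```

With 8 benign and 2 melanoma images, 6 synthetic melanoma samples are appended, yielding an 8/8 augmented set that the second-stage classifier would then consume.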
Affiliation(s)
- Rui Wang
- School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Rd., Shanghai, 200444, China.
- Xiaofei Chen
- School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Rd., Shanghai, 200444, China.
- Xiangyang Wang
- School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Rd., Shanghai, 200444, China.
- Haiquan Wang
- Department of General Surgery, Shanghai General Hospital of Shanghai Jiaotong University, No. 100 Haining Rd., Shanghai, 200080, China.
- Chunhua Qian
- Department of Endocrinology and Metabolism, Shanghai Tenth People's Hospital, Tongji University, School of Medicine, No. 301 Middle Yanchang Rd., Shanghai, 200072, China.
- Liucheng Yao
- School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Rd., Shanghai, 200444, China.
- Kecheng Zhang
- School of Communication and Information Engineering, Shanghai University, No. 99 Shangda Rd., Shanghai, 200444, China.
2
Dai W, Liu R, Wu T, Wang M, Yin J, Liu J. Deeply Supervised Skin Lesions Diagnosis With Stage and Branch Attention. IEEE J Biomed Health Inform 2024; 28:719-729. PMID: 37624725. DOI: 10.1109/jbhi.2023.3308697.
Abstract
Accurate and unbiased examination of skin lesions is critical for the early diagnosis and treatment of skin diseases. Visual features of skin lesions vary significantly because images are collected from patients with different lesion colours and morphologies using dissimilar imaging equipment. Recent studies have reported that ensembled convolutional neural networks (CNNs) are practical for classifying such images for early diagnosis of skin disorders. However, the practical use of these ensembled CNNs is limited because the networks are heavyweight and inadequate for processing contextual information. Although lightweight networks (e.g., MobileNetV3 and EfficientNet) were developed to reduce parameters for deploying deep neural networks on mobile devices, insufficient depth of feature representation restricts their performance. To address these limitations, we develop a new lightweight and effective neural network, HierAttn. HierAttn applies a novel deep supervision strategy to learn local and global features through multi-stage and multi-branch attention mechanisms while using only one training loss. The efficacy of HierAttn was evaluated on the dermoscopy image dataset ISIC2019 and the smartphone photo dataset PAD-UFES-20 (PAD2020). The experimental results show that HierAttn achieves the best accuracy and area under the curve (AUC) among state-of-the-art lightweight networks.
3
Monica KM, Shreeharsha J, Falkowski-Gilski P, Falkowska-Gilska B, Awasthy M, Phadke R. Melanoma skin cancer detection using mask-RCNN with modified GRU model. Front Physiol 2024; 14:1324042. PMID: 38292449. PMCID: PMC10825805. DOI: 10.3389/fphys.2023.1324042.
Abstract
Introduction: Melanoma Skin Cancer (MSC) is a type of cancer in the human body; early diagnosis is therefore essential for reducing the mortality rate. However, dermoscopic image analysis poses challenges due to factors such as colour illumination, light reflections, and the varying sizes and shapes of lesions. To overcome these challenges, an automated framework is proposed in this manuscript. Methods: Initially, dermoscopic images are acquired from two online benchmark datasets: International Skin Imaging Collaboration (ISIC) 2020 and Human Against Machine (HAM) 10000. Subsequently, a normalization technique is applied to the dermoscopic images to reduce the impact of noise, outliers, and pixel variations. Cancerous regions in the pre-processed images are then segmented using the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, which offers precise pixel-level segmentation by accurately delineating object boundaries. From the segmented cancerous regions, discriminative feature vectors are extracted by applying three pre-trained CNN models, namely ResNeXt101, Xception, and InceptionV3. These feature vectors are passed to the modified Gated Recurrent Unit (GRU) model for MSC classification. In the modified GRU model, a swish-Rectified Linear Unit (ReLU) activation function is incorporated that efficiently stabilizes the learning process with a better convergence rate during training. Results and discussion: The empirical investigation demonstrates that the modified GRU model attained accuracies of 99.95% and 99.98% on the ISIC 2020 and HAM 10000 datasets, respectively, surpassing conventional detection models.
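The modified GRU's stability gain is attributed to a swish-based activation. For reference, plain swish is x·σ(βx); the exact swish-ReLU combination used in the paper is not specified in this abstract, so the sketch below shows only the standard swish:

```python
import math

def swish(x: float, beta: float = 1.0) -> float:
    """Swish activation: x * sigmoid(beta * x).
    Smooth and non-monotonic, unlike ReLU's hard cut-off at zero."""
    return x / (1.0 + math.exp(-beta * x))
```

For large positive inputs swish approaches the identity (ReLU-like); for large negative inputs it decays smoothly to zero, which avoids ReLU's dead-gradient region.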
Affiliation(s)
- K. M. Monica
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- J. Shreeharsha
- Department of Computer Science and Engineering, Rao Bahadur Y. Mahabaleswarappa Engineering College, Ballari, Karnataka, India
- Mohan Awasthy
- Department of Engineering and Technology, Bharati Vidyapeeth Deemed to be University, Navi Mumbai, Maharashtra, India
- Rekha Phadke
- Department of Electronics and Communication Engineering, Nitte Meenakshi Institute of Technology, Bangalore, Karnataka, India
4
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. PMID: 38201399. PMCID: PMC10795598. DOI: 10.3390/diagnostics14010089.
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway to further enhance diagnostic accuracy. This study introduces an approach employing the max voting ensemble technique for robust skin cancer classification on the ISIC 2018: Task 1-2 dataset. We incorporate a range of cutting-edge, pre-trained deep neural networks, including MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages their synergistic capabilities by combining complementary features to further elevate classification performance. In our approach, input images undergo preprocessing for model compatibility, and the ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction; these are aggregated using the max voting technique, with the majority-voted class serving as the final classification. Through comprehensive testing on a diverse dataset, our ensemble outperformed the individual models, attaining an accuracy of 93.18% and an AUC of 0.9320, demonstrating superior diagnostic reliability and accuracy. We also evaluated the method on the HAM10000 dataset to confirm its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for skin cancer classification. By utilizing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
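The aggregation step described above, hard majority voting over the ten models' per-image predictions, reduces to a few lines. This is a generic sketch rather than the authors' code; the tie-breaking convention (earliest-seen class wins) is an assumption:

```python
from collections import Counter

def max_vote(predictions):
    """Return the class predicted by the most models for one image.
    Ties break toward the earliest-seen class, one common convention."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_classify(per_model_preds):
    """per_model_preds: one prediction list per model, aligned by image index."""
    return [max_vote(image_preds) for image_preds in zip(*per_model_preds)]
```

With three models predicting over two images, say [[0, 1], [1, 1], [0, 0]], image 0 collects votes (0, 1, 0) and image 1 collects (1, 1, 0), so the ensemble outputs classes 0 and 1 respectively.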
Affiliation(s)
- Md. Mamun Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Md. Moazzem Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Most. Binoee Arefin
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Fahima Akhtar
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- John Blake
- School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
5
Ali G, Anwar M, Nauman M, Faheem M, Rashid J. Lyme rashes disease classification using deep feature fusion technique. Skin Res Technol 2023; 29:e13519. PMID: 38009027. PMCID: PMC10628356. DOI: 10.1111/srt.13519.
Abstract
Automatic classification of Lyme disease rashes on the skin helps clinicians and dermatologists probe and investigate Lyme skin rashes effectively. This paper proposes a new deep feature fusion system to classify Lyme disease rashes. The proposed method consists of two main steps. First, three different deep learning models, DenseNet201, InceptionV3, and Xception, were trained independently to extract deep features from erythema migrans (EM) images. Second, a deep feature fusion mechanism (meta classifier) is developed to integrate the deep features before the final classification output layer. The meta classifier is a basic deep convolutional neural network trained on the original images together with the features extracted by the three base-level deep learning models. In the feature fusion mechanism, the last three layers of the base models are dropped and their outputs connected to the meta classifier. The proposed deep feature fusion method significantly improved the classification process, achieving an accuracy of 98.97%, notably higher than other state-of-the-art models.
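The fusion step, concatenating the deep features from the truncated base models before the meta classifier, can be sketched with arrays standing in for the per-model feature maps (the shapes are illustrative assumptions, not the paper's actual dimensions):

```python
import numpy as np

def fuse_features(per_model_features):
    """Concatenate each model's per-image feature vectors along the
    feature axis, yielding the meta classifier's input."""
    return np.concatenate([np.asarray(f) for f in per_model_features], axis=-1)
```

With three extractors each emitting a 4-dimensional feature vector for a 2-image batch, the fused meta-classifier input is 12-dimensional per image.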
Affiliation(s)
- Ghulam Ali
- Department of Computer Science, University of Okara, Okara, Pakistan
- Muhammad Anwar
- Department of Information Sciences, Division of Science and Technology, University of Education, Lahore, Pakistan
- Muhammad Nauman
- Department of Computer Science, University of Okara, Okara, Pakistan
- Muhammad Faheem
- School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Javed Rashid
- Department of IT Services, University of Okara, Okara, Pakistan
- MLC Lab, Okara, Pakistan
6
Akram A, Rashid J, Jaffar MA, Faheem M, Amin RU. Segmentation and classification of skin lesions using hybrid deep learning method in the Internet of Medical Things. Skin Res Technol 2023; 29:e13524. PMID: 38009016. PMCID: PMC10646956. DOI: 10.1111/srt.13524.
Abstract
INTRODUCTION: Particularly within the Internet of Medical Things (IoMT) context, skin lesion analysis is critical for precise diagnosis, and computer-aided diagnosis (CAD) systems play a crucial role in improving its accuracy and efficiency. This study focuses on using hybrid deep learning techniques to segment and classify skin lesions from dermoscopy images. METHOD: This research uses a hybrid deep learning model that combines two approaches: a Mask Region-based Convolutional Neural Network (MRCNN) for semantic segmentation and ResNet50 for lesion detection. The MRCNN delineates lesion borders to pinpoint the precise location of each skin lesion. We assembled a large, annotated collection of dermoscopy images for thorough model training, and the hybrid model is trained end to end on this dataset to capture subtle image representations. RESULTS: Experimental results on dermoscopy images show that the proposed hybrid method outperforms current state-of-the-art methods. A segmentation accuracy of 95.49% demonstrates the model's capacity to separate lesions into distinct groups, and the classification of skin lesions shows high accuracy and dependability, a notable advance over traditional methods. On the ISIC 2020 Challenge dataset the model scores 96.75% accuracy, performing exceptionally well compared with current best practices in IoMT. CONCLUSION: The hybrid deep learning strategy presented here is highly effective for skin lesion segmentation and classification. The results show that the model can improve diagnostic accuracy in the IoMT setting and outperforms current gold standards, and the results on the ISIC 2020 Challenge dataset further confirm the viability of the proposed methodology.
Affiliation(s)
- Arslan Akram
- Department of Computer Science and Information Technology, Superior University Lahore, Lahore, Pakistan
- MLC Research Lab, Okara, Pakistan
- Javed Rashid
- MLC Research Lab, Okara, Pakistan
- Information Technology Services, University of Okara, Okara, Pakistan
- Muhammad Arfan Jaffar
- Department of Computer Science and Information Technology, Superior University Lahore, Lahore, Pakistan
- Muhammad Faheem
- School of Technology and Innovations, University of Vaasa, Vaasa, Finland
- Riaz ul Amin
- MLC Research Lab, Okara, Pakistan
- Department of Computer Science, University of Okara, Okara, Pakistan
7
Ogundokun RO, Li A, Babatunde RS, Umezuruike C, Sadiku PO, Abdulahi AT, Babatunde AN. Enhancing Skin Cancer Detection and Classification in Dermoscopic Images through Concatenated MobileNetV2 and Xception Models. Bioengineering (Basel) 2023; 10:979. PMID: 37627864. PMCID: PMC10451641. DOI: 10.3390/bioengineering10080979.
Abstract
One of the most promising research directions in healthcare addresses the rising worldwide incidence of skin cancer and improved methods for its early discovery. The most significant factor in skin cancer fatalities is late identification of the disease; the likelihood of survival may be significantly improved by an early diagnosis followed by appropriate therapy. Extracting elements from tumour photographs that can be used for prospective identification of skin cancer is not a simple process. Several deep learning models are widely used to extract efficient features for skin cancer diagnosis; nevertheless, the literature demonstrates that there is still room for improvement in various performance metrics. This study proposes a hybrid deep convolutional neural network architecture for identifying skin cancer by combining two backbone models, Xception and MobileNetV2. Data augmentation was introduced to balance the dataset, and transfer learning was utilized to address the scarcity of labeled datasets. The suggested method of employing Xception in conjunction with MobileNetV2 attains the best performance on the evaluated dataset: 97.56% accuracy, 97.00% area under the curve, 100% sensitivity, 93.33% precision, 96.55% F1 score, and a 0.0370 false positive rate. This research has implications for clinical practice and public health, offering a valuable tool for dermatologists and healthcare professionals in their fight against skin cancer.
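The reported metrics above are internally consistent: the F1 score is the harmonic mean of precision and recall (sensitivity), and 93.33% precision with 100% sensitivity does give roughly 96.55%:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)
```

Plugging in the paper's numbers, 2 × 0.9333 × 1.0 / (0.9333 + 1.0) ≈ 0.9655, matching the reported F1.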
Affiliation(s)
- Roseline Oluwaseun Ogundokun
- Department of Computer Science, Landmark University, Omu Aran 251103, Nigeria
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
- Aiman Li
- School of Marxism, Guangzhou University of Chinese Medicine, Guangzhou 510006, China
- Peter O. Sadiku
- Department of Computer Science, University of Ilorin, Ilorin 240003, Nigeria
8
Dahou A, Aseeri AO, Mabrouk A, Ibrahim RA, Al-Betar MA, Elaziz MA. Optimal Skin Cancer Detection Model Using Transfer Learning and Dynamic-Opposite Hunger Games Search. Diagnostics (Basel) 2023; 13:1579. PMID: 37174970. PMCID: PMC10178333. DOI: 10.3390/diagnostics13091579.
Abstract
Recently, pre-trained deep learning (DL) models have been employed to tackle and enhance performance on many tasks, such as skin cancer detection, instead of training models from scratch. However, existing systems are unable to attain substantial levels of accuracy. Therefore, we propose in this paper a robust skin cancer detection framework that improves accuracy by extracting and learning relevant image representations using a MobileNetV3 architecture. The extracted features are then fed to a modified Hunger Games Search (HGS) based on Particle Swarm Optimization (PSO) and Dynamic-Opposite Learning (DOLHGS). This modification serves as a novel feature selection step that retains the most relevant features to maximize the model's performance. To evaluate the efficiency of the developed DOLHGS, the ISIC-2016 and PH2 datasets were employed, containing two and three categories, respectively. The proposed model achieves 88.19% accuracy on ISIC-2016 and 96.43% on PH2. Based on the experimental results, the proposed approach is more accurate and efficient in skin cancer detection than other well-known and popular algorithms in terms of classification accuracy and optimized features.
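The dynamic-opposite component builds on classic opposition-based learning, in which each candidate solution is also evaluated at its reflection across the search interval, x̄ = low + high − x. The exact DOLHGS update adds random weighting and is not specified in this abstract, so only the basic opposite-point operation is sketched here:

```python
def opposite(x: float, low: float, high: float) -> float:
    """Classic opposition-based learning: reflect x within [low, high]."""
    return low + high - x
```

Evaluating both a candidate and its opposite roughly doubles the chance of starting near the optimum, which is why opposition-based variants often converge faster on feature-selection problems. Note the operation is an involution: reflecting twice returns the original point.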
Affiliation(s)
- Abdelghani Dahou
- Mathematics and Computer Science Department, University of Ahmed DRAIA, Adrar 01000, Algeria
- Ahmad O Aseeri
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Alhassan Mabrouk
- Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni-Suef 65214, Egypt
- Rehab Ali Ibrahim
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
- Mohammed Azmi Al-Betar
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Mohamed Abd Elaziz
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Faculty of Computer Science & Engineering, Galala University, Suez 43511, Egypt
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 10999, Lebanon
9
Tahir M, Naeem A, Malik H, Tanveer J, Naqvi RA, Lee SW. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers (Basel) 2023; 15:2179. PMID: 37046840. PMCID: PMC10093058. DOI: 10.3390/cancers15072179.
Abstract
Skin cancer is one of the most lethal kinds of human illness. In the present state of the health care system, skin cancer identification is a time-consuming procedure, and a missed early diagnosis can be life-threatening. To attain a high prospect of complete recovery, early detection of skin cancer is crucial. In the last several years, the application of deep learning (DL) algorithms for the detection of skin cancer has grown in popularity. Based on a DL model, this work builds a multi-classification technique for diagnosing skin cancers such as melanoma (MEL), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanocytic nevi (MN). We propose a novel deep learning-based skin cancer classification network (DSCC_Net), based on a convolutional neural network (CNN), and evaluate it on three publicly available benchmark datasets (ISIC 2020, HAM10000, and DermIS). For skin cancer diagnosis, the classification performance of the proposed DSCC_Net model is compared with six baseline deep networks: ResNet-152, Vgg-16, Vgg-19, Inception-V3, EfficientNet-B0, and MobileNet. In addition, we used SMOTE Tomek to handle the minority-class imbalance present in these datasets. The proposed DSCC_Net obtained a 99.43% AUC, along with 94.17% accuracy, a recall of 93.76%, a precision of 94.28%, and an F1 score of 93.93% in categorizing the four distinct types of skin cancer. The accuracies of ResNet-152, Vgg-19, MobileNet, Vgg-16, EfficientNet-B0, and Inception-V3 are 89.32%, 91.68%, 92.51%, 91.12%, 89.46%, and 91.82%, respectively. The results show that the proposed DSCC_Net performs better than the baseline models, offering significant support to dermatologists and health experts in diagnosing skin cancer.
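SMOTE Tomek, used above for the minority-class problem, pairs SMOTE oversampling with Tomek-link cleaning. The SMOTE half synthesizes a new minority sample by interpolating between a real minority sample and one of its minority-class neighbours; this is a generic sketch of that interpolation, not the authors' implementation:

```python
import random

def smote_sample(x, neighbor, rng=None):
    """One SMOTE synthetic point: x + gap * (neighbor - x) with gap in [0, 1),
    so the new sample lies on the segment between two minority samples."""
    rng = rng or random.Random(0)
    gap = rng.random()
    return [a + gap * (b - a) for a, b in zip(x, neighbor)]
```

The Tomek half then removes majority samples that form nearest-neighbour pairs with minority samples, cleaning the class boundary after oversampling.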
Affiliation(s)
- Maryam Tahir
- Department of Computer Science, National College of Business Administration & Economics Lahore, Multan Sub Campus, Multan 60000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Hassaan Malik
- Department of Computer Science, National College of Business Administration & Economics Lahore, Multan Sub Campus, Multan 60000, Pakistan
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Jawad Tanveer
- Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung-Won Lee
- School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
10
Sample-Efficient Deep Learning Techniques for Burn Severity Assessment with Limited Data Conditions. Appl Sci (Basel) 2022; 12:7317. DOI: 10.3390/app12147317.
Abstract
The automatic analysis of medical data and images to assist diagnosis has recently become a major application area of deep learning. In general, deep learning techniques are effective when a large high-quality dataset is available for model training; hence there is a need for sample-efficient learning techniques, particularly in medical image analysis, where significant cost and effort are required to obtain a sufficient number of well-annotated high-quality training samples. In this paper, we address the problem of deep neural network training under sample deficiency by investigating several sample-efficient deep learning techniques, concentrating on skin burn image analysis and classification. We first build a large-scale, professionally annotated dataset of skin burn images, which enables the establishment of convolutional neural network (CNN) models for burn severity assessment with high accuracy. We then deliberately impose data limitation conditions and adapt several sample-efficient techniques, namely transfer learning (TL), self-supervised learning (SSL), federated learning (FL), and generative adversarial network (GAN)-based data augmentation, to those conditions. Through comprehensive experimentation, we evaluate these techniques for burn severity assessment and show, in particular, that SSL models trained on a small task-specific dataset can achieve accuracy comparable to a baseline model trained on a six-times larger dataset. We also demonstrate the applicability of FL and GANs to model training under the data limitation conditions that commonly occur in healthcare and medicine where deep learning models are adopted.