1
Sinha A, Kawahara J, Pakzad A, Abhishek K, Ruthven M, Ghorbel E, Kacem A, Aouada D, Hamarneh G. DermSynth3D: Synthesis of in-the-wild annotated dermatology images. Med Image Anal 2024; 95:103145. [PMID: 38615432] [DOI: 10.1016/j.media.2024.103145] [Received: 05/25/2023] [Revised: 02/11/2024] [Accepted: 03/18/2024]
Abstract
In recent years, deep learning (DL) has shown great potential in the field of dermatological image analysis. However, existing datasets in this domain have significant limitations, including a small number of image samples, limited disease conditions, insufficient annotations, and non-standardized image acquisitions. To address these shortcomings, we propose a novel framework called DermSynth3D. DermSynth3D blends skin disease patterns onto 3D textured meshes of human subjects using a differentiable renderer and generates 2D images from various camera viewpoints under chosen lighting conditions in diverse background scenes. Our method adheres to top-down rules that constrain the blending and rendering process to create 2D images with skin conditions that mimic in-the-wild acquisitions, ensuring more meaningful results. The framework generates photo-realistic 2D dermatological images and the corresponding dense annotations for semantic segmentation of the skin, skin conditions, body parts, bounding boxes around lesions, depth maps, and other 3D scene parameters, such as camera position and lighting conditions. DermSynth3D allows for the creation of custom datasets for various dermatology tasks. We demonstrate the effectiveness of data generated using DermSynth3D by training DL models on synthetic data and evaluating them on various dermatology tasks using real 2D dermatological images. We make our code publicly available at https://github.com/sfu-mial/DermSynth3D.
Affiliation(s)
- Ashish Sinha
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Jeremy Kawahara
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Arezou Pakzad
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Matthieu Ruthven
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
- Enjie Ghorbel
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg; Cristal Laboratory, National School of Computer Sciences, University of Manouba, 2010, Tunisia
- Anis Kacem
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
- Djamila Aouada
- Computer Vision, Imaging & Machine Intelligence Research Group, Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, L-1855, Luxembourg
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
2
Hu Z, Mei W, Chen H, Hou W. Multi-scale feature fusion and class weight loss for skin lesion classification. Comput Biol Med 2024; 176:108594. [PMID: 38761501] [DOI: 10.1016/j.compbiomed.2024.108594] [Received: 12/31/2023] [Revised: 05/09/2024] [Accepted: 05/10/2024]
Abstract
Skin cancer is one of the common types of cancer. It spreads quickly and is difficult to detect in its early stages, posing a major threat to human health. In recent years, deep learning methods have attracted widespread attention for skin cancer detection in dermoscopic images. However, training a practical classifier is highly challenging due to inter-class similarity and intra-class variation in skin lesion images. To address these problems, we propose a multi-scale fusion structure that combines shallow and deep features for more accurate classification. Simultaneously, we implement three approaches to the problem of class imbalance: class weighting, label smoothing, and resampling. In addition, hair features are stripped out of the HAM10000_RE dataset to examine the role of hair in the classification process, and we show that the region of interest is the most critical classification feature for the HAM10000_SE dataset, in which lesion regions are segmented. We evaluated the effectiveness of our model on the HAM10000 and ISIC2019 datasets. The results show that this method performs well on dermoscopic classification tasks, with an ACC of 94.0% and an AUC of 99.3% on the HAM10000 dataset and an ACC of 89.8% on the ISIC2019 dataset. The overall performance of our model is excellent in comparison to state-of-the-art models.
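As a concrete illustration of two of the class-imbalance remedies named in this abstract, here is a minimal NumPy sketch of a cross-entropy loss with class weighting and label smoothing. The function name, the inverse-frequency-style per-class weights, and the exact smoothing scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def smoothed_weighted_ce(logits, labels, class_weights, eps=0.1):
    """Cross-entropy with label smoothing and per-class weights.

    logits: (N, C) raw scores; labels: (N,) integer class ids;
    class_weights: (C,) weights, e.g. inverse class frequency.
    """
    n, c = logits.shape
    # log-softmax with the usual max-shift for numerical stability
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # smoothed targets: (1 - eps) on the true class, eps spread uniformly
    targets = np.full((n, c), eps / c)
    targets[np.arange(n), labels] += 1.0 - eps
    # weight each sample by the weight of its true class
    w = class_weights[labels]
    losses = -(targets * log_probs).sum(axis=1)
    return float((w * losses).sum() / w.sum())
```

With `eps=0` and unit weights this reduces to plain cross-entropy; increasing `eps` penalizes over-confident predictions, which is the usual motivation for label smoothing on imbalanced dermoscopic data.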
Affiliation(s)
- Zhentao Hu
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Weiqiang Mei
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Hongyu Chen
- School of Artificial Intelligence, Henan University, Zhengzhou, 450046, China
- Wei Hou
- College of Computer and Information Engineering, Henan University, Kaifeng, 475001, China
3
Xu Z, Guo X, Wang J. Enhancing skin lesion segmentation with a fusion of convolutional neural networks and transformer models. Heliyon 2024; 10:e31395. [PMID: 38807881] [PMCID: PMC11130697] [DOI: 10.1016/j.heliyon.2024.e31395] [Received: 04/27/2023] [Revised: 05/11/2024] [Accepted: 05/15/2024]
Abstract
Accurate segmentation is crucial in diagnosing and analyzing skin lesions. However, automatic segmentation of skin lesions is extremely challenging because of their variable sizes, uneven color distributions, irregular shapes, hair occlusions, and blurred boundaries. Owing to the limited receptive fields of convolutional networks, shallow convolution cannot extract the global features of images and thus has limited segmentation performance. Because medical image datasets are small in scale, the use of excessively deep networks could cause overfitting and increase computational complexity. Although transformer networks can focus on extracting global information, they cannot extract sufficient local information and accurately segment detailed lesion features. In this study, we designed a dual-branch encoder that combines a convolutional neural network (CNN) and a transformer. The CNN branch of the encoder comprises four layers, which learn the local features of images through layer-wise downsampling. The transformer branch also comprises four layers, enabling the learning of global image information through attention mechanisms. The feature fusion module in the network integrates local features and global information, emphasizes important channel features through the channel attention mechanism, and filters out irrelevant feature expressions. Information exchange between the decoder and encoder is finally achieved through skip connections to supplement the information lost during the sampling process, thereby enhancing segmentation accuracy. The data used in this paper come from four public datasets, including images of melanoma, basal cell tumor, fibroma, and benign nevus. Because of the limited size of the image data, we augmented them using methods such as random horizontal flipping, random vertical flipping, random brightness enhancement, random contrast enhancement, and rotation. Segmentation accuracy was evaluated using the intersection over union (IoU) and Dice indicators, reaching 87.7% and 93.21% on ISIC 2016, 82.05% and 89.19% on ISIC 2017, 86.81% and 92.72% on ISIC 2018, and 92.79% and 96.21% on PH2, respectively (code: https://github.com/hyjane/CCT-Net).
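The two overlap metrics reported in this entry (IoU and Dice) can be sketched for binary masks in a few lines of NumPy; the function name and the convention of returning 1.0 when both masks are empty are illustrative choices.

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection-over-union and Dice coefficient for binary masks.

    pred, target: arrays of the same shape, interpretable as booleans.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0    # both masks empty -> perfect
    dice = 2 * inter / total if total else 1.0
    return float(iou), float(dice)
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both from the same predictions.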
Affiliation(s)
- Zhijian Xu
- School of Electronic Information Engineering, China West Normal University, No. 1 Shida Road, Nanchong, Sichuan, 637009, China
- Xingyue Guo
- School of Computer Science, China West Normal University, No. 1 Shida Road, Nanchong, Sichuan, 637009, China
- Juan Wang
- School of Computer Science, China West Normal University, No. 1 Shida Road, Nanchong, Sichuan, 637009, China
4
Romero-Morelos P, Herrera-López E, González-Yebra B. Development, Application and Utility of a Machine Learning Approach for Melanoma and Non-Melanoma Lesion Classification Using Counting Box Fractal Dimension. Diagnostics (Basel) 2024; 14:1132. [PMID: 38893659] [PMCID: PMC11171650] [DOI: 10.3390/diagnostics14111132] [Received: 03/15/2024] [Revised: 04/16/2024] [Accepted: 05/09/2024]
Abstract
The diagnosis and identification of melanoma are not always accurate, even for experienced dermatologists. Histopathology continues to be the gold standard, assessing specific parameters such as the Breslow index, but it remains invasive and may lack effectiveness. Leveraging mathematical modeling and informatics has therefore been a pursuit of diagnostic methods favoring early detection. Fractality, a mathematical parameter quantifying complexity and irregularity, has proven useful in melanoma diagnosis. Nonetheless, no studies have used this metric to feed artificial intelligence algorithms for the automatic classification of dermatological lesions, including melanoma. Hence, this study aimed to determine the combined utility of fractal dimension and unsupervised, low-computational-requirement machine learning models in classifying melanoma and non-melanoma lesions. We analyzed 39,270 dermatological lesions obtained from the International Skin Imaging Collaboration. Box-counting fractal dimensions were calculated for these lesions, and the fractal values were used to implement unsupervised machine learning classification based on principal component analysis and iterated K-means (100 iterations). Using only fractal dimension values, a clear separation was observed between benign and malignant lesions (sensitivity 72.4% and specificity 50.1%) and between melanoma and non-melanoma lesions (sensitivity 72.8% and specificity 50%); the subsequent classification quality based on the machine learning model was ≈80% for both benign versus malignant and melanoma versus non-melanoma lesions. However, the grouping of metastatic melanoma (MM) versus non-metastatic melanoma was less effective, probably due to the small number of MM lesions included. Nevertheless, we can suggest a decision algorithm based on fractal dimension for dermatological lesion discrimination, and we also determined that the fractal dimension alone is sufficient to generate unsupervised artificial intelligence models that allow for a more efficient classification of dermatological lesions.
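The box-counting fractal dimension this study computes can be sketched as follows: count occupied boxes at dyadic box sizes and fit the slope of log(count) against log(size). This is a generic minimal version, not the authors' code; the cropping-to-a-power-of-two convention is an assumption.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the box-counting fractal dimension of a binary 2-D mask.

    Counts boxes containing any foreground pixel at dyadic box sizes,
    then fits log(count) ~ -D * log(size) by least squares.
    """
    mask = np.asarray(mask, dtype=bool)
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]                       # crop to a power-of-two square
    sizes, counts = [], []
    size = n
    while size >= 1:
        k = n // size
        # a box is "occupied" if any pixel inside it is foreground
        boxes = mask.reshape(k, size, k, size).any(axis=(1, 3))
        sizes.append(size)
        counts.append(max(int(boxes.sum()), 1))  # avoid log(0) for empty masks
        size //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

Sanity checks: a filled square should give a dimension near 2, a straight line near 1; irregular lesion borders fall in between, which is what makes the value a usable classification feature.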
Affiliation(s)
- Pablo Romero-Morelos
- Department of Research, State University of the Valley of Ecatepec, Ecatepec 55210, México State, Mexico
- National Laboratory of Artificial Intelligence and Data Science, CONAHCyT (LNC-IACD), Ecatepec 55210, México State, Mexico
- Elizabeth Herrera-López
- Department of Research, State University of the Valley of Ecatepec, Ecatepec 55210, México State, Mexico
- National Laboratory of Artificial Intelligence and Data Science, CONAHCyT (LNC-IACD), Ecatepec 55210, México State, Mexico
- Beatriz González-Yebra
- Department of Medicine and Nutrition, Division of Health Sciences, University of Guanajuato, Campus León, León 37670, Guanajuato, Mexico
5
Prasad V, Jeba Jingle ID, Sriramakrishnan GV. DTDO: Driving Training Development Optimization enabled deep learning approach for brain tumour classification using MRI. Network (Bristol, England) 2024:1-42. [PMID: 38801074] [DOI: 10.1080/0954898x.2024.2351159] [Received: 12/28/2023] [Accepted: 04/29/2024]
Abstract
A brain tumour is an abnormal mass of tissue. Brain tumours vary in size, from tiny to large, and display variations in location and shape, which add complexity to their detection. The accurate delineation of tumour regions poses a challenge due to their irregular boundaries. In this research, these issues are overcome by introducing DTDO-ZFNet for brain tumour detection. The input Magnetic Resonance Imaging (MRI) image is fed to the pre-processing stage. Tumour areas are segmented using SegNet, whose factors are biased using DTDO. Image augmentation is carried out using established techniques, such as geometric transformation and colour space transformation. Features such as the GIST descriptor, PCA-NGIST, statistical and Haralick features, the SLBT feature, and CNN features are extracted. Finally, the categorization of the tumour is accomplished by ZFNet, which is trained using DTDO, a consolidation of DTBO and CDDO. Compared with existing methods, the proposed DTDO-ZFNet achieves the highest accuracy of 0.944, a positive predictive value (PPV) of 0.936, a true positive rate (TPR) of 0.939, a negative predictive value (NPV) of 0.937, and a minimal false-negative rate (FNR) of 0.061.
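The confusion-matrix metrics reported here (and in several other entries of this list) follow standard definitions, sketched below; the function name and dict layout are illustrative.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard metrics from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)          # positive predictive value (precision)
    npv = tn / (tn + fn)          # negative predictive value
    tpr = tp / (tp + fn)          # true positive rate (sensitivity/recall)
    fnr = fn / (tp + fn)          # false negative rate = 1 - TPR
    return {"accuracy": accuracy, "ppv": ppv, "npv": npv,
            "tpr": tpr, "fnr": fnr}
```

Note that TPR and FNR always sum to 1, which is why a TPR of 0.939 implies an FNR of 0.061.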
Affiliation(s)
- Vadamodula Prasad
- Department of Computer Science & Engineering, Lendi Institute of Engineering & Technology, Jonnada, India
- Issac Diana Jeba Jingle
- Department of Computer Science & Engineering, Christ (Deemed to be University), Bangalore, India
6
Li Z, Zhang N, Gong H, Qiu R, Zhang W. SG-MIAN: Self-guided multiple information aggregation network for image-level weakly supervised skin lesion segmentation. Comput Biol Med 2024; 170:107988. [PMID: 38232452] [DOI: 10.1016/j.compbiomed.2024.107988] [Received: 10/05/2023] [Revised: 12/11/2023] [Accepted: 01/13/2024]
Abstract
Nowadays, skin disease is becoming one of the most malignant diseases threatening people's health. Computer-aided diagnosis based on deep learning has become a widely used technology to assist medical professionals, and segmentation of lesion areas is one of its most important steps. However, traditional medical image segmentation methods rely on numerous pixel-level labels for fully supervised training, and such a labeling process is time-consuming and requires professional competence. To reduce the cost of pixel-level labeling, we propose a method that segments skin lesion areas using only image-level labels. Because image-level labels lack the lesion's spatial and intensity information, and skin lesions are widely distributed with irregular shapes and varied textures, the algorithm must pay great attention to automatic lesion localization and perception of lesion boundaries. In this paper, we propose a Self-Guided Multiple Information Aggregation Network (SG-MIAN). Our backbone network, MIAN, utilizes the Multiple Spatial Perceptron (MSP), using classification information alone as guidance to discriminate the key classification features of lesion areas and thereby perform more accurate localization and activation of lesion areas. Adjunct to the MSP, we also propose an Auxiliary Activation Structure (AAS) and two auxiliary loss functions for further self-guided boundary correction, achieving accurate boundary activation. To verify the effectiveness of the proposed method, we conducted extensive experiments on the HAM10000 and PH2 datasets, which demonstrated superior performance compared to most existing weakly supervised segmentation methods.
Affiliation(s)
- Zhixun Li
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Nan Zhang
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Huiling Gong
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Ruiyun Qiu
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Wei Zhang
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
7
Aluri S, Imambi SS. Brain tumour classification using MRI images based on LeNet with golden teacher learning optimization. Network (Bristol, England) 2024; 35:27-54. [PMID: 37947040] [DOI: 10.1080/0954898x.2023.2275720] [Received: 08/17/2023] [Accepted: 10/22/2023]
Abstract
A brain tumour (BT) is a dangerous neurological disorder produced by abnormal cell growth within the skull or brain, and the death rate of people with BTs is growing steadily. Finding tumours at an early stage is crucial for giving treatment to patients and improves their survival rate. Hence, BT classification (BTC) is performed in this research using magnetic resonance imaging (MRI) images. The input MRI image is pre-processed using a non-local means (NLM) filter that denoises the image. To attain an effective classification result, the tumour area is segmented from the MRI image by the SegNet model. Furthermore, the BTC is accomplished by the LeNet model, whose weights are optimized by the Golden Teacher Learning Optimization Algorithm (GTLO), such that the classified outputs produced by the LeNet model are gliomas, meningiomas, and pituitary tumours. The experimental outcome shows that GTLO-LeNet achieved an accuracy of 0.896, a negative predictive value (NPV) of 0.907, a positive predictive value (PPV) of 0.821, a true negative rate (TNR) of 0.880, and a true positive rate (TPR) of 0.888.
Affiliation(s)
- Srilakshmi Aluri
- Research Scholar, Computer Science & Engineering, K L Educational Foundation (Deemed to be University), Vaddeswaram, India
- Sagar S Imambi
- Professor, Computer Science and Engineering, K L Educational Foundation (Deemed to be University), Vaddeswaram, India
8
Nugroho ES, Ardiyanto I, Nugroho HA. Boosting the performance of pretrained CNN architecture on dermoscopic pigmented skin lesion classification. Skin Res Technol 2023; 29:e13505. [PMID: 38009020] [PMCID: PMC10598432] [DOI: 10.1111/srt.13505] [Received: 08/17/2023] [Accepted: 10/12/2023]
Abstract
BACKGROUND: Pigmented skin lesions (PSLs) pose medical and esthetic challenges for those affected. PSLs can develop into skin cancers, particularly melanoma, which can be life-threatening. Detecting and treating melanoma early can reduce mortality rates. Dermoscopic imaging offers a noninvasive and cost-effective technique for examining PSLs. However, the lack of standardized colors, varying image capture settings, and artifacts make accurate analysis challenging. Computer-aided diagnosis (CAD) using deep learning models, such as convolutional neural networks (CNNs), has shown promise by automatically extracting features from medical images. Nevertheless, enhancing the performance of CNN models remains challenging, notably concerning sensitivity. MATERIALS AND METHODS: In this study, we aim to enhance the classification performance of selected pretrained CNNs. We use the 2019 ISIC dataset, which contains eight disease classes. To achieve this goal, two methods are applied: resolution of the dataset imbalance challenge through augmentation, and optimization of the training hyperparameters via Bayesian tuning. RESULTS: Performance improvements were observed for all tested pretrained CNNs. The Inception-V3 model achieved the best performance among the tested models, with an accuracy of 96.40% and an AUC of 0.98. CONCLUSION: According to the study, classification performance was significantly enhanced by augmentation and Bayesian hyperparameter tuning.
Affiliation(s)
- Erwin Setyo Nugroho
- Engineering Faculty, Department of Electrical Engineering and Information Technology, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Department of Informatics, Politeknik Caltex Riau, Riau, Indonesia
- Igi Ardiyanto
- Engineering Faculty, Department of Electrical Engineering and Information Technology, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Hanung Adi Nugroho
- Engineering Faculty, Department of Electrical Engineering and Information Technology, Universitas Gadjah Mada, Yogyakarta, Indonesia
9
Debelee TG. Skin Lesion Classification and Detection Using Machine Learning Techniques: A Systematic Review. Diagnostics (Basel) 2023; 13:3147. [PMID: 37835889] [PMCID: PMC10572538] [DOI: 10.3390/diagnostics13193147] [Received: 08/30/2023] [Revised: 09/22/2023] [Accepted: 09/24/2023]
Abstract
Skin lesion analysis is essential for the early detection and management of a number of dermatological disorders. Learning-based methods for skin lesion analysis have drawn much attention lately because of improvements in computer vision and machine learning techniques. This survey paper presents a review of the most recent methods for skin lesion classification, segmentation, and detection, and discusses the significance of skin lesion analysis in healthcare and the difficulties of physical inspection. State-of-the-art papers targeting skin lesion classification are covered in depth, with the goal of correctly identifying the type of skin lesion from dermoscopic, macroscopic, and other lesion image formats. The contributions and limitations of the various techniques used in the selected study papers, including deep learning architectures and conventional machine learning methods, are examined. The survey then looks into study papers focused on skin lesion segmentation and detection techniques that aim to identify the precise borders of skin lesions and classify them accordingly. These techniques make it easier to conduct subsequent analyses and allow for precise measurements and quantitative evaluations. The paper discusses well-known segmentation algorithms, including deep-learning-based, graph-based, and region-based ones, along with the difficulties, datasets, and evaluation metrics particular to skin lesion segmentation. Throughout the survey, notable datasets, benchmark challenges, and evaluation metrics relevant to skin lesion analysis are highlighted, providing a comprehensive overview of the field. The paper concludes with a summary of the major trends, challenges, and potential future directions in skin lesion classification, segmentation, and detection, aiming to inspire further advancements in this critical domain of dermatological research.
Affiliation(s)
- Taye Girma Debelee
- Ethiopian Artificial Intelligence Institute, Addis Ababa 40782, Ethiopia
- Department of Electrical and Computer Engineering, Addis Ababa Science and Technology University, Addis Ababa 16417, Ethiopia
10
Derekas P, Spyridonos P, Likas A, Zampeta A, Gaitanis G, Bassukas I. The Promise of Semantic Segmentation in Detecting Actinic Keratosis Using Clinical Photography in the Wild. Cancers (Basel) 2023; 15:4861. [PMID: 37835555] [PMCID: PMC10571759] [DOI: 10.3390/cancers15194861] [Received: 08/18/2023] [Revised: 10/01/2023] [Accepted: 10/02/2023]
Abstract
Actinic keratosis (AK) is a common precancerous skin condition that requires effective detection and treatment monitoring. To improve the monitoring of the AK burden in clinical settings with enhanced automation and precision, the present study evaluates the application of semantic segmentation based on the U-Net architecture (i.e., AKU-Net). AKU-Net employs transfer learning to compensate for the relatively small dataset of annotated images and integrates a recurrent process based on convLSTM to exploit contextual information and address the challenges related to the low contrast and ambiguous boundaries of AK-affected skin regions. We used an annotated dataset of 569 clinical photographs from 115 patients with actinic keratosis to train and evaluate the model. From each photograph, patches of 512 × 512 pixels were extracted using translated lesion boxes that encompassed lesions in different positions and captured different contexts of perilesional skin. In total, 16,488 translation-augmented crops were used for training the model, and 403 lesion-center crops were used for testing. To demonstrate the improvements in AK detection, AKU-Net was compared with plain U-Net and U-Net++ architectures. The experimental results highlighted the effectiveness of AKU-Net, which improves upon both the automation and the precision of existing approaches, paving the way for more effective and reliable evaluation of actinic keratosis in clinical settings.
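The translation-augmented cropping described above can be sketched roughly as follows. The offset grid, the clamping behaviour at image borders, and the function name are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def translated_crops(image, box, patch=512,
                     offsets=((0, 0), (-64, 0), (64, 0), (0, -64), (0, 64))):
    """Extract fixed-size patches around a lesion bounding box.

    box: (row0, col0, row1, col1). Each offset shifts the patch centre,
    so the lesion lands in different positions with different amounts of
    perilesional context; crop origins are clamped to the image bounds.
    """
    h, w = image.shape[:2]
    cr = (box[0] + box[2]) // 2          # lesion centre row
    cc = (box[1] + box[3]) // 2          # lesion centre column
    crops = []
    for dr, dc in offsets:
        r0 = int(np.clip(cr + dr - patch // 2, 0, max(h - patch, 0)))
        c0 = int(np.clip(cc + dc - patch // 2, 0, max(w - patch, 0)))
        crops.append(image[r0:r0 + patch, c0:c0 + patch])
    return crops
```

Enumerating many such offsets per lesion is what multiplies 569 photographs into the 16,488 training crops reported above.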
Affiliation(s)
- Panagiotis Derekas
- Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Panagiota Spyridonos
- Department of Medical Physics, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Aristidis Likas
- Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Athanasia Zampeta
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Georgios Gaitanis
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Ioannis Bassukas
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
11
Bibi S, Khan MA, Shah JH, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics (Basel) 2023; 13:3063. [PMID: 37835807] [PMCID: PMC10572512] [DOI: 10.3390/diagnostics13193063] [Received: 07/13/2023] [Revised: 09/19/2023] [Accepted: 09/24/2023]
Abstract
Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence. The considerable death rate linked with melanoma requires early detection so that immediate and successful treatment can be given. Lesion detection and classification are made more challenging by many forms of artifacts, such as hairs and noise, as well as irregularity of lesion shape and color, irrelevant features, and textures. In this work, we propose a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified with a residual block at the end and trained through transfer learning. In the learning process, a genetic algorithm is applied to select the hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the accuracy of the correct classification, but some irrelevant information is also retained. Therefore, an algorithm called marine predators algorithm (MPA)-controlled Rényi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, were selected for the experimental process, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To prove the effectiveness of the proposed methods, a detailed comparison is conducted with several recent techniques and shows that the proposed framework outperforms them.
Affiliation(s)
- Sobia Bibi
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102-2801, Lebanon
- Department of CS, HITEC University, Taxila 47080, Pakistan
- Jamal Hussain Shah
- Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Robertas Damaševičius
- Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
12
Hussain M, Khan MA, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. SkinNet-INIO: Multiclass Skin Lesion Localization and Classification Using Fusion-Assisted Deep Neural Networks and Improved Nature-Inspired Optimization Algorithm. Diagnostics (Basel) 2023; 13:2869. [PMID: 37761236] [PMCID: PMC10527569] [DOI: 10.3390/diagnostics13182869] [Received: 08/11/2023] [Revised: 08/30/2023] [Accepted: 09/01/2023]
Abstract
Background: Using artificial intelligence (AI) with the concept of a deep learning-based automated computer-aided diagnosis (CAD) system has shown improved performance for skin lesion classification. Although deep convolutional neural networks (DCNNs) have significantly improved many image classification tasks, it is still difficult to accurately classify skin lesions because of a lack of training data, inter-class similarity, intra-class variation, and the inability to concentrate on semantically significant lesion parts. Innovations: To address these issues, we propose an automated deep learning and best-feature-selection framework for multiclass skin lesion classification in dermoscopy images. The proposed framework performs an initial preprocessing step for contrast enhancement using a new technique based on dark channel haze and top-bottom filtering. Three pre-trained deep learning models are then fine-tuned and trained using the transfer learning concept. In the fine-tuning process, we added and removed a few layers to lessen the parameters and later selected the hyperparameters using a genetic algorithm (GA) instead of manual assignment; the purpose of hyperparameter selection using the GA is to improve the learning performance. After that, the deeper layer is selected for each network and deep features are extracted. The extracted deep features are fused using a novel serial correlation-based approach. This technique reduces the feature vector length relative to a plain serial approach, but a little redundant information remains. To address this issue, we propose an improved ant lion optimization algorithm for best feature selection. The selected features are finally classified using machine learning algorithms. Main Results: The experimental process was conducted using two publicly available datasets, ISIC2018 and ISIC2019, on which we obtained accuracies of 96.1% and 99.9%, respectively. A comparison with state-of-the-art techniques shows that the proposed framework improves accuracy. Conclusions: The proposed framework successfully enhances the contrast of the cancer region. Moreover, hyperparameter selection using automated techniques improved the learning process of the proposed framework, and the proposed fusion and improved selection process maintain the best accuracy while shortening the computational time.
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut 13-5053, Lebanon
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Robertas Damaševičius
- Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Areej Alasiry
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Anum Masood
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway