1
Qin J, Pei D, Guo Q, Cai X, Xie L, Zhang W. Intersection-union dual-stream cross-attention Lova-SwinUnet for skin cancer hair segmentation and image repair. Comput Biol Med 2024; 180:108931. PMID: 39079414. DOI: 10.1016/j.compbiomed.2024.108931.
Abstract
Skin cancer images often suffer from hair occlusion, which greatly affects the accuracy of diagnosis and classification. Current dermoscopic hair removal methods use segmentation networks to locate hairs and then use repair networks to restore the image. However, it is difficult to segment hairs and capture their overall structure because they are thin, unclear, and similar in color to the rest of the image. Furthermore, when performing image restoration, the only available images are those obstructed by hair; there is no corresponding ground truth (supervised data) of the same scene without hair. In addition, the texture and structure information used by existing repair methods is often insufficient, which leads to poor results in skin cancer image repair. To address these challenges, we propose the intersection-union dual-stream cross-attention Lova-SwinUnet (IUDC-LS). First, we propose the Lova-SwinUnet module, which embeds the Lovasz loss function into Swin-Unet, enabling the network to better capture features at various scales and thus produce better hair mask segmentation results. Second, we design the intersection-union (IU) module, which takes pairwise intersections or unions of the masks obtained in the previous step and overlays the results on hair-free skin cancer images to generate labeled training data, turning the unsupervised image repair task into a supervised one. Finally, we propose the dual-stream cross-attention (DC) module, which lets texture and structure information interact and uses cross-attention to focus the network on whichever of the two is more important during fusion, improving the repair results.
The experimental results show that the PSNR and SSIM of the proposed method improve by 5.4875 and 0.0401, respectively, compared with other common methods. These results demonstrate the effectiveness of our approach, which serves as a potent tool for skin cancer detection and significantly surpasses comparable methods.
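The IU idea — compositing pairwise intersections or unions of predicted hair masks onto hair-free images to manufacture supervised (input, target) pairs — can be sketched roughly as follows. This is an illustrative numpy sketch, not the authors' implementation; the constant grey `hair_value` used to paint the occlusion is an assumption (the paper overlays realistic hair, not a flat value):

```python
import numpy as np

def make_supervised_pair(clean_img, mask_a, mask_b, mode="union", hair_value=0.1):
    """Composite a synthetic hair occlusion onto a hair-free image.

    clean_img : float array (H, W), the ground-truth hair-free image
    mask_a, mask_b : boolean arrays (H, W), two predicted hair masks
    mode : "union" or "intersection" of the two masks
    hair_value : illustrative grey level painted where hair is placed
    """
    if mode == "union":
        mask = mask_a | mask_b
    elif mode == "intersection":
        mask = mask_a & mask_b
    else:
        raise ValueError(mode)
    occluded = clean_img.copy()
    occluded[mask] = hair_value          # paint synthetic hair
    return occluded, clean_img, mask     # (network input, target, hair location)
```

Because the clean image is known, the repair network can now be trained with a fully supervised reconstruction loss.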
Affiliation(s)
- Juanjuan Qin
- Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, Taiyuan University of Science and Technology, Taiyuan, China.
- Dong Pei
- Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, Taiyuan University of Science and Technology, Taiyuan, China.
- Qian Guo
- Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, Taiyuan University of Science and Technology, Taiyuan, China.
- Xingjuan Cai
- Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, Taiyuan University of Science and Technology, Taiyuan, China; State Key Laboratory for Novel Software Technology at Nanjing University, Nanjing University, Nanjing, China.
- Liping Xie
- Shanxi Key Laboratory of Big Data Analysis and Parallel Computing, Taiyuan University of Science and Technology, Taiyuan, China.
- Wensheng Zhang
- The Institute of Automation, Chinese Academy of Sciences (CAS), Beijing, China.
2
Shakeel CS, Khan SJ. Machine learning (ML) techniques as effective methods for evaluating hair and skin assessments: A systematic review. Proc Inst Mech Eng H 2024; 238:132-148. PMID: 38156410. DOI: 10.1177/09544119231216290.
Abstract
Machine learning (ML) techniques make it possible to effectively evaluate and analyze human skin and hair assessments. The aim of this study is to systematically review the effectiveness of applying ML methods and artificial intelligence (AI) techniques to the evaluation of hair and skin assessments. PubMed, Web of Science, IEEE Xplore, and Science Direct were searched to retrieve research publications between 1 January 2010 and 31 March 2020 using appropriate keywords such as "hair and skin analysis." Following careful screening, 20 peer-reviewed publications were selected for inclusion in this systematic review. The analysis demonstrated that the prevalent ML methods comprised Support Vector Machines (SVM), k-nearest neighbors, and artificial neural networks (ANN). ANNs were observed to yield the highest accuracy, 95%, followed by SVMs at 90%. These techniques were most commonly applied to drafting framework assessments, such as that of melanoma. Values of parameters such as sensitivity, specificity, and area under the curve (AUC) were extracted from the studies, and relevant inferences were drawn from their comparison. ANNs also yielded the highest sensitivity, 82.30%, as well as a specificity of 96.90%. This systematic review thus summarizes how ML techniques have been employed for the analysis and evaluation of hair and skin assessments.
Affiliation(s)
- Saad Jawaid Khan
- Department of Biomedical Engineering, Ziauddin University (ZUFESTM), Karachi, Pakistan
3
Lama N, Kasmi R, Hagerty JR, Stanley RJ, Young R, Miinch J, Nepal J, Nambisan A, Stoecker WV. ChimeraNet: U-Net for Hair Detection in Dermoscopic Skin Lesion Images. J Digit Imaging 2023; 36:526-535. PMID: 36385676. PMCID: PMC10039207. DOI: 10.1007/s10278-022-00740-6.
Abstract
Hair and ruler mark structures in dermoscopic images are an obstacle preventing accurate image segmentation and detection of critical network features. Recognition and removal of hairs from images can be challenging, especially for hairs that are thin, overlapping, faded, similar in color to the skin, or overlaid on a textured lesion. This paper proposes a novel deep learning (DL) technique to detect hair and ruler marks in skin lesion images. Our proposed ChimeraNet is an encoder-decoder architecture that employs a pretrained EfficientNet in the encoder and squeeze-and-excitation residual (SERes) structures in the decoder. We applied this approach at multiple image sizes and evaluated it on the publicly available HAM10000 (ISIC 2018 Task 3) skin lesion dataset. Our test results show that the largest image size (448 × 448) gave the highest accuracy, 98.23%, and a Jaccard index of 0.65, outperforming two well-known deep learning approaches, U-Net and ResUNet-a. We found that the Dice loss function gave the best results across all measures. Further evaluated on 25 additional test images, the technique yields state-of-the-art accuracy compared with 8 previously reported classical techniques. We conclude that the proposed ChimeraNet architecture may enable improved detection of fine image structures, and that further application of DL techniques to detect dermoscopy structures is warranted.
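The Dice loss used for training and the Jaccard index used for evaluation here are standard overlap measures; a minimal numpy sketch (the smoothing constant `eps` is an illustrative choice, not taken from the paper):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|). pred/target values in [0, 1]."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def jaccard_index(pred_mask, target_mask):
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred_mask, target_mask).sum()
    union = np.logical_or(pred_mask, target_mask).sum()
    return inter / union if union else 1.0
```

For a prediction and target that each cover two pixels but overlap in one, the Dice loss is 0.5 and the Jaccard index is 1/3.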
Affiliation(s)
- Norsang Lama
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- R Joe Stanley
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Reagan Young
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Jessica Miinch
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Anand Nambisan
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
4
Jena B, Naik MK, Panda R, Abraham A. A novel minimum generalized cross entropy-based multilevel segmentation technique for the brain MRI/dermoscopic images. Comput Biol Med 2022; 151:106214. PMID: 36308899. DOI: 10.1016/j.compbiomed.2022.106214.
Abstract
BACKGROUND One of the challenging primary stages of medical image examination is identifying the source of a disease, which may be aberrant damage or change in tissue or an organ caused by infection, injury, or a variety of other factors. Such conditions of the skin or brain sometimes advance to cancer and become life-threatening, so an efficient automatic image segmentation approach is required at the initial stage of medical image analysis. PURPOSE To make the segmentation process efficient and reliable, it is essential to use an appropriate objective function and an efficient optimization algorithm to produce optimal results. METHOD This paper resolves the above problem by introducing a new minimum generalized cross entropy (MGCE) objective function that incorporates the degree of divergence. Another key contribution is a new optimizer, the opposition African vulture optimization algorithm (OAVOA), which boosts exploration skill by inheriting opposition-based learning. RESULTS The experimental work starts with a performance evaluation of the optimizer over a set of 23 standard and 8 IEEE CEC14 benchmark functions. A comparative analysis of the test results shows that OAVOA outperforms different state-of-the-art optimizers. The suggested OAVOA-MGCE multilevel thresholding approach was applied to two types of medical images, brain MRI images (AANLIB dataset) and dermoscopic images (ISIC 2016 dataset), and found superior to other entropy-based thresholding methods.
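The opposition-based learning that OAVOA inherits has a simple generic form: for a candidate x in [lb, ub], the opposite point is lb + ub − x, and the better of each (x, x') pair survives. A minimal sketch of that single step follows; the full OAVOA update is far more involved, so this illustrates only the generic opposition idea:

```python
import numpy as np

def opposition_step(population, lb, ub, fitness):
    """Generic opposition-based learning step (minimisation).

    For each candidate x in [lb, ub], form its opposite x' = lb + ub - x
    and keep whichever of (x, x') has the lower fitness value.
    """
    opposites = lb + ub - population
    f_pop = np.apply_along_axis(fitness, 1, population)
    f_opp = np.apply_along_axis(fitness, 1, opposites)
    keep_opp = f_opp < f_pop
    return np.where(keep_opp[:, None], opposites, population)
```

When the optimum lies near the opposite of a candidate, this step moves the candidate most of the way there in one evaluation.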
Affiliation(s)
- Bibekananda Jena
- Dept. of Electronics and Communication Engineering, Anil Neerukonda Institute of Technology & Science, Sangivalasa, Visakhapatnam, Andhra Pradesh, 531162, India
- Manoj Kumar Naik
- Faculty of Engineering and Technology, Siksha O Anusandhan, Bhubaneswar, Odisha, 751030, India
- Rutuparna Panda
- Dept. of Electronics and Telecommunication Engineering, Veer Surendra Sai University of Technology, Burla, Odisha, 768018, India
- Ajith Abraham
- Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, WA, 98071-2259, USA
5
Lee J, Jeong I, Kim K, Cho J. Design and Implementation of Embedded-Based Vein Image Processing System with Enhanced Denoising Capabilities. Sensors (Basel) 2022; 22:8559. PMID: 36366256. PMCID: PMC9656323. DOI: 10.3390/s22218559.
Abstract
In general, it is very difficult to visually locate blood vessels for intravenous injection or surgery, and when vein detection fails, the patient suffers physical and mental pain and the hospital incurs financial loss. NIR-based vein detection technology is being developed to prevent this problem. The proposed study combines vein detection with digital hair removal to eliminate body hair, a source of noise that hinders detection accuracy, improving the performance of the overall algorithm by about 10.38% over existing systems. When vein detection was performed on patients without body hair, the proposed system still achieved 5.04% higher performance than the existing system, verifying the proposed study's results. Devices incorporating the proposed method are expected to provide more accurate vascular maps in general situations.
6
Bardou D, Bouaziz H, Lv L, Zhang T. Hair removal in dermoscopy images using variational autoencoders. Skin Res Technol 2022; 28:445-454. PMID: 35254677. PMCID: PMC9907627. DOI: 10.1111/srt.13145.
Abstract
BACKGROUND In recent years, melanoma has been rising at a faster rate than other cancers. Although it is the most serious type of skin cancer, diagnosis at an early stage makes it curable. Dermoscopy is a reliable medical technique in which a dermoscope is used to examine the skin and detect melanoma. In the last few decades, digital imaging devices have made great progress, allowing high-quality images from these examinations to be captured and stored. The stored images are now being standardized and used for the automatic detection of melanoma. However, hair covering the skin makes this task challenging, so it is important to eliminate the hair to obtain accurate results. METHODS In this paper, we propose a simple yet efficient method for hair removal using a variational autoencoder, without the need for paired samples. The encoder takes a dermoscopy image as input and builds a latent distribution that ignores hair, which is treated as noise, while the decoder reconstructs a hair-free image. Both encoder and decoder use convolutional neural network architectures that provide high performance. Our model is trained in two stages: in the first, it is trained on hair-occluded images to output hair-free images, and in the second, it is optimized on hair-free images to preserve image textures. Although the variational autoencoder produces hair-free images, it does not by itself maintain the quality of the generated images. Thus, we explored three loss functions, the structural similarity index (SSIM), the L1-norm, and the L2-norm, to improve the visual quality of the generated images. RESULTS The hair-free reconstructed images are evaluated using t-distributed stochastic neighbor embedding (t-SNE) feature mapping, visualizing the distributions of the real and synthesized hair-free images.
Experiments on the publicly available HAM10000 dataset show that our method is very efficient.
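The three reconstruction losses explored here (SSIM, L1-norm, L2-norm) can be combined as a weighted sum. A rough numpy sketch follows, using a simplified *global* SSIM (one window over the whole image, unlike the usual sliding-window SSIM) and illustrative weights that are assumptions, not the paper's settings:

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified global SSIM for images scaled to [0, 1] (no sliding window)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def combined_loss(recon, target, w_ssim=0.5, w_l1=0.25, w_l2=0.25):
    """Weighted mix of (1 - SSIM), mean L1, and mean L2 reconstruction terms."""
    l1 = np.abs(recon - target).mean()
    l2 = ((recon - target) ** 2).mean()
    return w_ssim * (1.0 - global_ssim(recon, target)) + w_l1 * l1 + w_l2 * l2
```

A perfect reconstruction drives every term to zero, while the SSIM term penalises structural, not just pixel-wise, deviations.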
Affiliation(s)
- Dalal Bardou
- Department of Computer Science and Mathematics, University of Abbes Laghrour, Khenchela, Algeria
- Hamida Bouaziz
- Mécatronique Laboratory, Department of Computer Science, Jijel University, Jijel, Algeria
- Laishui Lv
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Ting Zhang
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
7
Sies K, Winkler JK, Fink C, Bardehle F, Toberer F, Buhl T, Enk A, Blum A, Stolz W, Rosenberger A, Haenssle HA. Does sex matter? Analysis of sex-related differences in the diagnostic performance of a market-approved convolutional neural network for skin cancer detection. Eur J Cancer 2022; 164:88-94. PMID: 35182926. DOI: 10.1016/j.ejca.2021.12.034.
Abstract
BACKGROUND Advances in biomedical artificial intelligence may introduce or perpetuate sex and gender discrimination. Convolutional neural networks (CNN) have demonstrated dermatologist-level performance in image classification tasks but have not been assessed for sex and gender biases that may affect training data and diagnostic performance. In this study, we investigated sex-related imbalances in the training data and diagnostic performance of a market-approved CNN for skin cancer classification (Moleanalyzer Pro®, Fotofinder Systems GmbH, Bad Birnbach, Germany). METHODS We screened open-access dermoscopic image repositories widely used for CNN training for the distribution of sex. Moreover, the sex-related diagnostic performance of the market-approved CNN was tested on 1549 dermoscopic images stratified by sex (female n = 773; male n = 776). RESULTS Most open-access repositories showed a marked under-representation of images originating from female (40%) versus male (60%) patients. Despite these imbalances and well-known sex-related differences in skin anatomy or skin-directed behaviour, the tested CNN achieved a comparable sensitivity of 87.0% [80.9%-91.3%] versus 87.1% [81.1%-91.4%], specificity of 98.7% [97.4%-99.3%] versus 96.9% [95.2%-98.0%] and ROC-AUC of 0.984 [0.975-0.993] versus 0.979 [0.969-0.988] in dermoscopic images of female versus male origin, respectively. In the sample at hand, sex-related differences in ROC-AUCs were not statistically significant in either the per-image analysis or an additional per-individual analysis (p ≥ 0.59). CONCLUSION The design and training of artificial intelligence algorithms for medical applications should generally acknowledge sex and gender dimensions. Despite sex-related imbalances in open-access training data, the diagnostic performance of the tested CNN showed no sex-related bias in the classification of skin lesions.
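The sex-stratified evaluation reduces to computing sensitivity and specificity separately within each subgroup. A minimal numpy sketch, with illustrative label and group encodings (1 = malignant, 0 = benign; group strings are arbitrary):

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

def stratified_performance(y_true, y_pred, groups):
    """Sensitivity/specificity computed per subgroup (e.g. per sex)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: sens_spec(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}
```

Comparing the per-group tuples (plus confidence intervals, omitted here) is exactly the kind of check the study performs.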
Affiliation(s)
- Katharina Sies
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Julia K Winkler
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Christine Fink
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Felicitas Bardehle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Ferdinand Toberer
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Timo Buhl
- Department of Dermatology, Venereology and Allergology, University Medical Center Göttingen, Göttingen, Germany
- Alexander Enk
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Andreas Blum
- Public, Private and Teaching Practice of Dermatology, Konstanz, Germany
- Wilhelm Stolz
- Department of Dermatology, Allergology and Environmental Medicine II, Hospital Thalkirchner Street, Munich, Germany
- Albert Rosenberger
- Department of Genetic Epidemiology, University of Goettingen, Goettingen, Germany
- Holger A Haenssle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
8
Nie Y, Sommella P, Carratu M, Ferro M, O'Nils M, Lundgren J. Recent Advances in Diagnosis of Skin Lesions Using Dermoscopic Images Based on Deep Learning. IEEE Access 2022; 10:95716-95747. DOI: 10.1109/access.2022.3199613.
Affiliation(s)
- Yali Nie
- Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
- Paolo Sommella
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Marco Carratu
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Matteo Ferro
- Department of Industrial Engineering, University of Salerno, Fisciano, Italy
- Mattias O'Nils
- Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
- Jan Lundgren
- Department of Electronics Design, Mid Sweden University, Sundsvall, Sweden
9
10
Cheong KH, Tang KJW, Zhao X, Koh JEW, Faust O, Gururajan R, Ciaccio EJ, Rajinikanth V, Acharya UR. An automated skin melanoma detection system with melanoma-index based on entropy features. Biocybern Biomed Eng 2021. DOI: 10.1016/j.bbe.2021.05.010.
11
Petrie T, Larson C, Heath M, Samatham R, Davis A, Berry E, Leachman S. Quantifying acceptable artefact ranges for dermatologic classification algorithms. Skin Health and Disease 2021; 1:e19. PMID: 35664971. PMCID: PMC9060017. DOI: 10.1002/ski2.19.
Abstract
Background Many classifiers have been developed that can distinguish different types of skin lesions (e.g., benign nevi, melanoma) with varying degrees of success [1-5]. However, even successfully trained classifiers may perform poorly on images that include artefacts. While problems created by hair and ink markings have been published, quantitative measurements of the impact of blur, colour, and lighting variations on classification accuracy have not, to our knowledge, been reported. Objectives We created a system that measures the impact of various artefacts on machine learning accuracy. Our objectives were to (1) quantitatively identify the most egregious artefacts and (2) demonstrate how to assess a classification algorithm's accuracy when input images include artefacts. Methods We injected artefacts into dermatologic images using techniques that could each be controlled with a single variable, allowing us to quantitatively evaluate their impact on accuracy. We trained two convolutional neural networks on two different binary classification tasks and measured the impact on dermoscopy images over a range of parameter values. The area under the curve and specificity-at-a-given-sensitivity values were measured for each artefact at each parameter value. Results General blur had the strongest negative effect on the melanoma-versus-other task. Conversely, shifting the hue towards blue had a more pronounced effect on the suspicious-versus-follow task. Conclusions Classifiers should either mitigate artefacts or detect them, and images should be excluded from diagnosis/recommendation when artefacts are present in amounts outside the machine-perceived quality range. Failure to do so will reduce accuracy and impede approval from regulatory agencies.
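The key design point of the study — each artefact controlled by a single scalar so that accuracy can be plotted against it — can be illustrated with a simple blur injector. The pure-numpy box blur below is a generic stand-in, not the blur the authors actually used:

```python
import numpy as np

def box_blur(image, radius):
    """Inject a blur artefact controlled by a single integer parameter.

    radius = 0 returns the image unchanged; otherwise a (2*radius+1)-wide
    box filter is applied along each axis, with reflect padding at edges.
    """
    if radius <= 0:
        return image.copy()
    k = 2 * radius + 1
    out = image.astype(float)
    for axis in (0, 1):
        pad = [(radius, radius) if a == axis else (0, 0) for a in range(out.ndim)]
        padded = np.pad(out, pad, mode="reflect")
        out = sum(np.take(padded, range(i, i + out.shape[axis]), axis=axis)
                  for i in range(k)) / k
    return out
```

Sweeping `radius` over a range and re-running the classifier on each output yields the accuracy-versus-artefact curves the paper describes.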
Affiliation(s)
- T.C. Petrie
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
- C. Larson
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
- M. Heath
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
- R. Samatham
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
- A. Davis
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
- E.G. Berry
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
- S.A. Leachman
- Department of Dermatology, Oregon Health & Science University, Portland, Oregon, USA
12
Attia M, Hossny M, Zhou H, Nahavandi S, Asadi H, Yazdabadi A. Realistic hair simulator for skin lesion images: A novel benchemarking tool. Artif Intell Med 2020; 108:101933. PMID: 32972662. DOI: 10.1016/j.artmed.2020.101933.
Abstract
Automated skin lesion analysis is a trending field that has gained attention among dermatologists and health care practitioners. Skin lesion restoration is an essential pre-processing step that enhances lesions for accurate automated analysis and diagnosis by both dermatologists and computer-aided diagnosis tools. Hair occlusion is one of the most common artifacts in dermatoscopic images and can negatively impact skin lesion diagnosis by both dermatologists and automated computer diagnostic tools. Digital hair removal is a non-invasive image enhancement method for reducing the hair-occlusion artifact in previously captured images. Several hair removal methods have been proposed for hair delineation and removal, but without standardized benchmarking techniques, and the manual annotation they require is one of the main challenges hindering their validation on large numbers of images or against benchmark datasets for comparison purposes. In this work, we propose a photo-realistic hair simulator based on context-aware image synthesis, using image-to-image translation via conditional generative adversarial networks to generate different hair occlusions in skin images along with a ground-truth mask of hair locations. A hair-occluded image is synthesized from the latent structure of any hair-free input image by deep-encoding the image into a latent feature vector; the locations of the required hair are highlighted as white pixels on the input image, and the deep-encoded features are then used to reconstruct a highly realistic synthetic hair-occluded image. In addition, we explored three loss functions, the L1-norm, the L2-norm, and the structural similarity index (SSIM), to maximize the visual quality of the synthesis. For evaluation of the generated samples, t-SNE feature mapping and the Bland-Altman test are used as visualization tools for the experimental results.
The results show the superior performance of our proposed method compared with previous hair-synthesis methods, producing plausible colours and preserving the integrity of the lesion texture. The proposed method can be used to generate benchmark datasets for comparing the performance of digital hair removal methods. The code is available online at: https://github.com/attiamohammed/realhair.
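A crude stand-in for such a simulator illustrates the output convention only: a synthetic hair-occluded image paired with its ground-truth hair mask. Real hairs are curved and the paper uses a GAN for photo-realism, so the straight random segments and flat `hair_value` below are purely illustrative:

```python
import numpy as np

def draw_random_hairs(image, n_hairs=5, hair_value=0.0, rng=None):
    """Overlay straight synthetic 'hairs'; return (occluded image, truth mask).

    Each hair is a random straight segment rasterised by linear interpolation
    between two random endpoints.
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for _ in range(n_hairs):
        (r0, r1), (c0, c1) = rng.integers(0, h, 2), rng.integers(0, w, 2)
        n = max(abs(r1 - r0), abs(c1 - c0)) + 1
        rows = np.linspace(r0, r1, n).round().astype(int)
        cols = np.linspace(c0, c1, n).round().astype(int)
        mask[rows, cols] = True
    occluded = image.copy()
    occluded[mask] = hair_value
    return occluded, mask
```

The (image, mask) pairs produced this way are exactly the shape of benchmark data a digital hair removal method would be scored against.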
Affiliation(s)
- Mohamed Attia
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia; Medical Research Institute, Alexandria University, Egypt
- Mohammed Hossny
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia
- Hailing Zhou
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia
- Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation, Deakin University, Australia
- Hamed Asadi
- School of Medicine, Melbourne University, Australia
13
Akram T, Lodhi HMJ, Naqvi SR, Naeem S, Alhaisoni M, Ali M, Haider SA, Qadri NN. A multilevel features selection framework for skin lesion classification. Human-centric Computing and Information Sciences 2020. DOI: 10.1186/s13673-020-00216-y.
Abstract
Melanoma is considered one of the deadliest types of skin cancer, and its frequency has risen in the last few years; earlier diagnosis, however, significantly increases patients' chances of survival. To that end, a few computer-based methods capable of diagnosing skin lesions at an initial stage have recently been proposed. Despite some success, room for improvement remains, and the machine learning community still considers this an outstanding research challenge. In this work, we propose a novel framework for skin lesion classification that integrates deep feature information to generate the most discriminant feature vector while preserving the original feature space. We utilize recent deep models for feature extraction via transfer learning. Initially, the dermoscopic images are segmented and the lesion region is extracted, which is later used to retrain the selected deep models and generate fused feature vectors. In the second phase, we propose a framework for discriminant feature selection and dimensionality reduction, entropy-controlled neighborhood component analysis (ECNCA). This hierarchical framework optimizes the fused features by selecting the principal components and discarding redundant and irrelevant data. The effectiveness of our design is validated on four benchmark dermoscopic datasets: PH2, ISIC MSK, ISIC UDA, and ISBI-2017. To authenticate the proposed method, a fair comparison with existing techniques is also provided. The simulation results clearly show that the proposed design is accurate enough to categorize skin lesions with 98.8%, 99.2%, 97.1%, and 95.9% accuracy with the selected classifiers on the four datasets, while utilizing less than 3% of the features.
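ECNCA itself combines neighborhood component analysis with entropy control; the sketch below only illustrates the entropy-ranking ingredient — scoring each fused feature by histogram entropy and keeping the top k. The bin count and the keep-highest-entropy rule are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def feature_entropy(col, bins=16):
    """Shannon entropy (bits) of one feature, estimated from a histogram."""
    counts, _ = np.histogram(col, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_by_entropy(X, k):
    """Keep the k features of X (n_samples x n_features) with highest entropy.

    Returns the reduced matrix and the indices of the kept columns.
    """
    scores = np.array([feature_entropy(X[:, j]) for j in range(X.shape[1])])
    idx = np.argsort(scores)[::-1][:k]
    return X[:, idx], idx
```

A constant (zero-entropy) feature carries no discriminative information and is the first to be discarded under this rule.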