1.
Nie T, Zhao Y, Yao S. ELA-Net: An Efficient Lightweight Attention Network for Skin Lesion Segmentation. Sensors (Basel) 2024; 24:4302. PMID: 39001081; PMCID: PMC11243870; DOI: 10.3390/s24134302.
Abstract
In clinical settings limited by equipment, lightweight skin lesion segmentation is pivotal because it allows the model to be integrated into diverse medical devices, improving operational efficiency. However, lightweight designs may suffer accuracy degradation, especially on complex images such as skin lesion images with irregular regions and blurred or oversized boundaries. To address these challenges, we propose an efficient lightweight attention network (ELANet) for the skin lesion segmentation task. In ELANet, the two different attention mechanisms of the bilateral residual module (BRM) provide complementary information, enhancing sensitivity to features in the spatial and channel dimensions, respectively; multiple BRMs are then stacked for efficient feature extraction from the input. In addition, the network acquires global information and improves segmentation accuracy by passing feature maps of different scales through multi-scale attention fusion (MAF) operations. Finally, we evaluate ELANet on three publicly available datasets, ISIC2016, ISIC2017, and ISIC2018. The experimental results show that our algorithm achieves mIoU scores of 89.87%, 81.85%, and 82.87% on the three datasets with only 0.459 M parameters, an excellent balance between accuracy and model size that is superior to many existing segmentation methods.
Affiliation(s)
- Tianyu Nie
- School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
- Yishi Zhao
- School of Computer Science, China University of Geosciences, Wuhan 430074, China
- Engineering Research Center of Natural Resource Information Management and Digital Twin Engineering Software, Ministry of Education, Wuhan 430074, China
- Shihong Yao
- School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
2.
Useini V, Tanadini-Lang S, Lohmeyer Q, Meboldt M, Andratschke N, Braun RP, Barranco García J. Automatized self-supervised learning for skin lesion screening. Sci Rep 2024; 14:12697. PMID: 38830890; PMCID: PMC11148053; DOI: 10.1038/s41598-024-61681-4.
Abstract
Melanoma, the deadliest form of skin cancer, has seen a steady increase in incidence rates worldwide, posing a significant challenge to dermatologists. Early detection is crucial for improving patient survival rates. However, performing total body screening (TBS), i.e., identifying suspicious lesions or ugly ducklings (UDs) by visual inspection, can be challenging and often requires sound expertise in pigmented lesions. To assist users of varying expertise levels, an artificial intelligence (AI) decision support tool was developed. Our solution identifies and characterizes UDs from real-world wide-field patient images. It employs a state-of-the-art object detection algorithm to locate and isolate all skin lesions present in a patient's total body images. These lesions are then sorted based on their level of suspiciousness using a self-supervised AI approach, tailored to the specific context of the patient under examination. A clinical validation study was conducted to evaluate the tool's performance. The results demonstrated an average sensitivity of 95% for the top-10 AI-identified UDs on skin lesions selected by the majority of experts in pigmented skin lesions. The study also found that the tool increased dermatologists' confidence when formulating a diagnosis, and the average majority agreement with the top-10 AI-identified UDs reached 100% when assisted by our tool. With the development of this AI-based decision support tool, we aim to address the shortage of specialists, enable faster consultation times for patients, and demonstrate the impact and usability of AI-assisted screening. Future developments will include expanding the dataset to include histologically confirmed melanoma and validating the tool for additional body regions.
Affiliation(s)
- Vullnet Useini
- Department of Mechanical and Process Engineering, ETH Zurich, Leonhardstrasse 21, 8092, Zurich, Switzerland
- Department of Radiation Oncology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- University of Zurich, Rämistrasse 71, 8006, Zurich, Switzerland
- Quentin Lohmeyer
- Department of Mechanical and Process Engineering, ETH Zurich, Leonhardstrasse 21, 8092, Zurich, Switzerland
- Mirko Meboldt
- Department of Mechanical and Process Engineering, ETH Zurich, Leonhardstrasse 21, 8092, Zurich, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- University of Zurich, Rämistrasse 71, 8006, Zurich, Switzerland
- Ralph P Braun
- Department of Dermatology, University Hospital Zurich, Gloriastrasse 31, 8091, Zurich, Switzerland
- Javier Barranco García
- Department of Radiation Oncology, University Hospital Zurich, Rämistrasse 100, 8091, Zurich, Switzerland
- University of Zurich, Rämistrasse 71, 8006, Zurich, Switzerland
3.
Primiero CA, Rezze GG, Caffery LJ, Carrera C, Podlipnik S, Espinosa N, Puig S, Janda M, Soyer HP, Malvehy J. A Narrative Review: Opportunities and Challenges in Artificial Intelligence Skin Image Analyses Using Total Body Photography. J Invest Dermatol 2024; 144:1200-1207. PMID: 38231164; DOI: 10.1016/j.jid.2023.11.007.
Abstract
Artificial intelligence (AI) algorithms for skin lesion classification have reported accuracy on par with, and in some cases exceeding, that of expert dermatologists in experimental settings. However, the majority of algorithms do not reflect the real-world clinical approach, in which skin phenotype and clinical background information are considered. We review the current state of AI for skin lesion classification and present opportunities and challenges when applied to total body photography (TBP). AI in TBP analysis offers opportunities for intrapatient assessment of skin phenotype and for holistic risk assessment that incorporates patient-level metadata, although challenges remain in protecting patient privacy during algorithm development and in improving explainable AI methods.
Affiliation(s)
- Clare A Primiero
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia
- Gisele Gargantini Rezze
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Liam J Caffery
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia; Centre of Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Centre for Online Health, The University of Queensland, Brisbane, Australia
- Cristina Carrera
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Medicine Department, University of Barcelona, Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
- Sebastian Podlipnik
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
- Natalia Espinosa
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Susana Puig
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Medicine Department, University of Barcelona, Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
- Monika Janda
- Centre of Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia
- H Peter Soyer
- Dermatology Research Centre, Frazer Institute, The University of Queensland, Brisbane, Australia; Dermatology Department, Princess Alexandra Hospital, Brisbane, Australia
- Josep Malvehy
- Dermatology Department, Hospital Clinic and Fundació Clínic per la Recerca Biomèdica - Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain; Medicine Department, University of Barcelona, Barcelona, Spain; CIBER de Enfermedades raras, Instituto de Salud Carlos III, Barcelona, Spain
4.
Winkler JK, Kommoss KS, Toberer F, Enk A, Maul LV, Navarini AA, Hudson J, Salerni G, Rosenberger A, Haenssle HA. Performance of an automated total body mapping algorithm to detect melanocytic lesions of clinical relevance. Eur J Cancer 2024; 202:114026. PMID: 38547776; DOI: 10.1016/j.ejca.2024.114026.
Abstract
IMPORTANCE Total body photography for skin cancer screening is a well-established tool that allows documentation and follow-up of the entire skin surface. Artificial intelligence-based systems are increasingly applied for automated lesion detection and diagnosis. DESIGN AND PATIENTS In this prospective observational international multicentre study, experienced dermatologists performed skin cancer screenings and identified clinically relevant melanocytic lesions (CRML, requiring biopsy or observation). Additionally, patients received 2D automated total body mapping (ATBM) with automated lesion detection (ATBM master, FotoFinder Systems GmbH). The primary endpoint was the percentage of CRML detected by the bodyscan software. Secondary endpoints included the percentage of correctly identified "new" and "changed" lesions during follow-up examinations. RESULTS At baseline, dermatologists identified 1075 CRML in 236 patients, and 999 CRML (92.9%) were also detected by the automated software. During follow-up examinations dermatologists identified 334 CRML in 55 patients, with 323 (96.7%) also being detected by ATBM with automated lesion detection. Moreover, all new (n = 13) and changed CRML (n = 24) at follow-up were detected by the software. The average time requirement per baseline examination was 14.1 min (95% CI [12.8-15.5]). Subgroup analysis of undetected lesions revealed either technical reasons (e.g. coverage by clothing or hair) or lesion-specific reasons (e.g. hypopigmentation, palmoplantar sites). CONCLUSIONS ATBM with lesion detection software correctly detected the vast majority of CRML, and of new or changed CRML during follow-up examinations, in a favourable amount of time. Our prospective international study underlines that automated lesion detection in TBP images is feasible, which is relevant for developing AI-based skin cancer screening.
Affiliation(s)
- Julia K Winkler
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Ferdinand Toberer
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Alexander Enk
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Lara V Maul
- Department of Dermatology, University Hospital of Basel, Basel, Switzerland
- Jeremy Hudson
- North Queensland Skin Centre, Townsville, Queensland, Australia
- Gabriel Salerni
- Department of Dermatology, Hospital Provincial del Centenario de Rosario - Universidad Nacional de Rosario, Rosario, Argentina
- Albert Rosenberger
- Institute of Genetic Epidemiology, University Medical Center, Georg-August University of Goettingen, Goettingen, Germany
- Holger A Haenssle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
5.
Strzelecki M, Kociołek M, Strąkowska M, Kozłowski M, Grzybowski A, Szczypiński PM. Artificial intelligence in the detection of skin cancer: State of the art. Clin Dermatol 2024; 42:280-295. PMID: 38181888; DOI: 10.1016/j.clindermatol.2023.12.022.
Abstract
The incidence of melanoma is increasing rapidly. This cancer has a good prognosis if detected early. For this reason, systems for skin lesion image analysis, which support imaging diagnostics of this neoplasm, are developing very dynamically. To detect and recognize neoplastic lesions, such systems use various artificial intelligence (AI) algorithms. This area of computer science has recently undergone dynamic development, yielding several solutions that are effective tools supporting diagnosticians in many medical specialties. In this contribution, applications of different classes of AI algorithms for the detection of melanoma are presented and evaluated. Both classic systems based on the analysis of dermatoscopic images and total body systems, which analyze the patient's whole body to detect moles and pathologic changes, are discussed. Increasingly popular applications that allow the analysis of lesion images using smartphones are also described. A quantitative evaluation of the discussed systems is presented, with particular emphasis on how the implemented algorithms were validated. The advantages and limitations of AI in the analysis of lesion images are also discussed, and problems that must be solved for more effective use of AI in dermatology are identified.
Affiliation(s)
- Michał Strzelecki
- Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Marcin Kociołek
- Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Maria Strąkowska
- Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Michał Kozłowski
- Department of Mechatronics and Technical and IT Education, Faculty of Technical Science, University of Warmia and Mazury, Olsztyn, Poland
- Andrzej Grzybowski
- Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
6.
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. PMID: 38201399; PMCID: PMC10795598; DOI: 10.3390/diagnostics14010089.
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway to further enhance diagnostic accuracy. This study introduces an approach employing the max voting ensemble technique for robust skin cancer classification on the ISIC 2018 Task 1-2 dataset. We incorporate a range of state-of-the-art, pre-trained deep neural networks: MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages their synergistic capabilities by combining their complementary features to further elevate classification performance. In our approach, input images undergo preprocessing for model compatibility. The ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction; these are then aggregated using the max voting ensemble technique, with the majority-voted class serving as the final classification. Through comprehensive testing on a diverse dataset, our ensemble outperformed the individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, thus demonstrating superior diagnostic reliability and accuracy. We also evaluated the proposed method on the HAM10000 dataset to ensure its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. By utilizing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
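The aggregation step the abstract describes is straightforward to illustrate. Below is a minimal sketch of max (majority) voting over per-model class predictions; the three dummy prediction arrays are illustrative stand-ins, not outputs of the paper's actual networks.

```python
import numpy as np

# Hypothetical per-model class predictions for 5 lesion images (0 = benign, 1 = malignant).
# In the paper's setting these would come from the ten pre-trained CNNs; here three
# dummy arrays stand in for the individual model outputs.
preds_model_a = np.array([0, 1, 1, 0, 1])
preds_model_b = np.array([0, 1, 0, 0, 1])
preds_model_c = np.array([1, 1, 1, 0, 0])

def max_voting(*model_preds):
    """Majority vote across models: the most frequent class label per sample."""
    stacked = np.stack(model_preds)          # shape: (n_models, n_samples)
    # For each sample (column), count the votes per class and keep the winner.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)

final = max_voting(preds_model_a, preds_model_b, preds_model_c)
print(final.tolist())  # → [0, 1, 1, 0, 1]
```

Note that with hard (label) voting, ties must be broken somehow; `bincount(...).argmax()` simply favours the lower class index, which is one reason ensembles often use an odd number of models.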
Affiliation(s)
- Md. Mamun Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Md. Moazzem Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Most. Binoee Arefin
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Fahima Akhtar
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- John Blake
- School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
7.
Ahmedt-Aristizabal D, Nguyen C, Tychsen-Smith L, Stacey A, Li S, Pathikulangara J, Petersson L, Wang D. Monitoring of Pigmented Skin Lesions Using 3D Whole Body Imaging. Comput Methods Programs Biomed 2023; 232:107451. PMID: 36893580; DOI: 10.1016/j.cmpb.2023.107451.
Abstract
BACKGROUND AND OBJECTIVES Advanced artificial intelligence and machine learning have great potential to redefine how skin lesions are detected, mapped, tracked and documented. Here, we propose a 3D whole-body imaging system, 3DSkin-mapper, to enable automated detection, evaluation and mapping of skin lesions. METHODS A modular camera rig arranged in a cylindrical configuration was designed to automatically capture images of the entire skin surface of a subject synchronously from multiple angles. Based on the images, we developed algorithms for 3D model reconstruction, data processing, and skin lesion detection and tracking based on deep convolutional neural networks. We also introduced a customised, user-friendly, and adaptable interface that enables individuals to interactively visualise, manipulate, and annotate the images, with built-in features such as mapping 2D skin lesions onto the corresponding 3D model. RESULTS The proposed system is developed for skin lesion screening; the focus of this paper is to introduce the system rather than a clinical study. Using synthetic and real images, we demonstrate the effectiveness of the proposed system by providing multiple views of a target skin lesion, enabling further 3D geometry analysis and longitudinal tracking. Skin lesions are identified as outliers that deserve more attention from a skin cancer physician. Our detector leverages expert-annotated labels to learn representations of skin lesions while capturing the effects of anatomical variability. It takes only a few seconds to capture the entire skin surface and about half an hour to process and analyse the images. CONCLUSIONS Our experiments show that the proposed system allows fast and easy whole-body 3D imaging. It can be used by dermatological clinics to conduct skin screening, detect and track skin lesions over time, identify suspicious lesions, and document pigmented lesions. The system can potentially save clinicians significant time and effort. 3D imaging and analysis have the potential to change the paradigm of whole-body photography, with many applications in skin diseases, including inflammatory and pigmentary disorders. With reduced time requirements for recording and documenting high-quality skin information, doctors could spend more time providing better-quality treatment based on more detailed and accurate information.
Affiliation(s)
- Chuong Nguyen
- Imaging and Computer Vision group, CSIRO Data61, Australia
- Shenghong Li
- Imaging and Computer Vision group, CSIRO Data61, Australia
- Lars Petersson
- Imaging and Computer Vision group, CSIRO Data61, Australia
- Dadong Wang
- Imaging and Computer Vision group, CSIRO Data61, Australia
8.
Abstract
Deep learning models have been increasingly applied to medical images for tasks such as lesion detection, segmentation, and diagnosis. However, the field suffers from a lack of concrete definitions of what constitutes a usable explanation in different settings. To identify specific aspects of explainability that may catalyse trust in deep learning models, we demonstrate several techniques for explaining convolutional neural networks in a medical imaging context. One important factor influencing clinicians' trust is how well a model can justify its predictions or outcomes. Clinicians need understandable explanations of why a machine-learned prediction was made so they can assess whether it is accurate and clinically useful. The provision of appropriate explanations is generally understood to be critical for establishing trust in deep learning models, yet a clear understanding of what constitutes an explanation that is both understandable and useful is lacking across domains such as medical image analysis, which hampers efforts to develop explanatory tool sets tailored to these tasks. In this paper, we investigated two major directions for explaining convolutional neural networks: feature-based post hoc explanatory methods, which explain already-trained, fixed target models; and preliminary analysis and choice of model architecture, which yielded an accuracy of 98% ± 0.156% from 36 CNN architectures with different configurations.
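As a concrete illustration of the feature-based post hoc direction, the sketch below implements occlusion sensitivity, one of the simplest such methods: a patch is slid over the input and the drop in the model's score is recorded at each position. The `toy_model` is a hypothetical stand-in for a trained CNN's class-probability output, not a method from the paper.

```python
import numpy as np

def toy_model(img):
    """Hypothetical stand-in for a trained classifier: scores the mean intensity
    of a fixed 4x4 'lesion' region. A real use would return a CNN class probability."""
    return float(img[8:12, 8:12].mean())

def occlusion_map(img, model, patch=4, stride=4, fill=0.0):
    """Feature-based post hoc explanation: mask each patch, record the score drop."""
    base = model(img)
    h, w = img.shape
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = img.copy()
            occluded[i*stride:i*stride+patch, j*stride:j*stride+patch] = fill
            heat[i, j] = base - model(occluded)   # large drop = important region
    return heat

img = np.ones((16, 16))
heat = occlusion_map(img, toy_model)
peak = tuple(int(k) for k in np.unravel_index(heat.argmax(), heat.shape))
print(peak)  # → (2, 2): the patch covering the 'lesion' at rows/cols 8-11
```

The resulting heat map highlights exactly the region the model relies on, which is the kind of justification the abstract argues clinicians need in order to trust a prediction.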
9.
Huang Z, Ni Y, Yu Q, Li J, Fan L, Eskin NAM. Deep learning in food science: An insight in evaluating Pickering emulsion properties by droplets classification and quantification via object detection algorithm. Adv Colloid Interface Sci 2022; 304:102663. PMID: 35430426; DOI: 10.1016/j.cis.2022.102663.
Abstract
Understanding complicated emulsion microstructures from microscopic images helps to further elaborate their mechanisms and relevance. The classification and quantification of emulsion microstructure is a formidable goal that appears difficult to achieve, but deep learning object detection algorithms make it feasible. This paper reports a new technique for evaluating Pickering emulsion properties through classification and quantification of the emulsion microstructure with an object detection algorithm. The trained neural network models characterize the emulsion droplets by distinguishing between different individual emulsion droplets and morphological mechanisms across numerous microscopic images. The quantified droplet results presented in this study provide details of statistical changes at different concentrations of the Pickering interface and different storage temperatures, enabling elucidation of the mechanisms involved. This methodology provides a new quantitative and statistical analysis of emulsion microstructure and properties.
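The quantification step downstream of a detector can be sketched as follows. The bounding boxes, class ids, and pixel scale below are illustrative assumptions, not data from the study; the point is only that per-class counts and a rough size statistic fall out of detector output directly.

```python
import numpy as np

# Hypothetical detector output: one row per detected droplet,
# (x1, y1, x2, y2, class_id); class 0 = single droplet, 1 = flocculated cluster.
boxes = np.array([
    [10, 10,  30,  30, 0],
    [50, 12,  90,  52, 0],
    [15, 60,  35,  80, 1],
    [60, 65, 100, 105, 0],
])

def droplet_stats(boxes, um_per_px=0.5):
    """Per-class droplet counts and a rough mean equivalent diameter (µm)."""
    widths  = (boxes[:, 2] - boxes[:, 0]) * um_per_px
    heights = (boxes[:, 3] - boxes[:, 1]) * um_per_px
    diam = (widths + heights) / 2            # equivalent diameter per droplet
    classes, n = np.unique(boxes[:, 4], return_counts=True)
    counts = {int(c): int(k) for c, k in zip(classes, n)}
    return counts, float(diam.mean())

counts, mean_diam = droplet_stats(boxes)
print(counts, mean_diam)  # → {0: 3, 1: 1} 15.0
```

Repeating this over images taken at different interface concentrations or storage temperatures yields the statistical comparisons the abstract describes.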
Affiliation(s)
- Zongyu Huang
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Yang Ni
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Qun Yu
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Jinwei Li
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- Liuping Fan
- School of Food Science and Technology, Jiangnan University, 1800 Lihu Avenue, Wuxi, Jiangsu 214122, China
- N A Michael Eskin
- Department of Food and Human Nutritional Sciences, University of Manitoba, Winnipeg, Manitoba R3T 2N, Canada
10.
Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors (Basel) 2022; 22:799. PMID: 35161553; PMCID: PMC8838278; DOI: 10.3390/s22030799.
Abstract
Variation in skin textures and injuries makes the detection and classification of skin cancer a difficult task, and manually detecting skin lesions from dermoscopy images is difficult and time-consuming. Recent advancements in the domains of the Internet of Things (IoT) and artificial intelligence for medical applications have demonstrated improvements in both accuracy and computational time. In this paper, a new method for multiclass skin lesion classification using best deep learning feature fusion and an extreme learning machine is proposed. The proposed method includes five primary steps: image acquisition and contrast enhancement; deep learning feature extraction using transfer learning; best feature selection using a hybrid whale optimization and entropy-mutual information (EMI) approach; fusion of the selected features using a modified canonical correlation based approach; and, finally, extreme learning machine based classification. The feature selection step improves the system's computational efficiency and accuracy. The experiment is carried out on two publicly available datasets, HAM10000 and ISIC2018, achieving accuracies of 93.40% and 94.36%, respectively. Compared to state-of-the-art (SOTA) techniques, the proposed method improves accuracy while remaining computationally efficient.
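The final classification stage names an extreme learning machine, whose defining trait is a randomly initialized, untrained hidden layer with output weights solved in closed form. A minimal sketch, using toy data as a stand-in for the fused deep-feature vectors (the dimensions and labels below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=64):
    """Extreme learning machine: a random, untrained hidden layer, with
    output weights solved in closed form by least squares (pseudo-inverse)."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot           # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy 2-class data standing in for the fused feature vectors.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W, b, beta = elm_train(X, np.eye(2)[y])
acc = float((elm_predict(X, W, b, beta) == y).mean())
print(acc)  # training accuracy on the toy data; close to 1.0 in practice
```

Because only `beta` is learned, and by a single pseudo-inverse rather than gradient descent, training is very fast, which is what makes the ELM attractive as the lightweight final stage after heavy feature extraction.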
11.
Orthorectification of Skin Nevi Images by Means of 3D Model of the Human Body. Sensors (Basel) 2021; 21:8367. PMID: 34960467; PMCID: PMC8708235; DOI: 10.3390/s21248367.
Abstract
Melanoma is the most lethal form of skin cancer, and develops from mutation of pigment-producing cells. As it becomes malignant, it usually grows in size, changes proportions, and develops an irregular border. We introduce a system for early detection of such changes, which enables whole-body screening, especially useful in patients with atypical mole syndrome. The paper proposes a procedure to build a 3D model of the patient, relate the high-resolution skin images with the model, and orthorectify these images to enable detection of size and shape changes in nevi. The novelty is in the application of image encoding indices and barycentric coordinates of the mesh triangles. The proposed procedure was validated with a set of markers of a specified geometry. The markers were attached to the body of a volunteer and analyzed by the system. The results of quantitative comparison of original and corrected images confirm that the orthorectification allows for more accurate estimation of size and proportions of skin nevi.
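The barycentric-coordinate mechanism the abstract mentions can be sketched as follows: a point inside a 2D mesh triangle is expressed as weights over the triangle's vertices, and the same weights applied to the corresponding 3D vertices locate the point on the body model. The triangle and vertex values below are illustrative, not taken from the paper.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p w.r.t. triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return np.array([1.0 - v - w, v, w])          # weights for vertices a, b, c

# Illustrative 2D mesh triangle in the image plane, and a pixel inside it.
a2d, b2d, c2d = np.array([0., 0.]), np.array([4., 0.]), np.array([0., 4.])
u, v, w = barycentric(np.array([1., 1.]), a2d, b2d, c2d)
print(u, v, w)  # → 0.5 0.25 0.25

# The same weights applied to the triangle's 3D vertices locate the point
# on the body model (vertex positions are illustrative).
a3d, b3d, c3d = np.array([0., 0., 0.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])
p3d = u * a3d + v * b3d + w * c3d
print(p3d.tolist())  # → [0.25, 0.25, 0.5]
```

A point lies inside the triangle exactly when all three weights are non-negative, which also gives a cheap point-in-triangle test when assigning pixels to mesh faces.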