1. Yang J, Shen J, Kong Y, Wang L, Yao Z, Liang J, Yu X. Successful Treatment of Facial Multiple Melanocytic Nevus-Like Dark Macules Caused by Severe Acne Vulgaris by a Single Session of Intense Pulsed Light Treatment. Clin Cosmet Investig Dermatol 2025; 18:427-430. PMID: 40007815; PMCID: PMC11853118; DOI: 10.2147/ccid.s497696.
Abstract
Background and Objective The case report aims to demonstrate the therapeutic effects of a single session of intense pulsed light (IPL) treatment on facial multiple melanocytic nevus-like dark macules induced by severe acne vulgaris. Materials and Methods A 17-year-old male with acne was assessed as Pillsbury IV according to the Pillsbury classification. After three sessions of photodynamic therapy (PDT), he experienced an increase in number and darkening of facial melanocytic nevus-like dark macules. We attempted to use broadband light (BBL) (SCITON Company, USA) (420nm, 8J, 180ms; 515nm, 13J, 20ms; 560nm, 16J, 24ms; 590nm, 16J, 24ms) therapy to improve post-inflammatory erythema (PIE) and post-inflammatory hyperpigmentation (PIH). Following a baseline assessment, we performed a single session of IPL treatment on the patient and evaluated the changes in melanocytic nevus-like dark macules, PIE, PIH, and sebum secretion through standardized photography. Results Compared to the baseline, we observed a significant reduction of the patient's melanocytic nevus-like dark macules and a significant improvement in PIE, PIH, and sebum secretion after a single IPL treatment. Conclusions This study provides preliminary evidence of the effects of IPL treatment on melanocytic nevi associated with severe acne vulgaris. Further research is warranted to elucidate the underlying mechanisms and promote the wider application of this treatment modality in managing acne sequelae.
Affiliation(s)
- Jinxiang Yang, Jinwen Shen, Yuwei Kong, Lei Wang, Zhirong Yao, Jianying Liang, Xia Yu: Department of Dermatology, Xinhua Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, People’s Republic of China; Institute of Dermatology, Shanghai Jiaotong University School of Medicine, Shanghai, People’s Republic of China
2. Chen JY, Fernandez K, Fadadu RP, Reddy R, Kim MO, Tan J, Wei ML. Skin Cancer Diagnosis by Lesion, Physician, and Examination Type: A Systematic Review and Meta-Analysis. JAMA Dermatol 2025; 161:135-146. PMID: 39535756; PMCID: PMC11561728; DOI: 10.1001/jamadermatol.2024.4382.
Abstract
Importance Skin cancer is the most common cancer in the US; accurate detection can minimize morbidity and mortality. Objective To assess the accuracy of skin cancer diagnosis by lesion type, physician specialty and experience, and physical examination method. Data Sources PubMed, Embase, and Web of Science. Study Selection Cross-sectional and case-control studies, randomized clinical trials, and nonrandomized controlled trials that used dermatologists or primary care physicians (PCPs) to examine keratinocytic and/or melanocytic skin lesions were included. Data Extraction and Synthesis Search terms, study objectives, and protocol methods were defined before study initiation. Data extraction was performed by a reviewer, with verification by a second reviewer. A mixed-effects model was used in the data analysis. Data analyses were performed from May 2022 to December 2023. Main Outcomes and Measures Meta-analysis of diagnostic accuracy comprised sensitivity and specificity by physician type (primary care physician or dermatologist; experienced or inexperienced) and examination method (in-person clinical examination and/or clinical images vs dermoscopy and/or dermoscopic images). Results In all, 100 studies were included in the analysis. With experienced dermatologists using clinical examination and clinical images, the sensitivity and specificity for diagnosing keratinocytic carcinomas were 79.0% and 89.1%, respectively; using dermoscopy and dermoscopic images, sensitivity and specificity were 83.7% and 87.4%, and for PCPs, 81.4% and 80.1%. Experienced dermatologists had 2.5-fold higher odds of accurate diagnosis of keratinocytic carcinomas using in-person dermoscopy and dermoscopic images compared with in-person clinical examination and images. When examining for melanoma using clinical examination and images, sensitivity and specificity were 76.9% and 89.1% for experienced dermatologists, 78.3% and 66.2% for inexperienced dermatologists, and 37.5% and 84.6% for PCPs, respectively; whereas when using dermoscopy and dermoscopic images, sensitivity and specificity were 85.7% and 81.3%, 78.0% and 69.5%, and 49.5% and 91.3%, respectively. Experienced dermatologists had 5.7-fold higher odds of accurate diagnosis of melanoma using dermoscopy compared with clinical examination. Compared with PCPs, experienced dermatologists had 13.3-fold higher odds of accurate diagnosis of melanoma using dermoscopic images. Conclusions and Relevance The findings of this systematic review and meta-analysis indicate that there are significant differences in diagnostic accuracy for skin cancer when comparing physician specialty and experience, and examination methods. These summary metrics of clinician diagnostic accuracy could be useful benchmarks for clinical trials, practitioner training, and the performance of emerging technologies.
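The sensitivities, specificities, and odds ratios summarized above come from standard 2x2 diagnostic-accuracy arithmetic. The short Python sketch below shows how those quantities are derived from a confusion matrix; the counts used here are hypothetical and are not data from the review.

```python
# Illustrative only: how sensitivity, specificity, and the diagnostic odds
# ratio (DOR) used in accuracy meta-analyses are computed from a 2x2 table.
# The counts below are hypothetical, not taken from the review.

def diagnostic_metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    # DOR = (TP/FN) / (FP/TN): odds of a positive call in disease vs. no disease
    dor = (tp * tn) / (fn * fp)
    return {"sensitivity": sensitivity, "specificity": specificity, "dor": dor}

if __name__ == "__main__":
    # Hypothetical reader performance on 200 lesions (100 melanomas, 100 benign)
    print(diagnostic_metrics(tp=86, fn=14, fp=19, tn=81))
```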
Affiliation(s)
- Jennifer Y Chen: San Francisco Veterans Affairs Health Care System, San Francisco, California
- Kristen Fernandez: San Francisco Veterans Affairs Health Care System, San Francisco, California
- Raj P Fadadu: San Francisco Veterans Affairs Health Care System, San Francisco, California
- Rasika Reddy: San Francisco Veterans Affairs Health Care System, San Francisco, California
- Mi-Ok Kim: Department of Epidemiology and Biostatistics, University of California, San Francisco; Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco
- Josephine Tan: San Francisco Library, University of California, San Francisco
- Maria L Wei: San Francisco Veterans Affairs Health Care System, San Francisco, California; Helen Diller Family Comprehensive Cancer Center, University of California, San Francisco; Department of Dermatology, University of California, San Francisco
3. Garbe C, Amaral T, Peris K, Hauschild A, Arenberger P, Basset-Seguin N, Bastholt L, Bataille V, Brochez L, Del Marmol V, Dréno B, Eggermont AMM, Fargnoli MC, Forsea AM, Höller C, Kaufmann R, Kelleners-Smeets N, Lallas A, Lebbé C, Leiter U, Longo C, Malvehy J, Moreno-Ramirez D, Nathan P, Pellacani G, Saiag P, Stockfleth E, Stratigos AJ, Van Akkooi ACJ, Vieira R, Zalaudek I, Lorigan P, Mandala M. European consensus-based interdisciplinary guideline for melanoma. Part 1: Diagnostics - Update 2024. Eur J Cancer 2025; 215:115152. PMID: 39700658; DOI: 10.1016/j.ejca.2024.115152.
Abstract
This guideline was developed in close collaboration with multidisciplinary experts from the European Association of Dermato-Oncology (EADO), the European Dermatology Forum (EDF) and the European Organization for Research and Treatment of Cancer (EORTC). Recommendations for the diagnosis and treatment of melanoma were developed on the basis of systematic literature research and consensus conferences. Cutaneous melanoma (CM) is the most dangerous form of skin tumor and accounts for 90 % of skin cancer mortality. The diagnosis of melanoma can be made clinically and must always be confirmed by dermoscopy. If melanoma is suspected, a histopathological examination is always required. Sequential digital dermoscopy and whole-body photography can be used in high-risk patients to improve the detection of early-stage melanoma. If available, confocal reflectance microscopy can also improve the clinical diagnosis in special cases. Melanoma is classified according to the 8th version of the American Joint Committee on Cancer classification. For thin melanomas up to a tumor thickness of 0.8 mm, no further diagnostic imaging is required. From stage IB, lymph node sonography is recommended, but no further imaging examinations. From stage IIB/C, whole-body examinations with computed tomography or positron emission tomography CT in combination with magnetic resonance imaging of the brain are recommended. From stage IIB/C and higher, a mutation test is recommended, especially for the BRAF V600 mutation. It is important to perform a structured follow-up to detect relapses and secondary primary melanomas as early as possible. A stage-based follow-up regimen is proposed, which in the experience of the guideline group covers the optimal requirements, although further studies may be considered. This guideline is valid until the end of 2026.
Affiliation(s)
- Claus Garbe: Center for Dermatooncology, Department of Dermatology, Eberhard Karls University, Tuebingen, Germany
- Teresa Amaral: Center for Dermatooncology, Department of Dermatology, Eberhard Karls University, Tuebingen, Germany
- Ketty Peris: Institute of Dermatology, Università Cattolica, Rome, and Fondazione Policlinico Universitario A. Gemelli - IRCCS, Rome, Italy
- Axel Hauschild: Department of Dermatology, University Hospital Schleswig-Holstein (UKSH), Campus Kiel, Kiel, Germany
- Petr Arenberger: Department of Dermatovenereology, Third Faculty of Medicine, Charles University, Prague, Czech Republic
- Nicole Basset-Seguin: Université Paris Cité, AP-HP Department of Dermatology, INSERM U976, Hôpital Saint-Louis, Paris, France
- Lars Bastholt: Department of Oncology, Odense University Hospital, Denmark
- Veronique Bataille: Twin Research and Genetic Epidemiology Unit, School of Basic & Medical Biosciences, King's College London, London SE1 7EH, UK
- Lieve Brochez: Department of Dermatology, Ghent University Hospital, Ghent, Belgium
- Veronique Del Marmol: Department of Dermatology, Erasme Hospital, Université Libre de Bruxelles, Brussels, Belgium
- Brigitte Dréno: Nantes Université, INSERM, CNRS, Immunology and New Concepts in ImmunoTherapy, INCIT, UMR 1302/EMR6001, F-44000 Nantes, France
- Alexander M M Eggermont: University Medical Center Utrecht & Princess Maxima Center, Utrecht, the Netherlands; Comprehensive Cancer Center Munich of the Technical University Munich and the Ludwig Maximilians University, Munich, Germany
- Ana-Maria Forsea: Dermatology Department, Elias University Hospital, Carol Davila University of Medicine and Pharmacy Bucharest, Romania
- Christoph Höller: Department of Dermatology, Medical University of Vienna, Austria
- Roland Kaufmann: Department of Dermatology, Venereology and Allergology, Frankfurt University Hospital, Frankfurt, Germany
- Nicole Kelleners-Smeets: Department of Dermatology, Maastricht University Medical Center+, Maastricht, the Netherlands
- Aimilios Lallas: First Department of Dermatology, Aristotle University, Thessaloniki, Greece
- Celeste Lebbé: Université Paris Cité, AP-HP Department of Dermatology, INSERM U976, Hôpital Saint-Louis, Paris, France
- Ulrike Leiter: Center for Dermatooncology, Department of Dermatology, Eberhard Karls University, Tuebingen, Germany
- Caterina Longo: Department of Dermatology, University of Modena and Reggio Emilia, Modena, and Azienda Unità Sanitaria Locale - IRCCS di Reggio Emilia, Skin Cancer Centre, Reggio Emilia, Italy
- Josep Malvehy: Melanoma Unit, Department of Dermatology, Hospital Clinic, IDIBAPS, Barcelona, Spain; University of Barcelona, Institut d'Investigacions Biomediques August Pi I Sunyer (IDIBAPS), Centro de Investigación Biomédica en Red de Enfermedades Raras CIBERER, Instituto de Salud Carlos III, Barcelona, Spain
- David Moreno-Ramirez: Medical & Surgical Dermatology Service, Hospital Universitario Virgen Macarena, Sevilla, Spain
- Paul Nathan: Mount Vernon Cancer Centre, Northwood, United Kingdom
- Philippe Saiag: University Department of Dermatology, Université de Versailles-Saint Quentin en Yvelines, APHP, Boulogne, France
- Eggert Stockfleth: Skin Cancer Center, Department of Dermatology, Ruhr-University Bochum, 44791 Bochum, Germany
- Alexander J Stratigos: 1st Department of Dermatology, National and Kapodistrian University of Athens School of Medicine, Andreas Sygros Hospital, Athens, Greece
- Alexander C J Van Akkooi: Melanoma Institute Australia, The University of Sydney, and Royal Prince Alfred Hospital, Sydney, New South Wales, Australia
- Ricardo Vieira: Department of Dermatology and Venereology, Centro Hospitalar Universitário de Coimbra, Coimbra, Portugal
- Iris Zalaudek: Dermatology Clinic, Maggiore Hospital, University of Trieste, Trieste, Italy
- Paul Lorigan: The University of Manchester, Oxford Rd, Manchester M13 9PL, UK
- Mario Mandala: University of Perugia, Unit of Medical Oncology, Santa Maria della Misericordia Hospital, Perugia, Italy
4. Marson JW, Tongdee E, Chen RM, Mojeski J, Richardson WM, Schneider JA, Siegel DM. Comparing commercially available dermatoscopes: illuminating the difference via a benign clinical lesion. Int J Dermatol 2025; 64:173-174. PMID: 38876476; DOI: 10.1111/ijd.17313.
Affiliation(s)
- Justin W Marson, Emily Tongdee, Rebecca M Chen, Jacob Mojeski, Jane A Schneider, Daniel M Siegel: Department of Dermatology, SUNY Downstate Health Sciences University, Brooklyn, NY, USA
5. Born LJ, Tembunde Y, Driscoll MS, Grant-Kels JM. Melanoma and melanocytic nevi in pregnancy. Clin Dermatol 2025; 43:71-77. PMID: 39900309; DOI: 10.1016/j.clindermatol.2025.01.017.
Abstract
A changing melanocytic nevus during pregnancy should be biopsied promptly. For women with the dysplastic nevus syndrome, there may be more changes in nevi during pregnancy, requiring close monitoring. Melanoma is one of the most common malignancies that occurs during pregnancy. Those diagnosed with a localized melanoma before, during, or after pregnancy do not have an altered prognosis; however, a few studies have noted thicker melanomas and poorer prognosis when melanoma is diagnosed in the first year postpartum, possibly due to a delay in diagnosis. Although local excision of melanomas can be performed safely during pregnancy, sentinel lymph node biopsy during pregnancy is controversial for the timing and method. There are safe methods of imaging with some special precautions for staging in pregnant women. Systemic therapy requires an interdisciplinary team to assist in patient decision-making because some of these agents are teratogenic. There is no reason to withhold combined estrogen-progestin oral contraceptives or menopausal hormone therapy in those with a previous diagnosis of melanoma, nor should future pregnancies be delayed in those diagnosed with localized melanoma. Only limited data are available concerning prognosis for women with a melanoma diagnosis after in vitro fertilization.
Affiliation(s)
- Yazmeen Tembunde: Department of Dermatology, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Marcia S Driscoll: Department of Dermatology, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Jane M Grant-Kels: Department of Dermatology, University of Florida College of Medicine, Gainesville, Florida, USA; Department of Dermatology, University of Connecticut School of Medicine, Farmington, Connecticut, USA
6. Junga A, Schmidle P, Pielage L, Schulze H, Hätscher O, Ständer S, Marschall B, Braun SA. New horizons in dermatological education: Skin cancer screening with virtual reality. J Eur Acad Dermatol Venereol 2024; 38:2259-2267. PMID: 38497674; PMCID: PMC11587684; DOI: 10.1111/jdv.19960.
Abstract
BACKGROUND Technological advances in the field of virtual reality (VR) offer new opportunities in many areas of life, including medical education. The University of Münster has been using VR scenarios in the education of medical students for several years, especially for situations that are difficult to reproduce in reality (e.g., brain death). Due to the consistently positive feedback from students, a dermatological VR scenario for skin cancer screening was developed. OBJECTIVES Presentation and first evaluation of the skin cancer screening VR scenario to determine to what extent the technical implementation of the scenario was evaluated overall by the students and how their subjective competence to perform a skin cancer screening changed over the course of the teaching unit (theory seminar, VR scenario, theoretical debriefing). METHODS Students (n = 140) participating in the curricular pilot project during the 2023 summer term were surveyed throughout the teaching unit using several established questionnaires (System Usability Scale, Simulation Task-Load-Index, Realism and Presence Questionnaire) as well as additional questions on cybersickness and subjective learning. RESULTS (i) The use of VR is technically feasible, (ii) students evaluate the VR scenario as a useful curricular supplement, and (iii) from the students' subjective perspective, a good learning outcome is achieved. Although preparation and follow-up appear to be important for overall learning, the greatest increase in subjective competence to perform a skin cancer screening is achieved by the VR scenario. CONCLUSIONS Technically feasible and positively evaluated by students, VR can already be a useful addition to dermatology education, although costs are still high. As a visual discipline, dermatology offers special opportunities to create VR scenarios that are not always available or comfortable for patients in reality. Additionally, VR scenarios guarantee the same conditions for all students, which is essential for a high-quality education.
Affiliation(s)
- Anna Junga: Institute of Education and Student Affairs, University of Münster, Münster, Germany; Department of Urology, Stiftungsklinikum PROSELIS, Recklinghausen, Germany
- Paul Schmidle: Department of Dermatology, Medical Faculty, University of Münster, Münster, Germany
- Leon Pielage: Institute for Geoinformatics, University of Münster, Münster, Germany
- Henriette Schulze: Institute of Education and Student Affairs, University of Münster, Münster, Germany
- Ole Hätscher: Institute of Education and Student Affairs, University of Münster, Münster, Germany; Department of Psychology, University of Münster, Münster, Germany
- Sonja Ständer: Department of Dermatology, Medical Faculty, University of Münster, Münster, Germany
- Bernhard Marschall: Institute of Education and Student Affairs, University of Münster, Münster, Germany
- Stephan Alexander Braun: Department of Dermatology, Medical Faculty, University of Münster, Münster, Germany; Department of Dermatology, Medical Faculty, Heinrich-Heine University, Düsseldorf, Germany
7. Al-masni MA, Al-Shamiri AK, Hussain D, Gu YH. A Unified Multi-Task Learning Model with Joint Reverse Optimization for Simultaneous Skin Lesion Segmentation and Diagnosis. Bioengineering (Basel) 2024; 11:1173. PMID: 39593832; PMCID: PMC11592164; DOI: 10.3390/bioengineering11111173.
Abstract
Classifying and segmenting skin cancer represent pivotal objectives for automated diagnostic systems that utilize dermoscopy images. However, these tasks present significant challenges due to the diverse shape variations of skin lesions and the inherently fuzzy nature of dermoscopy images, including low contrast and the presence of artifacts. Given the robust correlation between the classification of skin lesions and their segmentation, we propose that employing a combined learning method holds the promise of considerably enhancing the performance of both tasks. In this paper, we present a unified multi-task learning strategy that concurrently classifies abnormalities of skin lesions and allows for the joint segmentation of lesion boundaries. This approach integrates an optimization technique known as joint reverse learning, which fosters mutual enhancement through extracting shared features and limiting task dominance across the two tasks. The effectiveness of the proposed method was assessed using two publicly available datasets, ISIC 2016 and PH2, which included melanoma and benign skin cancers. In contrast to the single-task learning strategy, which solely focuses on either classification or segmentation, the experimental findings demonstrated that the proposed network improves the diagnostic capability of skin tumor screening and analysis. The proposed method achieves a significant segmentation performance on skin lesion boundaries, with Dice Similarity Coefficients (DSC) of 89.48% and 88.81% on the ISIC 2016 and PH2 datasets, respectively. Additionally, our multi-task learning approach enhances classification, increasing the F1 score from 78.26% (baseline ResNet50) to 82.07% on ISIC 2016 and from 82.38% to 85.50% on PH2. This work showcases its potential applicability across varied clinical scenarios.
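The abstract describes a shared-encoder model with separate segmentation and diagnosis heads trained jointly. The PyTorch sketch below illustrates that general multi-task pattern with a toy backbone; it is not the authors' network and omits their joint reverse optimization scheme.

```python
# Minimal sketch of the shared-encoder multi-task pattern: one backbone feeds
# both a segmentation head and a classification head, and both losses are
# optimized together. Illustration only; not the paper's architecture.
import torch
import torch.nn as nn

class MultiTaskLesionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny stand-in backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)      # per-pixel lesion mask logits
        self.cls_head = nn.Sequential(           # image-level diagnosis logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)                  # features shared by both tasks
        return self.seg_head(feats), self.cls_head(feats)

model = MultiTaskLesionNet()
images = torch.randn(4, 3, 128, 128)
masks = torch.randint(0, 2, (4, 1, 128, 128)).float()
labels = torch.randint(0, 2, (4,))
seg_logits, cls_logits = model(images)
loss = nn.BCEWithLogitsLoss()(seg_logits, masks) + nn.CrossEntropyLoss()(cls_logits, labels)
loss.backward()
```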
Affiliation(s)
- Mohammed A. Al-masni: Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea
- Abobakr Khalil Al-Shamiri: School of Computer Science, University of Southampton Malaysia, Iskandar Puteri 79100, Johor, Malaysia
- Dildar Hussain: Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea
- Yeong Hyeon Gu: Department of Artificial Intelligence and Data Science, College of AI Convergence, Sejong University, Seoul 05006, Republic of Korea
8. Mateen M, Hayat S, Arshad F, Gu YH, Al-antari MA. Hybrid Deep Learning Framework for Melanoma Diagnosis Using Dermoscopic Medical Images. Diagnostics (Basel) 2024; 14:2242. PMID: 39410645; PMCID: PMC11476274; DOI: 10.3390/diagnostics14192242.
Abstract
Background: Melanoma, or skin cancer, is a dangerous form of cancer that is the major cause of the demise of thousands of people around the world. Methods: In recent years, deep learning has become more popular for analyzing and detecting these medical issues. In this paper, a hybrid deep learning approach has been proposed based on U-Net for image segmentation, Inception-ResNet-v2 for feature extraction, and the Vision Transformer model with a self-attention mechanism for refining the features for early and accurate diagnosis and classification of skin cancer. Furthermore, in the proposed approach, hyperparameter tuning helps to obtain more accurate and optimized results for image classification. Results: Dermoscopic shots gathered by the worldwide skin imaging collaboration (ISIC2020) challenge dataset are used in the proposed research work and achieved 98.65% accuracy, 99.20% sensitivity, and 98.03% specificity, which outperforms the other existing approaches for skin cancer classification. Furthermore, the HAM10000 dataset is used for ablation studies to compare and validate the performance of the proposed approach. Conclusions: The achieved outcome suggests that the proposed approach would be able to serve as a valuable tool for assisting dermatologists in the early detection of melanoma.
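As a rough illustration of the hybrid pipeline described above (segmentation, feature extraction, self-attention refinement, classification), the following PyTorch sketch wires small stand-in modules together in that order; the U-Net, Inception-ResNet-v2, and Vision Transformer components of the paper are not reproduced here.

```python
# Stand-in modules only: this shows the order of operations in a
# segment -> extract -> refine-with-self-attention -> classify pipeline,
# not the models used in the paper.
import torch
import torch.nn as nn

segmenter = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 1))                     # stand-in "U-Net"
extractor = nn.Sequential(nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())  # stand-in CNN features
refiner = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
classifier = nn.Linear(64, 2)

x = torch.randn(4, 3, 128, 128)
mask = torch.sigmoid(segmenter(x))                 # soft lesion mask
feats = extractor(x * mask)                        # features from the masked lesion
refined = refiner(feats.unsqueeze(1)).squeeze(1)   # self-attention refinement
logits = classifier(refined)
print(logits.shape)                                # torch.Size([4, 2])
```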
Affiliation(s)
- Muhammad Mateen: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, China
- Shaukat Hayat: Department of Software Engineering, International Islamic University, Islamabad 44000, Pakistan
- Fizzah Arshad: Department of Computer Science, Air University Multan Campus, Multan 61000, Pakistan
- Yeong-Hyeon Gu: Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari: Department of Artificial Intelligence and Data Science, College of AI Convergence, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
9. Sosnowska-Sienkiewicz P, Januszkiewicz-Lewandowska D, Calik J, Telman-Kołodziejczyk G, Mańkowski P. Nevi and Melanoma in Children: What to Do in Daily Medical Practice: Encyclopedia for Pediatricians and Family Doctors. Diagnostics (Basel) 2024; 14:2004. PMID: 39335684; PMCID: PMC11431136; DOI: 10.3390/diagnostics14182004.
Abstract
Melanocytic nevi, commonly known as moles, are benign skin lesions that often occur in children and adolescents. Overall, they are less common in children compared to adults. Understanding the diagnosis and management of melanocytic nevi and risk factors for melanoma development is crucial for their early detection and appropriate treatment. This paper presents children's most common melanocytic nevi, including their epidemiology, morphology, diagnostic methods, and treatment.
Affiliation(s)
- Jacek Calik: Department of Clinical Oncology, Wroclaw Medical University, 50-556 Wrocław, Poland
- Gabriela Telman-Kołodziejczyk: Department of Pediatric Oncology, Hematology and Transplantology, Poznan University of Medical Sciences, 60-572 Poznan, Poland
- Przemysław Mańkowski: Department of Pediatric Surgery, Traumatology and Urology, Poznan University of Medical Sciences, 60-572 Poznan, Poland
10. Cai L, Hou K, Zhou S. Intelligent skin lesion segmentation using deformable attention Transformer U-Net with bidirectional attention mechanism in skin cancer images. Skin Res Technol 2024; 30:e13783. PMID: 39113617; PMCID: PMC11306920; DOI: 10.1111/srt.13783.
Abstract
BACKGROUND In recent years, the increasing prevalence of skin cancers, particularly malignant melanoma, has become a major concern for public health. The development of accurate automated segmentation techniques for skin lesions holds immense potential in alleviating the burden on medical professionals. It is of substantial clinical importance for the early identification and intervention of skin cancer. Nevertheless, the irregular shape, uneven color, and noise interference of the skin lesions have presented significant challenges to the precise segmentation. Therefore, it is crucial to develop a high-precision and intelligent skin lesion segmentation framework for clinical treatment. METHODS A precision-driven segmentation model for skin cancer images is proposed based on the Transformer U-Net, called BiADATU-Net, which integrates the deformable attention Transformer and bidirectional attention blocks into the U-Net. The encoder part utilizes deformable attention Transformer with dual attention block, allowing adaptive learning of global and local features. The decoder part incorporates specifically tailored scSE attention modules within skip connection layers to capture image-specific context information for strong feature fusion. Additionally, deformable convolution is aggregated into two different attention blocks to learn irregular lesion features for high-precision prediction. RESULTS A series of experiments are conducted on four skin cancer image datasets (i.e., ISIC2016, ISIC2017, ISIC2018, and PH2). The findings show that our model exhibits satisfactory segmentation performance, all achieving an accuracy rate of over 96%. CONCLUSION Our experiment results validate the proposed BiADATU-Net achieves competitive performance supremacy compared to some state-of-the-art methods. It is potential and valuable in the field of skin lesion segmentation.
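Of the components named in the abstract, the scSE attention module placed in the decoder skip connections is a previously published building block (concurrent spatial and channel squeeze-and-excitation). The sketch below shows that block in its commonly used form, assuming this is the module the authors refer to; the deformable attention Transformer encoder and bidirectional attention blocks of BiADATU-Net are not shown.

```python
# scSE block in its commonly published form: channel recalibration and
# per-pixel spatial recalibration computed in parallel, then combined.
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # channel squeeze-excitation: global pooling -> bottleneck -> channel gates
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        # spatial squeeze-excitation: 1x1 conv -> per-pixel gate
        self.sse = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.cse(x) + x * self.sse(x)   # recalibrate, then combine

feat = torch.randn(2, 64, 32, 32)
print(SCSEBlock(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```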
Affiliation(s)
- Lili Cai: School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
- Keke Hou: School of Health Sciences, Guangzhou Xinhua University, Guangzhou, China
- Su Zhou: School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, China
11. Zhang X. Dermoscopic features of melanocyte-derived tumors in mucosal sites: a case control study. Arch Dermatol Res 2024; 316:354. PMID: 38850370; DOI: 10.1007/s00403-024-03084-z.
Affiliation(s)
- Xiang Zhang: Department of Dermatology, Second Affiliated Hospital of Wannan Medical College, Wuhu, 241000, China
12. Sun J, Karasaki KM, Farma JM. The Use of Gene Expression Profiling and Biomarkers in Melanoma Diagnosis and Predicting Recurrence: Implications for Surveillance and Treatment. Cancers (Basel) 2024; 16:583. PMID: 38339333; PMCID: PMC10854922; DOI: 10.3390/cancers16030583.
Abstract
Cutaneous melanoma is becoming more prevalent in the United States and has the highest mortality among cutaneous malignancies. The majority of melanomas are diagnosed at an early stage and, as such, survival is generally favorable. However, there remains prognostic uncertainty among subsets of early- and intermediate-stage melanoma patients, some of whom go on to develop advanced disease while others remain disease-free. Melanoma gene expression profiling (GEP) has evolved with the notion to help bridge this gap and identify higher- or lower-risk patients to better tailor treatment and surveillance protocols. These tests seek to prognosticate melanomas independently of established AJCC 8 cancer staging and clinicopathologic features (sex, age, primary tumor location, thickness, ulceration, mitotic rate, lymphovascular invasion, microsatellites, and/or SLNB status). While there is a significant opportunity to improve the accuracy of melanoma prognostication and diagnosis, it is equally important to understand the current landscape of molecular profiling for melanoma treatment. Society guidelines currently do not recommend molecular testing outside of clinical trials for melanoma clinical decision making, citing insufficient high-quality evidence guiding indications for the testing and interpretation of results. The goal of this chapter is to review the available literature for GEP testing for melanoma diagnosis and prognostication and understand their place in current treatment paradigms.
Affiliation(s)
- James Sun: Department of Surgical Oncology, Fox Chase Cancer Center, Philadelphia, PA 19002, USA
- Jeffrey M. Farma: Department of Surgical Oncology, Fox Chase Cancer Center, Philadelphia, PA 19002, USA
13. Azeem M, Kiani K, Mansouri T, Topping N. SkinLesNet: Classification of Skin Lesions and Detection of Melanoma Cancer Using a Novel Multi-Layer Deep Convolutional Neural Network. Cancers (Basel) 2023; 16:108. PMID: 38201535; PMCID: PMC10778045; DOI: 10.3390/cancers16010108.
Abstract
Skin cancer is a widespread disease that typically develops on the skin due to frequent exposure to sunlight. Although cancer can appear on any part of the human body, skin cancer accounts for a significant proportion of all new cancer diagnoses worldwide. There are substantial obstacles to the precise diagnosis and classification of skin lesions because of morphological variety and indistinguishable characteristics across skin malignancies. Recently, deep learning models have been used in the field of image-based skin-lesion diagnosis and have demonstrated diagnostic efficiency on par with that of dermatologists. To increase classification efficiency and accuracy for skin lesions, a cutting-edge multi-layer deep convolutional neural network termed SkinLesNet was built in this study. The dataset used in this study was extracted from the PAD-UFES-20 dataset and was augmented. The PAD-UFES-20-Modified dataset includes three common forms of skin lesions: seborrheic keratosis, nevus, and melanoma. To comprehensively assess SkinLesNet's performance, its evaluation was expanded beyond the PAD-UFES-20-Modified dataset. Two additional datasets, HAM10000 and ISIC2017, were included, and SkinLesNet was compared to the widely used ResNet50 and VGG16 models. This broader evaluation confirmed SkinLesNet's effectiveness, as it consistently outperformed both benchmarks across all datasets.
Affiliation(s)
- Muhammad Azeem: School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK
14. Ali MU, Khalid M, Alshanbari H, Zafar A, Lee SW. Enhancing Skin Lesion Detection: A Multistage Multiclass Convolutional Neural Network-Based Framework. Bioengineering (Basel) 2023; 10:1430. PMID: 38136020; PMCID: PMC10741172; DOI: 10.3390/bioengineering10121430.
Abstract
The early identification and treatment of various dermatological conditions depend on the detection of skin lesions. Due to advancements in computer-aided diagnosis and machine learning approaches, learning-based skin lesion analysis methods have attracted much interest recently. Employing the concept of transfer learning, this research proposes a deep convolutional neural network (CNN)-based multistage and multiclass framework to categorize seven types of skin lesions. In the first stage, a CNN model was developed to classify skin lesion images into two classes, namely benign and malignant. In the second stage, the model was then used with the transfer learning concept to further categorize benign lesions into five subcategories (melanocytic nevus, actinic keratosis, benign keratosis, dermatofibroma, and vascular) and malignant lesions into two subcategories (melanoma and basal cell carcinoma). The frozen weights of the CNN developed-trained with correlated images benefited the transfer learning using the same type of images for the subclassification of benign and malignant classes. The proposed multistage and multiclass technique improved the classification accuracy of the online ISIC2018 skin lesion dataset by up to 93.4% for benign and malignant class identification. Furthermore, a high accuracy of 96.2% was achieved for subclassification of both classes. Sensitivity, specificity, precision, and F1-score metrics further validated the effectiveness of the proposed multistage and multiclass framework. Compared to existing CNN models described in the literature, the proposed approach took less time to train and had a higher classification rate.
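The two-stage transfer-learning idea described above (train a benign/malignant classifier, then reuse its frozen weights with a new head for subclassification) can be sketched as follows; a torchvision ResNet-18 stands in for the authors' CNN, whose exact architecture is not specified here.

```python
# Sketch of the two-stage transfer-learning pattern: stage-1 weights are reused
# and frozen, and only a new output layer is trained for the subcategories.
import copy
import torch.nn as nn
from torchvision import models

# Stage 1: binary benign-vs-malignant screener (training loop omitted)
stage1 = models.resnet18(weights=None)
stage1.fc = nn.Linear(stage1.fc.in_features, 2)
# ... train stage1 on benign/malignant labels ...

# Stage 2: reuse the stage-1 weights, freeze them, and attach a new head for
# the five benign subcategories (a second copy would handle malignant subtypes).
stage2_benign = copy.deepcopy(stage1)          # transfer the learned weights
for p in stage2_benign.parameters():
    p.requires_grad = False                    # frozen transferred layers
stage2_benign.fc = nn.Linear(stage2_benign.fc.in_features, 5)  # trainable new head
```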
Affiliation(s)
- Muhammad Umair Ali: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Majdi Khalid: Department of Computer Science and Artificial Intelligence, College of Computers, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Hanan Alshanbari: Department of Computer Science and Artificial Intelligence, College of Computers, Umm Al-Qura University, Makkah 21955, Saudi Arabia
- Amad Zafar: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee: Department of Precision Medicine, Sungkyunkwan University School of Medicine, Suwon 16419, Republic of Korea
15. Huang HY, Hsiao YP, Karmakar R, Mukundan A, Chaudhary P, Hsieh SC, Wang HC. A Review of Recent Advances in Computer-Aided Detection Methods Using Hyperspectral Imaging Engineering to Detect Skin Cancer. Cancers (Basel) 2023; 15:5634. PMID: 38067338; PMCID: PMC10705122; DOI: 10.3390/cancers15235634.
Abstract
Skin cancer, a malignant neoplasm originating from skin cell types including keratinocytes, melanocytes, and sweat glands, comprises three primary forms: basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and malignant melanoma (MM). BCC and SCC, while constituting the most prevalent categories of skin cancer, are generally considered less aggressive compared to MM. Notably, MM possesses a greater capacity for invasiveness, enabling infiltration into adjacent tissues and dissemination via both the circulatory and lymphatic systems. Risk factors associated with skin cancer encompass ultraviolet (UV) radiation exposure, fair skin complexion, a history of sunburn incidents, genetic predisposition, immunosuppressive conditions, and exposure to environmental carcinogens. Early detection of skin cancer is of paramount importance to optimize treatment outcomes and preclude the progression of disease, either locally or to distant sites. In pursuit of this objective, numerous computer-aided diagnosis (CAD) systems have been developed. Hyperspectral imaging (HSI), distinguished by its capacity to capture information spanning the electromagnetic spectrum, surpasses conventional RGB imaging, which relies solely on three color channels. Consequently, this study offers a comprehensive exploration of recent CAD investigations pertaining to skin cancer detection and diagnosis utilizing HSI, emphasizing diagnostic performance parameters such as sensitivity and specificity.
Affiliation(s)
- Hung-Yi Huang: Department of Dermatology, Ditmanson Medical Foundation Chiayi Christian Hospital, Chia Yi City 60002, Taiwan
- Yu-Ping Hsiao: Department of Dermatology, Chung Shan Medical University Hospital, No.110, Sec. 1, Jianguo N. Rd., South District, Taichung City 40201, Taiwan; Institute of Medicine, School of Medicine, Chung Shan Medical University, No.110, Sec. 1, Jianguo N. Rd., South District, Taichung City 40201, Taiwan
- Riya Karmakar: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi City 62102, Taiwan
- Arvind Mukundan: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi City 62102, Taiwan
- Pramod Chaudhary: Department of Aeronautical Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, Chennai 600 062, India
- Shang-Chin Hsieh: Department of Plastic Surgery, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st. Rd., Lingya District, Kaohsiung 80284, Taiwan
- Hsiang-Chen Wang: Department of Mechanical Engineering, National Chung Cheng University, 168, University Rd., Min Hsiung, Chia Yi City 62102, Taiwan; Department of Medical Research, Dalin Tzu Chi General Hospital, No. 2, Min-Sheng Rd., Dalin Town, Chia Yi City 62247, Taiwan; Technology Development, Hitspectra Intelligent Technology Co., Ltd., Kaohsiung 80661, Taiwan
16. Nazari S, Garcia R. Automatic Skin Cancer Detection Using Clinical Images: A Comprehensive Review. Life (Basel) 2023; 13:2123. PMID: 38004263; PMCID: PMC10672549; DOI: 10.3390/life13112123.
Abstract
Skin cancer has become increasingly common over the past decade, with melanoma being the most aggressive type. Hence, early detection of skin cancer and melanoma is essential in dermatology. Computational methods can be a valuable tool for assisting dermatologists in identifying skin cancer. Most research in machine learning for skin cancer detection has focused on dermoscopy images due to the existence of larger image datasets. However, general practitioners typically do not have access to a dermoscope and must rely on naked-eye examinations or standard clinical images. By using standard, off-the-shelf cameras to detect high-risk moles, machine learning has also proven to be an effective tool. The objective of this paper is to provide a comprehensive review of image-processing techniques for skin cancer detection using clinical images. In this study, we evaluate 51 state-of-the-art articles that have used machine learning methods to detect skin cancer over the past decade, focusing on clinical datasets. Even though several studies have been conducted in this field, there are still few publicly available clinical datasets with sufficient data that can be used as a benchmark, especially when compared to the existing dermoscopy databases. In addition, we observed that the available artifact removal approaches are not quite adequate in some cases and may also have a negative impact on the models. Moreover, the majority of the reviewed articles are working with single-lesion images and do not consider typical mole patterns and temporal changes in the lesions of each patient.
17. Ma X, Shan J, Ning F, Li W, Li H. EFFNet: A skin cancer classification model based on feature fusion and random forests. PLoS One 2023; 18:e0293266. PMID: 37871038; PMCID: PMC10593232; DOI: 10.1371/journal.pone.0293266.
Abstract
Computer-aided diagnosis techniques based on deep learning in skin cancer classification have disadvantages such as unbalanced datasets, redundant information in the extracted features and ignored interactions of partial features among different convolutional layers. In order to overcome these disadvantages, we propose a skin cancer classification model named EFFNet, which is based on feature fusion and random forests. Firstly, the model preprocesses the HAM10000 dataset to make each category of training set images balanced by image enhancement technology. Then, the pre-training weights of the EfficientNetV2 model on the ImageNet dataset are fine-tuned on the HAM10000 skin cancer dataset. After that, an improved hierarchical bilinear pooling is introduced to capture the interactions of some features between the layers and enhance the expressive ability of features. Finally, the fused features are passed into the random forests for classification prediction. The experimental results show that the accuracy, recall, precision and F1-score of the model reach 94.96%, 93.74%, 93.16% and 93.24% respectively. Compared with other models, the accuracy rate is improved to some extent and the highest accuracy rate can be increased by about 10%.
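The hand-off from a deep feature extractor to a random forest classifier that the abstract describes can be sketched as below; a torchvision EfficientNetV2-S backbone and a scikit-learn forest stand in for the authors' pipeline, and the hierarchical bilinear pooling step is omitted.

```python
# Sketch of "CNN embeddings + random forest": the backbone produces per-image
# feature vectors and the forest makes the final class prediction.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

backbone = models.efficientnet_v2_s(weights=None)  # pretrained weights optional
backbone.classifier = nn.Identity()                # keep the pooled 1280-d embedding
backbone.eval()

def embed(images: torch.Tensor) -> np.ndarray:
    with torch.no_grad():
        return backbone(images).numpy()

# Random tensors stand in for preprocessed HAM10000 batches (7 lesion classes).
train_x, train_y = torch.randn(32, 3, 224, 224), np.random.randint(0, 7, 32)
test_x = torch.randn(8, 3, 224, 224)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(embed(train_x), train_y)
print(forest.predict(embed(test_x)))
```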
Affiliation(s)
- Xiaopu Ma: School of Computer Science and Technology, Nanyang Normal University, Nanyang, Henan, China
- Jiangdan Shan: School of Life Sciences and Agricultural Engineering, Nanyang Normal University, Nanyang, Henan, China
- Fei Ning: School of Life Sciences and Agricultural Engineering, Nanyang Normal University, Nanyang, Henan, China
- Wentao Li: School of Computer Science and Technology, Nanyang Normal University, Nanyang, Henan, China
- He Li: School of Computer Science and Technology, Nanyang Normal University, Nanyang, Henan, China
18. Bibi S, Khan MA, Shah JH, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics (Basel) 2023; 13:3063. PMID: 37835807; PMCID: PMC10572512; DOI: 10.3390/diagnostics13193063.
Abstract
Cancer is one of the leading significant causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence. The considerable death rate linked with melanoma requires early detection to receive immediate and successful treatment. Lesion detection and classification are more challenging due to many forms of artifacts such as hairs, noise, and irregularity of lesion shape, color, irrelevant features, and textures. In this work, we proposed a deep-learning architecture for classifying multiclass skin cancer and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DensNet-201, are modified in terms of a residual block at the end and trained through transfer learning. In the learning process, the Genetic algorithm is applied to select hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the accuracy of the correct classification, but some irrelevant information is also observed. Therefore, an algorithm is developed to select the best features called marine predator optimization (MPA) controlled Reyni Entropy. The selected features are finally classified using machine learning classifiers for the final classification. Two datasets, ISIC2018 and ISIC2019, have been selected for the experimental process. On these datasets, the obtained maximum accuracy of 85.4% and 98.80%, respectively. To prove the effectiveness of the proposed methods, a detailed comparison is conducted with several recent techniques and shows the proposed framework outperforms.
Affiliation(s)
- Sobia Bibi: Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Muhammad Attique Khan: Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102-2801, Lebanon; Department of CS, HITEC University, Taxila 47080, Pakistan
- Jamal Hussain Shah: Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Robertas Damaševičius: Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Areej Alasiry: College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui: College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Anum Masood: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
19. Nervil GG, Ternov NK, Vestergaard T, Sølvsten H, Chakera AH, Tolsgaard MG, Hölmich LR. Improving Skin Cancer Diagnostics Through a Mobile App With a Large Interactive Image Repository: Randomized Controlled Trial. JMIR Dermatol 2023; 6:e48357. PMID: 37624707; PMCID: PMC10448292; DOI: 10.2196/48357.
Abstract
BACKGROUND Skin cancer diagnostics is challenging, and mastery requires extended periods of dedicated practice. OBJECTIVE The aim of the study was to determine if self-paced pattern recognition training in skin cancer diagnostics with clinical and dermoscopic images of skin lesions using a large-scale interactive image repository (LIIR) with patient cases improves primary care physicians' (PCPs') diagnostic skills and confidence. METHODS A total of 115 PCPs were randomized (allocation ratio 3:1) to receive or not receive self-paced pattern recognition training in skin cancer diagnostics using an LIIR with patient cases through a quiz-based smartphone app during an 8-day period. The participants' ability to diagnose skin cancer was evaluated using a 12-item multiple-choice questionnaire prior to and 8 days after the educational intervention period. Their thoughts on the use of dermoscopy were assessed using a study-specific questionnaire. A learning curve was calculated through the analysis of data from the mobile app. RESULTS On average, participants in the intervention group spent 2 hours 26 minutes quizzing digital patient cases and 41 minutes reading the educational material. They had an average preintervention multiple choice questionnaire score of 52.0% of correct answers, which increased to 66.4% on the postintervention test; a statistically significant improvement of 14.3 percentage points (P<.001; 95% CI 9.8-18.9) with intention-to-treat analysis. Analysis of participants who received the intervention as per protocol (500 patient cases in 8 days) showed an average increase of 16.7 percentage points (P<.001; 95% CI 11.3-22.0) from 53.9% to 70.5%. Their overall ability to correctly recognize malignant lesions in the LIIR patient cases improved over the intervention period by 6.6 percentage points from 67.1% (95% CI 65.2-69.3) to 73.7% (95% CI 72.5-75.0) and their ability to set the correct diagnosis improved by 10.5 percentage points from 42.5% (95% CI 40.2%-44.8%) to 53.0% (95% CI 51.3-54.9). The diagnostic confidence of participants in the intervention group increased on a scale from 1 to 4 by 32.9% from 1.6 to 2.1 (P<.001). Participants in the control group did not increase their postintervention score or their diagnostic confidence during the same period. CONCLUSIONS Self-paced pattern recognition training in skin cancer diagnostics through the use of a digital LIIR with patient cases delivered by a quiz-based mobile app improves the diagnostic accuracy of PCPs. TRIAL REGISTRATION ClinicalTrials.gov NCT05661370; https://classic.clinicaltrials.gov/ct2/show/NCT05661370.
Affiliation(s)
- Gustav Gede Nervil: Department of Plastic Surgery, Herlev-Gentofte Hospital, Herlev, Denmark
- Tine Vestergaard: Department of Dermatology and Allergy Centre, Odense University Hospital, Odense, Denmark
- Martin Grønnebæk Tolsgaard: Copenhagen Academy for Medical Education and Simulation, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark; Department of Obstetrics, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Lisbet Rosenkrantz Hölmich: Department of Plastic Surgery, Herlev-Gentofte Hospital, Herlev, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
20. Schuh S, Schiele S, Thamm J, Kranz S, Welzel J, Blum A. Implementation of a dermatoscopy curriculum during residency at Augsburg University Hospital in Germany. J Dtsch Dermatol Ges 2023; 21:872-879. PMID: 37235503; DOI: 10.1111/ddg.15115.
Abstract
BACKGROUND AND OBJECTIVES To date, there is no structured program for dermatoscopy training during residency in Germany. Whether and how much dermatoscopy training is acquired is left to the initiative of each resident, although dermatoscopy is one of the core competencies of dermatological training and daily practice. The aim of the study was to establish a structured dermatoscopy curriculum during residency at the University Hospital Augsburg. PATIENTS AND METHODS An online platform with dermatoscopy modules was created, accessible regardless of time and place. Practical skills were acquired under the personal guidance of a dermatoscopy expert. Participants were tested on their level of knowledge before and after completing the modules. Test scores on management decisions and correct dermatoscopic diagnosis were analyzed. RESULTS Results of 28 participants showed improvements in management decisions from pre- to posttest (74.0% vs. 89.4%) and in dermatoscopic accuracy (65.0% vs. 85.6%). Pre- vs. posttest differences in test score (7.05/10 vs. 8.94/10 points) and correct diagnosis were significant (p < 0.001). CONCLUSIONS The dermatoscopy curriculum increases the number of correct management decisions and dermatoscopy diagnoses. This will result in more skin cancers being detected, and fewer benign lesions being excised. The curriculum can be offered to other dermatology training centers and medical professionals.
Collapse
Affiliation(s)
- Sandra Schuh
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
| | - Stefan Schiele
- Institute of Mathematics, University of Augsburg, Augsburg, Germany
| | - Janis Thamm
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
| | - Stefanie Kranz
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
| | - Julia Welzel
- Department of Dermatology and Allergology, University Hospital Augsburg, Augsburg, Germany
| | - Andreas Blum
- Public, Private and Teaching Practice of Dermatology, Konstanz, Germany
| |
Collapse
|
21
|
Zhang Z, Ye S, Liu Z, Wang H, Ding W. Deep Hyperspherical Clustering for Skin Lesion Medical Image Segmentation. IEEE J Biomed Health Inform 2023; 27:3770-3781. [PMID: 37022227 DOI: 10.1109/jbhi.2023.3240297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Diagnosis of skin lesions based on imaging techniques remains a challenging task because data (knowledge) uncertainty may reduce accuracy and lead to imprecise results. This paper investigates a new deep hyperspherical clustering (DHC) method for skin lesion medical image segmentation by combining deep convolutional neural networks and the theory of belief functions (TBF). The proposed DHC aims to eliminate the dependence on labeled data, improve segmentation performance, and characterize the imprecision caused by data (knowledge) uncertainty. First, the SLIC superpixel algorithm is employed to group the image into multiple meaningful superpixels, aiming to maximize the use of context without destroying the boundary information. Second, an autoencoder network is designed to transform the superpixels' information into latent features. Third, a hypersphere loss is developed to train the autoencoder network. The loss is defined to map the input to a pair of hyperspheres so that the network can perceive tiny differences. Finally, the result is redistributed to characterize the imprecision caused by data (knowledge) uncertainty based on the TBF. The proposed DHC method can effectively characterize the imprecision between skin lesions and non-lesions, which is particularly important for medical procedures. A series of experiments on four dermoscopic benchmark datasets demonstrates that the proposed DHC yields better segmentation performance than other typical methods, increasing the accuracy of the predictions while also perceiving imprecise regions.
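As a rough, non-authoritative sketch of the first two stages described above (SLIC superpixel grouping and an autoencoder over superpixel features), something like the following could be used; the hypersphere loss and the belief-function redistribution are specific to the paper and are only indicated by a comment, and the feature choice (mean colour per superpixel) and layer sizes are assumptions.

```python
# Illustrative sketch (not the authors' code): SLIC superpixel pooling followed by a
# small autoencoder over the per-superpixel features.
import numpy as np
import torch
import torch.nn as nn
from skimage.segmentation import slic

def superpixel_features(image):
    """Group an RGB image (H, W, 3, float in [0, 1]) into SLIC superpixels and
    return the mean colour of each superpixel as its feature vector."""
    segments = slic(image, n_segments=200, compactness=10, start_label=0)
    feats = [image[segments == s].mean(axis=0) for s in np.unique(segments)]
    return torch.tensor(np.stack(feats), dtype=torch.float32)  # (n_superpixels, 3)

class SuperpixelAutoencoder(nn.Module):
    def __init__(self, in_dim=3, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

# Training would minimise a reconstruction term plus the paper's hypersphere loss
# (not reproduced here), e.g. loss = mse(recon, feats) + hypersphere_term(z).
```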
Collapse
|
22
|
Schuh S, Schiele S, Thamm J, Kranz S, Welzel J, Blum A. Implementierung eines Dermatoskopie-Curriculums in der Facharztausbildung am Universitätsklinikum Augsburg. J Dtsch Dermatol Ges 2023; 21:872-881. [PMID: 37574685 DOI: 10.1111/ddg.15115_g] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Accepted: 04/04/2023] [Indexed: 08/15/2023]
Abstract
BACKGROUND AND OBJECTIVES To date, there is no structured program for dermatoscopy training during residency in Germany. Whether and to what extent residents train in dermatoscopy is left to the initiative of each individual, although dermatoscopy is one of the core competencies of dermatological training and daily practice. The aim of the study was to establish a structured dermatoscopy curriculum during dermatology residency at the University Hospital Augsburg. PATIENTS AND METHODS An online platform with dermatoscopy modules was created, accessible from anywhere and at any time. Practical skills were acquired under the individual guidance of a dermatoscopy expert. Participants were tested on their level of knowledge before and after completing the modules. Test results on therapeutic management and correct dermatoscopic diagnosis were analyzed. RESULTS The results of the 28 participants improved from pre- to posttest for management decisions (74.0% vs. 89.4%) and for dermatoscopic accuracy (65.0% vs. 85.6%). The pre- vs. posttest differences in total score (7.05/10 vs. 8.94/10 points) and in correct diagnosis were significant (p < 0.001). CONCLUSIONS The dermatoscopy curriculum improves participants' management decisions and dermatoscopic diagnostics. This will lead to more skin cancers being detected and fewer benign lesions being excised. The curriculum can be offered to other dermatology training centers and healthcare professionals.
Collapse
Affiliation(s)
- Sandra Schuh
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
| | | | - Janis Thamm
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
| | - Stefanie Kranz
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
| | - Julia Welzel
- Klinik für Dermatologie und Allergologie, Universitätsklinikum Augsburg
| | - Andreas Blum
- Hautarzt- und Lehrpraxis für Dermatologie, Konstanz
| |
Collapse
|
23
|
Dimas G, Cholopoulou E, Iakovidis DK. E pluribus unum interpretable convolutional neural networks. Sci Rep 2023; 13:11421. [PMID: 37452133 PMCID: PMC10349135 DOI: 10.1038/s41598-023-38459-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 07/08/2023] [Indexed: 07/18/2023] Open
Abstract
The adoption of convolutional neural network (CNN) models in high-stake domains is hindered by their inability to meet society's demand for transparency in decision-making. So far, a growing number of methodologies have emerged for developing CNN models that are interpretable by design. However, such models are not capable of providing interpretations in accordance with human perception, while maintaining competent performance. In this paper, we tackle these challenges with a novel, general framework for instantiating inherently interpretable CNN models, named E pluribus unum interpretable CNN (EPU-CNN). An EPU-CNN model consists of CNN sub-networks, each of which receives a different representation of an input image expressing a perceptual feature, such as color or texture. The output of an EPU-CNN model consists of the classification prediction and its interpretation, in terms of relative contributions of perceptual features in different regions of the input image. EPU-CNN models have been extensively evaluated on various publicly available datasets, as well as a contributed benchmark dataset. Medical datasets are used to demonstrate the applicability of EPU-CNN for risk-sensitive decisions in medicine. The experimental results indicate that EPU-CNN models can achieve a comparable or better classification performance than other CNN architectures while providing humanly perceivable interpretations.
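A hedged sketch of the idea described above, one small CNN per perceptual representation whose scalar contributions are summed into the prediction, is given below. The branch architecture, the two-representation choice, and the use of global (rather than per-region) contributions are simplifying assumptions, not the authors' architecture.

```python
# Simplified EPU-style model: each branch sees a different perceptual representation
# (e.g. a colour map and a texture map) and emits a scalar contribution; the sum is
# the prediction and the per-branch terms act as the interpretation.
import torch
import torch.nn as nn

class PerceptualBranch(nn.Module):
    def __init__(self, in_channels=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, x):
        return self.features(x)          # scalar contribution per image

class EPULikeModel(nn.Module):
    def __init__(self, n_branches=2):
        super().__init__()
        self.branches = nn.ModuleList([PerceptualBranch() for _ in range(n_branches)])

    def forward(self, representations):  # list of (B, 1, H, W) tensors, one per feature
        contributions = [b(r) for b, r in zip(self.branches, representations)]
        logit = torch.stack(contributions, dim=0).sum(dim=0)
        return logit, contributions      # prediction + per-feature interpretation
```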
Collapse
Affiliation(s)
- George Dimas
- Department of Computer Science and Biomedical Informatics, School of Science, University of Thessaly, Lamia, Greece
| | - Eirini Cholopoulou
- Department of Computer Science and Biomedical Informatics, School of Science, University of Thessaly, Lamia, Greece
| | - Dimitris K Iakovidis
- Department of Computer Science and Biomedical Informatics, School of Science, University of Thessaly, Lamia, Greece.
| |
Collapse
|
24
|
Mirikharaji Z, Abhishek K, Bissoto A, Barata C, Avila S, Valle E, Celebi ME, Hamarneh G. A survey on deep learning for skin lesion segmentation. Med Image Anal 2023; 88:102863. [PMID: 37343323 DOI: 10.1016/j.media.2023.102863] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 02/01/2023] [Accepted: 05/31/2023] [Indexed: 06/23/2023]
Abstract
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
Collapse
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
| | - Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
| | - Alceu Bissoto
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
| | - Catarina Barata
- Institute for Systems and Robotics, Instituto Superior Técnico, Avenida Rovisco Pais, Lisbon 1049-001, Portugal
| | - Sandra Avila
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
| | - Eduardo Valle
- RECOD.ai Lab, School of Electrical and Computing Engineering, University of Campinas, Av. Albert Einstein 400, Campinas 13083-952, Brazil
| | - M Emre Celebi
- Department of Computer Science and Engineering, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA.
| | - Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada.
| |
Collapse
|
25
|
Rezk E, Eltorki M, El-Dakhakhni W. Interpretable Skin Cancer Classification based on Incremental Domain Knowledge Learning. JOURNAL OF HEALTHCARE INFORMATICS RESEARCH 2023; 7:59-83. [PMID: 36910915 PMCID: PMC9995827 DOI: 10.1007/s41666-023-00127-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Revised: 01/02/2023] [Accepted: 02/03/2023] [Indexed: 02/17/2023]
Abstract
The recent advances in artificial intelligence have led to the rapid development of computer-aided skin cancer diagnosis applications that perform on par with dermatologists. However, the black-box nature of such applications makes it difficult for physicians to trust the predicted decisions, subsequently preventing the proliferation of such applications in the clinical workflow. In this work, we aim to address this challenge by developing an interpretable skin cancer diagnosis approach using clinical images. Accordingly, a skin cancer diagnosis model consolidated with two interpretability methods is developed. The first interpretability method integrates skin cancer diagnosis domain knowledge, characterized by a skin lesion taxonomy, into model development, whereas the other method focuses on visualizing the decision-making process by highlighting the dominant regions of interest in skin lesion images. The proposed model is trained and validated on clinical images since the latter are easily obtainable by non-specialist healthcare providers. The results demonstrate the effectiveness of incorporating lesion taxonomy in improving model classification accuracy, where our model can predict the skin lesion origin as melanocytic or non-melanocytic with an accuracy of 87%, predict lesion malignancy with 77% accuracy, and provide disease diagnosis with an accuracy of 71%. In addition, the implemented interpretability methods help in understanding the model's decision-making process and detecting misdiagnoses. This work is a step toward achieving interpretability in skin cancer diagnosis using clinical images. The developed approach can assist general practitioners in making an early diagnosis, thus reducing the redundant referrals that expert dermatologists receive for further investigations.
Collapse
Affiliation(s)
- Eman Rezk
- School of Computational Science and Engineering, McMaster University, Hamilton, ON Canada
| | - Mohamed Eltorki
- Faculty of Health Sciences, McMaster University, Hamilton, ON Canada
| | - Wael El-Dakhakhni
- School of Computational Science and Engineering, McMaster University, Hamilton, ON Canada
| |
Collapse
|
26
|
Maqsood S, Damaševičius R. Multiclass skin lesion localization and classification using deep learning based features fusion and selection framework for smart healthcare. Neural Netw 2023; 160:238-258. [PMID: 36701878 DOI: 10.1016/j.neunet.2023.01.022] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 11/13/2022] [Accepted: 01/19/2023] [Indexed: 01/27/2023]
Abstract
BACKGROUND The idea of smart healthcare has gradually gained attention as a result of the information technology industry's rapid development. Smart healthcare uses next-generation technologies, i.e., artificial intelligence (AI) and the Internet of Things (IoT), to intelligently transform current medical methods to make them more efficient, dependable and individualized. One of the most prominent uses of telemedicine and e-health in medical image analysis is teledermatology. Telecommunications technologies are used in this industry to send medical information to professionals. Teledermatology is a useful method for the identification of skin lesions, particularly in rural locations, because the skin is visually perceptible. One of the most recent tools for diagnosing skin cancer is dermoscopy. To classify skin malignancies, numerous computational approaches have been proposed in the literature. However, difficulties still exist, i.e., lesions with low contrast, imbalanced datasets, high memory complexity, and the extraction of redundant features. METHODS In this work, a unified CAD model is proposed based on a deep learning framework for skin lesion segmentation and classification. In the proposed approach, the source dermoscopic images are initially pre-processed using a contrast enhancement-based modified bio-inspired multiple exposure fusion approach. In the second stage, a custom 26-layered convolutional neural network (CNN) architecture is designed to segment the skin lesion regions. In the third stage, four pre-trained CNN models (Xception, ResNet-50, ResNet-101 and VGG16) are modified and trained using transfer learning on the segmented lesion images. In the fourth stage, the deep feature vectors are extracted from all the CNN models and fused using the convolutional sparse image decomposition fusion approach. In the fifth stage, the univariate measurement and Poisson distribution feature selection approach is used to select the best features for classification. Finally, the selected features are fed to the multi-class support vector machine (MC-SVM) for the final classification. RESULTS The proposed approach was applied to the HAM10000, ISIC2018, ISIC2019, and PH2 datasets and achieved accuracies of 98.57%, 98.62%, 93.47%, and 98.98%, respectively, which are better than those of previous works. CONCLUSION When compared to renowned state-of-the-art methods, the experimental results show that the proposed skin lesion detection and classification approach achieved higher performance in terms of both visual and quantitative evaluation, with enhanced accuracy.
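The last three stages of the pipeline above (feature fusion, feature selection, multi-class SVM) can be illustrated with standard library calls; the sketch below is not the authors' pipeline: the feature arrays are random stand-ins for the deep features, plain concatenation replaces the convolutional sparse image decomposition fusion, and the univariate selector and kernel are assumptions.

```python
# Illustrative sketch: fuse feature vectors from several backbones, select the most
# informative ones, and classify with a multi-class SVM.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200
feats_a = rng.normal(size=(n_samples, 256))   # e.g. features from backbone 1
feats_b = rng.normal(size=(n_samples, 256))   # e.g. features from backbone 2
labels = rng.integers(0, 7, size=n_samples)   # 7 lesion classes

fused = np.concatenate([feats_a, feats_b], axis=1)        # feature-level fusion
selector = SelectKBest(score_func=f_classif, k=128)       # univariate selection
selected = selector.fit_transform(fused, labels)

clf = SVC(kernel="rbf", decision_function_shape="ovr")    # multi-class SVM
clf.fit(selected, labels)
print(clf.score(selected, labels))
```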
Collapse
Affiliation(s)
- Sarmad Maqsood
- Department of Software Engineering, Faculty of Informatics Engineering, Kaunas University of Technology, LT-51386 Kaunas, Lithuania.
| | - Robertas Damaševičius
- Department of Software Engineering, Faculty of Informatics Engineering, Kaunas University of Technology, LT-51386 Kaunas, Lithuania.
| |
Collapse
|
27
|
Wang Y, Wang Y, Cai J, Lee TK, Miao C, Wang ZJ. SSD-KD: A self-supervised diverse knowledge distillation method for lightweight skin lesion classification using dermoscopic images. Med Image Anal 2023; 84:102693. [PMID: 36462373 DOI: 10.1016/j.media.2022.102693] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Revised: 11/08/2022] [Accepted: 11/09/2022] [Indexed: 11/15/2022]
Abstract
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide. Over the last few years, computer-aided diagnosis has developed rapidly and made great progress in healthcare and medical practice due to advances in artificial intelligence, particularly with the adoption of convolutional neural networks. However, most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limitation of computing resources on portable devices. In this case, the knowledge distillation (KD) method has been proven to be an efficient tool for improving the adaptability of lightweight models under limited resources while keeping a high-level representation capability. To bridge the gap, this study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin disease classification. Our method models an intra-instance relational feature representation and integrates it with existing KD research. A dual relational knowledge distillation architecture is trained in a self-supervised manner, while the weighted softened outputs are also exploited to enable the student model to capture richer knowledge from the teacher model. To demonstrate the effectiveness of our method, we conduct experiments on ISIC 2019, a large-scale open-access benchmark of dermoscopic images of skin diseases. Experiments show that our distilled MobileNetV2 can achieve an accuracy as high as 85% for the classification tasks of 8 different skin diseases with minimal parameters and computing requirements. Ablation studies confirm the effectiveness of our intra- and inter-instance relational knowledge integration strategy. Compared with state-of-the-art knowledge distillation techniques, the proposed method demonstrates improved performance. To the best of our knowledge, this is the first deep knowledge distillation application for multi-disease classification on a large-scale dermoscopy database. Our codes and models are available at https://github.com/enkiwang/Portable-Skin-Lesion-Diagnosis.
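For readers unfamiliar with the "weighted softened outputs" mentioned above, a minimal sketch of response-based distillation with temperature-softened teacher outputs follows; the relational terms specific to SSD-KD are not shown, and the temperature and mixing weight are assumptions.

```python
# Minimal sketch of softened-output (response-based) knowledge distillation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)             # match softened teacher outputs
    hard = F.cross_entropy(student_logits, targets)  # ordinary supervised term
    return alpha * soft + (1.0 - alpha) * hard
```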
Collapse
Affiliation(s)
- Yongwei Wang
- Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), NTU, Singapore
| | - Yuheng Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Cancer Control Research Program, BC Cancer, Vancouver, BC, Canada.
| | - Jiayue Cai
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Tim K Lee
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Cancer Control Research Program, BC Cancer, Vancouver, BC, Canada
| | - Chunyan Miao
- School of Computer Science and Engineering, Nanyang Technological University, Singapore.
| | - Z Jane Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| |
Collapse
|
28
|
Lovis C, Weber J, Liopyris K, Braun RP, Marghoob AA, Quigley EA, Nelson K, Prentice K, Duhaime E, Halpern AC, Rotemberg V. Agreement Between Experts and an Untrained Crowd for Identifying Dermoscopic Features Using a Gamified App: Reader Feasibility Study. JMIR Med Inform 2023; 11:e38412. [PMID: 36652282 PMCID: PMC9892985 DOI: 10.2196/38412] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 09/28/2022] [Accepted: 10/16/2022] [Indexed: 01/19/2023] Open
Abstract
BACKGROUND Dermoscopy is commonly used for the evaluation of pigmented lesions, but agreement between experts for identification of dermoscopic structures is known to be relatively poor. Expert labeling of medical data is a bottleneck in the development of machine learning (ML) tools, and crowdsourcing has been demonstrated as a cost- and time-efficient method for the annotation of medical images. OBJECTIVE The aim of this study is to demonstrate that crowdsourcing can be used to label basic dermoscopic structures from images of pigmented lesions with similar reliability to a group of experts. METHODS First, we obtained labels of 248 images of melanocytic lesions with 31 dermoscopic "subfeatures" labeled by 20 dermoscopy experts. These were then collapsed into 6 dermoscopic "superfeatures" based on structural similarity, due to low interrater reliability (IRR): dots, globules, lines, network structures, regression structures, and vessels. These images were then used as the gold standard for the crowd study. The commercial platform DiagnosUs was used to obtain annotations from a nonexpert crowd for the presence or absence of the 6 superfeatures in each of the 248 images. We replicated this methodology with a group of 7 dermatologists to allow direct comparison with the nonexpert crowd. The Cohen κ value was used to measure agreement across raters. RESULTS In total, we obtained 139,731 ratings of the 6 dermoscopic superfeatures from the crowd. There was relatively lower agreement for the identification of dots and globules (the median κ values were 0.526 and 0.395, respectively), whereas network structures and vessels showed the highest agreement (the median κ values were 0.581 and 0.798, respectively). This pattern was also seen among the expert raters, who had median κ values of 0.483 and 0.517 for dots and globules, respectively, and 0.758 and 0.790 for network structures and vessels. The median κ values between nonexperts and thresholded average-expert readers were 0.709 for dots, 0.719 for globules, 0.714 for lines, 0.838 for network structures, 0.818 for regression structures, and 0.728 for vessels. CONCLUSIONS This study confirmed that IRR for different dermoscopic features varied among a group of experts; a similar pattern was observed in a nonexpert crowd. There was good or excellent agreement for each of the 6 superfeatures between the crowd and the experts, highlighting the similar reliability of the crowd for labeling dermoscopic images. This confirms the feasibility and dependability of using crowdsourcing as a scalable solution to annotate large sets of dermoscopic images, with several potential clinical and educational applications, including the development of novel, explainable ML tools.
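To make the agreement statistic used above concrete, here is a toy computation of Cohen's kappa between two raters labelling the presence (1) or absence (0) of a dermoscopic structure; the ratings are fabricated solely to show the calculation, not taken from the study.

```python
# Toy Cohen's kappa example: observed agreement corrected for chance agreement.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]
print(cohen_kappa_score(rater_a, rater_b))  # ~0.58: moderate agreement
```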
Collapse
Affiliation(s)
| | - Jochen Weber
- Dermatology Section, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| | - Konstantinos Liopyris
- Department of Dermatology, Andreas Syggros Hospital of Cutaneous and Venereal Diseases, University of Athens, Athens, Greece
| | - Ralph P Braun
- Department of Dermatology, University Hospital Zurich, Zurich, Switzerland
| | - Ashfaq A Marghoob
- Dermatology Section, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| | - Elizabeth A Quigley
- Dermatology Section, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| | - Kelly Nelson
- Department of Dermatology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | | | | | - Allan C Halpern
- Dermatology Section, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| | - Veronica Rotemberg
- Dermatology Section, Memorial Sloan Kettering Cancer Center, New York, NY, United States
| |
Collapse
|
29
|
A Novel Framework for Melanoma Lesion Segmentation Using Multiparallel Depthwise Separable and Dilated Convolutions with Swish Activations. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:1847115. [PMID: 36794097 PMCID: PMC9925248 DOI: 10.1155/2023/1847115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Revised: 08/16/2022] [Accepted: 11/24/2022] [Indexed: 02/08/2023]
Abstract
Skin cancer remains one of the deadliest kinds of cancer, with a survival rate of about 18-20%. Early diagnosis and segmentation of the most lethal kind of cancer, melanoma, is a challenging and critical task. To diagnose melanoma lesions, different researchers have proposed automatic and traditional approaches to accurately segment the lesions. However, visual similarity among lesions and intraclass differences are very high, which leads to low accuracy. Furthermore, traditional segmentation algorithms often require human inputs and cannot be utilized in automated systems. To address all of these issues, we provide an improved segmentation model based on depthwise separable convolutions that act on each spatial dimension of the image to segment the lesions. The fundamental idea behind these convolutions is to divide the feature learning steps into two simpler parts: spatial learning of features and a step for channel combination. Besides this, we employ parallel multidilated filters to encode multiple parallel features and broaden the view of filters with dilations. For performance evaluation, the proposed approach is evaluated on three different datasets: DermIS, DermQuest, and ISIC2016. The findings indicate that the suggested segmentation model achieved Dice scores of 97% for DermIS and DermQuest and 94.7% for the ISIC2016 dataset.
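The building block described above (spatial learning per channel, then channel combination, with a dilation rate and Swish activation) can be sketched as follows; channel counts and the choice of normalisation are assumptions, not taken from the paper.

```python
# Hedged sketch of a dilated depthwise separable convolution block with Swish (SiLU).
import torch.nn as nn

class DilatedDepthwiseSeparable(nn.Module):
    def __init__(self, channels, out_channels, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(channels, channels, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=channels, bias=False)   # spatial learning per channel
        self.pointwise = nn.Conv2d(channels, out_channels, kernel_size=1,
                                   bias=False)                    # channel combination
        self.norm = nn.BatchNorm2d(out_channels)
        self.act = nn.SiLU()                                      # Swish activation

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))
```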
Collapse
|
30
|
Singla S, Murali N, Arabshahi F, Triantafyllou S, Batmanghelich K. Augmentation by Counterfactual Explanation - Fixing an Overconfident Classifier. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2023; 2023:4709-4719. [PMID: 37724183 PMCID: PMC10506513 DOI: 10.1109/wacv56688.2023.00470] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/20/2023]
Abstract
A highly accurate but overconfident model is ill-suited for deployment in critical applications such as healthcare and autonomous driving. The classification outcome should reflect a high uncertainty on ambiguous in-distribution samples that lie close to the decision boundary. The model should also refrain from making overconfident decisions on samples that lie far outside its training distribution, far-out-of-distribution (far-OOD), or on unseen samples from novel classes that lie near its training distribution (near-OOD). This paper proposes an application of counterfactual explanations in fixing an overconfident classifier. Specifically, we propose to fine-tune a given pre-trained classifier using augmentations from a counterfactual explainer (ACE) to fix its uncertainty characteristics while retaining its predictive performance. We perform extensive experiments with detecting far-OOD, near-OOD, and ambiguous samples. Our empirical results show that the revised model has improved uncertainty measures, and its performance is competitive with state-of-the-art methods.
Collapse
|
31
|
A comprehensive analysis of dermoscopy images for melanoma detection via deep CNN features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104186] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
32
|
Drozdowski R, Spaccarelli N, Peters MS, Grant-Kels JM. Dysplastic nevus part I: Historical perspective, classification, and epidemiology. J Am Acad Dermatol 2023; 88:1-10. [PMID: 36038073 DOI: 10.1016/j.jaad.2022.04.068] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 04/05/2022] [Accepted: 04/06/2022] [Indexed: 10/15/2022]
Abstract
Since the late 1970s, the diagnosis and management of dysplastic nevi have been areas fraught with controversy in the fields of dermatology and dermatopathology. Diagnostic uncertainty and lack of standardized nomenclature continue to propagate confusion among clinicians, dermatopathologists, and patients. In part I of this CME review article, we summarize the historical context that gave rise to the debate surrounding dysplastic nevi and review key features for diagnosis, classification, and management, as well as epidemiology. We discuss essentials of clinical criteria, dermoscopic features, histopathologic features, and the diagnostic utility of total body photography and reflectance confocal microscopy in evaluating dysplastic nevi, with emphasis on information available since the last comprehensive review a decade ago.
Collapse
Affiliation(s)
- Roman Drozdowski
- University of Connecticut School of Medicine, Farmington, Connecticut
| | - Natalie Spaccarelli
- Department of Dermatology, The Ohio State University Wexner Medical Center, Columbus, Ohio
| | - Margot S Peters
- Departments of Dermatology and Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
| | - Jane M Grant-Kels
- Departments of Dermatology, Pathology and Pediatrics, University of Connecticut School of Medicine, Farmington, Connecticut; Department of Dermatology, University of Florida College of Medicine, Gainesville, Florida.
| |
Collapse
|
33
|
Wang S, Yin Y, Wang D, Wang Y, Jin Y. Interpretability-Based Multimodal Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE TRANSACTIONS ON CYBERNETICS 2022; 52:12623-12637. [PMID: 34546933 DOI: 10.1109/tcyb.2021.3069920] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Skin lesion diagnosis is a key step for skin cancer screening, which requires high accuracy and interpretability. Though many computer-aided methods, especially deep learning methods, have made remarkable achievements in skin lesion diagnosis, their generalization and interpretability are still a challenge. To solve this issue, we propose an interpretability-based multimodal convolutional neural network (IM-CNN), which is a multiclass classification model with skin lesion images and metadata of patients as input for skin lesion diagnosis. The structure of IM-CNN consists of three main paths to deal with metadata, features extracted from segmented skin lesion with domain knowledge, and skin lesion images, respectively. We add interpretable visual modules to provide explanations for both images and metadata. In addition to area under the ROC curve (AUC), sensitivity, and specificity, we introduce a new indicator, an AUC curve with a sensitivity larger than 80% (AUC_SEN_80) for performance evaluation. Extensive experimental studies are conducted on the popular HAM10000 dataset, and the results indicate that the proposed model has overwhelming advantages compared with popular deep learning models, such as DenseNet, ResNet, and other state-of-the-art models for melanoma diagnosis. The proposed multimodal model also achieves on average 72% and 21% improvement in terms of sensitivity and AUC_SEN_80, respectively, compared with the single-modal model. The visual explanations can also help gain trust from dermatologists and realize man-machine collaborations, effectively reducing the limitation of black-box models in supporting medical decision making.
Collapse
|
34
|
Qian S, Ren K, Zhang W, Ning H. Skin lesion classification using CNNs with grouping of multi-scale attention and class-specific loss weighting. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107166. [PMID: 36209623 DOI: 10.1016/j.cmpb.2022.107166] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 09/05/2022] [Accepted: 09/29/2022] [Indexed: 06/16/2023]
Abstract
Skin cancer is one of the most common cancers globally, and its incidence has been rising. Dermoscopy-based classification has become the most effective method for the diagnosis of skin lesion types due to its accuracy and non-invasive character, and plays a significant role in reducing mortality. Although great breakthroughs in skin lesion classification have been made with the application of convolutional neural networks, the inter-class similarity and intra-class variation in skin lesion images, the high class imbalance of the datasets and the limited ability to focus on the lesion area all affect the classification results of the model. In order to solve these problems, on the one hand, we use the grouping of multi-scale attention blocks (GMAB) to extract multi-scale fine-grained features so as to improve the model's ability to focus on the lesion area. On the other hand, we adopt class-specific loss weighting to address the problem of category imbalance. In this paper, we propose a deep convolutional neural network method for dermoscopic image classification based on the grouping of multi-scale attention blocks and class-specific loss weighting. We evaluated our model on the HAM10000 dataset; the accuracy (ACC) and AUC of the proposed method were 91.6% and 97.1%, respectively, demonstrating good performance on dermoscopic classification tasks.
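Class-specific loss weighting of the kind mentioned above can be sketched as follows for the imbalanced HAM10000 dataset: rarer classes receive larger weights in the cross-entropy loss. The inverse-frequency weighting scheme is an assumption; the paper's exact weights may differ.

```python
# Sketch of class-specific loss weighting for an imbalanced 7-class problem.
import torch
import torch.nn as nn

class_counts = torch.tensor([6705., 1113., 1099., 514., 327., 142., 115.])  # HAM10000 class sizes
weights = class_counts.sum() / (len(class_counts) * class_counts)           # inverse frequency
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 7)                 # placeholder batch of model outputs
targets = torch.randint(0, 7, (8,))
loss = criterion(logits, targets)
```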
Collapse
Affiliation(s)
- Shenyi Qian
- Information management center, Zhengzhou University of Light Industry, Zhengzhou 450001, China.
| | - Kunpeng Ren
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
| | - Weiwei Zhang
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
| | - Haohan Ning
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450001, China
| |
Collapse
|
35
|
Batista LG, Bugatti PH, Saito PTM. Classification of Skin Lesion through Active Learning Strategies. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107122. [PMID: 36116397 DOI: 10.1016/j.cmpb.2022.107122] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2022] [Revised: 08/09/2022] [Accepted: 09/08/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE According to the National Cancer Institute, among all malignant tumors, non-melanoma skin cancer and melanoma are the most frequent in Brazil. Despite having a lower incidence, melanoma shows accelerated growth and greater lethality. Several studies have been performed in recent years in the computer vision area to assist in the early diagnosis of skin cancer. Despite being widely used and presenting good results, deep learning approaches require a large amount of annotated data and considerable computational cost for training the model. Therefore, the present work explores active learning approaches to select a small set of more informative data for training the classifier. For that, different selection criteria are considered to obtain more effective and efficient classifiers for skin lesions. METHODS We perform an extensive experimental evaluation considering three datasets and different learning strategies and scenarios for validation. In addition to data augmentation, we evaluated two segmentation strategies considering the U-net CNN model and the Fully Convolutional Networks (FCN) with a manual expert review. We also analyzed the best (handcrafted and deep) features that describe each skin lesion and the most suitable classifiers and combinations (extractor-classifier) for this context. The active learning approach evaluated different criteria based on uncertainty, diversity, and representativeness to select the most informative samples. The strategies used were Decreasing Boundary Edges, Entropy, Least Confidence, Margin Sampling, Minimum-Spanning Tree Boundary Edges, and Root-Distance based Sampling. RESULTS Segmentation with the FCN and manual correction by the specialist, the Border-Interior Classification (BIC) extractor, and the Random Forest (RF) classifier showed the best performance. Regarding the active learning approach, the Margin Sampling strategy presented the best classification accuracies (about 93%) with only 35% of the training set compared to the traditional learning approach (which requires the entire set). CONCLUSIONS According to the results, the selection strategies achieve high accuracies faster (in fewer learning iterations) and with fewer labeled samples than the traditional learning approach. Hence, active learning can contribute significantly to the diagnosis of skin lesions, beneficially reducing specialists' annotation costs.
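As a rough illustration of the Margin Sampling criterion named above, the sketch below picks the unlabeled samples whose top two predicted class probabilities are closest, i.e. the samples the current classifier is least sure how to separate; the probability matrix is a made-up stand-in.

```python
# Margin sampling: query the samples with the smallest gap between the two most
# probable classes.
import numpy as np

def margin_sampling(probabilities, n_queries):
    """probabilities: (n_samples, n_classes) predicted class probabilities."""
    part = np.sort(probabilities, axis=1)
    margins = part[:, -1] - part[:, -2]          # best minus second-best probability
    return np.argsort(margins)[:n_queries]       # smallest margins = most informative

probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33]])
print(margin_sampling(probs, n_queries=2))       # -> indices [2, 1]
```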
Collapse
Affiliation(s)
- Lucas G Batista
- Department of Computing, Federal University of Technology - Parana, 1640, Alberto Carazzai Av., Cornelio Procopio, PR 86300-000, Brazil.
| | - Pedro H Bugatti
- Department of Computing, Federal University of Technology - Parana, 1640, Alberto Carazzai Av., Cornelio Procopio, PR 86300-000, Brazil.
| | - Priscila T M Saito
- Department of Computing, Federal University of Technology - Parana, 1640, Alberto Carazzai Av., Cornelio Procopio, PR 86300-000, Brazil; Departament of Computing, Federal University of Sao Carlos, km 235, Rodovia Washington Luis, Sao Carlos, SP 13565-905, Brazil; Institute of Computing, State University of Campinas, 1251, Albert Einstein Ave, Cidade Universitária, Campinas, SP 13083-852, Brazil.
| |
Collapse
|
36
|
SkiNet: A deep learning framework for skin lesion diagnosis with uncertainty estimation and explainability. PLoS One 2022; 17:e0276836. [PMID: 36315487 PMCID: PMC9621459 DOI: 10.1371/journal.pone.0276836] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 10/14/2022] [Indexed: 11/05/2022] Open
Abstract
Skin cancer is considered to be the most common human malignancy. Around 5 million new cases of skin cancer are recorded in the United States annually. Early identification and evaluation of skin lesions are of great clinical significance, but the disproportionate dermatologist-patient ratio poses a significant problem in most developing nations. Therefore, a novel deep architecture, named SkiNet, is proposed to provide a faster screening solution and assistance to newly trained physicians in the process of clinical diagnosis of skin cancer. The main motive behind SkiNet's design and development is to provide a white-box solution, addressing the critical problem of trust and interpretability that is crucial for the wider adoption of computer-aided diagnosis systems by medical practitioners. The proposed SkiNet is a two-stage pipeline wherein the lesion segmentation is followed by the lesion classification. Monte Carlo dropout and test time augmentation techniques have been employed in the proposed method to estimate epistemic and aleatoric uncertainty. A novel segmentation model named Bayesian MultiResUNet is used to estimate the uncertainty on the predicted segmentation map. Saliency-based methods like XRAI, Grad-CAM and Guided Backprop are explored to provide post-hoc explanations of the deep learning models. The ISIC-2018 dataset is used to perform the experimentation and ablation studies. The results establish the robustness of the proposed model on traditional benchmarks while addressing the black-box nature of such models, alleviating the skepticism of medical practitioners by adding transparency and confidence to the model's predictions.
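Monte Carlo dropout, one of the uncertainty estimation techniques named above, can be sketched as follows: dropout layers are kept active at inference and several stochastic forward passes are averaged, with the spread across passes serving as an epistemic uncertainty estimate. The tiny model and the pass count are placeholders, not the paper's architecture.

```python
# Hedged sketch of Monte Carlo dropout at inference time.
import torch
import torch.nn as nn

def mc_dropout_predict(model, x, n_passes=20):
    model.eval()
    for m in model.modules():                     # re-enable only the dropout layers
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.var(dim=0)    # mean prediction, per-class variance

model = nn.Sequential(nn.Flatten(), nn.Linear(64, 32), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(32, 7))
mean, var = mc_dropout_predict(model, torch.randn(4, 1, 8, 8))
```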
Collapse
|
37
|
Ou C, Zhou S, Yang R, Jiang W, He H, Gan W, Chen W, Qin X, Luo W, Pi X, Li J. A deep learning based multimodal fusion model for skin lesion diagnosis using smartphone collected clinical images and metadata. Front Surg 2022; 9:1029991. [PMID: 36268206 PMCID: PMC9577400 DOI: 10.3389/fsurg.2022.1029991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2022] [Accepted: 09/15/2022] [Indexed: 11/13/2022] Open
Abstract
Introduction Skin cancer is one of the most common types of cancer. A tool accessible to the public can help screen for malignant lesions. We aimed to develop a deep learning model to classify skin lesions using clinical images and meta information collected from smartphones. Methods A deep neural network was developed with two encoders for extracting information from image data and metadata. A multimodal fusion module with intra-modality self-attention and inter-modality cross-attention was proposed to effectively combine image features and meta features. The model was trained and tested on a public dataset and compared with other state-of-the-art methods using five-fold cross-validation. Results Including metadata is shown to significantly improve a model's performance. Our model outperformed other metadata fusion methods in terms of accuracy, balanced accuracy and area under the receiver-operating characteristic curve, with average values of 0.768±0.022, 0.775±0.022 and 0.947±0.007, respectively. Conclusion A deep learning model using smartphone-collected images and metadata for skin lesion diagnosis was successfully developed. The proposed model showed promising performance and could be a potential tool for skin cancer screening.
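A rough sketch of the inter-modality cross-attention step described above is given below, with metadata embeddings attending over image feature tokens; embedding sizes, head count, the pooling step and the omission of the intra-modality self-attention are assumptions rather than the authors' architecture.

```python
# Illustrative cross-modal fusion via nn.MultiheadAttention (meta queries image tokens).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=128, n_heads=4, n_classes=6):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, image_tokens, meta_tokens):
        # metadata tokens query the image tokens (inter-modality cross-attention)
        fused, _ = self.cross_attn(query=meta_tokens, key=image_tokens, value=image_tokens)
        return self.classifier(fused.mean(dim=1))   # pool fused tokens, then classify

model = CrossModalFusion()
image_tokens = torch.randn(2, 49, 128)   # e.g. a 7x7 CNN feature map flattened to tokens
meta_tokens = torch.randn(2, 4, 128)     # e.g. embedded age/sex/site fields
logits = model(image_tokens, meta_tokens)  # (2, 6)
```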
Collapse
Affiliation(s)
- Chubin Ou
- Clinical Research Institute, The First People’s Hospital of Foshan, Foshan, China,R/D Center, Visionwise Medical Technology, Foshan, China
| | - Sitong Zhou
- Department of Dermatology, The First People’s Hospital of Foshan, Foshan, China
| | - Ronghua Yang
- Department of Burn and Plastic Surgery, Guangzhou First People's Hospital, South China University of Technology, Guangzhou, China
| | - Weili Jiang
- R/D Center, Visionwise Medical Technology, Foshan, China
| | - Haoyang He
- R/D Center, Visionwise Medical Technology, Foshan, China
| | - Wenjun Gan
- Guangdong Medical University, Zhanjiang, China
| | - Wentao Chen
- Guangdong Medical University, Zhanjiang, China
| | - Xinchi Qin
- Guangdong Medical University, Zhanjiang, China
| | - Wei Luo
- Clinical Research Institute, The First People’s Hospital of Foshan, Foshan, China,Correspondence: Jiehua Li Xiaobing Pi Wei Luo
| | - Xiaobing Pi
- Department of Dermatology, The First People’s Hospital of Foshan, Foshan, China,Correspondence: Jiehua Li Xiaobing Pi Wei Luo
| | - Jiehua Li
- Department of Dermatology, The First People’s Hospital of Foshan, Foshan, China,Correspondence: Jiehua Li Xiaobing Pi Wei Luo
| |
Collapse
|
38
|
Yilmaz A, Gencoglan G, Varol R, Demircali AA, Keshavarz M, Uvet H. MobileSkin: Classification of Skin Lesion Images Acquired Using Mobile Phone-Attached Hand-Held Dermoscopes. J Clin Med 2022; 11:5102. [PMID: 36079042 PMCID: PMC9457478 DOI: 10.3390/jcm11175102] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/17/2022] [Accepted: 08/26/2022] [Indexed: 11/16/2022] Open
Abstract
Dermoscopy is the visual examination of the skin under a polarized or non-polarized light source. By using dermoscopic equipment, many lesion patterns that are invisible under visible light can be clearly distinguished. Thus, more accurate decisions can be made regarding the treatment of skin lesions. The use of images collected from a dermoscope has both increased the performance of human examiners and allowed the development of deep learning models. The availability of large-scale dermoscopic datasets has allowed the development of deep learning models that can classify skin lesions with high accuracy. However, most dermoscopic datasets contain images that were collected with digital dermoscopic devices, as these devices are frequently used for clinical examination, whereas dermatologists also often use non-digital hand-held (optomechanical) dermoscopes. This study presents a dataset consisting of dermoscopic images taken using a mobile phone-attached hand-held dermoscope. Four deep learning models based on the MobileNetV1, MobileNetV2, NASNetMobile, and Xception architectures have been developed to classify eight different lesion types using this dataset. The number of images in the dataset was increased with different data augmentation methods. The models were initialized with weights that were pre-trained on the ImageNet dataset, and then they were further fine-tuned using the presented dataset. The most successful models on the unseen test data, MobileNetV2 and Xception, achieved performances of 89.18% and 89.64%, respectively. The results were evaluated with the 5-fold cross-validation method and compared. Our method allows for automated examination of dermoscopic images taken with mobile phone-attached hand-held dermoscopes.
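The transfer-learning setup described above, an ImageNet-pretrained backbone with its classifier head replaced for eight lesion types, can be sketched as follows for the MobileNetV2 case; the torchvision weight handle, the optimizer settings and the choice to fine-tune all layers are assumptions, not the authors' training code.

```python
# Hedged sketch: replace the MobileNetV2 head and fine-tune on the new lesion classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights="IMAGENET1K_V1")        # ImageNet-pretrained backbone
model.classifier[1] = nn.Linear(model.last_channel, 8)      # 8 lesion classes

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)                        # placeholder batch
labels = torch.randint(0, 8, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```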
Collapse
Affiliation(s)
- Abdurrahim Yilmaz
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
| | - Gulsum Gencoglan
- Department of Dermatology, Liv Hospital Vadistanbul, Istinye University, 34396 Istanbul, Turkey
| | - Rahmetullah Varol
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
| | - Ali Anil Demircali
- Department of Metabolism, Digestion and Reproduction, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
| | - Meysam Keshavarz
- Department of Electrical and Electronic Engineering, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
| | - Huseyin Uvet
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
| |
Collapse
|
39
|
MDFNet: application of multimodal fusion method based on skin image and clinical data to skin cancer classification. J Cancer Res Clin Oncol 2022:10.1007/s00432-022-04180-1. [PMID: 35918465 DOI: 10.1007/s00432-022-04180-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 06/27/2022] [Indexed: 10/16/2022]
Abstract
PURPOSE Skin cancer is one of the ten most common cancer types in the world. Early diagnosis and treatment can effectively reduce patient mortality. Therefore, it is of great significance to develop an intelligent diagnosis system for skin cancer. According to surveys, most current intelligent diagnosis systems for skin cancer only use skin image data, and multi-modal cross-fusion analysis of image data and patient clinical data remains limited. Therefore, to further explore the complementary relationship between image data and patient clinical data, we propose the multimodal data fusion diagnosis network (MDFNet), a framework for skin cancer based on a data fusion strategy. METHODS MDFNet establishes an effective mapping among heterogeneous data features, fuses clinical skin images with patient clinical data, and thereby addresses the feature paucity and insufficient feature richness of single-modality data. RESULTS The experimental results show that our proposed smart skin cancer diagnosis model has an accuracy of 80.42%, an improvement of about 9% compared with the model using only medical images, effectively confirming the unique fusion advantages exhibited by MDFNet. CONCLUSIONS MDFNet can not only serve as an effective auxiliary diagnostic tool for skin cancer, helping physicians improve clinical decision-making and the efficiency of clinical diagnosis, but its data fusion method also exploits the advantages of information convergence and offers a useful reference for the intelligent diagnosis of many other clinical diseases.
Collapse
|
40
|
Krammer S, Li Y, Jakob N, Boehm AS, Wolff H, Tang P, Lasser T, French LE, Hartmann D. Deep learning-based classification of dermatological lesions given a limited amount of labeled data. J Eur Acad Dermatol Venereol 2022; 36:2516-2524. [PMID: 35876737 DOI: 10.1111/jdv.18460] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 06/10/2022] [Indexed: 11/28/2022]
Abstract
BACKGROUND Artificial intelligence (AI) techniques are promising in early diagnosis of skin diseases. However, a precondition for their success is access to large-scale annotated data. Until now, obtaining this data has only been feasible with very high personnel and financial resources. OBJECTIVES The aim of this study was to overcome the obstacle caused by the scarcity of labeled data. METHODS To simulate the scenario of label shortage, we discarded a proportion of the labels of the training set. The training set consisted of both labeled and unlabeled images. We then leveraged a self-supervised learning technique to pre-train the AI model on the unlabeled images. Next, we fine-tuned the pre-trained model on the labeled images. RESULTS When the images in the training dataset were fully labeled, the self-supervised pre-trained model achieved an accuracy of 95.7%, a precision of 91.7% and a sensitivity of 90.7%. When only 10% of the data was labeled, the model could still yield an accuracy of 87.7%, a precision of 81.7% and a sensitivity of 68.6%. In addition, we empirically verified that the AI model and dermatologists are consistent in how they visually inspect the skin images. CONCLUSIONS The experimental results demonstrate the great potential of self-supervised learning in alleviating the scarcity of annotated data.
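The abstract does not name the self-supervised objective used for pre-training, so the sketch below shows one common choice, a SimCLR-style contrastive (NT-Xent) loss over two augmented views of the same unlabeled images; a classifier head would subsequently be fine-tuned on the labeled fraction. The choice of objective and the temperature are assumptions, not the authors' method.

```python
# Illustrative SimCLR-style contrastive loss over paired augmented views.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, d) embeddings of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, d)
    sim = z @ z.t() / temperature                            # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                        # ignore self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # positive pairs
    return F.cross_entropy(sim, targets)

loss = nt_xent_loss(torch.randn(8, 128), torch.randn(8, 128))
```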
Collapse
Affiliation(s)
- S Krammer
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - Y Li
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - N Jakob
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - A S Boehm
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - H Wolff
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - P Tang
- Department of Informatics, School of Computations, Information, and Technology, and Munich Institute of Biomedical Engineering, Technical University of Munich, Munich, Germany
| | - T Lasser
- Department of Informatics, School of Computations, Information, and Technology, and Munich Institute of Biomedical Engineering, Technical University of Munich, Munich, Germany
| | - L E French
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| | - D Hartmann
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
| |
Collapse
|
41
|
Wu Y, Chen B, Zeng A, Pan D, Wang R, Zhao S. Skin Cancer Classification With Deep Learning: A Systematic Review. Front Oncol 2022; 12:893972. [PMID: 35912265 PMCID: PMC9327733 DOI: 10.3389/fonc.2022.893972] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/16/2022] [Indexed: 01/21/2023] Open
Abstract
Skin cancer is one of the most dangerous diseases in the world. Correctly classifying skin lesions at an early stage could aid clinical decision-making by providing an accurate disease diagnosis, potentially increasing the chances of cure before cancer spreads. However, achieving automatic skin cancer classification is difficult because the majority of skin disease images used for training are imbalanced and in short supply; meanwhile, the model's cross-domain adaptability and robustness are also critical challenges. Recently, many deep learning-based methods have been widely used in skin cancer classification to solve the above issues and achieve satisfactory results. Nonetheless, reviews that include the abovementioned frontier problems in skin cancer classification are still scarce. Therefore, in this article, we provide a comprehensive overview of the latest deep learning-based algorithms for skin cancer classification. We begin with an overview of three types of dermatological images, followed by a list of publicly available datasets relating to skin cancers. After that, we review the successful applications of typical convolutional neural networks for skin cancer classification. As a highlight of this paper, we next summarize several frontier problems, including data imbalance, data limitation, domain adaptation, model robustness, and model efficiency, followed by corresponding solutions in the skin cancer classification task. Finally, by summarizing different deep learning-based methods to solve the frontier challenges in skin cancer classification, we can conclude that the general development direction of these approaches is structured, lightweight, and multimodal. Besides, for readers' convenience, we have summarized our findings in figures and tables. Considering the growing popularity of deep learning, there are still many issues to overcome as well as chances to pursue in the future.
Collapse
Affiliation(s)
- Yinhao Wu
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
| | - Bin Chen
- Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Zhejiang, China
| | - An Zeng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China
| | - Dan Pan
- School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, China
| | - Ruixuan Wang
- School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, China
| | - Shen Zhao
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
| |
Collapse
|
42
|
DTP-Net: A convolutional neural network model to predict threshold for localizing the lesions on dermatological macro-images. Comput Biol Med 2022; 148:105852. [PMID: 35853397 DOI: 10.1016/j.compbiomed.2022.105852] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2021] [Revised: 05/07/2022] [Accepted: 05/13/2022] [Indexed: 11/22/2022]
Abstract
Highly focused images of skin captured with ordinary cameras, called macro-images, are extensively used in dermatology. Being highly focused views, macro-images contain only lesion and background regions, so localizing lesions on them reduces to a simple thresholding problem. However, algorithms that offer an accurate estimate of the threshold and retain consistent performance across different dermatological macro-images are rare. A deep learning model, termed the 'Deep Threshold Prediction Network (DTP-Net)', is proposed in this paper to address this issue. To train the model, grayscale versions of the macro-images are fed as input, and the corresponding gray-level threshold values at which the Dice similarity index (DSI) between the segmented and ground-truth images is maximized are defined as the targets. DTP-Net exhibited the lowest root mean square error for the predicted threshold among 11 state-of-the-art threshold estimation algorithms (Otsu's thresholding, valley-emphasized Otsu's thresholding, Isodata thresholding, histogram slope difference distribution-based thresholding, minimum error thresholding, Poisson distribution-based minimum error thresholding, Kapur's maximum entropy thresholding, entropy-weighted Otsu's thresholding, minimum cross-entropy thresholding, type-2 fuzzy-based thresholding, and fuzzy entropy thresholding). DTP-Net could learn the difference between lesion and background in the intensity space and accurately predict the threshold that separates the lesion from the background. The proposed DTP-Net can be integrated into the segmentation module of automated tools that detect skin cancer from dermatological macro-images.
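As described above, the regression target for each training image is the gray level at which thresholding best matches the ground-truth mask in terms of the Dice similarity index. The sketch below shows how such a target could be computed for a single image; the synthetic data, the assumption that lesions are darker than the surrounding skin, and the exhaustive 0-255 search are illustrative choices rather than details taken from the paper.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity index between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0

def best_threshold(gray: np.ndarray, gt_mask: np.ndarray) -> int:
    """Gray level at which thresholding the image best matches the ground truth."""
    scores = [dice(gray <= t, gt_mask) for t in range(256)]
    return int(np.argmax(scores))

# Toy example: a darker-than-background "lesion" in a synthetic 64x64 grayscale image.
rng = np.random.default_rng(0)
gray = rng.integers(150, 255, size=(64, 64)).astype(np.uint8)
gt_mask = np.zeros((64, 64), dtype=bool)
gt_mask[20:40, 20:40] = True
gray[gt_mask] = rng.integers(0, 100, size=int(gt_mask.sum()))

print(best_threshold(gray, gt_mask))  # the scalar a threshold-regression network would learn
```

A regression network such as DTP-Net would then be trained to predict this scalar directly from the grayscale image.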
Collapse
|
43
|
Garbe C, Amaral T, Peris K, Hauschild A, Arenberger P, Basset-Seguin N, Bastholt L, Bataille V, Del Marmol V, Dréno B, Fargnoli MC, Forsea AM, Grob JJ, Höller C, Kaufmann R, Kelleners-Smeets N, Lallas A, Lebbé C, Lytvynenko B, Malvehy J, Moreno-Ramirez D, Nathan P, Pellacani G, Saiag P, Stratigos AJ, Van Akkooi ACJ, Vieira R, Zalaudek I, Lorigan P. European consensus-based interdisciplinary guideline for melanoma. Part 1: Diagnostics: Update 2022. Eur J Cancer 2022; 170:236-255. [PMID: 35570085 DOI: 10.1016/j.ejca.2022.03.008] [Citation(s) in RCA: 123] [Impact Index Per Article: 41.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 03/10/2022] [Indexed: 01/02/2023]
Abstract
Cutaneous melanoma (CM) is potentially the most dangerous form of skin tumor and causes 90% of skin cancer mortality. A unique collaboration of multi-disciplinary experts from the European Dermatology Forum (EDF), the European Association of Dermato-Oncology (EADO) and the European Organization for Research and Treatment of Cancer (EORTC) was formed to make recommendations on CM diagnosis and treatment, based on systematic literature reviews and the experts' experience. The diagnosis of melanoma can be made clinically and shall always be confirmed with dermatoscopy. If a melanoma is suspected, a histopathological examination is always required. Sequential digital dermatoscopy and full-body photography can be used in high-risk patients to improve the detection of early melanoma. Where available, confocal reflectance microscopy can also improve clinical diagnosis in special cases. Melanoma shall be classified according to the 8th edition of the American Joint Committee on Cancer classification. Thin melanomas up to 0.8 mm tumor thickness do not require further imaging diagnostics. From stage IB onwards, examinations with lymph node sonography are recommended, but no further imaging examinations. From stage IIC onwards, whole-body examinations with computed tomography (CT) or positron emission tomography CT (PET-CT) in combination with brain magnetic resonance imaging are recommended. From stage III and higher, mutation testing is recommended, particularly for the BRAF V600 mutation. It is important to provide structured follow-up to detect relapses and second primary melanomas as early as possible. There is no evidence to define the frequency and extent of examinations. A stage-based follow-up scheme is proposed which, according to the experience of the guideline group, covers the optimal requirements, but further studies may be considered. This guideline is valid until the end of 2024.
Collapse
Affiliation(s)
- Claus Garbe
- Center for Dermatooncology, Department of Dermatology, Eberhard Karls University, Tuebingen, Germany.
| | - Teresa Amaral
- Center for Dermatooncology, Department of Dermatology, Eberhard Karls University, Tuebingen, Germany
| | - Ketty Peris
- Institute of Dermatology, Università Cattolica, Rome, Italy; Fondazione Policlinico Universitario A. Gemelli - IRCCS, Rome, Italy
| | - Axel Hauschild
- Department of Dermatology, University Hospital Schleswig-Holstein (UKSH), Campus Kiel, Kiel, Germany
| | - Petr Arenberger
- Department of Dermatovenereology, Third Faculty of Medicine, Charles University, Prague, Czech Republic
| | - Nicole Basset-Seguin
- Université Paris Cite, AP-HP Department of Dermatology INSERM U 976 Hôpital Saint Louis Paris France
| | - Lars Bastholt
- Department of Oncology, Odense University Hospital, Denmark
| | - Veronique Bataille
- Twin Research and Genetic Epidemiology Unit, School of Basic & Medical Biosciences, King's College London, London, SE1 7EH, UK
| | - Veronique Del Marmol
- Department of Dermatology, Erasme Hospital, Université Libre de Bruxelles, Brussels, Belgium
| | - Brigitte Dréno
- Dermatology Department, CHU Nantes, CIC 1413, CRCINA, University Nantes, Nantes, France
| | - Maria C Fargnoli
- Dermatology, Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy
| | - Ana-Maria Forsea
- Dermatology Department, Elias University Hospital, Carol Davila University of Medicine and Pharmacy Bucharest, Romania
| | | | - Christoph Höller
- Department of Dermatology, Medical University of Vienna, Austria
| | - Roland Kaufmann
- Department of Dermatology, Venereology and Allergology, Frankfurt University Hospital, Frankfurt, Germany
| | - Nicole Kelleners-Smeets
- Department of Dermatology, Maastricht University Medical Center+, Maastricht, the Netherlands
| | - Aimilios Lallas
- First Department of Dermatology, Aristotle University, Thessaloniki, Greece
| | - Celeste Lebbé
- Université Paris Cite, AP-HP Department of Dermatology INSERM U 976 Hôpital Saint Louis Paris France
| | - Bohdan Lytvynenko
- Shupyk National Medical Academy of Postgraduate Education, Kiev, Ukraine
| | - Josep Malvehy
- Melanoma Unit, Department of Dermatology, Hospital Clinic, IDIBAPS, Barcelona, Spain
| | - David Moreno-Ramirez
- Medical-&-Surgical Dermatology Service, Hospital Universitario Virgen Macarena, Sevilla, Spain
| | - Paul Nathan
- Mount Vernon Cancer Centre, Northwood, United Kingdom
| | | | - Philippe Saiag
- University Department of Dermatology, Université de Versailles-Saint Quentin en Yvelines, APHP, Boulogne, France
| | - Alexander J Stratigos
- 1st Department of Dermatology, University of Athens School of Medicine, Andreas Sygros Hospital, Athens, Greece
| | - Alexander C J Van Akkooi
- Melanoma Institute Australia, The University of Sydney, Royal North Shore and Mater Hospitals, Sydney, New South Wales, Australia
| | - Ricardo Vieira
- Department of Dermatology and Venereology, Centro Hospitalar Universitário de Coimbra, Coimbra, Portugal
| | - Iris Zalaudek
- Dermatology Clinic, Maggiore Hospital, University of Trieste, Trieste, Italy
| | - Paul Lorigan
- The University of Manchester, Oxford Rd, Manchester, M13 9PL, UK
| |
Collapse
|
44
|
Verstockt J, Verspeek S, Thiessen F, Tjalma WA, Brochez L, Steenackers G. Skin Cancer Detection Using Infrared Thermography: Measurement Setup, Procedure and Equipment. SENSORS 2022; 22:s22093327. [PMID: 35591018 PMCID: PMC9100961 DOI: 10.3390/s22093327] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 04/13/2022] [Accepted: 04/21/2022] [Indexed: 12/24/2022]
Abstract
Infrared thermography technology has improved dramatically in recent years and is gaining renewed interest in the medical community for applications in skin tissue identification. However, there is still a need for an optimized measurement setup and protocol to obtain the most appropriate images for decision-making and further processing. Various cooling methods, measurement setups, and cameras are currently in use, but a generally optimized cooling and measurement protocol has not yet been defined. In this literature review, an overview of different measurement setups, thermal excitation techniques, and infrared camera equipment is given. Thermal images of skin lesions can be improved by choosing an appropriate cooling method, infrared camera, and optimized measurement setup.
Collapse
Affiliation(s)
- Jan Verstockt
- InViLab Research Group, Department Electromechanics, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen, Belgium; (S.V.); (G.S.)
- Correspondence:
| | - Simon Verspeek
- InViLab Research Group, Department Electromechanics, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen, Belgium; (S.V.); (G.S.)
| | - Filip Thiessen
- Department of Plastic, Reconstructive and Aesthetic Surgery, Multidisciplinary Breast Clinic, Antwerp University Hospital, University of Antwerp, Wilrijkstraat 10, B-2650 Antwerp, Belgium;
| | - Wiebren A. Tjalma
- Gynaecological Oncology Unit, Department of Obstetrics and Gynaecology, Multidisciplinary Breast Clinic, Antwerp University Hospital, University of Antwerp, Wilrijkstraat 10, B-2650 Antwerp, Belgium;
| | - Lieve Brochez
- Department of Dermatology, Ghent University Hospital, C. Heymanslaan 10, B-9000 Ghent, Belgium;
| | - Gunther Steenackers
- InViLab Research Group, Department Electromechanics, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen, Belgium; (S.V.); (G.S.)
| |
Collapse
|
45
|
Superpixel-Oriented Label Distribution Learning for Skin Lesion Segmentation. Diagnostics (Basel) 2022; 12:diagnostics12040938. [PMID: 35453986 PMCID: PMC9026477 DOI: 10.3390/diagnostics12040938] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2022] [Revised: 03/31/2022] [Accepted: 04/06/2022] [Indexed: 02/04/2023] Open
Abstract
Lesion segmentation is a critical task in skin cancer analysis and detection. When developing deep learning-based segmentation methods, we need a large number of human-annotated labels to serve as ground truth for supervised training. Due to the complexity of dermatological images and inter-observer differences among dermatologists, labels near lesion boundaries are prone to uncertainty and error. Such labels can degrade dermoscopy segmentation performance. In addition, a model trained on erroneous one-hot labels may become overconfident, leading to arbitrary predictions and overfitting. In this paper, a superpixel-oriented label distribution learning method is proposed. Superpixels formed by the simple linear iterative clustering (SLIC) algorithm are combined with the one-hot label constraint, and a distance function converts the hard labels into a soft probability distribution. Following the knowledge-distillation paradigm, superpixel-oriented label distribution learning yields soft labels that carry structural prior information, which are then transferred as new knowledge to the lesion segmentation network for training. On the ISIC 2018 dataset, our method achieves a Dice coefficient of 84%, sensitivity of 79.6%, and precision of 80.4%, improvements of 19.3%, 8.6%, and 2.5%, respectively, over U-Net. We also evaluate the method on skin lesion segmentation with several general-purpose neural network architectures. The experiments show that it improves segmentation performance and can be easily integrated into most existing deep learning architectures.
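The core idea, converting hard one-hot labels into superpixel-level soft distributions, can be sketched as follows. The paper's exact distance-based conversion is not given in the abstract, so the sketch simply assigns each pixel the lesion fraction of its SLIC superpixel; the synthetic image, the lesion-fraction rule, and all parameter values are assumptions made for illustration.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_soft_labels(image: np.ndarray, hard_mask: np.ndarray, n_segments: int = 200):
    """Turn a hard binary lesion mask into per-pixel soft labels using SLIC superpixels.

    Each pixel receives the lesion fraction of its superpixel, a simple stand-in for
    the distance-based conversion described in the paper."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    soft = np.zeros_like(hard_mask, dtype=float)
    for seg_id in np.unique(segments):
        region = segments == seg_id
        soft[region] = hard_mask[region].mean()  # lesion fraction within the superpixel
    return soft

# Toy example with a synthetic RGB image and a square "lesion" mask.
rng = np.random.default_rng(0)
image = rng.random((96, 96, 3))
mask = np.zeros((96, 96), dtype=float)
mask[30:70, 30:70] = 1.0
soft = superpixel_soft_labels(image, mask)
print(soft.min(), soft.max())  # fractional values appear near the lesion boundary
```

The resulting soft labels could then replace the hard masks when training a segmentation network, in the knowledge-distillation spirit described above.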
Collapse
|
46
|
Li Y, Zhu R, Yeh M, Qu A. Dermoscopic Image Classification with Neural Style Transfer. J Comput Graph Stat 2022. [DOI: 10.1080/10618600.2022.2061496] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
| | - Ruoqing Zhu
- Department of Statistics, University of Illinois at Urbana-Champaign
| | | | - Annie Qu
- Department of Statistics, University of California, Irvine
| |
Collapse
|
47
|
Lucieri A, Bajwa MN, Braun SA, Malik MI, Dengel A, Ahmed S. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106620. [PMID: 35033756 DOI: 10.1016/j.cmpb.2022.106620] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 12/01/2021] [Accepted: 01/03/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES One principal impediment to the successful deployment of Artificial Intelligence (AI) based Computer-Aided Diagnosis (CAD) systems in everyday clinical workflows is their lack of transparent decision-making. Although commonly used eXplainable AI (XAI) methods provide insights into these largely opaque algorithms, such explanations are usually convoluted and not readily comprehensible. The explanation of decisions regarding the malignancy of skin lesions from dermoscopic images demands particular clarity, as the underlying medical problem definition is ambiguous in itself. This work presents ExAID (Explainable AI for Dermatology), a novel XAI framework for biomedical image analysis that provides multi-modal concept-based explanations, consisting of easy-to-understand textual explanations and visual maps, to justify the predictions. METHODS Our framework relies on Concept Activation Vectors to map human-understandable concepts to those learned by an arbitrary Deep Learning (DL) based algorithm, and Concept Localisation Maps to highlight those concepts in the input space. This identification of relevant concepts is then used to construct fine-grained textual explanations supplemented by concept-wise location information to provide comprehensive and coherent multi-modal explanations. All decision-related information is presented in a diagnostic interface for use in clinical routines. Moreover, the framework includes an educational mode providing dataset-level explanation statistics as well as tools for data and model exploration to aid medical research and education processes. RESULTS Through rigorous quantitative and qualitative evaluation of our framework on a range of publicly available dermoscopic image datasets, we show the utility of multi-modal explanations for CAD-assisted scenarios even in the case of incorrect disease predictions. We demonstrate that concept detectors for the explanation of pre-trained networks reach accuracies of up to 81.46%, which is comparable to supervised networks trained end-to-end. CONCLUSIONS We present a new end-to-end framework for the multi-modal explanation of DL-based biomedical image analysis in melanoma classification and evaluate its utility on an array of datasets. Since perspicuous explanation is one of the cornerstones of any CAD system, we believe that ExAID will accelerate the transition from AI research to practice by providing dermatologists and researchers with an effective tool that they can both understand and trust. ExAID can also serve as the basis for similar applications in other biomedical fields.
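ExAID builds on Concept Activation Vectors (CAVs). Independently of this particular framework, the standard recipe for a CAV is to train a linear classifier that separates a layer's activations for concept examples from activations for non-concept examples and to take the unit normal of its decision boundary. The sketch below illustrates that recipe on random toy activations; the activation dimensionality, the logistic-regression choice, and the gradient placeholder are assumptions and do not reflect ExAID's actual implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """CAV: unit normal of a linear boundary separating concept activations
    from non-concept activations taken from one network layer."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]
    return cav / np.linalg.norm(cav)

# Toy activations standing in for a dermoscopy network layer; real use would extract
# activations for images with and without a concept such as "pigment network".
rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=1.0, size=(50, 128))
random_acts = rng.normal(loc=0.0, size=(50, 128))
cav = concept_activation_vector(concept_acts, random_acts)

# Concept sensitivity of one example: sign of the directional derivative of the class
# logit along the CAV, approximated here with a random gradient placeholder.
grad = rng.normal(size=128)
print(float(grad @ cav) > 0)
```

Concept localisation maps would additionally highlight where a detected concept appears in the input image.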
Collapse
Affiliation(s)
- Adriano Lucieri
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany.
| | - Muhammad Naseer Bajwa
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany.
| | - Stephan Alexander Braun
- University Hospital Münster, Albert-Schweitzer-Campus 1, 48149 Münster, Germany; University Hospital of Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany.
| | - Muhammad Imran Malik
- School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan; Deep Learning Laboratory, National Center of Artificial Intelligence, Islamabad, Pakistan.
| | - Andreas Dengel
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany.
| | - Sheraz Ahmed
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany.
| |
Collapse
|
48
|
Szolga LA, Bozga DA, Florea C. End-User Skin Analysis (Moles) through Image Acquisition and Processing System. SENSORS (BASEL, SWITZERLAND) 2022; 22:1123. [PMID: 35161868 PMCID: PMC8839405 DOI: 10.3390/s22031123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 01/29/2022] [Accepted: 01/29/2022] [Indexed: 06/14/2023]
Abstract
Skin moles and lesions can be the first signs of severe skin diseases such as cancer. This paper presents the development of an end-user device capable of capturing images and of segmenting and diagnosing moles using the ABCD rule, which assesses a mole's asymmetry, border, color, and diameter. These are the main characteristics doctors examine, each with a different weight of importance, and together they support an accurate diagnosis. For the hardware, we developed a small, compact device that can be handled easily by anyone without medical training, built around a custom-designed 3D enclosure with two white LEDs to control the lighting. The device is intended to facilitate regular at-home analysis of suspicious moles, in an indicative rather than a medical capacity. The accompanying PC software stores the images in a local database for easy tracking and analysis over time. The image processing developed for the ABCD rule is incorporated into the PC software and tested extensively on the international PH2 database of skin melanoma images to validate our segmentation and criteria evaluation. Using the developed device, we captured mole images from patients who also underwent a medical examination by a specialist using a standard dermatoscope. We thus obtained our own database of 26 images for which the specialist's diagnosis is also available. The performance measures obtained using our device are: accuracy 0.92, precision 1.0, recall 0.92, and F1-score 0.96.
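As a rough illustration of how weighted ABCD criteria can be combined into a single score, the sketch below uses the classic dermatoscopic TDS weights (1.3, 0.1, 0.5, 0.5) and the 6 mm diameter cutoff of the clinical ABCD rule; the paper's own factors of importance are not stated in the abstract, so the weights, criterion ranges, and scoring function here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ABCDScores:
    asymmetry: float    # 0 (symmetric) .. 2 (asymmetric on both axes)
    border: float       # 0 .. 8 irregular border segments
    color: float        # 1 .. 6 distinct colors present
    diameter_mm: float  # largest diameter of the lesion in millimetres

def abcd_risk_score(s: ABCDScores,
                    weights=(1.3, 0.1, 0.5, 0.5),
                    diameter_cutoff_mm: float = 6.0) -> float:
    """Weighted ABCD score; higher values indicate a more suspicious mole."""
    wa, wb, wc, wd = weights
    d_term = 1.0 if s.diameter_mm > diameter_cutoff_mm else 0.0
    return wa * s.asymmetry + wb * s.border + wc * s.color + wd * d_term

print(abcd_risk_score(ABCDScores(asymmetry=2, border=6, color=4, diameter_mm=7.5)))
```

In a device of the kind described above, each of the four criteria would be estimated automatically from the segmented mole image before being combined into such a score.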
Collapse
Affiliation(s)
- Lorant Andras Szolga
- Basics of Electronics Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania;
| | - Denisa Alice Bozga
- Basics of Electronics Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania;
| | - Camelia Florea
- Communications Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania;
| |
Collapse
|
49
|
Mariakakis A, Karkar R, Patel SN, Kientz JA, Fogarty J, Munson SA. Using Health Concept Surveying to Elicit Usable Evidence: Case Studies of a Novel Evaluation Methodology. JMIR Hum Factors 2022; 9:e30474. [PMID: 34982038 PMCID: PMC8764610 DOI: 10.2196/30474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Revised: 09/15/2021] [Accepted: 10/09/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Developers, designers, and researchers use rapid prototyping methods to project the adoption and acceptability of their health intervention technology (HIT) before the technology becomes mature enough to be deployed. Although these methods are useful for gathering feedback that advances the development of HITs, they rarely provide usable evidence that can contribute to our broader understanding of HITs. OBJECTIVE In this research, we aim to develop and demonstrate a variation of vignette testing that supports developers and designers in evaluating early-stage HIT designs while generating usable evidence for the broader research community. METHODS We proposed a method called health concept surveying for untangling the causal relationships that people develop around conceptual HITs. In health concept surveying, investigators gather reactions to design concepts through a scenario-based survey instrument. As the investigator manipulates characteristics related to their HIT, the survey instrument also measures proximal cognitive factors according to a health behavior change model to project how HIT design decisions may affect the adoption and acceptability of an HIT. Responses to the survey instrument were analyzed using path analysis to untangle the causal effects of these factors on the outcome variables. RESULTS We demonstrated health concept surveying in 3 case studies of sensor-based health-screening apps. Our first study (N=54) showed that a wait-time incentive could influence more people to see a dermatologist after a positive test for skin cancer. Our second study (N=54), evaluating a similar application design, showed that although visual explanations of algorithmic decisions could increase participant trust in negative test results, this trust alone would not have been enough to affect people's decision-making. Our third study (N=263) showed that people might prioritize test specificity or sensitivity depending on the nature of the medical condition. CONCLUSIONS Beyond the findings from our 3 case studies, our research uses the framing of the Health Belief Model to elicit and understand the intrinsic and extrinsic factors that may affect the adoption and acceptability of an HIT without having to build a working prototype. We have made our survey instrument publicly available so that others can leverage it for their own investigations.
Collapse
Affiliation(s)
- Alex Mariakakis
- Department of Computer Science, University of Toronto, Toronto, ON, Canada
| | - Ravi Karkar
- School of Computer Science & Engineering, University of Washington, Seattle, WA, United States
| | - Shwetak N Patel
- School of Computer Science & Engineering, University of Washington, Seattle, WA, United States
| | - Julie A Kientz
- Department of Human Centered Design & Engineering, University of Washington, Seattle, WA, United States
| | - James Fogarty
- School of Computer Science & Engineering, University of Washington, Seattle, WA, United States
| | - Sean A Munson
- Department of Human Centered Design & Engineering, University of Washington, Seattle, WA, United States
| |
Collapse
|
50
|
Jaworek-Korjakowska J, Brodzicki A, Cassidy B, Kendrick C, Yap MH. Interpretability of a Deep Learning Based Approach for the Classification of Skin Lesions into Main Anatomic Body Sites. Cancers (Basel) 2021; 13:6048. [PMID: 34885158 PMCID: PMC8657137 DOI: 10.3390/cancers13236048] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2021] [Revised: 11/15/2021] [Accepted: 11/24/2021] [Indexed: 12/15/2022] Open
Abstract
Over the past few decades, different clinical diagnostic algorithms have been proposed to diagnose malignant melanoma in its early stages. Furthermore, current deep learning-based approaches to skin mole detection yield impressive results in the classification of malignant melanoma. However, none of these approaches takes into account the anatomic origin of the skin lesion, even though the specific criteria for in situ and early invasive melanoma have been observed to depend strongly on the anatomic site of the body. To address this problem, we propose a deep learning-based framework that classifies skin lesions into the three most important anatomic sites: the face, the trunk and extremities, and acral locations. In this study, we take advantage of pretrained networks, including VGG19, ResNet50, Xception, DenseNet121, and EfficientNetB0, to extract features, topped with an adjusted, densely connected classifier. Furthermore, we perform an in-depth analysis of the dataset, architecture, and results to assess the effectiveness of the proposed framework. Experiments confirm the ability of the developed algorithms to classify skin lesions into the most important anatomical sites with 91.45% overall accuracy for the EfficientNetB0 architecture, which is a state-of-the-art result in this domain.
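A transfer-learning setup of the kind described above, an ImageNet-pretrained EfficientNetB0 backbone with a small densely connected head for the three body-site classes, might look like the sketch below; the head sizes, dropout rate, and choice of torchvision are assumptions, as the exact classifier configuration is not given in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_body_site_classifier(num_classes: int = 3) -> nn.Module:
    """EfficientNetB0 backbone (frozen ImageNet features) with a small dense head."""
    backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
    for p in backbone.features.parameters():
        p.requires_grad = False  # freeze pretrained features; fine-tune later if desired
    in_features = backbone.classifier[1].in_features  # 1280 for EfficientNetB0
    backbone.classifier = nn.Sequential(
        nn.Dropout(0.3),
        nn.Linear(in_features, 256),
        nn.ReLU(inplace=True),
        nn.Linear(256, num_classes),  # face, trunk/extremities, acral
    )
    return backbone

model = build_body_site_classifier().eval()
logits = model(torch.randn(2, 3, 224, 224))  # toy batch of two RGB images
print(logits.shape)  # torch.Size([2, 3])
```

In practice the frozen backbone could later be partially unfrozen and fine-tuned at a lower learning rate once the new head has converged.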
Collapse
Affiliation(s)
- Joanna Jaworek-Korjakowska
- Department of Automatic Control and Robotics, AGH University of Science and Technology, 30-059 Kraków, Poland
| | - Andrzej Brodzicki
- Department of Automatic Control and Robotics, AGH University of Science and Technology, 30-059 Kraków, Poland
| | - Bill Cassidy
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, UK; (B.C.); (C.K.); (M.H.Y.)
| | - Connah Kendrick
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, UK; (B.C.); (C.K.); (M.H.Y.)
| | - Moi Hoon Yap
- Department of Computing and Mathematics, Manchester Metropolitan University, John Dalton Building, Chester Street, Manchester M1 5GD, UK; (B.C.); (C.K.); (M.H.Y.)
| |
Collapse
|