1. Scope A, Liopyris K, Weber J, Barnhill RL, Braun RP, Curiel-Lewandrowski CN, Elder DE, Ferrara G, Grant-Kels JM, Jeunon T, Lallas A, Lin JY, Marchetti MA, Marghoob AA, Navarrete-Dechent C, Pellacani G, Soyer HP, Stratigos A, Thomas L, Kittler H, Rotemberg V, Halpern AC. International Skin Imaging Collaboration-Designated Diagnoses (ISIC-DX): Consensus terminology for lesion diagnostic labeling. J Eur Acad Dermatol Venereol 2025; 39:117-125. PMID: 38733254; DOI: 10.1111/jdv.20055.
Abstract
BACKGROUND: A common diagnostic terminology is critically important for clinical communication, education, research and artificial intelligence. Prevailing lexicons fall short of fully representing skin neoplasms. OBJECTIVES: To achieve expert consensus on diagnostic terms for skin neoplasms and their hierarchical mapping. METHODS: Diagnostic terms were extracted from textbooks, publications and extant diagnostic codes. Terms were hierarchically mapped to super-categories (e.g. 'benign') and cellular/tissue-differentiation categories (e.g. 'melanocytic'), and appended with pertinent modifiers and synonyms. These terms were evaluated using a modified Delphi consensus approach. Experts from the International Skin Imaging Collaboration (ISIC) were surveyed on agreement with terms and their hierarchical mapping; they could suggest modifying, deleting or adding terms. The consensus threshold was >75% for the initial rounds and >50% for the final round. RESULTS: Eighteen experts completed all Delphi rounds. Of 379 terms, 356 (94%) reached consensus in round one. Eleven of 226 (5%) benign-category terms, 6/140 (4%) malignant-category terms and 6/13 (46%) indeterminate-category terms did not reach initial agreement. After three rounds, the final consensus comprised 362 terms mapped to 3 super-categories and 41 cellular/tissue-differentiation categories. CONCLUSIONS: We have created, agreed upon, and made public a taxonomy for skin neoplasms and its hierarchical mapping. Further study will be needed to evaluate the utility and completeness of the lexicon.
Affiliation(s)
- Alon Scope: The Kittner Skin Cancer Screening & Research Institute, Sheba Medical Center and Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Konstantinos Liopyris: Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA; Department of Dermatology-Venereology, University of Athens Medical School, Athens, Greece
- Jochen Weber: Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Raymond L Barnhill: Department of Translational Research, Institut Curie, and UFR de Médecine, Université de Paris, Paris, France
- Ralph P Braun: Department of Dermatology, University Hospital Zurich, Zurich, Switzerland
- Clara N Curiel-Lewandrowski: Department of Dermatology, University of Arizona College of Medicine, and the University of Arizona Cancer Center Skin Cancer Institute, Tucson, Arizona, USA
- David E Elder: Department of Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Gerardo Ferrara: Anatomic Pathology and Cytopathology Unit, Istituto Nazionale Tumori IRCCS Fondazione 'G. Pascale', Naples, Italy
- Jane M Grant-Kels: Department of Dermatology, UConn Health, Farmington, Connecticut, USA; Department of Dermatology, University of Florida, Gainesville, Florida, USA
- Thiago Jeunon: Departments of Dermatology and Pathology, Hospital Federal de Bonsucesso, Rio de Janeiro, Brazil
- Aimilios Lallas: First Department of Dermatology, Aristotle University, Thessaloniki, Greece
- Jennifer Y Lin: Department of Dermatology, Brigham and Women's Hospital and Melanoma Program, Dana-Farber Cancer Institute, Boston, Massachusetts, USA
- Michael A Marchetti: Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Ashfaq A Marghoob: Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Cristian Navarrete-Dechent: Melanoma and Skin Cancer Unit and Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Giovanni Pellacani: Department of Dermatology, University of Modena and Reggio Emilia, Modena, and Dermatology Clinic, University of Rome, Rome, Italy
- Hans Peter Soyer: The University of Queensland Diamantina Institute, Dermatology Research Centre, University of Queensland, Brisbane, Queensland, Australia
- Alexander Stratigos: 1st Department of Dermatology-Venereology, Andreas Sygros Hospital, National and Kapodistrian University of Athens School of Medicine, Athens, Greece
- Luc Thomas: Dermatology Department, Hôpital Universitaire Lyon Sud, Hospices Civils de Lyon, Pierre-Bénite, France
- Harald Kittler: Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Veronica Rotemberg: Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Allan C Halpern: Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, New York, USA
2. Ahmad I, Alqurashi F. Early cancer detection using deep learning and medical imaging: A survey. Crit Rev Oncol Hematol 2024; 204:104528. PMID: 39413940; DOI: 10.1016/j.critrevonc.2024.104528.
Abstract
Cancer, characterized by the uncontrolled division of abnormal cells that harm body tissues, necessitates early detection for effective treatment. Medical imaging is crucial for identifying many cancers, yet its manual interpretation by radiologists is often subjective, labour-intensive, and time-consuming, creating a critical need for automated decision support in cancer detection and diagnosis. Previous surveys of cancer detection methods have mostly focused on specific cancers and a limited set of techniques. This study presents a comprehensive survey of cancer detection methods, reviewing 99 research articles collected from the Web of Science, IEEE, and Scopus databases and published between 2020 and 2024. The scope encompasses 12 types of cancer: breast, cervical, ovarian, prostate, esophageal, liver, pancreatic, colon, lung, oral, brain, and skin. The study discusses detection techniques spanning medical imaging data, image preprocessing, segmentation, feature extraction, deep learning and transfer learning methods, and evaluation metrics. We summarize the datasets and techniques together with research challenges and limitations, and conclude with future directions for enhancing cancer detection techniques.
Affiliation(s)
- Istiak Ahmad: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Information and Communication Technology, Griffith University, Queensland 4111, Australia
- Fahad Alqurashi: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3. Pillai R, Sharma N, Gupta S, Gupta D, Juneja S, Malik S, Qin H, Alqahtani MS, Ksibi A. Enhanced skin cancer diagnosis through grid search algorithm-optimized deep learning models for skin lesion analysis. Front Med (Lausanne) 2024; 11:1436470. PMID: 39574908; PMCID: PMC11578711; DOI: 10.3389/fmed.2024.1436470.
Abstract
Skin cancer is a widespread and dangerous disease that requires prompt and precise detection for successful treatment. This research introduces a method for identifying skin lesions using deep learning (DL). The study employs three convolutional neural networks (CNNs), CNN1, CNN2, and CNN3, each assigned a distinct classification task: Task 1 is binary classification of whether a skin lesion is present; Task 2 distinguishes benign from malignant lesions; Task 3 is multiclass classification of skin lesion images into one of seven lesion types. The optimal hyperparameters for the proposed CNN models were determined with the grid search optimization technique, which selects values for both architectural and fine-tuning hyperparameters that are essential for effective learning. Model performance was rigorously assessed through loss, accuracy, and confusion matrices, using three labeled datasets from the International Skin Imaging Collaboration (ISIC) Archive. The results indicate that the proposed CNN models can reliably detect skin lesions, assess their malignancy, and classify lesion types, aiding healthcare professionals in making prompt and precise diagnostic judgments. This approach offers a promising avenue for improving skin cancer diagnosis, potentially reducing avoidable deaths and extending the lifespan of people diagnosed with skin cancer, and it advances biomedical image processing for skin lesion identification by leveraging DL algorithms.
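For illustration, the grid search optimization described in this entry can be sketched in a few lines. The toy scoring function, the two hyperparameters, and their candidate values below are placeholders, not the authors' actual CNN training setup.

```python
from itertools import product

def grid_search(train_fn, param_grid):
    """Exhaustively evaluate every hyperparameter combination and
    return the best-scoring one (higher score = better)."""
    names = sorted(param_grid)
    best_score, best_params = float("-inf"), None
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = train_fn(**params)  # train/validate a model with these params
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy stand-in for "train a CNN and return validation accuracy":
# it peaks at lr=0.01, dropout=0.25, so the search should recover those.
def toy_train(lr, dropout):
    return 1.0 - abs(lr - 0.01) * 10 - abs(dropout - 0.25)

grid = {"lr": [0.1, 0.01, 0.001], "dropout": [0.25, 0.5]}
best_params, best_score = grid_search(toy_train, grid)
```

In practice, `toy_train` would be replaced by a function that trains a CNN with the given hyperparameters and returns a validation metric.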
Affiliation(s)
- Rudresh Pillai: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab
- Neha Sharma: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab
- Sheifali Gupta: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab
- Deepali Gupta: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab
- Sapna Juneja: CSE(AI), KIET Group of Institutions, Ghaziabad, India
- Saurav Malik: Department of Environmental Health, School of Public Health, Harvard University, Boston, MA, United States; Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Hong Qin: School of Data Science, Department of Computer Science, Old Dominion University, Norfolk, VA, United States; University of Tennessee at Chattanooga, Chattanooga, TN, United States
- Mohammed S. Alqahtani: Department of Radiological Sciences, College of Applied Medical Sciences, King Khalid University, Abha, Saudi Arabia; BioImaging Unit, Space Research Centre, University of Leicester, Leicester, United Kingdom
- Amel Ksibi: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
4. Saghir U, Singh SK, Hasan M. Skin Cancer Image Segmentation Based on Midpoint Analysis Approach. J Imaging Inform Med 2024; 37:2581-2596. PMID: 38627267; PMCID: PMC11522265; DOI: 10.1007/s10278-024-01106-w.
Abstract
Skin cancer is a common disease that affects people of all ages, and mortality rises with late diagnosis, so an automated mechanism for early-stage detection is needed to reduce the death rate. Visual examination with scanning or imaging screening is a common way to detect the disease, but because skin cancer resembles other conditions, this approach is the least accurate. This article introduces a segmentation mechanism that operates on the ISIC dataset to divide skin images into critical and non-critical sections; the main objective is to segment lesions from dermoscopic skin images. The suggested framework has two steps. The first step preprocesses the image: a bottom-hat filter removes hair, and the image is enhanced using the discrete cosine transform (DCT) and color coefficients. In the second step, a background-subtraction method with midpoint analysis segments the region of interest, achieving an accuracy of 95.30%. Segmentation is validated by comparing the segmented images against the ground-truth data provided with the ISIC dataset.
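The bottom-hat hair-removal step mentioned above can be sketched as a morphological closing minus the original image, which responds strongly to thin dark structures such as hairs. This is a generic illustration on a synthetic image with an illustrative threshold, not the paper's exact pipeline.

```python
import numpy as np
from scipy.ndimage import grey_closing

def bottom_hat(img, size=5):
    """Morphological bottom-hat transform: closing(img) - img.
    Thin dark structures (e.g. hairs) on a brighter background
    yield large responses."""
    return grey_closing(img, size=(size, size)) - img

# Synthetic example: bright skin patch crossed by one dark "hair" line.
skin = np.full((32, 32), 200.0)
skin[16, :] = 50.0                 # dark horizontal hair
response = bottom_hat(skin, size=5)
mask = response > 100              # hair mask; threshold is illustrative
```

The resulting mask marks hair pixels, which a real pipeline would then inpaint before segmentation.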
Affiliation(s)
- Uzma Saghir: Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India
- Shailendra Kumar Singh: Dept. of Computer Science & Engineering, Lovely Professional University, Punjab, 144001, India
- Moin Hasan: Dept. of Computer Science & Engineering, Jain Deemed-to-be-University, Bengaluru, 562112, India
5. Lawal M, Yi D. Polar contrast attention and skip cross-channel aggregation for efficient learning in U-Net. Comput Biol Med 2024; 181:109047. PMID: 39182369; DOI: 10.1016/j.compbiomed.2024.109047.
Abstract
The performance of lesion semantic segmentation models has improved steadily with mechanisms such as attention, skip connections, and deep supervision. However, these advances often come at the cost of computational requirements, necessitating powerful graphics processing units with substantial video memory; consequently, some models perform poorly, or not at all, on more affordable edge devices such as smartphones and other point-of-care devices. To tackle this challenge, this paper introduces a lesion segmentation model with a low parameter count and minimal operations. The model incorporates polar transformations to simplify images, facilitating faster training and improved performance. We leverage the characteristics of polar images by directing the model's focus to the areas most likely to contain segmentation information through a learning-efficient polar-based contrast attention (PCA), which uses Hadamard products to implement a lightweight attention mechanism without significantly increasing model parameters or complexity. We also present a novel skip cross-channel aggregation (SC2A) approach for sharing cross-channel corrections, introducing Gaussian depthwise convolution to enhance nonlinearity. Extensive experiments on the ISIC 2018 and Kvasir datasets demonstrate that our model surpasses state-of-the-art models while maintaining only about 25K parameters, and it generalizes well to cross-domain data, as confirmed on the PH2 and CVC-Polyp datasets. In a mobile-setting evaluation against other lightweight models, the proposed model outperforms other advanced models in IoU, Dice score, and running time.
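A polar transformation of the kind this model relies on can be sketched as a nearest-neighbor resampling of the image onto an (r, θ) grid centered on the lesion. The grid sizes and the synthetic disk "lesion" below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def to_polar(img, n_r=32, n_theta=64):
    """Resample a square grayscale image onto an (r, theta) grid
    (nearest-neighbor), centered on the image midpoint."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cy, cx)
    r = np.linspace(0.0, max_r, n_r)
    t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# A centered disk "lesion" becomes a horizontal band: each radius maps to
# one row, so the lesion boundary turns into a nearly straight edge.
yy, xx = np.mgrid[0:64, 0:64]
disk = ((yy - 31.5) ** 2 + (xx - 31.5) ** 2 <= 15 ** 2).astype(float)
polar = to_polar(disk)
```

Simplifying the boundary geometry this way is what lets a polar-domain model concentrate attention on a narrow band of rows.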
Affiliation(s)
- Mohammed Lawal: Department of Computing Science, University of Aberdeen, Aberdeen, AB24 3UE, United Kingdom
- Dewei Yi: Department of Computing Science, University of Aberdeen, Aberdeen, AB24 3UE, United Kingdom
6. Jairath N, Pahalyants V, Shah R, Weed J, Carucci JA, Criscito MC. Artificial Intelligence in Dermatology: A Systematic Review of Its Applications in Melanoma and Keratinocyte Carcinoma Diagnosis. Dermatol Surg 2024; 50:791-798. PMID: 38722750; DOI: 10.1097/dss.0000000000004223.
Abstract
BACKGROUND: Limited access to dermatologic care may pose an obstacle to the early detection and treatment of cutaneous malignancies; artificial intelligence (AI) in skin cancer diagnosis may alleviate potential care gaps. OBJECTIVE: To offer an in-depth exploration of published AI algorithms trained on dermoscopic and macroscopic clinical images for the diagnosis of melanoma, basal cell carcinoma, and cutaneous squamous cell carcinoma (cSCC). METHODS: Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, a systematic review was conducted on peer-reviewed articles published between January 1, 2000, and January 26, 2023. RESULTS AND DISCUSSION: Among the 232 studies in this review, the overall accuracy, sensitivity, and specificity of AI for tumor detection averaged 90%, 87%, and 91%, respectively, and model performance improved over time. Despite this seemingly impressive performance, the paucity of external validation and the limited representation of cSCC and skin of color in the data sets limit the generalizability of current models. In addition, dermatologists coauthored only 12.9% of the studies included in the review. Moving forward, it is imperative to prioritize robust data reporting, inclusive data collection, and interdisciplinary collaboration to ensure the development of equitable and effective AI tools.
Affiliation(s)
- Neil Jairath: The Ronald O. Perelman Department of Dermatology, New York University Grossman School of Medicine, New York, New York
- Vartan Pahalyants: The Ronald O. Perelman Department of Dermatology, New York University Grossman School of Medicine, New York, New York
- Rohan Shah: Rutgers University School of Medicine, Newark, New Jersey
- Jason Weed: The Ronald O. Perelman Department of Dermatology, New York University Grossman School of Medicine, New York, New York
- John A Carucci: The Ronald O. Perelman Department of Dermatology, New York University Grossman School of Medicine, New York, New York
- Maressa C Criscito: The Ronald O. Perelman Department of Dermatology, New York University Grossman School of Medicine, New York, New York
7. Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024; 178:108742. PMID: 38875908; DOI: 10.1016/j.compbiomed.2024.108742.
Abstract
In recent years, the accuracy of pigmented skin lesion classification using artificial intelligence algorithms has improved significantly, and intelligent analysis and classification systems are significantly superior to the visual diagnostic methods used by dermatologists and oncologists. However, their application in clinical practice remains severely limited by a lack of generalizability and the risk of misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as of promising directions for future research. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. In total, 10,589 research and review articles were retrieved from electronic scientific publishers, of which 171 were included in this systematic review. The selected articles are organized by neural network approach and described in the corresponding sections of the manuscript, covering automated skin cancer recognition systems from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. Finally, the analysis identifies future research directions, prospects, and the potential for further development of automated neural network systems for classifying pigmented skin lesions.
Affiliation(s)
- U A Lyakhova: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia
- P A Lyakhov: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia; North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia
8. Saifullah S, Mercier D, Lucieri A, Dengel A, Ahmed S. The privacy-explainability trade-off: unraveling the impacts of differential privacy and federated learning on attribution methods. Front Artif Intell 2024; 7:1236947. PMID: 39021435; PMCID: PMC11253022; DOI: 10.3389/frai.2024.1236947.
Abstract
Since the advent of deep learning (DL), the field has witnessed a continuous stream of innovations. However, the translation of these advancements into practical applications has not kept pace, particularly in safety-critical domains where artificial intelligence (AI) must meet stringent regulatory and ethical standards. This is underscored by the ongoing research in eXplainable AI (XAI) and privacy-preserving machine learning (PPML), which seek to address some limitations associated with these opaque and data-intensive models. Despite brisk research activity in both fields, little attention has been paid to their interaction. This work is the first to thoroughly investigate the effects of privacy-preserving techniques on explanations generated by common XAI methods for DL models. A detailed experimental analysis is conducted to quantify the impact of private training on the explanations provided by DL models, applied to six image datasets and five time series datasets across various domains. The analysis comprises three privacy techniques, nine XAI methods, and seven model architectures. The findings suggest non-negligible changes in explanations through the implementation of privacy measures. Apart from reporting individual effects of PPML on XAI, the paper gives clear recommendations for the choice of techniques in real applications. By unveiling the interdependencies of these pivotal technologies, this research marks an initial step toward resolving the challenges that hinder the deployment of AI in safety-critical settings.
Affiliation(s)
- Saifullah Saifullah: Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Rhineland-Palatinate, Germany; Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany
- Dominique Mercier: Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Rhineland-Palatinate, Germany; Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany
- Adriano Lucieri: Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Rhineland-Palatinate, Germany; Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany
- Andreas Dengel: Department of Computer Science, RPTU Kaiserslautern-Landau, Kaiserslautern, Rhineland-Palatinate, Germany; Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany
- Sheraz Ahmed: Smart Data and Knowledge Services (SDS), DFKI GmbH, Kaiserslautern, Rhineland-Palatinate, Germany
9. Guan H, Yap PT, Bozoki A, Liu M. Federated learning for medical image analysis: A survey. Pattern Recognit 2024; 151:110424. PMID: 38559674; PMCID: PMC10976951; DOI: 10.1016/j.patcog.2024.110424.
Abstract
Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system, including client end, server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we provide a review of existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.
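The server-side core of the federated learning setup surveyed here is FedAvg-style aggregation: averaging each client's parameter tensors weighted by its local dataset size. The layer shapes and client sizes in this sketch are made-up examples, not any surveyed method's configuration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One FedAvg aggregation round: average each parameter tensor
    across clients, weighted by local dataset size. client_weights is
    a list (per client) of lists of numpy arrays (per layer)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Two clients with a single 2x2 "layer"; client A holds 3x more data,
# so its parameters dominate the aggregate.
a = [np.ones((2, 2))]
b = [np.zeros((2, 2))]
global_model = fed_avg([a, b], client_sizes=[75, 25])
```

No raw images leave a client in this scheme; only the parameter tensors are exchanged with the server.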
Affiliation(s)
- Hao Guan: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki: Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
10. Vidhani FR, Woo JJ, Zhang YB, Olsen RJ, Ramkumar PN. Automating Linear and Angular Measurements for the Hip and Knee After Computed Tomography: Validation of a Three-Stage Deep Learning and Computer Vision-Based Pipeline for Pathoanatomic Assessment. Arthroplast Today 2024; 27:101394. PMID: 39071819; PMCID: PMC11282415; DOI: 10.1016/j.artd.2024.101394.
Abstract
Background: Variability in the bony morphology of pathologic hips/knees is a challenge in automating preoperative computed tomography (CT) scan measurements. With the increasing prevalence of CT for advanced preoperative planning, processing these data represents a critical bottleneck in presurgical planning, research, and development. The purpose of this study was to demonstrate a reproducible and scalable methodology for analyzing CT-based hip and knee anatomy for perioperative planning and execution. Methods: One hundred patients with preoperative CT scans undergoing total knee arthroplasty for osteoarthritis were processed. A two-step deep learning pipeline of classification and segmentation models was developed that identifies landmark images and then generates contour representations. We utilized an open-source computer vision library to compute measurements. Classification models were assessed by accuracy, precision, and recall; segmentation models were evaluated using Dice and mean Intersection over Union (IoU) metrics. Contour measurements were compared against manual measurements to validate posterior condylar axis angle, sulcus angle, trochlear groove-tibial tuberosity distance, acetabular anteversion, and femoral version. Results: Classifiers identified landmark images with accuracies of 0.91 and 0.88 for the hip and knee models, respectively. Segmentation models demonstrated mean IoU scores above 0.95, with the highest Dice coefficient of 0.957 [0.954-0.961] (UNet3+) and the highest mean IoU of 0.965 [0.961-0.969] (Attention U-Net). There were no statistically significant differences between automatic and manual measurements (P > 0.05). Average pipeline times were 48.65 ± 4.41 s to preprocess, 8.36 ± 3.40 s to classify/retrieve landmark images, <1 s to segment images, and 2.58 ± 1.92 min to obtain measurements. Conclusions: A fully automated three-stage deep learning and computer vision-based pipeline of classification and segmentation models accurately localized, segmented, and measured landmark hip and knee images for patients undergoing total knee arthroplasty. Incorporation of clinical parameters, like patient-reported outcome measures and instability risk, will be important considerations alongside anatomic parameters.
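Once landmarks are extracted, an angular measurement of the kind validated above reduces to the angle between two landmark-defined axes. The sketch below uses hypothetical 2D landmark coordinates on a CT slice, not the study's actual landmarks or measurement code.

```python
import numpy as np

def axis_angle_deg(p1, p2, q1, q2):
    """Unsigned angle in degrees between line p1->p2 and line q1->q2,
    computed from 2D landmark coordinates. Lines (not rays), so the
    result is folded into [0, 90]."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    ang = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return min(ang, 180.0 - ang)

# Hypothetical landmarks: a condylar axis tilted 3 degrees from a
# horizontal reference axis.
tilt = axis_angle_deg((0, 0), (100, 0),
                      (0, 0), (100, np.tan(np.radians(3)) * 100))
```

The same dot-product construction extends directly to 3D landmark points for version and anteversion measurements.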
Collapse
Affiliation(s)
- Faizaan R. Vidhani
- Brown University/The Warren Alpert School of Brown University, Providence, RI, USA
| | - Joshua J. Woo
- Brown University/The Warren Alpert School of Brown University, Providence, RI, USA
| | - Yibin B. Zhang
- Harvard Medical School/Brigham and Women’s, Boston, MA, USA
| | - Reena J. Olsen
- Sports Medicine Institute, Hospital for Special Surgery, New York, NY, USA
11
Kim C, Gadgil SU, DeGrave AJ, Omiye JA, Cai ZR, Daneshjou R, Lee SI. Transparent medical image AI via an image-text foundation model grounded in medical literature. Nat Med 2024; 30:1154-1165. [PMID: 38627560 DOI: 10.1038/s41591-024-02887-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/09/2023] [Accepted: 02/27/2024] [Indexed: 04/21/2024]
Abstract
Building trustworthy and transparent image-based medical artificial intelligence (AI) systems requires the ability to interrogate data and models at all stages of the development pipeline, from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. In the present study, we present a foundation model approach, named MONET (medical concept retriever), which learns how to connect medical images with text and densely scores images on concept presence to enable important tasks in medical AI development and deployment such as data auditing, model auditing and model interpretation. Dermatology provides a demanding use case for the versatility of MONET, due to the heterogeneity in diseases, skin tones and imaging modalities. We trained MONET based on 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images as verified by board-certified dermatologists, competitively with supervised models built on previously concept-annotated dermatology datasets of clinical images. We demonstrate how MONET enables AI transparency across the entire AI system development pipeline, from building inherently interpretable models to dataset and model auditing, including a case study dissecting the results of an AI clinical trial.
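MONET's dense concept scoring follows the contrastive image-text pattern: embed images and concept descriptions into a shared space and rank by cosine similarity. A schematic numpy sketch with random stand-in embeddings (names, dimensions, and concept labels are assumptions, not MONET's actual API):

```python
import numpy as np

def normalize(x: np.ndarray) -> np.ndarray:
    """L2-normalize along the last axis so dot products become cosines."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def concept_scores(image_emb: np.ndarray, concept_embs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one image embedding and each concept
    text embedding; a higher score suggests the concept is present."""
    return normalize(image_emb) @ normalize(concept_embs).T

# Random stand-ins for learned embeddings (real systems use paired
# image/text encoders trained contrastively).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(512,))
concept_embs = rng.normal(size=(3, 512))  # e.g. "ulceration", "blue veil", "hair"
scores = concept_scores(image_emb, concept_embs)
print(scores.shape)  # → (3,) — one score per concept
```

Scoring every image in a dataset against a vocabulary of concepts is what enables the auditing use cases described in the abstract.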
Affiliation(s)
- Chanwoo Kim
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Soham U Gadgil
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Alex J DeGrave
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Medical Scientist Training Program, University of Washington, Seattle, WA, USA
- Jesutofunmi A Omiye
- Department of Dermatology, Stanford School of Medicine, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Zhuo Ran Cai
- Program for Clinical Research and Technology, Stanford University, Stanford, CA, USA
- Roxana Daneshjou
- Department of Dermatology, Stanford School of Medicine, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Su-In Lee
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
12
Pewton SW, Cassidy B, Kendrick C, Yap MH. Dermoscopic dark corner artifacts removal: Friend or foe? Comput Methods Programs Biomed 2024; 244:107986. [PMID: 38157827 DOI: 10.1016/j.cmpb.2023.107986] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/11/2023] [Revised: 12/09/2023] [Accepted: 12/16/2023] [Indexed: 01/03/2024]
Abstract
BACKGROUND AND OBJECTIVES One of the more significant obstacles in the classification of skin cancer is the presence of artifacts. This paper investigates the effect of dark corner artifacts, which result from the use of dermoscopes, on the performance of a deep learning binary classification task. Previous research attempted to remove and inpaint dark corner artifacts with the intention of creating an ideal condition for models. However, such research has been shown to be inconclusive due to a lack of available datasets with corresponding labels for dark corner artifact cases. METHODS To address these issues, we label 10,250 skin lesion images from publicly available datasets and introduce a balanced dataset with an equal number of melanoma and non-melanoma cases. The training set comprises 6126 images without artifacts, and the testing set comprises 4124 images with dark corner artifacts. We conduct three experiments to provide new understanding of the effects of dark corner artifacts, including inpainted and synthetically generated examples, on a deep learning method. RESULTS Our results suggest that superimposing synthetic dark corner artifacts onto the training set improved model performance, particularly in terms of the true negative rate. This indicates that the deep learning model learned to ignore dark corner artifacts, rather than treating them as melanoma, when such artifacts were introduced into the training set. Further, we propose a new approach to quantifying heatmaps indicating network focus, using a root mean square measure of the brightness intensity in the different regions of the heatmaps. CONCLUSIONS The proposed artifact methods can be used in future experiments to help alleviate possible impacts on model performance. Additionally, the newly proposed heatmap quantification analysis will help to better understand the relationships between heatmap results and other model performance metrics.
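The two ingredients of this study — superimposing a synthetic dark corner artifact and an RMS brightness measure over heatmap regions — can be sketched in numpy (the radius fraction and function names are illustrative choices, not the authors' implementation):

```python
import numpy as np

def add_dark_corners(img: np.ndarray, radius_frac: float = 0.55) -> np.ndarray:
    """Zero out pixels outside a centred circle, mimicking the circular
    field of view of a dermoscope."""
    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    r = radius_frac * min(h, w)
    mask = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= r ** 2
    out = img.copy()
    out[~mask] = 0
    return out

def rms_region_brightness(heatmap: np.ndarray, radius_frac: float = 0.55) -> dict:
    """Root-mean-square brightness inside vs. outside the circular field
    of view -- a proxy for where the network focuses."""
    h, w = heatmap.shape
    yy, xx = np.ogrid[:h, :w]
    r = radius_frac * min(h, w)
    inside = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= r ** 2
    rms = lambda v: float(np.sqrt(np.mean(v.astype(float) ** 2))) if v.size else 0.0
    return {"inside": rms(heatmap[inside]), "corners": rms(heatmap[~inside])}

img = np.full((64, 64), 200, dtype=np.uint8)  # uniform toy "lesion image"
dca = add_dark_corners(img)
print(dca[0, 0], dca[32, 32])  # → 0 200 (corner zeroed, centre untouched)
```

In the paper's terms, a model that has learned to ignore the artifact should show high "inside" and low "corners" heatmap energy.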
Affiliation(s)
- Samuel William Pewton
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
- Bill Cassidy
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
- Connah Kendrick
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
- Moi Hoon Yap
- Department of Computing and Mathematics, Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
13
Dai W, Liu R, Wu T, Wang M, Yin J, Liu J. Deeply Supervised Skin Lesions Diagnosis With Stage and Branch Attention. IEEE J Biomed Health Inform 2024; 28:719-729. [PMID: 37624725 DOI: 10.1109/jbhi.2023.3308697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 08/27/2023]
Abstract
Accurate and unbiased examination of skin lesions is critical for the early diagnosis and treatment of skin diseases. Visual features of skin lesions vary significantly because images are collected from patients with different lesion colours and morphologies using dissimilar imaging equipment. Recent studies have reported that ensembled convolutional neural networks (CNNs) are practical for classifying such images for the early diagnosis of skin disorders. However, the practical use of these ensembled CNNs is limited because the networks are heavyweight and inadequate for processing contextual information. Although lightweight networks (e.g., MobileNetV3 and EfficientNet) were developed to reduce parameter counts for deploying deep neural networks on mobile devices, insufficient depth of feature representation restricts their performance. To address these limitations, we develop a new lightweight and effective neural network, HierAttn. HierAttn applies a novel deep supervision strategy to learn local and global features using multi-stage and multi-branch attention mechanisms with a single training loss. The efficacy of HierAttn was evaluated on the dermoscopy image dataset ISIC2019 and the smartphone photo dataset PAD-UFES-20 (PAD2020). The experimental results show that HierAttn achieves the best accuracy and area under the curve (AUC) among state-of-the-art lightweight networks.
14
Zhang L, Xiao X, Wen J, Li H. MDKLoss: Medicine domain knowledge loss for skin lesion recognition. Math Biosci Eng 2024; 21:2671-2690. [PMID: 38454701 DOI: 10.3934/mbe.2024118] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 03/09/2024]
Abstract
Methods based on deep learning have shown good advantages in skin lesion recognition. However, the diversity of lesion shapes and the influence of noise disturbances such as hair, bubbles, and markers lead to large intra-class differences and small inter-class differences, which existing methods have not yet effectively resolved. In addition, most existing methods enhance skin lesion recognition by improving deep learning models without considering the guidance of medical knowledge about skin lesions. In this paper, we innovatively construct feature associations between different lesions using medical knowledge and design a medical domain knowledge loss function (MDKLoss) based on these associations. By expanding the gap between samples of various lesion categories, MDKLoss enhances the capacity of deep learning models to differentiate between lesions and consequently boosts classification performance. Extensive experiments on the ISIC2018 and ISIC2019 datasets show that the proposed method achieves maximum accuracies of 91.6% and 87.6%, respectively. Furthermore, compared with existing state-of-the-art loss functions, the proposed method demonstrates its effectiveness, universality, and superiority.
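The abstract does not give MDKLoss's formula, but a knowledge-weighted margin penalty of the kind described — pushing apart samples from classes that medical knowledge marks as dissimilar — might be sketched as follows (entirely hypothetical; the relation matrix, margin, and function name are made-up stand-ins, not MDKLoss itself):

```python
import numpy as np

def knowledge_margin_loss(embeddings, labels, relation, margin=1.0):
    """Toy knowledge-weighted contrastive penalty: pairs from classes that
    domain knowledge marks as unrelated (relation ~ 0) are pushed at least
    `margin` apart; clinically similar classes are penalized less."""
    loss, n_pairs = 0.0, 0
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                continue
            d = np.linalg.norm(embeddings[i] - embeddings[j])
            weight = 1.0 - relation[labels[i], labels[j]]  # unrelated -> strong push
            loss += weight * max(0.0, margin - d) ** 2
            n_pairs += 1
    return loss / max(n_pairs, 1)

# 3 lesion classes; relation[i, j] in [0, 1]: how clinically similar they are.
relation = np.array([[1.0, 0.8, 0.1],
                     [0.8, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
emb = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 0.0]])
labels = [0, 1, 2]
print(knowledge_margin_loss(emb, labels, relation))
```

Here only the close pair from the two clinically similar classes incurs a (down-weighted) penalty; the well-separated pairs contribute nothing.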
Affiliation(s)
- Li Zhang
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China
- Department of Dermatology, Guangdong Second Provincial General Hospital, Guangzhou 510317, China
- Department of Dermatology, Ningbo No. 6 Hospital, Ningbo 315040, China
- Xiangling Xiao
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Ju Wen
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China
- Department of Dermatology, Guangdong Second Provincial General Hospital, Guangzhou 510317, China
- Huihui Li
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China
15
Zhang S, Metaxas D. On the challenges and perspectives of foundation models for medical image analysis. Med Image Anal 2024; 91:102996. [PMID: 37857067 DOI: 10.1016/j.media.2023.102996] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 05/24/2023] [Revised: 09/24/2023] [Accepted: 10/04/2023] [Indexed: 10/21/2023]
Abstract
This article discusses the opportunities, applications and future directions of large-scale pretrained models, i.e., foundation models, which promise to significantly improve the analysis of medical images. Medical foundation models have immense potential for solving a wide range of downstream tasks, as they can help to accelerate the development of accurate and robust models, reduce the dependence on large amounts of labeled data, and preserve the privacy and confidentiality of patient data. Specifically, we illustrate the "spectrum" of medical foundation models, ranging from general imaging models to modality-specific and organ/task-specific models, and highlight their challenges, opportunities and applications. We also discuss how foundation models can be leveraged in downstream medical tasks to enhance the accuracy and efficiency of medical image analysis, leading to more precise diagnosis and treatment decisions.
Affiliation(s)
- Shaoting Zhang
- University of Electronic Science and Technology of China, Chengdu, Sichuan, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China.
16
Rai HM, Yoo J. A comprehensive analysis of recent advancements in cancer detection using machine learning and deep learning models for improved diagnostics. J Cancer Res Clin Oncol 2023; 149:14365-14408. [PMID: 37540254 DOI: 10.1007/s00432-023-05216-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/20/2023] [Accepted: 07/26/2023] [Indexed: 08/05/2023]
Abstract
PURPOSE Millions of people lose their lives to fatal diseases. Cancer is one of the most fatal; contributing factors include obesity, alcohol consumption, infections, ultraviolet radiation, smoking, and unhealthy lifestyles. Cancer is abnormal and uncontrolled tissue growth inside the body that may spread to body parts other than where it originated. It is therefore essential to diagnose cancer at an early stage to provide correct and timely treatment. Because manual diagnosis and diagnostic error may cause the death of many patients, much research is ongoing into the automatic and accurate detection of cancer at an early stage. METHODS In this paper, we present a comparative analysis of recent advancements in the detection of various cancer types using traditional machine learning (ML) and deep learning (DL) models. We include four types of cancer (brain, lung, skin, and breast) and their detection using ML and DL techniques. This extensive review covers a total of 130 studies, of which 56 concern ML-based and 74 concern DL-based cancer detection techniques. Only peer-reviewed research papers published in the recent 5-year span (2018-2023) have been included, analyzed by year of publication, features utilized, best model, dataset/images utilized, and best accuracy. We review ML- and DL-based techniques for cancer detection separately and use accuracy as the performance evaluation metric to maintain homogeneity when comparing classifier efficiency. RESULTS Among all the reviewed literature, DL techniques achieved the highest accuracy of 100%, while ML techniques achieved 99.89%. The lowest accuracies achieved using DL and ML approaches were 70% and 75.48%, respectively. The difference in accuracy between the highest- and lowest-performing models is about 28.8% for skin cancer detection. In addition, the key findings and challenges for each type of cancer detection using ML and DL techniques are presented. The comparative analysis between the best- and worst-performing models, along with overall key findings and challenges, is provided for future research purposes. Although the analysis is based on accuracy as the performance metric and various parameters, the results demonstrate a significant scope for improvement in classification efficiency. CONCLUSION The paper concludes that both ML and DL techniques hold promise for the early detection of various cancer types. However, the study identifies specific challenges that need to be addressed for the widespread implementation of these techniques in clinical settings. The presented results offer valuable guidance for future research in cancer detection, emphasizing the need for continued advancements in ML- and DL-based approaches to improve diagnostic accuracy and ultimately save more lives.
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, 1342 Seongnam-daero, Sujeong-gu, Seongnam-si, 13120, Gyeonggi-do, Republic of Korea
17
Ichim L, Mitrica RI, Serghei MO, Popescu D. Detection of Malignant Skin Lesions Based on Decision Fusion of Ensembles of Neural Networks. Cancers (Basel) 2023; 15:4946. [PMID: 37894313 PMCID: PMC10605379 DOI: 10.3390/cancers15204946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/01/2023] [Revised: 10/02/2023] [Accepted: 10/08/2023] [Indexed: 10/29/2023] Open
Abstract
Skin cancer, and especially melanoma, is an increasingly dangerous health problem. Because some types of skin cancer carry a high mortality rate, lesions need to be detected in the early stages and treated urgently. The use of neural network ensembles for detecting objects of interest in images has gained growing interest due to improved performance. In this sense, this paper proposes two ensembles of neural networks, based on fusing the decisions of the component networks, for detecting four skin lesions (basal cell carcinoma, melanoma, benign keratosis, and melanocytic nevi). The first system is based on separate learning of three neural networks (MobileNet V2, DenseNet 169, and EfficientNet B2), with multiple weights for the four lesion classes and a weighted overall prediction. The second system is made up of six binary models (one for each pair of classes) for each network; fusion and prediction are conducted by weighted summation per class and per model. In total, 18 such binary models are considered. The 91.04% global accuracy of this set of binary models is superior to that of the first system (89.62%). Taken separately, only the binary classifications within the system achieved better individual accuracy. The individual F1 scores for each class and the global system varied from 81.36% to 94.17%. Finally, a critical comparison is made with similar works from the literature.
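Weighted summation per class and per model, as used in both systems, is ordinary soft-voting fusion; a toy numpy sketch (the probabilities and weights are invented for illustration, not the paper's trained values):

```python
import numpy as np

def fuse(probabilities: np.ndarray, weights: np.ndarray):
    """Weighted-sum decision fusion: per-model class probabilities
    (n_models, n_classes) are combined with per-model weights and the
    fused class is the argmax of the weighted sum."""
    fused = (probabilities * weights).sum(axis=0)
    return fused / fused.sum(), int(np.argmax(fused))

# Three hypothetical networks voting over four lesion classes
# (BCC, melanoma, benign keratosis, melanocytic nevus).
probs = np.array([[0.10, 0.60, 0.20, 0.10],
                  [0.05, 0.55, 0.30, 0.10],
                  [0.20, 0.30, 0.40, 0.10]])
weights = np.array([[1.0], [1.2], [0.8]])  # per-model weights (could be per-class)
fused, pred = fuse(probs, weights)
print(pred)  # → 1 (melanoma)
```

The paper's second system applies the same idea over 18 pairwise binary models instead of three multiclass ones, but the fusion arithmetic is identical.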
Affiliation(s)
- Loretta Ichim
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- “Ștefan S. Nicolau” Institute of Virology, 030304 Bucharest, Romania
- Razvan-Ionut Mitrica
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Madalina-Oana Serghei
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Dan Popescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
18
Behara K, Bhero E, Agee JT. Skin Lesion Synthesis and Classification Using an Improved DCGAN Classifier. Diagnostics (Basel) 2023; 13:2635. [PMID: 37627894 PMCID: PMC10453872 DOI: 10.3390/diagnostics13162635] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 07/22/2023] [Revised: 08/06/2023] [Accepted: 08/07/2023] [Indexed: 08/27/2023] Open
Abstract
The prognosis for patients with skin cancer improves with regular screening and checkups. Unfortunately, many people with skin cancer do not receive a diagnosis until the disease has advanced beyond the point of effective therapy. Early detection is critical, and automated diagnostic technologies like dermoscopy, an imaging technique that detects skin lesions early in the disease, are a driving factor. The lack of annotated data and class-imbalanced datasets makes the use of automated diagnostic methods challenging for skin lesion classification. In recent years, deep learning models have performed well in medical diagnosis, but such models require a substantial amount of annotated data for training. Applying a data augmentation method based on generative adversarial networks (GANs) is a plausible solution, generating synthetic images to address the problem. This article proposes a skin lesion synthesis and classification model based on an Improved Deep Convolutional Generative Adversarial Network (DCGAN). The proposed system generates realistic images using several convolutional neural networks, making training easier. Scaling, normalization, sharpening, color transformation, and median filters enhance image details during training. The proposed model uses generator and discriminator networks, global average pooling with 2 × 2 fractional stride, backpropagation with a constant learning rate of 0.01 instead of 0.0002, and the most effective hyperparameters to efficiently generate high-quality synthetic skin lesion images. For classification, the final layer of the discriminator serves as a classifier predicting the target class. This study addresses a binary classification, predicting two classes (benign and malignant) in the ISIC2017 dataset, with performance reported as accuracy, recall, precision, F1 score, and balanced accuracy score (BAS), which measures classifier accuracy on imbalanced datasets. The DCGAN classifier demonstrated superior performance, with a notable accuracy of 99.38% and 99% for recall, precision, F1 score, and BAS, outperforming state-of-the-art deep learning models. These results show that the DCGAN classifier can generate high-quality skin lesion images and accurately classify them, making it a promising tool for deep learning-based medical image analysis.
Affiliation(s)
- Kavita Behara
- Department of Electrical Engineering, Mangosuthu University of Technology, Durban 4031, South Africa
- Ernest Bhero
- Discipline of Electrical, Electronic and Computer Engineering, University of KwaZulu-Natal, Durban 4041, South Africa
- John Terhile Agee
- Discipline of Electrical, Electronic and Computer Engineering, University of KwaZulu-Natal, Durban 4041, South Africa
19
Lucieri A, Dengel A, Ahmed S. Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI. Front Bioinform 2023; 3:1194993. [PMID: 37484865 PMCID: PMC10356902 DOI: 10.3389/fbinf.2023.1194993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/27/2023] [Accepted: 06/20/2023] [Indexed: 07/25/2023] Open
Abstract
Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications, promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithm accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out due to its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has unveiled that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three different state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks while exposing varying degrees of attribution-based and concept-based explanations is systematically compared. The findings indicate that, in theory, concept-based explanations can potentially increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. However, it is demonstrated that, in more realistic attack scenarios, the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. 
In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that while DP strengthens model privacy, it can negatively influence explanation quality.
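A baseline membership inference attack of the kind benchmarked in this study thresholds the model's confidence, exploiting the fact that overfit models tend to be more confident on their training members. A toy numpy sketch with synthetic confidences (not the paper's data or attack implementation):

```python
import numpy as np

def confidence_attack(train_conf, test_conf, threshold=0.9):
    """Baseline membership inference: predict 'member' when the model's
    top-class confidence exceeds a threshold, and report attack accuracy
    against the known membership labels."""
    scores = np.concatenate([train_conf, test_conf])
    truth = np.concatenate([np.ones_like(train_conf), np.zeros_like(test_conf)])
    guess = (scores > threshold).astype(float)
    return float((guess == truth).mean())

# Synthetic scenario: the model is more confident on samples it trained on.
rng = np.random.default_rng(1)
member_conf = rng.uniform(0.92, 1.00, size=100)     # seen during training
nonmember_conf = rng.uniform(0.50, 0.90, size=100)  # unseen
print(confidence_attack(member_conf, nonmember_conf))  # → 1.0 here
```

Exposing explanations gives an attacker richer signals than confidence alone, which is why the study measures how much attribution- and concept-based explanations shift this attack's success rate.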
Affiliation(s)
- Adriano Lucieri
- Smart Data and Knowledge Services (SDS), Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH, Kaiserslautern, Germany
- Computer Science Department, RPTU Kaiserslautern-Landau, Kaiserslautern, Germany
- Andreas Dengel
- Smart Data and Knowledge Services (SDS), Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH, Kaiserslautern, Germany
- Computer Science Department, RPTU Kaiserslautern-Landau, Kaiserslautern, Germany
- Sheraz Ahmed
- Smart Data and Knowledge Services (SDS), Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) GmbH, Kaiserslautern, Germany
20
Khan MA, Akram T, Zhang Y, Alhaisoni M, Al Hejaili A, Shaban KA, Tariq U, Zayyan MH. SkinNet-ENDO: Multiclass skin lesion recognition using deep neural network and Entropy-Normal distribution optimization algorithm with ELM. Int J Imaging Syst Technol 2023; 33:1275-1292. [DOI: 10.1002/ima.22863] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 11/07/2022] [Accepted: 01/31/2023] [Indexed: 08/25/2024]
Abstract
The early diagnosis of skin cancer through clinical methods reduces the human mortality rate. Manual screening of dermoscopic images is not an efficient procedure; therefore, researchers working in the computer vision domain have employed several algorithms to classify skin lesions. Existing computerized methods have a few drawbacks, such as low accuracy and high computational time. Therefore, in this work, we propose a novel deep learning and Entropy-Normal Distribution Optimization Algorithm with extreme learning machine (NDOEM)-based architecture for multiclass skin lesion classification. The proposed architecture consists of five fundamental steps. In the first step, two contrast enhancement techniques, including a hybridization of mathematical formulation and a convolutional neural network, are implemented prior to data augmentation. In the second step, two pre-trained deep learning models, EfficientNetB0 and DarkNet19, are fine-tuned and retrained through transfer learning. In the third step, features are extracted from the fine-tuned models and the most discriminant features are selected based on the novel Entropy-NDOELM algorithm. The selected features are fused using a parallel correlation technique in the fourth step to generate the resultant feature vectors. Finally, the resultant features are again down-sampled using the proposed algorithm and passed to the extreme learning machine (ELM) for final classification. The simulations are conducted on three publicly available datasets, HAM10000, ISIC2018, and ISIC2019, achieving accuracies of 95.7%, 96.3%, and 94.8%, respectively.
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila, Pakistan
- Department of Informatics, University of Leicester, Leicester, UK
- Tallha Akram
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester, UK
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abdullah Al Hejaili
- Faculty of Computers & Information Technology, Computer Science Department, University of Tabuk, Tabuk, Saudi Arabia
- Khalid Adel Shaban
- Computer Science Department, College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
- Usman Tariq
- Department of Management Information Systems, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Muhammad H. Zayyan
- Computer Science Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura, Egypt
21
Liu N, Rejeesh MR, Sundararaj V, Gunasundari B. ACO-KELM: Anti Coronavirus Optimized Kernel-based Softplus Extreme Learning Machine for Classification of Skin Cancer. Expert Syst Appl 2023:120719. [PMID: 37362255 PMCID: PMC10268820 DOI: 10.1016/j.eswa.2023.120719] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 07/15/2022] [Revised: 04/28/2023] [Accepted: 06/03/2023] [Indexed: 06/28/2023]
Abstract
Due to the presence of redundant and irrelevant features in large-dimensional biomedical datasets, the prediction accuracy of disease diagnosis can often be decreased. Therefore, it is important to adopt feature extraction methodologies that can deal with problem structures and identify underlying data patterns. In this paper, we propose a novel approach called the Anti Coronavirus Optimized Kernel-based Softplus Extreme Learning Machine (ACO-KSELM) to accurately predict different types of skin cancer by analyzing high-dimensional datasets. To evaluate the proposed ACO-KSELM method, we used four skin cancer image datasets: ISIC 2016, ACS, HAM10000, and PAD-UFES-20. These dermoscopic image datasets were preprocessed using Gaussian filters to remove noise and artifacts, and relevant features based on color, texture, and shape were extracted using color histogram, Haralick texture, and Hu moment approaches, respectively. Finally, the proposed ACO-KSELM method predicted and classified the extracted features into Basal Cell Carcinoma (BCC), Squamous Cell Carcinoma (SCC), Actinic Keratosis (ACK), Seborrheic Keratosis (SEK), Bowen's disease (BOD), Melanoma (MEL), and Nevus (NEV) categories. The analytical results showed that the proposed method achieved prediction accuracies of about 98.9%, 98.7%, 98.6%, and 97.9% for the ISIC 2016, ACS, HAM10000, and PAD-UFES-20 datasets, respectively.
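Of the three hand-crafted feature families mentioned, the color histogram is the simplest to sketch; a numpy version (bin count and layout are illustrative choices, not the paper's configuration):

```python
import numpy as np

def color_histogram(img: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel intensity histogram, normalized and concatenated into
    a single feature vector (3 * bins values for an RGB image)."""
    feats = []
    for c in range(img.shape[2]):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

# Random stand-in for a preprocessed dermoscopic image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
features = color_histogram(img)
print(features.shape)  # → (24,)
```

Texture (Haralick) and shape (Hu moment) descriptors would be concatenated onto this vector in the same way before being fed to the classifier.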
Affiliation(s)
- Nannan Liu
- School of Electronic and Information Engineering, Ningbo University of Technology, Ningbo, 315211, China
- M R Rejeesh
- REVIRE Intelligence LLP, Eraviputoorakadi, Tamilnadu, India
- B Gunasundari
- Department of IT, REVIRE Intelligence LLP, Tamilnadu, India
22
Kim C, Gadgil SU, DeGrave AJ, Cai ZR, Daneshjou R, Lee SI. Fostering transparent medical image AI via an image-text foundation model grounded in medical literature. medRxiv 2023:2023.06.07.23291119. [PMID: 37398017 PMCID: PMC10312868 DOI: 10.1101/2023.06.07.23291119] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 07/04/2023]
Abstract
Building trustworthy and transparent image-based medical AI systems requires the ability to interrogate data and models at all stages of the development pipeline: from training models to post-deployment monitoring. Ideally, the data and associated AI systems could be described using terms already familiar to physicians, but this requires medical datasets densely annotated with semantically meaningful concepts. Here, we present a foundation model approach, named MONET (Medical cONcept rETriever), which learns how to connect medical images with text and generates dense concept annotations to enable tasks in AI transparency from model auditing to model interpretation. Dermatology provides a demanding use case for the versatility of MONET, due to the heterogeneity in diseases, skin tones, and imaging modalities. We trained MONET on the basis of 105,550 dermatological images paired with natural language descriptions from a large collection of medical literature. MONET can accurately annotate concepts across dermatology images as verified by board-certified dermatologists, outperforming supervised models built on previously concept-annotated dermatology datasets. We demonstrate how MONET enables AI transparency across the entire AI development pipeline from dataset auditing to model auditing to building inherently interpretable models.
Affiliation(s)
- Chanwoo Kim
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Soham U Gadgil
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Alex J DeGrave
- Paul G. Allen School of Computer Science and Engineering, University of Washington
- Medical Scientist Training Program, University of Washington
- Zhuo Ran Cai
- Program for Clinical Research and Technology, Stanford University
- Roxana Daneshjou
- Department of Dermatology, Stanford School of Medicine
- Department of Biomedical Data Science, Stanford School of Medicine
- Su-In Lee
- Paul G. Allen School of Computer Science and Engineering, University of Washington
23
Mirikharaji Z, Abhishek K, Bissoto A, Barata C, Avila S, Valle E, Celebi ME, Hamarneh G. A survey on deep learning for skin lesion segmentation. Med Image Anal 2023; 88:102863. [PMID: 37343323 DOI: 10.1016/j.media.2023.102863] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 02/01/2023] [Accepted: 05/31/2023] [Indexed: 06/23/2023]
Abstract
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Alceu Bissoto
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Catarina Barata
- Institute for Systems and Robotics, Instituto Superior Técnico, Avenida Rovisco Pais, Lisbon 1049-001, Portugal
- Sandra Avila
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Eduardo Valle
- RECOD.ai Lab, School of Electrical and Computing Engineering, University of Campinas, Av. Albert Einstein 400, Campinas 13083-952, Brazil
- M Emre Celebi
- Department of Computer Science and Engineering, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
24
Fogelberg K, Chamarthi S, Maron RC, Niebling J, Brinker TJ. Domain shifts in dermoscopic skin cancer datasets: Evaluation of essential limitations for clinical translation. N Biotechnol 2023:S1871-6784(23)00021-3. [PMID: 37146681 DOI: 10.1016/j.nbt.2023.04.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 04/12/2023] [Accepted: 04/26/2023] [Indexed: 05/07/2023]
Abstract
The limited ability of convolutional neural networks (CNNs) to generalize to images from previously unseen domains is a major limitation, in particular for safety-critical clinical tasks such as dermoscopic skin cancer classification. In order to translate CNN-based applications into the clinic, it is essential that they are able to adapt to domain shifts. Such new conditions can arise through the use of different image acquisition systems or varying lighting conditions. In dermoscopy, shifts can also occur as a change in patient age or the occurrence of rare lesion localizations (e.g. palms). These are not prominently represented in most training datasets and can therefore lead to a decrease in performance. In order to verify the generalizability of classification models in real-world clinical settings, it is crucial to have access to data which mimics such domain shifts. To our knowledge, no dermoscopic image dataset exists where such domain shifts are properly described and quantified. We therefore grouped publicly available images from the ISIC archive based on their metadata (e.g. acquisition location, lesion localization, patient age) to generate meaningful domains. To verify that these domains are in fact distinct, we used multiple quantification measures to estimate the presence and intensity of domain shifts. Additionally, we analyzed the performance on these domains with and without an unsupervised domain adaptation technique. We observed that in most of our grouped domains, domain shifts in fact exist. Based on our results, we believe these datasets to be helpful for testing the generalization capabilities of dermoscopic skin cancer classifiers.
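The domain-construction step described above amounts to grouping archive images by metadata keys. A minimal Python sketch, assuming simplified record fields (`attribution`, `localization`, `age`) and illustrative age cut-offs rather than the paper's actual grouping criteria:

```python
from collections import defaultdict

def age_bin(age):
    # Coarse age bins; the cut-offs here are illustrative, not the paper's.
    if age is None:
        return "unknown"
    return "under_30" if age < 30 else ("30_to_60" if age < 60 else "over_60")

def group_into_domains(records):
    """Group image records into candidate domains keyed by
    (acquisition site, lesion localization, age bin)."""
    domains = defaultdict(list)
    for rec in records:
        key = (rec.get("attribution", "unknown"),
               rec.get("localization", "unknown"),
               age_bin(rec.get("age")))
        domains[key].append(rec["image_id"])
    return dict(domains)

# Hypothetical ISIC-style metadata records.
records = [
    {"image_id": "ISIC_0001", "attribution": "clinic_A", "localization": "palm", "age": 25},
    {"image_id": "ISIC_0002", "attribution": "clinic_A", "localization": "palm", "age": 27},
    {"image_id": "ISIC_0003", "attribution": "clinic_B", "localization": "torso", "age": 70},
]
domains = group_into_domains(records)
print(len(domains))  # 2 distinct domains
```

Each resulting group can then serve as a held-out domain when probing a classifier's robustness to shift.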
Affiliation(s)
- Katharina Fogelberg
- Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sireesha Chamarthi
- Data Analysis and Intelligence, German Aerospace Center (DLR), Institute of Data Science, Jena, Germany
- Roman C Maron
- Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Julia Niebling
- Data Analysis and Intelligence, German Aerospace Center (DLR), Institute of Data Science, Jena, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
25
Nambisan AK, Maurya A, Lama N, Phan T, Patel G, Miller K, Lama B, Hagerty J, Stanley R, Stoecker WV. Improving Automatic Melanoma Diagnosis Using Deep Learning-Based Segmentation of Irregular Networks. Cancers (Basel) 2023; 15:1259. [PMID: 36831599 PMCID: PMC9953766 DOI: 10.3390/cancers15041259] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2023] [Revised: 02/08/2023] [Accepted: 02/09/2023] [Indexed: 02/18/2023] Open
Abstract
Deep learning has achieved significant success in malignant melanoma diagnosis, and these diagnostic models are undergoing a transition into clinical use. However, with melanoma diagnostic accuracy in the range of ninety percent, a significant minority of melanomas are missed by deep learning. Many of the missed melanomas have irregular pigment networks visible under dermoscopy. This research presents an annotated irregular-network database and develops a classification pipeline that fuses deep learning image-level results with conventional hand-crafted features from irregular pigment networks. We identified and annotated 487 unique dermoscopic melanoma lesions from images in the ISIC 2019 dermoscopic dataset to create a ground-truth irregular pigment network dataset. We trained multiple transfer-learned segmentation models to detect irregular networks in this training set. A separate, mutually exclusive subset of the International Skin Imaging Collaboration (ISIC) 2019 dataset with 500 melanomas and 500 benign lesions was used for training and testing deep learning models for the binary classification of melanoma versus benign. The best segmentation model, U-Net++, generated irregular network masks on the 1000-image dataset. Additional classical color, texture, and shape features were calculated for the irregular network areas. Using conventional classifiers in a sequential pipeline based on the cascade generalization framework, we achieved an 11% increase in recall and a 2% increase in accuracy for melanoma versus benign over deep-learning-only models, with the highest increase in recall accompanying the use of the random forest algorithm. The proposed approach facilitates leveraging the strengths of both deep learning and conventional image processing techniques to improve the accuracy of melanoma diagnosis. Further research combining deep learning with conventional image processing on automatically detected dermoscopic features is warranted.
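The cascade idea above, fusing an image-level deep-learning probability with hand-crafted irregular-network features in a downstream classifier, can be sketched as follows. This is a toy illustration on synthetic numbers: the feature values are fabricated, and a small numpy logistic regression stands in for the paper's random-forest level-1 learner.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the two feature sources the pipeline fuses:
# a DL image-level melanoma probability and one hand-crafted
# irregular-network feature (e.g. network area fraction).
n = 200
y = rng.integers(0, 2, n)                                        # 1 = melanoma
dl_prob = np.clip(0.3 + 0.4 * y + 0.15 * rng.standard_normal(n), 0, 1)
net_area = np.clip(0.1 + 0.3 * y + 0.1 * rng.standard_normal(n), 0, 1)
X = np.column_stack([np.ones(n), dl_prob, net_area])             # bias + fused features

# Level-1 learner: logistic regression trained by gradient descent.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
fused_acc = (pred == y).mean()
dl_only_acc = ((dl_prob > 0.5).astype(int) == y).mean()
print(round(fused_acc, 2))
```

The key design point is sequential: the level-1 model sees the level-0 prediction as just another input feature alongside the hand-crafted ones.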
Affiliation(s)
- Anand K. Nambisan
- Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Akanksha Maurya
- Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Norsang Lama
- Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Thanh Phan
- Department of Biological Sciences, College of Arts, Sciences, and Education, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Gehana Patel
- College of Health Sciences, University of Missouri—Columbia, Columbia, MO 65211, USA
- Keith Miller
- Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Binita Lama
- Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
- Jason Hagerty
- S&A Technologies, 10101 Stoltz Drive, Rolla, MO 65401, USA
- Ronald Stanley
- Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, MO 65409, USA
26
Zhang Y, Xie F, Song X, Zhou H, Yang Y, Zhang H, Liu J. A rotation meanout network with invariance for dermoscopy image classification and retrieval. Comput Biol Med 2022; 151:106272. [PMID: 36368111 DOI: 10.1016/j.compbiomed.2022.106272] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Revised: 10/07/2022] [Accepted: 10/30/2022] [Indexed: 11/07/2022]
Abstract
Computer-aided diagnosis (CAD) systems can provide a reference basis for the clinical diagnosis of skin diseases. Convolutional neural networks (CNNs) can extract not only visual elements such as colors and shapes but also semantic features, and as such have driven great improvements in many dermoscopy image tasks. Dermoscopic imaging has no principal orientation, meaning that datasets contain a large number of rotated skin lesions. However, CNNs lack rotation invariance, which is bound to affect their robustness against rotations. To tackle this issue, we propose a rotation meanout (RM) network to extract rotation-invariant features from dermoscopy images. In RM, each set of rotated feature maps corresponds to a set of outputs of the weight-sharing convolutions, and these are fused using a meanout strategy to obtain the final feature maps. Through theoretical derivation, the proposed RM network is rotation-equivariant and can extract rotation-invariant features when followed by a global average pooling (GAP) operation. The extracted rotation-invariant features can better represent the original data in classification and retrieval tasks for dermoscopy images. RM is a general operation, which does not change the network structure or add any parameters, and can be flexibly embedded in any part of a CNN. Extensive experiments are conducted on a dermoscopy image dataset. The results show that our method outperforms other anti-rotation methods and achieves great improvements in skin disease classification and retrieval tasks, indicating the potential of rotation invariance in the field of dermoscopy images.
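The RM-then-GAP argument can be illustrated for the 90-degree rotation group with plain numpy: apply one weight-shared convolution to every rotation of the input and average the globally pooled responses. Because rotating the input merely permutes the set of rotations, the pooled average is exactly invariant. This is a sketch of the invariance argument only, not the authors' network; `conv2d_valid` stands in for an arbitrary weight-tied convolution layer.

```python
import numpy as np

def conv2d_valid(x, k):
    # Plain 'valid' cross-correlation with a shared (weight-tied) kernel.
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def rm_gap_feature(x, k):
    """Rotation meanout followed by global average pooling: run the
    shared convolution on all four 90-degree rotations of the input
    and average the globally pooled responses."""
    return float(np.mean([conv2d_valid(np.rot90(x, r), k).mean()
                          for r in range(4)]))

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # toy dermoscopy feature map
k = rng.standard_normal((3, 3))   # shared convolution weights

f0 = rm_gap_feature(x, k)
f1 = rm_gap_feature(np.rot90(x), k)   # same lesion, rotated 90 degrees
print(abs(f0 - f1) < 1e-9)  # True: feature unchanged under rotation
```

The real RM block keeps full feature maps (back-rotating before fusion) so that equivariance is preserved for deeper layers; GAP at the end is what turns equivariance into invariance.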
Affiliation(s)
- Yilan Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Fengying Xie
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Xuedong Song
- Shanghai Aerospace Control Technology Institute, Shanghai 201109, China
- Hangning Zhou
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Yiguang Yang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Haopeng Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China
- Jie Liu
- Department of Dermatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
27
Li Z, Koban KC, Schenck TL, Giunta RE, Li Q, Sun Y. Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. J Clin Med 2022; 11:jcm11226826. [PMID: 36431301 PMCID: PMC9693628 DOI: 10.3390/jcm11226826] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 10/24/2022] [Accepted: 10/28/2022] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Thanks to the rapid development of computer-based systems and deep-learning-based algorithms, artificial intelligence (AI) has long been integrated into the healthcare field. AI is particularly helpful in image recognition, surgical assistance and basic research. Due to the unique nature of dermatology, AI-aided dermatological diagnosis based on image recognition has become a modern focus and future trend. KEY SCIENTIFIC CONCEPTS OF REVIEW The use of 3D imaging systems allows clinicians to screen and label pigmented skin lesions and distributed disorders, which can provide an objective assessment and image documentation of lesion sites. Dermatoscopes combined with intelligent software help the dermatologist easily correlate each close-up image with the corresponding marked lesion in the 3D body map. In addition, AI in the field of prosthetics can assist in the rehabilitation of patients and help restore limb function after amputation in patients with skin tumors. AIM OF THE STUDY For the benefit of patients, dermatologists have an obligation to explore the opportunities, risks and limitations of AI applications. This study focuses on the application of emerging AI in dermatology to aid clinical diagnosis and treatment, analyzes the current state of the field, and summarizes future trends and prospects, helping dermatologists recognize the impact of new technological innovations on traditional practices so that they can adopt AI-based medical approaches more quickly.
Affiliation(s)
- Zhouxiao Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Thilo Ludwig Schenck
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Riccardo Enzo Giunta
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Qingfeng Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Correspondence: (Q.L.); (Y.S.)
- Yangbai Sun
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Correspondence: (Q.L.); (Y.S.)
28
Deep Learning in Dermatology: A Systematic Review of Current Approaches, Outcomes, and Limitations. JID INNOVATIONS 2022; 3:100150. [PMID: 36655135 PMCID: PMC9841357 DOI: 10.1016/j.xjidi.2022.100150] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2022] [Revised: 06/17/2022] [Accepted: 07/15/2022] [Indexed: 01/21/2023] Open
Abstract
Artificial intelligence (AI) has recently made great advances in image classification and malignancy prediction in the field of dermatology. However, understanding the applicability of AI in clinical dermatology practice remains challenging owing to the variability of models, image data, database characteristics, and variable outcome metrics. This systematic review aims to provide a comprehensive overview of dermatology literature using convolutional neural networks. Furthermore, the review summarizes the current landscape of image datasets, transfer learning approaches, challenges, and limitations within current AI literature and current regulatory pathways for approval of models as clinical decision support tools.
29
Jojoa M, Garcia-Zapirain B, Percybrooks W. A Fair Performance Comparison between Complex-Valued and Real-Valued Neural Networks for Disease Detection. Diagnostics (Basel) 2022; 12:diagnostics12081893. [PMID: 36010243 PMCID: PMC9406326 DOI: 10.3390/diagnostics12081893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2022] [Revised: 07/15/2022] [Accepted: 07/25/2022] [Indexed: 11/16/2022] Open
Abstract
Our aim is to contribute to the classification of anomalous patterns in biosignals using this novel approach. We specifically focus on melanoma and heart murmurs. We present a comparative study of two convolutional networks in the complex and real numerical domains. The idea is to obtain a powerful approach for building portable systems for early disease detection. Two similar algorithmic structures were chosen so that there is no bias determined by the number of parameters to train. Three clinical data sets, ISIC2017, PH2, and Pascal, were used to carry out the experiments. Mean comparison hypothesis tests were performed to ensure statistical objectivity in the conclusions. In all cases, complex-valued networks presented a superior performance for the Precision, Recall, F1 Score, Accuracy, and Specificity metrics in the detection of associated anomalies. The best complex number-based classifier obtained in the Receiver Operating Characteristic (ROC) space presents a Euclidean distance of 0.26127 with respect to the ideal classifier, as opposed to the best real number-based classifier, whose Euclidean distance to the ideal is 0.36022 for the same task of melanoma detection. The 27.46% superiority in this metric, as in the others reported in this work, suggests that complex-valued networks have a greater ability to extract features for more efficient discrimination in the dataset.
Affiliation(s)
- Mario Jojoa
- Department of Electrical and Electronics Engineering, University of North, Barranquilla 080002, Colombia
- Correspondence:
- Winston Percybrooks
- Department of Electrical and Electronics Engineering, University of North, Barranquilla 080002, Colombia
30
Deep Residual Learning Image Recognition Model for Skin Cancer Disease Detection and Classification. ACTA INFORMATICA PRAGENSIA 2022. [DOI: 10.18267/j.aip.189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
31
Naeem A, Anees T, Fiza M, Naqvi RA, Lee SW. SCDNet: A Deep Learning-Based Framework for the Multiclassification of Skin Cancer Using Dermoscopy Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22155652. [PMID: 35957209 PMCID: PMC9371071 DOI: 10.3390/s22155652] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/19/2022] [Accepted: 07/25/2022] [Indexed: 05/27/2023]
Abstract
Skin cancer is a deadly disease, and its early diagnosis enhances the chances of survival. Deep learning algorithms for skin cancer detection have become popular in recent years. A novel framework based on deep learning is proposed in this study for the multiclassification of skin cancer types such as Melanoma, Melanocytic Nevi, Basal Cell Carcinoma and Benign Keratosis. The proposed model, named SCDNet, combines Vgg16 with convolutional neural networks (CNN) for the classification of different types of skin cancer. Moreover, the accuracy of the proposed method is compared with four state-of-the-art pre-trained classifiers in the medical domain: Resnet 50, Inception v3, AlexNet and Vgg19. The performance of the proposed SCDNet classifier, as well as the four state-of-the-art classifiers, is evaluated using the ISIC 2019 dataset. The accuracy rate of the proposed SCDNet is 96.91% for the multiclassification of skin cancer, whereas the accuracy rates for Resnet 50, Alexnet, Vgg19 and Inception-v3 are 95.21%, 93.14%, 94.25% and 92.54%, respectively. The results showed that the proposed SCDNet performed better than the competing classifiers.
Affiliation(s)
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Makhmoor Fiza
- Department of Management Sciences and Technology, Begum Nusrat Bhutto Women University, Sukkur 65200, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Seung-Won Lee
- Department of Data Science, College of Software Convergence, Sejong University, Seoul 05006, Korea
- School of Medicine, Sungkyunkwan University, Suwon 16419, Korea
32
Wu Y, Chen B, Zeng A, Pan D, Wang R, Zhao S. Skin Cancer Classification With Deep Learning: A Systematic Review. Front Oncol 2022; 12:893972. [PMID: 35912265 PMCID: PMC9327733 DOI: 10.3389/fonc.2022.893972] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/16/2022] [Indexed: 01/21/2023] Open
Abstract
Skin cancer is one of the most dangerous diseases in the world. Correctly classifying skin lesions at an early stage could aid clinical decision-making by providing an accurate disease diagnosis, potentially increasing the chances of cure before cancer spreads. However, achieving automatic skin cancer classification is difficult because the majority of skin disease images used for training are imbalanced and in short supply; meanwhile, the model's cross-domain adaptability and robustness are also critical challenges. Recently, many deep learning-based methods have been widely used in skin cancer classification to solve the above issues and achieve satisfactory results. Nonetheless, reviews that include the abovementioned frontier problems in skin cancer classification are still scarce. Therefore, in this article, we provide a comprehensive overview of the latest deep learning-based algorithms for skin cancer classification. We begin with an overview of three types of dermatological images, followed by a list of publicly available datasets relating to skin cancers. After that, we review the successful applications of typical convolutional neural networks for skin cancer classification. As a highlight of this paper, we next summarize several frontier problems, including data imbalance, data limitation, domain adaptation, model robustness, and model efficiency, followed by corresponding solutions in the skin cancer classification task. Finally, by summarizing different deep learning-based methods to solve the frontier challenges in skin cancer classification, we conclude that the general development direction of these approaches is structured, lightweight, and multimodal. In addition, for readers' convenience, we have summarized our findings in figures and tables. Considering the growing popularity of deep learning, there are still many issues to overcome as well as opportunities to pursue in the future.
Affiliation(s)
- Yinhao Wu
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
- Bin Chen
- Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Zhejiang, China
- An Zeng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China
- Dan Pan
- School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, China
- Ruixuan Wang
- School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, China
- Shen Zhao
- School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China
33
Ozturk S, Cukur T. Deep Clustering via Center-Oriented Margin Free-Triplet Loss for Skin Lesion Detection in Highly Imbalanced Datasets. IEEE J Biomed Health Inform 2022; 26:4679-4690. [PMID: 35767499 DOI: 10.1109/jbhi.2022.3187215] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Melanoma is a potentially fatal skin cancer, yet it is curable, with a dramatically higher survival rate, when diagnosed at an early stage. Learning-based methods hold significant promise for the detection of melanoma from dermoscopic images. However, since melanoma is a rare disease, existing databases of skin lesions predominantly contain highly imbalanced numbers of benign versus malignant samples. In turn, this imbalance introduces substantial bias in classification models due to the statistical dominance of the majority class. To address this issue, we introduce a deep clustering approach based on the latent-space embedding of dermoscopic images. Clustering is achieved using a novel center-oriented margin-free triplet loss (COM-Triplet) enforced on image embeddings from a convolutional neural network backbone. The proposed method aims to form maximally separated cluster centers, as opposed to minimizing classification error, so it is less sensitive to class imbalance. To avoid the need for labeled data, we further propose to implement COM-Triplet based on pseudo-labels generated by a Gaussian mixture model (GMM). Comprehensive experiments show that deep clustering with the COM-Triplet loss outperforms clustering with the standard triplet loss, as well as competing classifiers, in both supervised and unsupervised settings.
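The COM-Triplet objective can be sketched in a few lines: each embedding is pulled toward its own (pseudo-label) center and pushed away from the nearest other center, with no fixed margin term. The numpy toy below uses fabricated 2-D embeddings and hard pseudo-labels rather than the paper's CNN features and GMM responsibilities, so it illustrates the loss shape, not the authors' exact formulation.

```python
import numpy as np

def com_triplet_loss(emb, labels):
    """Center-oriented, margin-free triplet loss (sketch): for each
    embedding, distance to its own class center minus distance to the
    nearest other center. Negative values mean embeddings already sit
    closer to their own center than to any other."""
    centers = {c: emb[labels == c].mean(axis=0) for c in np.unique(labels)}
    losses = []
    for e, c in zip(emb, labels):
        d_own = np.linalg.norm(e - centers[c])
        d_other = min(np.linalg.norm(e - centers[o])
                      for o in centers if o != c)
        losses.append(d_own - d_other)   # margin-free: no fixed offset
    return float(np.mean(losses))

rng = np.random.default_rng(0)
# Imbalanced pseudo-labels: 90 "benign", 10 "malignant", well separated.
benign = rng.standard_normal((90, 2)) * 0.1
malignant = rng.standard_normal((10, 2)) * 0.1 + np.array([3.0, 3.0])
emb = np.vstack([benign, malignant])
labels = np.array([0] * 90 + [1] * 10)

loss = com_triplet_loss(emb, labels)
print(loss < 0)  # True: clusters are already well separated
```

Because the loss compares per-sample distances to centers rather than counting errors, the 9:1 imbalance above does not let the majority class dominate the objective.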
34
Abstract
Skin cancer is common nowadays, and its early diagnosis is essential to increase patients’ survival rate. In addition to traditional methods, computer-aided diagnosis is used in the diagnosis of skin cancer; one of its benefits is that it eliminates human error in cancer diagnosis. Skin images may contain noise such as hair, ink spots, rulers, etc., in addition to the lesion, so noise removal is required. This phase is very important for the correct segmentation of the lesions. One of the most critical problems with such automated methods is inaccurate cancer diagnosis when noise removal and segmentation cannot be performed effectively. We have created a noise dataset (hair, rulers, ink spots, etc.) that includes 2500 images and masks; no such noise dataset exists in the literature. We used this dataset for noise removal in skin cancer images. Two datasets, from the International Skin Imaging Collaboration (ISIC) and PH2, were used in this study. We present a new approach called LinkNet-B7 for noise removal and segmentation of skin cancer images. LinkNet-B7 is a LinkNet-based approach that uses EfficientNetB7 as the encoder. We used images with 16 slices; this way, fewer pixel values are lost. LinkNet-B7 has a 6% higher success rate than LinkNet with the same dataset and parameters. Training accuracy for noise removal and lesion segmentation was calculated to be 95.72% and 97.80%, respectively.