1
Attallah O. Skin cancer classification leveraging multi-directional compact convolutional neural network ensembles and gabor wavelets. Sci Rep 2024; 14:20637. [PMID: 39232043] [PMCID: PMC11375051] [DOI: 10.1038/s41598-024-69954-8]
Abstract
Skin cancer (SC) is a serious medical condition that requires prompt identification to ensure timely treatment. Although visual evaluation by dermatologists is considered the most reliable method, it is subjective and laborious. Deep learning-based computer-aided diagnostic (CAD) platforms have become valuable tools for supporting dermatologists. Nevertheless, current CAD tools frequently depend on Convolutional Neural Networks (CNNs) with large numbers of deep layers and hyperparameters, single-CNN methodologies, and large feature spaces, and they exclusively utilise spatial image information, which restricts their effectiveness. This study presents SCaLiNG, a CAD tool specifically developed to overcome these constraints. SCaLiNG leverages a collection of three compact CNNs and Gabor Wavelets (GW) to acquire a comprehensive feature vector of spatial-textural-frequency attributes. SCaLiNG captures a wide range of image detail by decomposing each image into multiple directional sub-bands using GW and then training separate CNNs on those sub-bands and the original image. SCaLiNG then fuses the attributes extracted from the CNNs trained on the original images and the GW sub-bands; this fusion improves diagnostic accuracy through a more thorough representation of attributes. Furthermore, SCaLiNG applies a feature selection step that further enhances performance by choosing the most distinguishing features. Experimental findings indicate that SCaLiNG achieves a classification accuracy of 0.9170 in categorising SC subcategories, surpassing conventional single-CNN models. This performance underlines its ability to aid dermatologists in swiftly and precisely recognising and classifying SC, thereby enhancing patient outcomes.
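As a rough, self-contained sketch of the directional decomposition described in this abstract (the actual SCaLiNG filter-bank parameters, CNN architectures, and fusion pipeline are not reproduced here), a Gabor filter bank applied at several orientations might look like:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def directional_subbands(image, n_orientations=4):
    """Decompose an image into directional sub-bands via FFT convolution."""
    subbands = []
    for k in range(n_orientations):
        kern = gabor_kernel(theta=k * np.pi / n_orientations)
        spectrum = np.fft.rfft2(image) * np.fft.rfft2(kern, s=image.shape)
        subbands.append(np.fft.irfft2(spectrum, s=image.shape))
    return np.stack(subbands)  # shape: (n_orientations, H, W)

img = np.random.rand(64, 64)
bands = directional_subbands(img)
print(bands.shape)  # (4, 64, 64)
```

Per the abstract, each such sub-band (plus the original image) would then be used to train its own compact CNN.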
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, 21937, Egypt.
2
Attallah O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput Biol Med 2024; 178:108798. [PMID: 38925085] [DOI: 10.1016/j.compbiomed.2024.108798]
Abstract
Skin cancer (SC) significantly impacts the health of many individuals all over the globe. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNNs), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD" for the classification of dermoscopic photographs of SC. The system categorises the photographs into two categories, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and depths. It gathers features from a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies principal component analysis (PCA) to reduce the dimensions of the pooling-layer features, which also reduces the complexity of the training procedure compared to using deep features from a CNN of substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of depending entirely on the features of a single CNN architecture. Finally, it utilizes a feature selection step to determine the most important deep attributes, which decreases the overall size of the feature set and streamlines the classification process.
Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets, the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, are employed to validate the efficiency of Skin-CAD. The maximum accuracy achieved using Skin-CAD is 97.2% and 96.5% for the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, respectively. These findings demonstrate Skin-CAD's potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
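The dual-layer feature fusion described in this abstract (PCA-compressed pooling features concatenated with fully connected features) can be illustrated with a plain SVD-based PCA; the array sizes below are hypothetical, not those of Skin-CAD:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# hypothetical deep features for 100 images
pool_feats = np.random.rand(100, 512)  # large pooling-layer features
fc_feats = np.random.rand(100, 7)      # compact fully-connected features
fused = np.hstack([pca_reduce(pool_feats, 32), fc_feats])
print(fused.shape)  # (100, 39)
```

The fused vectors would then go through feature selection and a classifier, per the abstract.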
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
3
Kraus M, Anteby R, Konen E, Eshed I, Klang E. Artificial intelligence for X-ray scaphoid fracture detection: a systematic review and diagnostic test accuracy meta-analysis. Eur Radiol 2024; 34:4341-4351. [PMID: 38097728] [PMCID: PMC11213739] [DOI: 10.1007/s00330-023-10473-x]
Abstract
OBJECTIVES Scaphoid fractures are usually diagnosed using X-rays, a low-sensitivity modality. Artificial intelligence (AI) using Convolutional Neural Networks (CNNs) has been explored for diagnosing scaphoid fractures on X-rays. The aim of this systematic review and meta-analysis is to evaluate the use of AI for detecting scaphoid fractures on X-rays and to analyze its accuracy and usefulness. MATERIALS AND METHODS This study followed the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) and PRISMA-Diagnostic Test Accuracy. A literature search was conducted in the PubMed database for original articles published until July 2023. Risk of bias and applicability were evaluated using the QUADAS-2 tool. A bivariate diagnostic random-effects meta-analysis was conducted, and the results were analyzed using the Summary Receiver Operating Characteristic (SROC) curve. RESULTS Ten studies met the inclusion criteria, all retrospective. AI diagnostic performance for detecting scaphoid fractures ranged from an AUC of 0.77 to 0.96. Seven studies, totalling 3373 images, were included in the meta-analysis. The pooled sensitivity and specificity were 0.80 and 0.89, respectively, and the overall AUC was 0.88. The QUADAS-2 tool found a high risk of bias and concerns about applicability in 9 of the 10 studies. CONCLUSIONS The current results of AI diagnostic performance for detecting scaphoid fractures on X-rays show promise, with high overall sensitivity and specificity and a high SROC result. Further research is needed to compare AI with human diagnostic performance in a clinical setting. CLINICAL RELEVANCE STATEMENT Scaphoid fractures are prone to be missed because they are assessed with a low-sensitivity modality and have a high occult fracture rate. AI systems can help clinicians and radiologists facilitate early diagnosis and avoid missed injuries.
KEY POINTS • Scaphoid fractures are common and some can be easily missed in X-rays. • Artificial intelligence (AI) systems demonstrate high diagnostic performance for the diagnosis of scaphoid fractures in X-rays. • AI systems can be beneficial in diagnosing both obvious and occult scaphoid fractures.
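For intuition only, pooled sensitivity and specificity can be computed from per-study 2x2 counts. The counts below are invented, and the review itself fitted a bivariate random-effects model rather than this naive fixed pooling:

```python
import numpy as np

# hypothetical per-study confusion counts: (TP, FN, TN, FP)
studies = np.array([
    [80, 20, 180, 20],
    [45, 15, 120, 10],
    [60, 10, 150, 25],
])
tp, fn, tn, fp = studies.T

# naive pooled estimates; a proper meta-analysis would model
# between-study heterogeneity and the sensitivity-specificity correlation
sens = tp.sum() / (tp.sum() + fn.sum())
spec = tn.sum() / (tn.sum() + fp.sum())
print(round(sens, 3), round(spec, 3))  # 0.804 0.891
```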
Affiliation(s)
- Matan Kraus
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Roi Anteby
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Department of General Surgery, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel
- Eli Konen
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Iris Eshed
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, 2 Sheba Road, 5262000, Ramat Gan, Israel
- Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
4
Jeong HK, Park C, Jiang SW, Nicholas M, Chen S, Henao R, Kheterpal M. Image Quality Assessment Using Convolutional Neural Network in Clinical Skin Images. JID INNOVATIONS 2024; 4:100285. [PMID: 39036289] [PMCID: PMC11260318] [DOI: 10.1016/j.xjidi.2024.100285]
Abstract
The image quality of photographs received for clinical evaluation is often suboptimal. The goal was to develop an image quality analysis tool to assess patient- and primary care physician-derived images using a deep learning model. The dataset included patient- and primary care physician-derived images from August 21, 2018 to June 30, 2022 with 4 unique quality labels. A VGG16 model was fine-tuned with the input data, and the optimal threshold was determined by Youden's index. Ordinal labels were transformed to binary labels using a majority vote because the model distinguishes between 2 categories (good vs bad). At a threshold of 0.587, the area under the curve for the test set was 0.885 (95% confidence interval = 0.838-0.933); sensitivity, specificity, positive predictive value, and negative predictive value were 0.829, 0.784, 0.906, and 0.645, respectively. Independent validation on 300 additional images (from patients and primary care physicians) demonstrated areas under the curve of 0.864 (95% confidence interval = 0.818-0.909) and 0.902 (95% confidence interval = 0.85-0.95), respectively. The sensitivity, specificity, positive predictive value, and negative predictive value for the 300 images were 0.827, 0.800, 0.959, and 0.450, respectively. We demonstrate a practical approach to improving image quality for the clinical workflow. Although users may have to capture additional images, this is offset by the improved workload and efficiency for clinical teams.
Affiliation(s)
- Hyeon Ki Jeong
- Department of Biostatistics & Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
- Christine Park
- Duke University School of Medicine, Durham, North Carolina, USA
- Simon W. Jiang
- Duke University School of Medicine, Durham, North Carolina, USA
- Matilda Nicholas
- Department of Dermatology, Duke University School of Medicine, Durham, North Carolina, USA
- Suephy Chen
- Department of Dermatology, Duke University School of Medicine, Durham, North Carolina, USA
- Durham VA Medical Center, Durham, North Carolina, USA
- Ricardo Henao
- Department of Biostatistics & Bioinformatics, Duke University School of Medicine, Durham, North Carolina, USA
- Meenal Kheterpal
- Department of Dermatology, Duke University School of Medicine, Durham, North Carolina, USA
5
Kandhro IA, Manickam S, Fatima K, Uddin M, Malik U, Naz A, Dandoush A. Performance evaluation of E-VGG19 model: Enhancing real-time skin cancer detection and classification. Heliyon 2024; 10:e31488. [PMID: 38826726] [PMCID: PMC11141372] [DOI: 10.1016/j.heliyon.2024.e31488]
Abstract
Skin cancer is a pervasive and potentially life-threatening disease, and early detection plays a crucial role in improving patient outcomes. Machine learning (ML) techniques, particularly when combined with pre-trained deep learning models, have shown promise in enhancing the accuracy of skin cancer detection. In this paper, we enhanced the pre-trained VGG19 model with max pooling and dense layers for the prediction of skin cancer. Moreover, we also explored pre-trained models such as Visual Geometry Group 19 (VGG19), Residual Network 152 version 2 (ResNet152v2), Inception-Residual Network version 2 (InceptionResNetV2), Dense Convolutional Network 201 (DenseNet201), Residual Network 50 (ResNet50), and Inception version 3 (InceptionV3). For training, a skin lesions dataset with malignant and benign cases is used. The models extract features and divide skin lesions into two categories: malignant and benign. The features are then fed into machine learning methods, including Linear Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Decision Tree (DT), Logistic Regression (LR), and Support Vector Machine (SVM). Our results demonstrate that combining the E-VGG19 model with traditional classifiers significantly improves the overall classification accuracy for skin cancer detection and classification. Moreover, we have also compared the performance of baseline classifiers and pre-trained models with metrics (recall, F1 score, precision, sensitivity, and accuracy). The experimental results provide valuable insights into the effectiveness of various models and classifiers for accurate and efficient skin cancer detection. This research contributes to ongoing efforts to create automated technologies for detecting skin cancer that can help healthcare professionals and individuals identify potential cases at an early stage, ultimately leading to more timely and effective treatment.
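The pipeline this abstract describes (deep features fed to classical classifiers) can be sketched with hypothetical CNN embeddings and a minimal k-NN stand-in; none of the dimensions or values correspond to the paper's experiments:

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=3):
    """Minimal k-NN vote over deep-feature vectors
    (a stand-in for the KNN/SVM/DT/LR classifiers in the paper)."""
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return (train_y[idx].mean(axis=1) >= 0.5).astype(int)

# hypothetical CNN embeddings: benign near the origin, malignant offset
rng = np.random.default_rng(1)
benign = rng.normal(0.0, 0.3, (40, 128))
malignant = rng.normal(1.0, 0.3, (40, 128))
X = np.vstack([benign, malignant])
y = np.array([0] * 40 + [1] * 40)
test = np.vstack([rng.normal(0.0, 0.3, (5, 128)),
                  rng.normal(1.0, 0.3, (5, 128))])
pred = knn_predict(X, y, test)
print(pred.tolist())  # [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
```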
Affiliation(s)
- Irfan Ali Kandhro
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Selvakumar Manickam
- National Advanced IPv6 Centre (NAv6), Universiti Sains Malaysia, Gelugor, Penang, 11800, Malaysia
- Kanwal Fatima
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Mueen Uddin
- College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
- Urooj Malik
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Anum Naz
- Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Abdulhalim Dandoush
- College of Computing and Information Technology, University of Doha For Science & Technology, 24449, Doha, Qatar
6
Malik FS, Yousaf MH, Sial HA, Viriri S. Exploring dermoscopic structures for melanoma lesions' classification. Front Big Data 2024; 7:1366312. [PMID: 38590699] [PMCID: PMC10999676] [DOI: 10.3389/fdata.2024.1366312]
Abstract
Background Melanoma is one of the deadliest skin cancers; it originates from melanocytes, with sun exposure causing mutations. Early detection boosts the cure rate to 90%, but misclassification drops survival to 15-20%. Clinical variations challenge dermatologists in distinguishing benign nevi from melanomas. Current diagnostic methods, including visual analysis and dermoscopy, have limitations, emphasizing the need for Artificial Intelligence support in dermatology. Objectives In this paper, we aim to explore dermoscopic structures for the classification of melanoma lesions. The training of AI models faces a challenge known as brittleness, where small changes in input images impact the classification. A prior study explored AI vulnerability in discerning melanoma from benign lesions using features of size, color, and shape; tests with artificial and natural variations revealed a notable decline in accuracy, emphasizing the necessity of additional information, such as dermoscopic structures. Methodology The study utilizes datasets with clinically marked dermoscopic images examined by expert clinicians. Transformer- and CNN-based models are employed to classify these images based on dermoscopic structures, and classification results are validated using feature visualization. To assess model susceptibility to image variations, classifiers are evaluated on test sets with original, duplicated, and digitally modified images. Additional testing is done on ISIC 2016 images. The study focuses on three dermoscopic structures crucial for melanoma detection: blue-white veil, dots/globules, and streaks. Results In evaluating model performance, adding convolutions to Vision Transformers proves highly effective, achieving up to 98% accuracy. CNN architectures like VGG-16 and DenseNet-121 reach 50-60% accuracy, performing best with features other than dermoscopic structures. Vision Transformers without convolutions exhibit reduced accuracy on diverse test sets, revealing their brittleness. OpenAI CLIP, a pre-trained model, consistently performs well across the various test sets. To address brittleness, a mitigation method involving extensive data augmentation during training and 23 transformed duplicates at test time sustains accuracy. Conclusions This paper proposes a melanoma classification scheme utilizing three dermoscopic structures across the PH2 and Derm7pt datasets, and addresses AI susceptibility to image variations. Given the small dataset, future work includes collecting more annotated datasets and automatically computing dermoscopic structural features.
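The brittleness mitigation described in this abstract (averaging predictions over transformed duplicates at test time) can be sketched as follows, with a dummy scorer standing in for a trained classifier; the specific transform set is an assumption, not the paper's 23 duplicates:

```python
import numpy as np

def tta_predict(model, image):
    """Average a model's score over flipped/rotated duplicates
    (test-time augmentation)."""
    variants = [image, np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return float(np.mean([model(v) for v in variants]))

# dummy scorer standing in for a trained classifier
model = lambda img: img.mean()
img = np.arange(16.0).reshape(4, 4)
print(tta_predict(model, img))  # 7.5: the mean is invariant to flips/rotations
```

A real classifier is not transform-invariant, which is exactly why averaging over duplicates stabilises its output.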
Affiliation(s)
- Fiza Saeed Malik
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Muhammad Haroon Yousaf
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa
- Serestina Viriri
- School of Computing, College of Science, Engineering and Technology, University of South Africa (UNISA), Pretoria, South Africa
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
7
Nishino K. Skin patch based makeup finish assessment technique by deep neural network. Skin Res Technol 2024; 30:e13561. [PMID: 38297920] [PMCID: PMC10831195] [DOI: 10.1111/srt.13561]
Abstract
BACKGROUND Skin color and texture play a significant role in influencing impressions. To understand the influence of skin appearance and to develop better makeup products, objective evaluation methods for makeup finish have been explored. This study aims to apply machine learning technology, specifically deep neural networks (DNNs), to accurately analyze and evaluate delicate and complex cosmetic skin textures. METHODS "Skin patch datasets" were extracted from facial images and used to train a DNN model. The advantages of using skin patches include retaining fine texture, eliminating false correlations from non-skin features, and enabling visualization of the inferred results for the entire face. The DNN was trained in two ways: a classification task to classify skin attributes and a regression task to predict the visual assessment of experts. The trained DNNs were applied to the evaluation of actual makeup conditions. RESULTS In the classification task, skin patch-based classifiers were developed for age range, presence or absence of base makeup, formulation type (powder/liquid) of the applied base makeup, and immediately after versus a while after makeup application. The DNNs trained on the regression task showed high prediction accuracy for the experts' visual assessment. Applying the DNNs to the evaluation of actual makeup conditions yielded appropriate evaluation results in line with the appearance of the makeup finish. CONCLUSION The proposed method of using DNNs trained on skin patches effectively evaluates makeup finish. This approach has potential applications in visual science research and cosmetics development. Further studies can explore the analysis of different skin conditions and the development of personalized cosmetics.
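A minimal sketch of the skin-patch extraction idea underlying this abstract (the patch size and stride here are arbitrary assumptions, not the paper's):

```python
import numpy as np

def extract_patches(image, patch=32, stride=32):
    """Tile an image into square patches, the per-patch inputs to the DNN."""
    h, w = image.shape[:2]
    return np.array([image[i:i + patch, j:j + patch]
                     for i in range(0, h - patch + 1, stride)
                     for j in range(0, w - patch + 1, stride)])

face = np.random.rand(128, 128)  # stand-in for a cropped facial skin region
patches = extract_patches(face)
print(patches.shape)  # (16, 32, 32)
```

Per-patch predictions can then be mapped back to patch positions to visualise results over the whole face, as the abstract notes.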
Affiliation(s)
- Ken Nishino
- Makeup Products Research, Kao Corporation, Odawara, Kanagawa, Japan
8
Chanda T, Hauser K, Hobelsberger S, Bucher TC, Garcia CN, Wies C, Kittler H, Tschandl P, Navarrete-Dechent C, Podlipnik S, Chousakos E, Crnaric I, Majstorovic J, Alhajwan L, Foreman T, Peternel S, Sarap S, Özdemir İ, Barnhill RL, Llamas-Velasco M, Poch G, Korsing S, Sondermann W, Gellrich FF, Heppt MV, Erdmann M, Haferkamp S, Drexler K, Goebeler M, Schilling B, Utikal JS, Ghoreschi K, Fröhling S, Krieghoff-Henning E, Brinker TJ. Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma. Nat Commun 2024; 15:524. [PMID: 38225244] [PMCID: PMC10789736] [DOI: 10.1038/s41467-023-43095-4]
Abstract
Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, hindering user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists' decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise and domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists' diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists' confidence in their diagnoses and their trust in the support system increase significantly with XAI compared to conventional AI. This study highlights dermatologists' willingness to adopt such XAI systems, promoting future use in the clinic.
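This paper's XAI produces domain-specific, dermatologist-like explanations; purely as a generic contrast, one of the simplest model-agnostic explanation techniques, an occlusion-based saliency map (not the authors' method), can be sketched as:

```python
import numpy as np

def occlusion_map(model, image, patch=4):
    """Crude saliency: the score drop when each patch is zeroed out."""
    base = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

model = lambda img: img.sum()  # dummy scorer standing in for a classifier
img = np.ones((8, 8))
heat = occlusion_map(model, img)
print(heat.shape)  # (2, 2)
```

Regions whose occlusion drops the score the most are the ones the model relies on; the paper's contribution is replacing such generic heatmaps with clinically meaningful explanations.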
Affiliation(s)
- Tirtha Chanda
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Katja Hauser
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sarah Hobelsberger
- Department of Dermatology, University Hospital, Technical University Dresden, Dresden, Germany
- Tabea-Clara Bucher
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Carina Nogueira Garcia
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Christoph Wies
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Medical Faculty of University Heidelberg, Heidelberg, Germany
- Harald Kittler
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Philipp Tschandl
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Cristian Navarrete-Dechent
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sebastian Podlipnik
- Dermatology Department, Hospital Clínic of Barcelona, University of Barcelona, IDIBAPS, Barcelona, Spain
- Emmanouil Chousakos
- 1st Department of Pathology, Medical School, National & Kapodistrian University of Athens, Athens, Greece
- Iva Crnaric
- Department of Dermatovenereology, Sestre milosrdnice University Hospital Center, Zagreb, Croatia
- Linda Alhajwan
- Department of Dermatology, Dubai London Clinic, Dubai, United Arab Emirates
- Sandra Peternel
- Department of Dermatovenereology, Clinical Hospital Center Rijeka, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
- İrem Özdemir
- Department of Dermatology, Faculty of Medicine, Gazi University, Ankara, Turkey
- Raymond L Barnhill
- Department of Translational Research, Institut Curie, Unit of Formation and Research of Medicine University of Paris, Paris, France
- Gabriela Poch
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Dermatology, Venereology and Allergology, Berlin, Germany
- Sören Korsing
- Department of Dermatology, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Wiebke Sondermann
- Department of Dermatology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Markus V Heppt
- Department of Dermatology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Michael Erdmann
- Department of Dermatology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sebastian Haferkamp
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
- Konstantin Drexler
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
- Matthias Goebeler
- Department of Dermatology, Venereology and Allergology, University Hospital Würzburg, Würzburg, Germany
- Bastian Schilling
- Department of Dermatology, Venereology and Allergology, University Hospital Würzburg, Würzburg, Germany
- Jochen S Utikal
- Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, Ruprecht-Karl University of Heidelberg, Mannheim, Germany
- Kamran Ghoreschi
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Dermatology, Venereology and Allergology, Berlin, Germany
- Stefan Fröhling
- Division of Translational Medical Oncology, National Center for Tumor Diseases (NCT) Heidelberg and German Cancer Research Center (DKFZ), Heidelberg, Germany
- Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
9
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers with a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India.
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India.
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India.
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India.
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India.
10
Del Amor R, Pérez-Cano J, López-Pérez M, Terradez L, Aneiros-Fernandez J, Morales S, Mateos J, Molina R, Naranjo V. Annotation protocol and crowdsourcing multiple instance learning classification of skin histological images: The CR-AI4SkIN dataset. Artif Intell Med 2023; 145:102686. [PMID: 37925214] [DOI: 10.1016/j.artmed.2023.102686]
Abstract
Digital Pathology (DP) has experienced significant growth in recent years and has become an essential tool for the diagnosis and prognosis of tumors. The availability of Whole Slide Images (WSIs) and the implementation of Deep Learning (DL) algorithms have paved the way for Artificial Intelligence (AI) systems that support the diagnosis process. These systems require extensive and varied data for their training to be successful. However, creating labeled datasets in histopathology is laborious and time-consuming. We have developed a crowdsourcing-multiple instance labeling/learning protocol that is applied to the creation and use of the CR-AI4SkIN dataset. CR-AI4SkIN contains 271 WSIs of 7 Cutaneous Spindle Cell (CSC) neoplasms with expert and non-expert labels at region and WSI levels, and is the first dataset of these types of neoplasms made available. The regions selected by the experts are used to learn an automatic extractor of Regions of Interest (ROIs) from WSIs. To produce the embedding of each WSI, the representations of patches within the ROIs are obtained using a contrastive learning method and then combined. Finally, they are fed to a Gaussian process-based crowdsourcing classifier, which utilizes the noisy non-expert WSI labels. We validate our crowdsourcing-multiple instance learning method on the CR-AI4SkIN dataset, addressing a binary classification problem (malign vs. benign). The proposed method obtains an F1 score of 0.7911 on the test set, outperforming three widely used aggregation methods for crowdsourcing tasks. Furthermore, our crowdsourcing method also outperforms the supervised model trained with expert labels on the test set (F1 score = 0.6035). These promising results support the proposed crowdsourcing multiple instance learning annotation protocol, and validate the automatic extraction of regions of interest and the use of contrastive embeddings and Gaussian process classification for crowdsourcing classification tasks.
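A majority vote over noisy non-expert labels, the kind of baseline aggregation strategy such crowdsourcing classifiers are compared against, can be sketched as (the slide IDs and labels below are invented):

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate noisy per-slide labels from several annotators by majority vote,
    a simple baseline to the Gaussian-process crowdsourcing classifier."""
    return {wsi: Counter(labels).most_common(1)[0][0]
            for wsi, labels in annotations.items()}

votes = {"wsi_01": ["benign", "benign", "malign"],
         "wsi_02": ["malign", "malign", "benign"]}
print(majority_vote(votes))  # {'wsi_01': 'benign', 'wsi_02': 'malign'}
```

Unlike this vote, the paper's Gaussian-process classifier models annotator reliability rather than weighting all non-experts equally.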
Affiliation(s)
- Rocío Del Amor
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
- Jose Pérez-Cano
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
- Miguel López-Pérez
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
- Liria Terradez
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Spain
- Sandra Morales
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
- Javier Mateos
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
- Rafael Molina
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
11
Leo MM, Potter IY, Zahiri M, Vaziri A, Jung CF, Feldman JA. Using Deep Learning to Detect the Presence and Location of Hemoperitoneum on the Focused Assessment with Sonography in Trauma (FAST) Examination in Adults. J Digit Imaging 2023; 36:2035-2050. [PMID: 37286904 PMCID: PMC10501965 DOI: 10.1007/s10278-023-00845-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 02/14/2023] [Revised: 04/13/2023] [Accepted: 05/04/2023] [Indexed: 06/09/2023]
Abstract
Abdominal ultrasonography has become an integral component of the evaluation of trauma patients. Internal hemorrhage can be rapidly diagnosed by finding free fluid with point-of-care ultrasound (POCUS), expediting decisions to perform lifesaving interventions. However, the widespread clinical application of ultrasound is limited by the expertise required for image interpretation. This study aimed to develop a deep learning algorithm to identify the presence and location of hemoperitoneum on POCUS to assist novice clinicians in accurate interpretation of the Focused Assessment with Sonography in Trauma (FAST) exam. We analyzed right upper quadrant (RUQ) FAST exams obtained from 94 adult patients (44 confirmed hemoperitoneum) using the YOLOv3 object detection algorithm. Exams were partitioned via fivefold stratified sampling for training, validation, and hold-out testing. We assessed each exam image by image using YOLOv3 and determined hemoperitoneum presence for the exam from the detection with the highest confidence score. We set the detection threshold to the score that maximizes the geometric mean of sensitivity and specificity over the validation set. The algorithm had 95% sensitivity, 94% specificity, 95% accuracy, and 97% AUC over the test set, significantly outperforming three recent methods. The algorithm also exhibited strength in localization, achieving a 56% IoU averaged over positive cases, although detected box sizes varied. Image processing demonstrated only 57-ms latency, which is adequate for real-time use at the bedside. These results suggest that a deep learning algorithm can rapidly and accurately identify the presence and location of free fluid in the RUQ of the FAST exam in adult patients with hemoperitoneum.
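The threshold-selection step described in this abstract — choosing the operating point that maximizes the geometric mean of sensitivity and specificity on a validation set — can be sketched as follows. This is an illustrative reimplementation, not the authors' code, and `pick_threshold` is a hypothetical helper name:

```python
import numpy as np

def pick_threshold(scores, labels):
    """Return the score threshold maximizing sqrt(sensitivity * specificity)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_t, best_g = 0.0, -1.0
    # Candidate thresholds: each observed confidence score.
    for t in np.unique(scores):
        pred = scores >= t  # detections at or above t count as positive
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        g = (sens * spec) ** 0.5  # geometric mean of the two rates
        if g > best_g:
            best_t, best_g = t, g
    return best_t, best_g
```

The same sweep works for any per-exam confidence score; in the paper's setting the scores would be the highest YOLOv3 detection confidence per exam.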
Affiliation(s)
- Megan M Leo
- Boston University School of Medicine, Boston, MA, USA.
- Department of Emergency Medicine, Boston Medical Center, BCD Building, 800 Harrison Ave, 1st Floor, Boston, MA, 02118, USA
- Christine F Jung
- Division of Emergency Ultrasound, Department of Emergency Medicine, John H. Stroger Jr. Hospital of Cook County, Chicago, IL, USA
- Department of Emergency Medicine, Chicago Medical School of Rosalind Franklin University of Medical Sciences, Chicago, IL, USA
- Department of Emergency Medicine, Rush Medical College, Chicago, IL, USA
- James A Feldman
- Boston University School of Medicine, Boston, MA, USA
- Department of Emergency Medicine, Boston Medical Center, BCD Building, 800 Harrison Ave, 1st Floor, Boston, MA, 02118, USA
12
Abd Elaziz M, Dahou A, Mabrouk A, El-Sappagh S, Aseeri AO. An Efficient Artificial Rabbits Optimization Based on Mutation Strategy For Skin Cancer Prediction. Comput Biol Med 2023; 163:107154. [PMID: 37364532 DOI: 10.1016/j.compbiomed.2023.107154] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/24/2023] [Revised: 05/26/2023] [Accepted: 06/07/2023] [Indexed: 06/28/2023]
Abstract
Accurate skin lesion diagnosis is critical for the early detection of melanoma. However, existing approaches are unable to attain substantial levels of accuracy. Recently, pre-trained Deep Learning (DL) models have been applied to tackle tasks such as skin cancer detection and improve efficiency, instead of training models from scratch. Therefore, we develop a robust model for skin cancer detection with a DL-based model as a feature extraction backbone, which is achieved using the MobileNetV3 architecture. In addition, a novel algorithm called the Improved Artificial Rabbits Optimizer (IARO) is introduced, which uses Gaussian mutation and a crossover operator to discard unimportant features from those extracted using MobileNetV3. The PH2, ISIC-2016, and HAM10000 datasets are used to validate the developed approach's efficiency. The empirical results show that the developed approach yields outstanding accuracy: 87.17% on the ISIC-2016 dataset, 96.79% on the PH2 dataset, and 88.71% on the HAM10000 dataset. Experiments show that the IARO can significantly improve the prediction of skin cancer.
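The wrapper feature-selection idea in this abstract — evolving a binary feature mask with Gaussian mutation and crossover, scored by a downstream classifier — can be illustrated with a deliberately simplified toy search. This is not the authors' IARO; the greedy population update, the nearest-centroid fitness, and the function names are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_score(mask, X, y):
    # Fitness of a binary feature mask: nearest-class-centroid accuracy
    # on the selected features (a cheap stand-in for a real classifier).
    if mask.sum() == 0:
        return 0.0
    Xm = X[:, mask]
    c0, c1 = Xm[y == 0].mean(axis=0), Xm[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xm - c1, axis=1)
            < np.linalg.norm(Xm - c0, axis=1)).astype(int)
    return float((pred == y).mean())

def select_features(X, y, pop_size=16, iters=25, sigma=0.35):
    n_feat = X.shape[1]
    masks = rng.random((pop_size, n_feat)) > 0.5  # random initial population
    for _ in range(iters):
        scores = np.array([mask_score(m, X, y) for m in masks])
        best = masks[scores.argmax()]
        for i in range(pop_size):
            cross = rng.random(n_feat) < 0.5               # crossover with best
            child = np.where(cross, best, masks[i]).astype(float)
            child = (child + rng.normal(0.0, sigma, n_feat)) > 0.5  # Gaussian mutation
            if mask_score(child, X, y) >= scores[i]:       # greedy replacement
                masks[i] = child
    scores = np.array([mask_score(m, X, y) for m in masks])
    return masks[scores.argmax()]
```

In the paper, the fitness would instead score a skin-cancer classifier on MobileNetV3 features, and the update rule follows the artificial-rabbits dynamics rather than this greedy loop.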
Affiliation(s)
- Mohamed Abd Elaziz
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig, 44519, Egypt; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt; Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos 13-5053, Lebanon; MEU Research Unit, Middle East University, Amman 11831, Jordan.
- Abdelghani Dahou
- Mathematics and Computer Science Department, University of Ahmed DRAIA, 01000, Adrar, Algeria
- Alhassan Mabrouk
- Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni Suef 62511, Egypt
- Shaker El-Sappagh
- Information Systems Department, Faculty of Computers and Artificial Intelligence, Benha University, Egypt; Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
- Ahmad O Aseeri
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, 11942, Saudi Arabia
13
Kaur R, GholamHosseini H. Analyzing the Impact of Image Denoising and Segmentation on Melanoma Classification Using Convolutional Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083686 DOI: 10.1109/embc40787.2023.10340135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/18/2023]
Abstract
Early skin cancer detection and treatment are crucial for reducing death rates worldwide. Deep learning techniques have been used successfully to develop automatic lesion detection systems. This study explores the impact of pre-processing steps such as data augmentation, contrast enhancement, and segmentation on improving convolutional neural network (CNN) performance for lesion classification. The classification network was designed from scratch by uniquely organizing its layers and varying the number of kernels, network depth, kernel size, and hyperparameters. In addition, the network's performance was improved by pre-processing and segmentation steps. The proposed network was compared with the current state of the art to demonstrate its best performance on the benchmark HAM10000 lesion dataset. The experimental study revealed that the classification network using denoised and segmented data achieved an accuracy (ACC), precision (PRE), recall (REC), specificity (SPE), and F-score of 93.40%, 93.45%, 94.51%, 92.08%, and 93.98%, respectively. To conclude, classification performance can be improved by incorporating pre-processing and segmentation steps.
14
Dahou A, Aseeri AO, Mabrouk A, Ibrahim RA, Al-Betar MA, Elaziz MA. Optimal Skin Cancer Detection Model Using Transfer Learning and Dynamic-Opposite Hunger Games Search. Diagnostics (Basel) 2023; 13:1579. [PMID: 37174970 PMCID: PMC10178333 DOI: 10.3390/diagnostics13091579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/14/2023] [Revised: 04/21/2023] [Accepted: 04/25/2023] [Indexed: 05/15/2023]
Abstract
Recently, pre-trained deep learning (DL) models have been employed to tackle and enhance performance on many tasks, such as skin cancer detection, instead of training models from scratch. However, existing systems are unable to attain substantial levels of accuracy. Therefore, we propose, in this paper, a robust skin cancer detection framework that improves accuracy by extracting and learning relevant image representations using a MobileNetV3 architecture. Thereafter, the extracted features are used as input to a modified Hunger Games Search (HGS) based on Particle Swarm Optimization (PSO) and Dynamic-Opposite Learning (DOLHGS). This modification serves as a novel feature selection method that retains the most relevant features to maximize the model's performance. To evaluate the efficiency of the developed DOLHGS, the ISIC-2016 and PH2 datasets were employed, comprising two and three categories, respectively. The proposed model achieves an accuracy of 88.19% on the ISIC-2016 dataset and 96.43% on PH2. Based on the experimental results, the proposed approach showed more accurate and efficient performance in skin cancer detection than other well-known and popular algorithms in terms of classification accuracy and optimized features.
Affiliation(s)
- Abdelghani Dahou
- Mathematics and Computer Science Department, University of Ahmed DRAIA, Adrar 01000, Algeria
- Ahmad O Aseeri
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Alhassan Mabrouk
- Mathematics and Computer Science Department, Faculty of Science, Beni-Suef University, Beni-Suef 65214, Egypt
- Rehab Ali Ibrahim
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
- Mohammed Azmi Al-Betar
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Mohamed Abd Elaziz
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman P.O. Box 346, United Arab Emirates
- Faculty of Computer Science & Engineering, Galala University, Suez 43511, Egypt
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 10999, Lebanon
15
Del Amor R, Silva-Rodríguez J, Naranjo V. Labeling confidence for uncertainty-aware histology image classification. Comput Med Imaging Graph 2023; 107:102231. [PMID: 37087899 DOI: 10.1016/j.compmedimag.2023.102231] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 12/14/2022] [Revised: 02/23/2023] [Accepted: 03/27/2023] [Indexed: 04/25/2023]
Abstract
Deep learning-based models applied to digital pathology require large, curated datasets with high-quality (HQ) annotations to perform correctly. In many cases, recruiting expert pathologists to annotate large databases is not feasible, and it is necessary to collect additional labeled data with varying label qualities, e.g., from pathologists-in-training (henceforth, non-expert annotators). Learning from datasets with noisy labels is more challenging in medical applications, since medical imaging datasets tend to have instance-dependent noise and suffer from high inter/intra-observer variability. In this paper, we design an uncertainty-driven labeling strategy with which we generate soft labels from 10 non-expert annotators for multi-class skin cancer classification. Based on this soft annotation, we propose an uncertainty estimation-based framework to handle these noisy labels. This framework is based on a novel formulation using a dual-branch min-max entropy calibration to penalize inexact labels during training. Comprehensive experiments demonstrate the promising performance of our labeling strategy. Results show a consistent improvement from using soft labels with the standard cross-entropy loss during training (∼4.0% F1-score) and a further increase when calibrating the model with the proposed min-max entropy calibration (∼6.6% F1-score). These improvements are produced at negligible cost, both in terms of annotation and computation.
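The soft-labeling idea — turning multiple noisy annotators' votes into per-class probability targets and training against them with a soft cross-entropy — can be sketched as follows. This is a generic illustration, not the paper's uncertainty-driven strategy, and both function names are assumptions:

```python
import numpy as np

def soft_labels(votes, n_classes):
    """Turn per-annotator class votes (n_samples, n_annotators) into
    soft label distributions (n_samples, n_classes) by vote fraction."""
    votes = np.asarray(votes)
    counts = np.stack([np.bincount(row, minlength=n_classes) for row in votes])
    return counts / votes.shape[1]

def soft_cross_entropy(probs, targets, eps=1e-12):
    """Mean cross-entropy of predicted class probabilities against soft targets."""
    probs = np.asarray(probs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return float(-(targets * np.log(probs + eps)).sum(axis=1).mean())
```

A sample on which annotators disagree yields a flatter target distribution, so the loss penalizes overconfident predictions on exactly those uncertain cases; the paper goes further by calibrating entropy explicitly.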
Affiliation(s)
- Rocío Del Amor
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain.
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
16
Obayya M, Alhebri A, Maashi M, Salama AS, Mustafa Hilal A, Alsaid MI, Osman AE, Alneil AA. Henry Gas Solubility Optimization Algorithm based Feature Extraction in Dermoscopic Images Analysis of Skin Cancer. Cancers (Basel) 2023; 15:2146. [PMID: 37046806 PMCID: PMC10093373 DOI: 10.3390/cancers15072146] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 02/15/2023] [Revised: 03/27/2023] [Accepted: 03/30/2023] [Indexed: 04/08/2023]
Abstract
Artificial Intelligence (AI) techniques have changed general perceptions about medical diagnostics, especially after the introduction and development of Convolutional Neural Networks (CNN) and advanced Deep Learning (DL) and Machine Learning (ML) approaches. In general, dermatologists visually inspect the images and assess morphological variables such as borders, colors, and shapes to diagnose the disease. Against this background, AI techniques make use of algorithms and computer systems to mimic the cognitive functions of the human brain and assist clinicians and researchers. In recent years, AI has been applied extensively in the domain of dermatology, especially for the detection and classification of skin cancer and other general skin diseases. In this research article, the authors propose an Optimal Multi-Attention Fusion Convolutional Neural Network-based Skin Cancer Diagnosis (MAFCNN-SCD) technique for the detection of skin cancer in dermoscopic images. The primary aim of the proposed MAFCNN-SCD technique is to classify skin cancer on dermoscopic images. In the presented MAFCNN-SCD technique, data pre-processing is performed at the initial stage. Next, the MAFNet method is applied as a feature extractor with the Henry Gas Solubility Optimization (HGSO) algorithm as a hyperparameter optimizer. Finally, the Deep Belief Network (DBN) method is exploited for the detection and classification of skin cancer. A sequence of simulations was conducted to establish the superior performance of the proposed MAFCNN-SCD approach. The comprehensive comparative analysis confirmed the superior performance of the proposed MAFCNN-SCD technique over other methodologies.
Affiliation(s)
- Marwa Obayya
- Department of Biomedical Engineering, College of Engineering, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Adeeb Alhebri
- Department of Accounting, Applied College, King Khalid University, Mohail Asser 63311, Saudi Arabia
- Mashael Maashi
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- Ahmed S. Salama
- Department of Electrical Engineering, Faculty of Engineering & Technology, Future University in Egypt, New Cairo 11845, Egypt
- Anwer Mustafa Hilal
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Mohamed Ibrahim Alsaid
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Azza Elneil Osman
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Amani A. Alneil
- Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
17
Taher F, Shoaib MR, Emara HM, Abdelwahab KM, Abd El-Samie FE, Haweel MT. Efficient framework for brain tumor detection using different deep learning techniques. Front Public Health 2022; 10:959667. [PMID: 36530682 PMCID: PMC9752904 DOI: 10.3389/fpubh.2022.959667] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 06/06/2022] [Accepted: 08/31/2022] [Indexed: 12/03/2022]
Abstract
Brain tumors are aggressive malignancies caused by unregulated cell division. Tumors are classified using a biopsy, which is normally performed after the final brain surgery. Advancements in deep learning technology have assisted health professionals in medical imaging for the diagnosis of several conditions. In this paper, transfer-learning-based models, in addition to a Convolutional Neural Network (CNN) called BRAIN-TUMOR-net trained from scratch, are introduced to classify brain magnetic resonance images into tumor or normal cases. A comparison between the pre-trained InceptionResNetV2, InceptionV3, and ResNet50 models and the proposed BRAIN-TUMOR-net is introduced. The performance of the proposed model is tested on three publicly available Magnetic Resonance Imaging (MRI) datasets. The simulation results show that BRAIN-TUMOR-net achieves the highest accuracy compared to the other models, reaching 100%, 97%, and 84.78% on the three MRI datasets. In addition, the k-fold cross-validation technique is used to ensure robust classification. Moreover, three different unsupervised clustering techniques are utilized for segmentation.
Affiliation(s)
- Fatma Taher
- College of Technological Innovative, Zayed University, Abu Dhabi, United Arab Emirates
- Mohamed R. Shoaib
- Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Heba M. Emara
- Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Fathi E. Abd El-Samie
- Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Mohammad T. Haweel
- Department of Electrical Engineering, Shaqra University, Shaqraa, Saudi Arabia
18
Wang S, Yin Y, Wang D, Wang Y, Jin Y. Interpretability-Based Multimodal Convolutional Neural Networks for Skin Lesion Diagnosis. IEEE Trans Cybern 2022; 52:12623-12637. [PMID: 34546933 DOI: 10.1109/tcyb.2021.3069920] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Indexed: 06/13/2023]
Abstract
Skin lesion diagnosis is a key step for skin cancer screening, which requires high accuracy and interpretability. Though many computer-aided methods, especially deep learning methods, have made remarkable achievements in skin lesion diagnosis, their generalization and interpretability are still a challenge. To solve this issue, we propose an interpretability-based multimodal convolutional neural network (IM-CNN), which is a multiclass classification model with skin lesion images and metadata of patients as input for skin lesion diagnosis. The structure of IM-CNN consists of three main paths to deal with metadata, features extracted from segmented skin lesion with domain knowledge, and skin lesion images, respectively. We add interpretable visual modules to provide explanations for both images and metadata. In addition to area under the ROC curve (AUC), sensitivity, and specificity, we introduce a new indicator, an AUC curve with a sensitivity larger than 80% (AUC_SEN_80) for performance evaluation. Extensive experimental studies are conducted on the popular HAM10000 dataset, and the results indicate that the proposed model has overwhelming advantages compared with popular deep learning models, such as DenseNet, ResNet, and other state-of-the-art models for melanoma diagnosis. The proposed multimodal model also achieves on average 72% and 21% improvement in terms of sensitivity and AUC_SEN_80, respectively, compared with the single-modal model. The visual explanations can also help gain trust from dermatologists and realize man-machine collaborations, effectively reducing the limitation of black-box models in supporting medical decision making.
19
Bassel A, Abdulkareem AB, Alyasseri ZAA, Sani NS, Mohammed HJ. Automatic Malignant and Benign Skin Cancer Classification Using a Hybrid Deep Learning Approach. Diagnostics (Basel) 2022; 12:2472. [PMID: 36292161 PMCID: PMC9600556 DOI: 10.3390/diagnostics12102472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/26/2022] [Revised: 09/10/2022] [Accepted: 09/13/2022] [Indexed: 11/28/2022]
Abstract
Skin cancer is one of the major types of cancer, with an increasing incidence in recent decades. It arises from various dermatologic disorders and is classified into various types based on texture, color, morphological features, and structure. The conventional approach to skin cancer identification is time-consuming and costly. Currently, medical science utilizes various digital tools for the classification of skin cancer. Machine learning-based classification is the robust and dominant approach among automatic methods of classifying skin cancer. Various existing and proposed methods, including deep neural networks, support vector machines (SVM), neural networks (NN), random forests (RF), and K-nearest neighbors (KNN), are used for malignant and benign skin cancer identification. In this study, a method was proposed based on the stacking of classifiers with three folds for the classification of melanoma and benign skin cancers. The system was trained with 1000 skin images in the melanoma and benign categories. Training and testing were performed using 70 and 30 percent of the overall dataset, respectively. Primary feature extraction was conducted using the ResNet50, Xception, and VGG16 methods. Accuracy, F1 score, AUC, and sensitivity metrics were used for the overall performance evaluation. In the proposed Stacked CV method, the system was trained in three levels by deep learning, SVM, RF, NN, KNN, and logistic regression methods. The proposed method with Xception feature extraction achieved 90.9% accuracy, outperforming the ResNet50 and VGG16 variants. Improvement and optimization of the proposed method with a larger training dataset could provide a reliable and robust skin cancer classification system.
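The stacked-ensemble recipe described in this abstract — CNN-derived features, a 70/30 split, and heterogeneous base learners combined by a meta-classifier over three folds — can be sketched with scikit-learn. The synthetic data stands in for the CNN features, and the exact estimators and parameters here are assumptions for illustration, not the authors' configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in features: in the paper these would come from ResNet50/Xception/VGG16.
X, y = make_classification(n_samples=400, n_features=32, n_informative=8,
                           random_state=0)
# 70/30 train/test split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("svm", SVC(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=3,  # three-fold stacking, echoing the study's "three folds"
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
print(f"stacked test accuracy: {acc:.3f}")
```

The stacking meta-learner sees each base model's out-of-fold predictions, so it can learn which base classifier to trust in which region of the feature space.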
Affiliation(s)
- Atheer Bassel
- Computer Center, University of Anbar, Al-Anbar 31001, Iraq
- Amjed Basil Abdulkareem
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor Darul Ehsan, Malaysia
- Zaid Abdi Alkareem Alyasseri
- ECE Dept., Faculty of Engineering, University of Kufa, Najaf 54001, Iraq
- College of Engineering, University of Warith Al-Anbiyaa, Karbala 63514, Iraq
- Information Technology Research and Development Centre, University of Kufa, Najaf 54001, Iraq
- Nor Samsiah Sani
- Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor Darul Ehsan, Malaysia
- Husam Jasim Mohammed
- Department of Business Administration, College of Administration and Financial Sciences, Imam Ja’afar Al-Sadiq University, Baghdad 10001, Iraq
20
Deep Learning Based Tongue Prickles Detection in Traditional Chinese Medicine. Evid Based Complement Alternat Med 2022; 2022:5899975. [PMID: 36185091 PMCID: PMC9522517 DOI: 10.1155/2022/5899975] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/21/2022] [Revised: 08/08/2022] [Accepted: 08/26/2022] [Indexed: 12/03/2022]
Abstract
Tongue diagnosis is a convenient and noninvasive clinical practice of traditional Chinese medicine (TCM), having existed for thousands of years. The prickle, an essential indicator in TCM, appears as a large number of red thorns protruding from the tongue. The term “prickly tongue” has been used to describe the flow of qi and blood in TCM and to assess the conditions of disease as well as the health status of subhealthy people. Different locations and densities of prickles indicate different symptoms. As shown by modern medical research, prickles originate in the fungiform papillae, which enlarge and protrude to form awn-like spikes. Prickle recognition, however, is subjective, burdensome, and susceptible to external factors. To solve this issue, an end-to-end prickle detection workflow based on deep learning is proposed. First, raw tongue images are fed into the Swin Transformer to remove interference information. Then, segmented tongues are partitioned into four areas: root, center, tip, and margin. We manually labeled the prickles on 224 tongue images with the assistance of an OpenCV spot detector. After training on the labeled dataset, a super-resolution Faster R-CNN extracts advanced tongue features and predicts the bounding box of each single prickle. We show the synergy of deep learning and TCM by achieving a 92.42% recall, which is 2.52% higher than the previous work. This work provides a quantitative perspective for symptom and disease diagnosis according to tongue characteristics. Furthermore, it is convenient to transfer this portable model to detect petechiae or tooth marks on tongue images.
21
Mukhlif AA, Al-Khateeb B, Mohammed MA. An extensive review of state-of-the-art transfer learning techniques used in medical imaging: Open issues and challenges. J Intell Syst 2022. [DOI: 10.1515/jisys-2022-0198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022]
Abstract
Deep learning techniques, which use a powerful architecture known as the convolutional neural network, have shown excellent results in a variety of areas, including image processing and interpretation. However, as the depth of these networks grows, so does the demand for the large amount of labeled data required to train them. The medical field in particular suffers from a lack of images, because the procedure for obtaining labeled medical images in healthcare is difficult, expensive, and requires specialized expertise to add labels to images. Moreover, the process may be error-prone and time-consuming. Current research has revealed transfer learning as a viable solution to this problem. Transfer learning allows us to transfer knowledge gained from a previous task to improve and tackle a new problem. This study aims to conduct a comprehensive survey of recent studies that dealt with solving this problem and the most important metrics used to evaluate these methods. In addition, this study identifies problems in transfer learning techniques and highlights problems with medical datasets and potential issues that can be addressed in future research. According to our review, many researchers use models pre-trained on the ImageNet dataset (VGG16, ResNet, Inception v3) in many applications, such as skin cancer, breast cancer, and diabetic retinopathy classification tasks. These techniques require further investigation, since the models were trained on natural, non-medical images. In addition, many researchers use data augmentation techniques to expand their datasets and avoid overfitting. However, few studies have shown the effect on performance with and without data augmentation. Accuracy, recall, precision, F1 score, receiver operating characteristic curve, and area under the curve (AUC) were the most widely used measures in these studies. Furthermore, we identified problems in the datasets for melanoma and breast cancer and suggested corresponding solutions.
Affiliation(s)
- Abdulrahman Abbas Mukhlif
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
- Belal Al-Khateeb
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
- Mazin Abed Mohammed
- Computer Science Department, College of Computer Science and Information Technology, University of Anbar, 31001, Ramadi, Anbar, Iraq
22
Girdhar N, Sinha A, Gupta S. DenseNet-II: an improved deep convolutional neural network for melanoma cancer detection. Soft Comput 2022; 27:1-20. [PMID: 36034768 PMCID: PMC9400005 DOI: 10.1007/s00500-022-07406-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Accepted: 06/16/2022] [Indexed: 10/28/2022]
Abstract
Medical research shows that melanoma is one of the deadliest cancers; the condition develops due to the uncontrolled growth of melanocytic cells. Current trends in disease detection revolve around two main categories of models: general machine learning models and deep learning models. Furthermore, experimental analysis of melanoma requires visual records such as dermatological scans or ordinary camera images, which accentuates the need for a more accurate model for melanoma detection. In this work, we aim to achieve this primarily through the extensive use of neural networks. Our objective is to propose a deep learning CNN framework to improve the accuracy of melanoma detection by customizing the number of layers in the network architecture, the activation functions applied, and the dimensions of the input array. Models like ResNet, DenseNet, Inception, and VGG have been shown to yield appreciable accuracy in melanoma detection; however, in most cases the dataset was classified into malignant and benign classes only. The dataset used in our research provides seven lesion types: melanocytic nevi, melanoma, benign keratosis, basal cell carcinoma, actinic keratoses, vascular lesions, and dermatofibroma. Thus, through the HAM10000 dataset and various deep learning models, we diversified the precision factors as well as the input qualities. The obtained results are highly promising and establish the model's credibility.
Affiliation(s)
- Nancy Girdhar: School of Computer Science Engineering and Technology, Bennett University, Greater Noida, UP, India
- Aparna Sinha: Amity School of Engineering and Technology, Amity University, Noida, UP, India
- Shivang Gupta: Amity School of Engineering and Technology, Amity University, Noida, UP, India
23
Lee JRH, Pavlova M, Famouri M, Wong A. Cancer-Net SCa: tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med Imaging 2022; 22:143. [PMID: 35945505 PMCID: PMC9364616 DOI: 10.1186/s12880-022-00871-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 05/04/2021] [Accepted: 07/26/2022] [Indexed: 11/25/2022] Open
Abstract
Background Skin cancer continues to be the most frequently diagnosed form of cancer in the U.S., with not only significant effects on health and well-being but also significant economic costs associated with treatment. A crucial step to the treatment and management of skin cancer is effective early detection with key screening approaches such as dermoscopy examinations, leading to stronger recovery prognoses. Motivated by the advances of deep learning and inspired by the open source initiatives in the research community, in this study we introduce Cancer-Net SCa, a suite of deep neural network designs tailored for the detection of skin cancer from dermoscopy images that is open source and available to the general public. To the best of the authors’ knowledge, Cancer-Net SCa comprises the first machine-driven design of deep neural network architectures tailored specifically for skin cancer detection, one of which leverages attention condensers for an efficient self-attention design. Results We investigate and audit the behaviour of Cancer-Net SCa in a responsible and transparent manner through explainability-driven performance validation. All the proposed designs achieved improved accuracy when compared to the ResNet-50 architecture while also achieving significantly reduced architectural and computational complexity. In addition, when evaluating the decision making process of the networks, it can be seen that diagnostically relevant critical factors are leveraged rather than irrelevant visual indicators and imaging artifacts. Conclusion The proposed Cancer-Net SCa designs achieve strong skin cancer detection performance on the International Skin Imaging Collaboration (ISIC) dataset, while providing a strong balance between computation and architectural efficiency and accuracy. 
While Cancer-Net SCa is not a production-ready screening solution, the hope is that the release of Cancer-Net SCa in open source, open access form will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
Affiliation(s)
- James Ren Hou Lee: Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada
- Maya Pavlova: Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada; DarwinAI Corp, Waterloo, Canada
- Alexander Wong: Vision and Image Processing Research Group, University of Waterloo, Waterloo, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, Canada; DarwinAI Corp, Waterloo, Canada
24
Aljuaid H, Alturki N, Alsubaie N, Cavallaro L, Liotta A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. Comput Methods Programs Biomed 2022; 223:106951. [PMID: 35767911 DOI: 10.1016/j.cmpb.2022.106951] [Citation(s) in RCA: 39] [Impact Index Per Article: 19.5] [Received: 03/11/2022] [Revised: 05/25/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Many developed and developing countries worldwide suffer from fatal cancer-related diseases. In particular, the rate of breast cancer in females increases daily, partially due to a lack of awareness and missed diagnosis at the early stages. Proper first-line breast cancer treatment can only be provided by adequately detecting and classifying cancer during the very early stages of its development. The use of medical image analysis techniques and computer-aided diagnosis can help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet-18, ShuffleNet, and Inception-V3Net) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS Our proposed method achieves the best average accuracies for binary classification of benign versus malignant cases: 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3Net, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3Net, and ShuffleNet, respectively.
Affiliation(s)
- Hanan Aljuaid: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Najah Alsubaie: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Lucia Cavallaro: Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani 3, Bolzano 39100, Italy
- Antonio Liotta: Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani 3, Bolzano 39100, Italy
25
An Efficient Galactic Swarm Optimization Based Fractal Neural Network Model with DWT for Malignant Melanoma Prediction. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10847-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/17/2022]
26
Computational Intelligence-Based Melanoma Detection and Classification Using Dermoscopic Images. Comput Intell Neurosci 2022; 2022:2370190. [PMID: 35685142 PMCID: PMC9173896 DOI: 10.1155/2022/2370190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/09/2022] [Revised: 04/18/2022] [Accepted: 05/09/2022] [Indexed: 11/21/2022]
Abstract
Melanoma is a kind of skin cancer caused by the irregular development of pigment-producing cells. Since melanoma detection efficiency is limited by factors such as poor contrast between lesions and nearby skin regions and the visual resemblance between melanoma and non-melanoma lesions, intelligent computer-aided diagnosis (CAD) models are essential. Recently, computational intelligence (CI) and deep learning (DL) techniques have been utilized for effective decision-making in the biomedical field. In addition, fast-growing advancements in computer-aided surgery and recent progress in molecular, cellular, and tissue engineering research have made CI an inevitable part of biomedical applications. In this view, this work develops a novel computational intelligence-based melanoma detection and classification technique using dermoscopic images (CIMDC-DI). The proposed CIMDC-DI model encompasses different subprocesses. Primarily, bilateral filtering with fuzzy k-means (FKM) clustering-based image segmentation is applied as a preprocessing step. Besides, a NasNet-based feature extractor with stochastic gradient descent is applied for feature extraction. Finally, the manta ray foraging optimization (MRFO) algorithm with a cascaded neural network (CNN) is exploited for the classification process. To ensure the potential efficiency of the CIMDC-DI technique, we conducted a wide-ranging simulation analysis, and the results reported its effectiveness over existing recent algorithms, with a maximum accuracy of 97.50%.
27
Willem T, Krammer S, Böhm A, French LE, Hartmann D, Lasser T, Buyx A. Risks and benefits of dermatological machine learning healthcare applications – an overview and ethical analysis. J Eur Acad Dermatol Venereol 2022; 36:1660-1668. [DOI: 10.1111/jdv.18192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/05/2021] [Accepted: 04/07/2022] [Indexed: 11/30/2022]
Affiliation(s)
- Theresa Willem: Institute of History and Ethics in Medicine, School of Medicine, Technical University of Munich, Germany; Department of Science, Technology and Society (STS), School of Social Sciences and Technology, Technical University of Munich, Germany
- Sebastian Krammer: Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Anne-Sophie Böhm: Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Lars E. French: Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany; Dr. Philip Frost Department of Dermatology and Cutaneous Surgery, University of Miami Miller School of Medicine, Miami, FL, USA
- Daniela Hartmann: Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
- Tobias Lasser: Department of Informatics, School of Computation, Information and Technology, Technical University of Munich, Germany; Institute of Biomedical Engineering, Technical University of Munich, Munich, Germany
- Alena Buyx: Institute of History and Ethics in Medicine, School of Medicine, Technical University of Munich, Germany
28
Razzak I, Naz S. Unit-Vise: Deep Shallow Unit-Vise Residual Neural Networks With Transition Layer For Expert Level Skin Cancer Classification. IEEE/ACM Trans Comput Biol Bioinform 2022; 19:1225-1234. [PMID: 33211666 DOI: 10.1109/tcbb.2020.3039358] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Indexed: 06/11/2023]
Abstract
Many modern neural network architectures in the overparameterized regime have been used for the identification of skin cancer. Recent work showed that networks whose hidden units are polynomially smaller in size can outperform overparameterized models. Hence, in this paper, we present a multistage unit-vise deep dense residual network with transition and additional supervision blocks that enforces shorter connections, resulting in better feature representation. Unlike ResNet, we divided the network into several stages, each consisting of densely connected residual units that support residual learning with dense connectivity and limited skip connectivity. Thus, each stage can consider features from its earlier layers locally while remaining less complex than its counterpart network. Evaluation on the ISIC-2018 challenge, consisting of 10,015 training images, shows considerable improvement over other approaches, achieving 98.05 percent accuracy and improving on the best results of state-of-the-art methods. The code of the Unit-Vise network is publicly available.
29
Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images. Sensors (Basel) 2022; 22:1134. [PMID: 35161878 PMCID: PMC8838143 DOI: 10.3390/s22031134] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Received: 12/21/2021] [Revised: 01/18/2022] [Accepted: 01/27/2022] [Indexed: 02/01/2023]
Abstract
Automatic melanoma detection from dermoscopic skin samples is a very challenging task. However, using a deep learning approach as a machine vision tool can overcome some challenges. This research proposes an automated melanoma classifier based on a deep convolutional neural network (DCNN) to accurately classify malignant vs. benign melanoma. The structure of the DCNN is carefully designed by organizing many layers responsible for extracting low- to high-level features of the skin images in a unique fashion. Other vital criteria in the design of the DCNN are the selection of multiple filters and their sizes, employing proper deep learning layers, choosing the depth of the network, and optimizing hyperparameters. The primary objective is to propose a lightweight and less complex DCNN than other state-of-the-art methods to classify melanoma skin cancer with high efficiency. For this study, dermoscopic images containing different cancer samples were obtained from the International Skin Imaging Collaboration datastores (ISIC 2016, ISIC 2017, and ISIC 2020). We evaluated the model based on accuracy, precision, recall, specificity, and F1-score. The proposed DCNN classifier achieved accuracies of 81.41%, 88.23%, and 90.42% on the ISIC 2016, 2017, and 2020 datasets, respectively, demonstrating high performance compared with other state-of-the-art networks. Therefore, this proposed approach could provide a less complex and advanced framework for automating the melanoma diagnostic process and expediting identification to help save lives.
30
Soffer S, Morgenthau AS, Shimon O, Barash Y, Konen E, Glicksberg BS, Klang E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad Radiol 2022; 29 Suppl 2:S226-S235. [PMID: 34219012 DOI: 10.1016/j.acra.2021.05.014] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Received: 03/24/2021] [Revised: 05/10/2021] [Accepted: 05/11/2021] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES High-resolution computed tomography (HRCT) is paramount in the assessment of interstitial lung disease (ILD). Yet, HRCT interpretation of ILDs may be hampered by inter- and intra-observer variability. Recently, artificial intelligence (AI) has revolutionized medical image analysis. This technology has the potential to advance patient care in ILD. We aimed to systematically evaluate the application of AI for the analysis of ILD in HRCT. MATERIALS AND METHODS We searched MEDLINE/PubMed databases for original publications of deep learning for ILD analysis on chest CT. The search included studies published up to March 1, 2021. The risk of bias evaluation included tailored Quality Assessment of Diagnostic Accuracy Studies and the modified Joanna Briggs Institute Critical Appraisal checklist. RESULTS Data was extracted from 19 retrospective studies. Deep learning techniques included detection, segmentation, and classification of ILD on HRCT. Most studies focused on the classification of ILD into different morphological patterns. Accuracies of 78%-91% were achieved. Two studies demonstrated near-expert performance for the diagnosis of idiopathic pulmonary fibrosis (IPF). The Quality Assessment of Diagnostic Accuracy Studies tool identified a high risk of bias in 15/19 (78.9%) of the studies. CONCLUSION AI has the potential to contribute to the radiologic diagnosis and classification of ILD. However, the accuracy performance is still not satisfactory, and research is limited by a small number of retrospective studies. Hence, the existing published data may not be sufficiently reliable. Only well-designed prospective controlled studies can accurately assess the value of existing AI tools for ILD evaluation.
31
Popescu D, El-Khatib M, El-Khatib H, Ichim L. New Trends in Melanoma Detection Using Neural Networks: A Systematic Review. Sensors (Basel) 2022; 22:496. [PMID: 35062458 PMCID: PMC8778535 DOI: 10.3390/s22020496] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Received: 11/22/2021] [Revised: 12/28/2021] [Accepted: 01/05/2022] [Indexed: 12/29/2022]
Abstract
Due to its increasing incidence, skin cancer, and especially melanoma, is a serious health problem today. The high mortality rate associated with melanoma makes it necessary to detect it at early stages so it can be treated urgently and properly. This is why many researchers in this domain have sought accurate computer-aided diagnosis systems to assist in the early detection and diagnosis of such diseases. The paper presents a systematic review of recent advances in an area of increased interest for cancer prediction, with a focus on a comparative perspective of melanoma detection using artificial intelligence, especially neural-network-based systems. Such structures can be considered intelligent support systems for dermatologists. Theoretical and applied contributions were investigated in the new development trends of multiple neural network architectures based on decision fusion. The most representative articles covering melanoma detection based on neural networks, published in journals and high-impact conferences between 2015 and 2021, were investigated, focusing on the interval 2018-2021 for new trends. Additionally presented are the main databases and trends in their use for training neural networks to detect melanomas. Finally, a research agenda is highlighted to advance the field towards the new trends.
Affiliation(s)
- Dan Popescu: Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania (M.E.-K.; H.E.-K.; L.I.)
32
Abstract
Cancer of any type is one of the leading causes of death around the world. Skin cancer is a condition in which malignant cells form in the tissues of the skin; melanoma is known as the most aggressive and deadly skin cancer type. The mortality rates of melanoma are associated with its high potential for metastasis in later stages, spreading to other body sites such as the lungs, bones, or the brain. Thus, early detection and diagnosis are closely related to survival rates. Computer-aided diagnosis (CAD) systems carry out a pre-diagnosis of a skin lesion based on clinical criteria or global patterns associated with its structure. A CAD system is essentially composed of three modules: (i) lesion segmentation, (ii) feature extraction, and (iii) classification. In this work, a methodology is proposed for the development of a CAD system that detects global patterns using texture descriptors based on statistical measurements, allowing melanoma detection from dermoscopic images. Image analysis was carried out using spatial domain methods, statistical measurements were used for feature extraction, and a classifier based on cellular automata was used for classification. The proposed model was applied to dermoscopic images obtained from the PH2 database and was compared with other models using accuracy, sensitivity, and specificity as metrics. With the proposed model, values of 0.978, 0.944, and 0.987 for accuracy, sensitivity, and specificity, respectively, were obtained. The results of the evaluated metrics show that the proposed method is more effective than other state-of-the-art methods for melanoma detection in dermoscopic images.
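As a toy illustration of the statistical feature-extraction module described in this abstract, the following numpy sketch computes a few first-order texture descriptors from a grayscale region. The descriptor set and names are illustrative assumptions, not the authors' exact feature set.

```python
import numpy as np

def texture_stats(region: np.ndarray) -> dict:
    """First-order statistical texture descriptors of a grayscale region in [0, 1]."""
    flat = region.astype(float).ravel()
    mu, sigma = flat.mean(), flat.std()
    # Normalized 8-bin intensity histogram for the 'energy' (uniformity) measure.
    h = np.histogram(flat, bins=8, range=(0.0, 1.0))[0] / flat.size
    return {
        "mean": mu,                                    # average intensity
        "std": sigma,                                  # contrast / spread
        "smoothness": 1.0 - 1.0 / (1.0 + sigma ** 2),  # 0 for a constant region
        "skewness": ((flat - mu) ** 3).mean() / sigma ** 3 if sigma else 0.0,
        "energy": float((h ** 2).sum()),               # 1 when all mass is in one bin
    }

# A perfectly uniform patch: zero spread, maximal histogram energy.
stats = texture_stats(np.full((4, 4), 0.5))
print(stats["std"], stats["smoothness"], stats["energy"])  # 0.0 0.0 1.0
```

A real pipeline would compute such descriptors per segmented lesion region and feed them to the classifier.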
33
Hosseinzadeh Kassani S, Hosseinzadeh Kassani P, Wesolowski MJ, Schneider KA, Deters R. Deep transfer learning based model for colorectal cancer histopathology segmentation: A comparative study of deep pre-trained models. Int J Med Inform 2021; 159:104669. [PMID: 34979435 DOI: 10.1016/j.ijmedinf.2021.104669] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Received: 09/25/2021] [Revised: 11/28/2021] [Accepted: 12/16/2021] [Indexed: 11/19/2022]
Abstract
Colorectal cancer is one of the leading causes of cancer-related death worldwide. Early detection of suspicious tissues can significantly improve the survival rate. In this study, the performance of a wide variety of deep learning-based architectures is evaluated for automatic tumor segmentation of colorectal tissue samples. The proposed approach highlights the utility of incorporating convolutional neural network modules and transfer learning in the encoder part of a segmentation architecture for histopathology image analysis. A comparative and extensive experiment was conducted on a challenging histopathological segmentation task to demonstrate the effectiveness of incorporating deep modules in the segmentation encoder-decoder network, as well as the contributions of its components. Experimental results demonstrate that the shared DenseNet and LinkNet architecture is promising, achieves state-of-the-art performance, and outperforms other methods with a dice similarity index of 82.74% ± 1.77, accuracy of 87.07% ± 1.56, and F1-score of 82.79% ± 1.79.
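The dice similarity index reported in this abstract compares a predicted segmentation mask against a reference mask; a minimal numpy sketch of the metric (illustrative, not the authors' code) is:

```python
import numpy as np

def dice_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4 masks: predicted tumor region vs. ground truth.
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_index(pred, truth), 3))  # 2*3 / (4+3) -> 0.857
```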
Affiliation(s)
- Ralph Deters: Department of Computer Science, University of Saskatchewan, Canada
34
Nauta M, Walsh R, Dubowski A, Seifert C. Uncovering and Correcting Shortcut Learning in Machine Learning Models for Skin Cancer Diagnosis. Diagnostics (Basel) 2021; 12:40. [PMID: 35054207 PMCID: PMC8774502 DOI: 10.3390/diagnostics12010040] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 11/18/2021] [Revised: 12/15/2021] [Accepted: 12/23/2021] [Indexed: 11/23/2022] Open
Abstract
Machine learning models have been successfully applied for analysis of skin images. However, due to the black box nature of such deep learning models, it is difficult to understand their underlying reasoning. This prevents a human from validating whether the model is right for the right reasons. Spurious correlations and other biases in data can cause a model to base its predictions on such artefacts rather than on the true relevant information. These learned shortcuts can in turn cause incorrect performance estimates and can result in unexpected outcomes when the model is applied in clinical practice. This study presents a method to detect and quantify this shortcut learning in trained classifiers for skin cancer diagnosis, since it is known that dermoscopy images can contain artefacts. Specifically, we train a standard VGG16-based skin cancer classifier on the public ISIC dataset, for which colour calibration charts (elliptical, coloured patches) occur only in benign images and not in malignant ones. Our methodology artificially inserts those patches and uses inpainting to automatically remove patches from images to assess the changes in predictions. We find that our standard classifier partly bases its predictions of benign images on the presence of such a coloured patch. More importantly, by artificially inserting coloured patches into malignant images, we show that shortcut learning results in a significant increase in misdiagnoses, making the classifier unreliable when used in clinical practice. With our results, we, therefore, want to increase awareness of the risks of using black box machine learning models trained on potentially biased datasets. Finally, we present a model-agnostic method to neutralise shortcut learning by removing the bias in the training dataset by exchanging coloured patches with benign skin tissue using image inpainting and re-training the classifier on this de-biased dataset.
Affiliation(s)
- Meike Nauta: Faculty of EEMCS, University of Twente, 7500 AE Enschede, The Netherlands; Institute for Artificial Intelligence in Medicine, University of Duisburg-Essen, 45131 Essen, Germany
- Ricky Walsh: Faculty of EEMCS, University of Twente, 7500 AE Enschede, The Netherlands
- Adam Dubowski: Faculty of EEMCS, University of Twente, 7500 AE Enschede, The Netherlands
- Christin Seifert: Institute for Artificial Intelligence in Medicine, University of Duisburg-Essen, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), 45147 Essen, Germany
35
Melanoma Classification from Dermoscopy Images Using Ensemble of Convolutional Neural Networks. Mathematics 2021. [DOI: 10.3390/math10010026] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 02/07/2023]
Abstract
Human skin is the most exposed part of the human body and needs constant protection and care from heat, light, dust, and direct exposure to other harmful radiation, such as UV rays. Skin cancer is one of the dangerous diseases found in humans. Melanoma is a form of skin cancer that begins in the cells (melanocytes) that control the pigment in human skin. Early detection and diagnosis of skin cancer, such as melanoma, is necessary to reduce the death rate due to skin cancer. In this paper, the classification of acral lentiginous melanoma, a type of melanoma, against benign nevi is carried out. The proposed stacked ensemble method for melanoma classification uses different pre-trained models, such as Xception, InceptionV3, InceptionResNet-V2, DenseNet121, and DenseNet201, employing transfer learning and fine-tuning. The selection of pre-trained CNN architectures for transfer learning is based on the models having the highest top-1 and top-5 accuracies on ImageNet. A novel stacked-ensemble-based framework is presented to improve generalizability and increase robustness by fusing fine-tuned pre-trained CNN models for acral lentiginous melanoma classification. The performance of the proposed method is evaluated on a Figshare benchmark dataset. The impact of applying different augmentation techniques has also been analyzed through extensive experimentation. The results confirm that the proposed method outperforms state-of-the-art techniques and achieves an accuracy of 97.93%.
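The core idea of fusing several fine-tuned base models can be sketched with simple probability averaging (soft voting), a simpler cousin of the stacking described in this abstract, where a trained meta-model would replace the average. The probabilities below are hypothetical numbers for one lesion image.

```python
import numpy as np

# Hypothetical class probabilities for one lesion image from three
# fine-tuned base models (stand-ins for Xception-, Inception-, and
# DenseNet-style networks). Columns: [acral lentiginous melanoma, benign nevus].
probs = np.array([
    [0.70, 0.30],
    [0.55, 0.45],
    [0.80, 0.20],
])

# Soft voting: average the probabilities, then take the argmax.
# (A stacked ensemble instead feeds these outputs to a trained meta-model.)
fused = probs.mean(axis=0)
classes = ["acral lentiginous melanoma", "benign nevus"]
print(fused)                          # [0.68333333 0.31666667]
print(classes[int(fused.argmax())])   # acral lentiginous melanoma
```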
36
Shoaib MR, Elshamy MR, Taha TE, El-Fishawy AS, Abd El-Samie FE. Efficient Brain Tumor Detection Based on Deep Learning Models. J Phys Conf Ser 2021; 2128:012012. [DOI: 10.1088/1742-6596/2128/1/012012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 09/02/2023]
Abstract
Brain tumor is an acute cancerous disease that results from abnormal and uncontrollable cell division. Brain tumors are classified via biopsy, which is not normally performed before definitive brain surgery. Recent advances and improvements in deep learning technology have helped the health industry obtain accurate disease diagnoses. In this paper, a Convolutional Neural Network (CNN) with image pre-processing is adopted to classify brain Magnetic Resonance (MR) images into four classes: glioma tumor, meningioma tumor, pituitary tumor, and normal patients. We use a transfer learning model, a CNN-based model designed from scratch, a pre-trained InceptionResNetV2 model, and a pre-trained InceptionV3 model. The performance of the four proposed models is tested using evaluation metrics including accuracy, sensitivity, specificity, precision, F1-score, Matthews correlation coefficient, error, kappa, and false positive rate. The obtained results show that the first two proposed models are very effective, achieving accuracies of 93.15% and 91.24% for the transfer learning model and the CNN-based BRAIN-TUMOR-net, respectively. The InceptionResNetV2 model achieves an accuracy of 86.80% and the InceptionV3 model achieves an accuracy of 85.34%. Practical implementation of the proposed models is presented.
37
Moataz L, Salama GI, Abd Elazeem MH. Skin Cancer Diseases Classification using Deep Convolutional Neural Network with Transfer Learning Model. J Phys Conf Ser 2021; 2128:012013. [DOI: 10.1088/1742-6596/2128/1/012013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 09/02/2023]
Abstract
Skin cancer is becoming increasingly common. Fortunately, early discovery can greatly improve the odds of a patient being healed. Many artificial-intelligence-based approaches to classify skin lesions have recently been proposed, but these approaches suffer from limited classification accuracy. Deep convolutional neural networks show potential for better classification of cancer lesions. This paper presents fine-tuning of a pretrained Xception model for the classification of skin lesions, adding a group of layers after the base ones of the Xception model, with all model weights set to be trainable. The model is fine-tuned over the seven classes of the HAM10000 dataset, using an augmentation approach to mitigate the effect of data imbalance, and a comparative study is conducted with the most up-to-date approaches. In comparison to prior models, the results indicate that the proposed model is both efficient and reliable.
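The augmentation-based handling of class imbalance mentioned in this abstract can be sketched in numpy: a minority class is oversampled with cheaply transformed copies until it matches the majority class. The class names, counts, and the specific flip/rotate transforms are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img: np.ndarray) -> np.ndarray:
    """One cheap augmentation: random horizontal flip plus a random 90-degree rotation."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return np.rot90(img, k=int(rng.integers(4)))

# Toy imbalanced set: 8 'nv'-like images vs. 2 'df'-like images (4x4 grayscale).
nv = [rng.uniform(size=(4, 4)) for _ in range(8)]
df = [rng.uniform(size=(4, 4)) for _ in range(2)]

# Oversample the minority class with augmented copies until balanced.
while len(df) < len(nv):
    df.append(augment(df[int(rng.integers(2))]))

print(len(nv), len(df))  # 8 8
```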
38
Sella Veluswami JR, Ezhil Prasanth M, Harini K, Ajaykumar U. Melanoma Skin Cancer Recognition and Classification Using Deep Hybrid Learning. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3898] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/23/2022]
Abstract
Melanoma skin cancer is a common disease that develops in the melanocytes, which produce melanin. In this work, a deep hybrid learning model is employed to detect skin cancer and classify it. The dataset used contains two classes of skin cancer: benign and malignant. Since the dataset is imbalanced between the number of images in malignant and benign lesions, an augmentation technique is used to balance it. To improve the clarity of the images, they are then enhanced using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique. To detect only the affected lesion area, the lesions are segmented using a neural-network-based ensemble model that combines the segmentation algorithms of Fully Convolutional Network (FCN), SegNet and U-Net, producing a binary image of the skin and the lesion, where the lesion is represented in white and the skin in black. These binary images are further classified using different pre-trained models such as Inception ResNet V2, Inception V3, ResNet 50, DenseNet and a CNN. Following that, fine-tuning of the best-performing pre-trained model is carried out to improve the classification performance. To further improve the performance of the classification model, a method combining deep learning (DL) and machine learning (ML) is applied: feature extraction is done using DL models and classification is performed by a Support Vector Machine (SVM). This computer-aided tool will assist doctors in diagnosing the disease faster than the traditional method. A significant improvement of nearly 4% in the performance of the proposed method is reported.
Affiliation(s)
- Jansi Rani Sella Veluswami
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai 603110, Tamilnadu, India
- M. Ezhil Prasanth
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai 603110, Tamilnadu, India
- K. Harini
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai 603110, Tamilnadu, India
- U. Ajaykumar
- Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Chennai 603110, Tamilnadu, India
|
39
|
Benyahia S, Meftah B, Lézoray O. Multi-features extraction based on deep learning for skin lesion classification. Tissue Cell 2021; 74:101701. [PMID: 34861582 DOI: 10.1016/j.tice.2021.101701]
Abstract
For various forms of skin lesion, many different feature extraction methods have been investigated so far. Indeed, feature extraction is a crucial step in machine learning processes. In general, we can distinguish between handcrafted and deep learning features. In this paper, we investigate the efficiency of using 17 commonly used pre-trained convolutional neural network (CNN) architectures as feature extractors and 24 machine learning classifiers to evaluate the classification of skin lesions from two different datasets: ISIC 2019 and PH2. In this research, we find that DenseNet201 combined with Fine KNN or Cubic SVM achieved the best accuracies (92.34% and 91.71%, respectively) for the ISIC 2019 dataset. The results also show that the suggested method outperforms other approaches with an accuracy of 99% on the PH2 dataset.
Affiliation(s)
- Samia Benyahia
- Department of Computer Science, Faculty of Exact Sciences, University of Mascara, Mascara, Algeria
- Olivier Lézoray
- Normandie Univ, UNICAEN, ENSICAEN, CNRS, GREYC, Caen, France
|
40
|
Del Amor R, Launet L, Colomer A, Moscardó A, Mosquera-Zamudio A, Monteagudo C, Naranjo V. An attention-based weakly supervised framework for spitzoid melanocytic lesion diagnosis in whole slide images. Artif Intell Med 2021; 121:102197. [PMID: 34763799 DOI: 10.1016/j.artmed.2021.102197]
Abstract
Melanoma is an aggressive neoplasm responsible for the majority of deaths from skin cancer. Specifically, spitzoid melanocytic tumors are among the most challenging melanocytic lesions due to their ambiguous morphological features. The gold standard for their diagnosis and prognosis is the analysis of skin biopsies. In this process, dermatopathologists visualize skin histology slides under a microscope, in a highly time-consuming and subjective task. In recent years, computer-aided diagnosis (CAD) systems have emerged as a promising tool that could support pathologists in daily clinical practice. Nevertheless, no automatic CAD systems have yet been proposed for the analysis of spitzoid lesions. Regarding common melanoma, no system allows both the selection of the tumor region and the prediction of the benign or malignant form in the diagnosis. Motivated by this, we propose a novel end-to-end weakly supervised deep learning model, based on inductive transfer learning with an improved convolutional neural network (CNN) to refine the embedding features of the latent space. The framework is composed of a source model in charge of finding the tumor patch-level patterns, and a target model that focuses on the specific diagnosis of a biopsy. The latter retrains the backbone of the source model through a multiple instance learning workflow to obtain the biopsy-level scoring. To evaluate the performance of the proposed methods, we performed extensive experiments on a private skin database with spitzoid lesions. Test results achieved accuracies of 0.9231 and 0.80 for the source and the target models, respectively. In addition, the heat map findings are directly in line with the clinicians' medical decision and even highlight, in some cases, patterns of interest that were overlooked by the pathologist.
Affiliation(s)
- Rocío Del Amor
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
- Laëtitia Launet
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
- Adrián Colomer
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
- Anaïs Moscardó
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Valencia, Spain
- Andrés Mosquera-Zamudio
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Valencia, Spain
- Carlos Monteagudo
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Valencia, Spain
- Valery Naranjo
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, 46022 Valencia, Spain
|
41
|
k-relevance vectors: Considering relevancy beside nearness. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107762]
|
42
|
Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories. Contrast Media Mol Imaging 2021; 2021:7192016. [PMID: 34621146 PMCID: PMC8457955 DOI: 10.1155/2021/7192016]
Abstract
The rates of skin cancer (SC) are rising every year and becoming a critical health issue worldwide. SC's early and accurate diagnosis is the key procedure to reduce these rates and improve survivability. However, the manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist's experience and abilities. Thus, there is a vital need to create automated dermatologist tools that are capable of accurately classifying SC subclasses. Recently, artificial intelligence (AI) techniques including machine learning (ML) and deep learning (DL) have verified the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools are based on features which are either high-level features based on DL methods or low-level features based on handcrafted operations. Most of them were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool to accurately diagnose multiple skin lesions automatically. This tool incorporates manifold radiomics features categories involving high-level features such as ResNet-50, DenseNet-201, and DarkNet-53 and low-level features including discrete wavelet transform (DWT) and local binary pattern (LBP). The results of the proposed intelligent tool prove that merging manifold features of different categories has a high influence on the classification accuracy. Moreover, these results are superior to those obtained by other related AI-based dermatologist tools. Therefore, the proposed intelligent tool can be used by dermatologists to help them in the accurate diagnosis of the SC subcategory. It can also overcome manual diagnosis limitations, reduce the rates of infection, and enhance survival rates.
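The low-level local binary pattern (LBP) features mentioned in this abstract can be sketched in plain Python (a simplified 8-neighbour variant on nested-list grayscale images; in practice a library routine such as scikit-image's `local_binary_pattern` would be used):

```python
def lbp_code(image, r, c):
    """8-neighbour local binary pattern code for pixel (r, c): each
    neighbour >= centre contributes one bit, clockwise from top-left."""
    centre = image[r][c]
    neighbours = [image[r-1][c-1], image[r-1][c], image[r-1][c+1],
                  image[r][c+1], image[r+1][c+1], image[r+1][c],
                  image[r+1][c-1], image[r][c-1]]
    return sum((1 << i) for i, n in enumerate(neighbours) if n >= centre)

def lbp_histogram(image):
    """256-bin normalised LBP histogram over interior pixels; such a
    vector can be concatenated with CNN features for the fusion the
    abstract describes."""
    hist = [0] * 256
    rows, cols = len(image), len(image[0])
    total = 0
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            hist[lbp_code(image, r, c)] += 1
            total += 1
    return [h / total for h in hist]
```

Merging such a 256-dimensional texture vector with high-level CNN features is one concrete form of the "manifold radiomics feature categories" fusion the tool relies on.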
|
43
|
Abbas Q, Ramzan F, Ghani MU. Acral melanoma detection using dermoscopic images and convolutional neural networks. Vis Comput Ind Biomed Art 2021; 4:25. [PMID: 34618260 PMCID: PMC8497676 DOI: 10.1186/s42492-021-00091-z]
Abstract
Acral melanoma (AM) is a rare and lethal type of skin cancer. It can be diagnosed by expert dermatologists using dermoscopic imaging. It is challenging for dermatologists to diagnose melanoma because of the very minor differences between melanoma and non-melanoma cancers. Most of the research on skin cancer diagnosis is related to the binary classification of lesions into melanoma and non-melanoma. However, to date, limited research has been conducted on the classification of melanoma subtypes. The current study investigated the effectiveness of dermoscopy and deep learning in classifying melanoma subtypes such as AM. In this study, we present a novel deep learning model developed to classify skin cancer. We utilized a dermoscopic image dataset from the Yonsei University Health System, South Korea, for the classification of skin lesions. Various image processing and data augmentation techniques have been applied to develop a robust automated system for AM detection. Our custom-built model is a seven-layered deep convolutional network that was trained from scratch. Additionally, transfer learning was utilized to compare the performance of our model, where AlexNet and ResNet-18 were modified, fine-tuned, and trained on the same dataset. We achieved improved results from our proposed model, with an accuracy of more than 90% for both AM and benign nevus. Additionally, using the transfer learning approach, we achieved an average accuracy of nearly 97%, which is comparable to that of state-of-the-art methods. From our analysis and results, we found that our model performed well and was able to effectively classify skin cancer. Our results show that the proposed system can be used by dermatologists in the clinical decision-making process for the early diagnosis of AM.
Affiliation(s)
- Qaiser Abbas
- Department of Computer Science, University of Engineering and Technology, 54890, Lahore, Pakistan
- Farheen Ramzan
- Department of Computer Science, University of Engineering and Technology, 54890, Lahore, Pakistan
- Muhammad Usman Ghani
- Department of Computer Science, University of Engineering and Technology, 54890, Lahore, Pakistan
|
44
|
Duman E, Tolan Z. Comparing Popular CNN Models for an Imbalanced Dataset of Dermoscopic Images. Computer Science 2021. [DOI: 10.53070/bbd.990574]
|
45
|
Sayed GI, Soliman MM, Hassanien AE. A novel melanoma prediction model for imbalanced data using optimized SqueezeNet by bald eagle search optimization. Comput Biol Med 2021; 136:104712. [PMID: 34388470 DOI: 10.1016/j.compbiomed.2021.104712]
Abstract
Skin lesion classification plays a crucial role in diagnosing various gene and related local medical cases in the field of dermoscopy. In this paper, a new model for the classification of skin lesions as either normal or melanoma is presented. The proposed melanoma prediction model was evaluated on a large publicly available dataset called ISIC 2020. The main challenge of this dataset is severe class imbalance. This paper proposes an approach to overcome this problem using a random over-sampling method followed by data augmentation. Moreover, a new hybrid version of a convolutional neural network architecture and bald eagle search (BES) optimization is proposed. The BES algorithm is used to find the optimal values of the hyperparameters of a SqueezeNet architecture. The proposed melanoma skin cancer prediction model obtained an overall accuracy of 98.37%, specificity of 96.47%, sensitivity of 100%, f-score of 98.40%, and area under the curve of 99%. The experimental results showed the robustness and efficiency of the proposed model compared with VGG19, GoogleNet, and ResNet50. Additionally, the results showed that the proposed model was very competitive compared with the state of the art.
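The hyperparameter optimization step in this abstract can be illustrated with a generic random search (a stand-in sketch only: the paper uses bald eagle search over SqueezeNet hyperparameters, and the objective below is a hypothetical placeholder for a validation-accuracy evaluation):

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameter settings from `space` (name -> candidate
    values) and keep the best-scoring one. Metaheuristics such as bald
    eagle search explore the same space with guided moves instead of
    uniform sampling."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in space.items()}
        score = objective(params)  # e.g. validation accuracy of the CNN
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Any hyperparameter optimizer of this shape returns the setting to use for the final training run; the BES algorithm plays exactly this role for the SqueezeNet architecture in the paper.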
Affiliation(s)
- Mona M Soliman
- Faculty of Computers and Artificial Intelligence, Cairo University, Giza, Egypt
- Aboul Ella Hassanien
- Faculty of Computers and Artificial Intelligence, Cairo University, Giza, Egypt
|
46
|
Mackay BS, Marshall K, Grant-Jacob JA, Kanczler J, Eason RW, Oreffo ROC, Mills B. The future of bone regeneration: integrating AI into tissue engineering. Biomed Phys Eng Express 2021; 7. [PMID: 34271556 DOI: 10.1088/2057-1976/ac154f]
Abstract
Tissue engineering is a branch of regenerative medicine that harnesses biomaterial and stem cell research to utilise the body's natural healing responses to regenerate tissue and organs. There remain many unanswered questions in tissue engineering, with optimal biomaterial designs still to be developed and a lack of adequate stem cell knowledge limiting successful application. Advances in artificial intelligence (AI), and deep learning specifically, offer the potential to improve both scientific understanding and clinical outcomes in regenerative medicine. With a better understanding of how to integrate AI into current research and clinical practice, it offers an invaluable tool to improve patient outcomes.
Affiliation(s)
- Benita S Mackay
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Karen Marshall
- Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, SO16 6HW, United Kingdom
- James A Grant-Jacob
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Janos Kanczler
- Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, SO16 6HW, United Kingdom
- Robert W Eason
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Institute of Developmental Sciences, Faculty of Life Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Richard O C Oreffo
- Bone and Joint Research Group, Centre for Human Development, Stem Cells and Regeneration, Human Development and Health, Faculty of Medicine, University of Southampton, Southampton, SO16 6HW, United Kingdom
- Institute of Developmental Sciences, Faculty of Life Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
- Ben Mills
- Optoelectronics Research Centre, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, SO17 1BJ, United Kingdom
|
47
|
Emara HM, Shoaib MR, Elwekeil M, El-Shafai W, Taha TE, El-Fishawy AS, El-Rabaie EM, Alshebeili SA, Dessouky MI, Abd El-Samie FE. Deep convolutional neural networks for COVID-19 automatic diagnosis. Microsc Res Tech 2021; 84:2504-2516. [PMID: 34121273 PMCID: PMC8420362 DOI: 10.1002/jemt.23713]
Abstract
This article is mainly concerned with COVID-19 diagnosis from X-ray images. The number of cases infected with COVID-19 is increasing daily, and the number of test kits available in hospitals is limited. Therefore, there is an imperative need to implement an efficient automatic diagnosis system to alleviate COVID-19 spreading among people. This article presents a discussion of the utilization of convolutional neural network (CNN) models with different learning strategies for automatic COVID-19 diagnosis. First, we consider the CNN-based transfer learning approach for automatic diagnosis of COVID-19 from X-ray images with different training and testing ratios. Different pre-trained deep learning models, in addition to a transfer learning model, are considered and compared for the task of COVID-19 detection from X-ray images. Confusion matrices of these studied models are presented and analyzed. Considering the performance results obtained, ResNet models (ResNet18, ResNet50, and ResNet101) provide the highest classification accuracy on the two considered datasets with different training and testing ratios, namely 80/20, 70/30, 60/40, and 50/50. The accuracies obtained using the first dataset with a 70/30 training and testing ratio are 97.67%, 98.81%, and 100% for ResNet18, ResNet50, and ResNet101, respectively. For the second dataset, the reported accuracies are 99%, 99.12%, and 99.29% for ResNet18, ResNet50, and ResNet101, respectively. The second approach is the training of a proposed CNN model from scratch. The results confirm that training the CNN from scratch can lead to the identification of the signs of COVID-19 disease.
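The confusion matrices and accuracies this abstract reports can be computed as follows (a minimal sketch; function names are illustrative):

```python
def confusion_matrix(y_true, y_pred, labels):
    """Rows index the true class, columns the predicted class,
    in the order given by `labels`."""
    index = {lab: i for i, lab in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[index[t]][index[p]] += 1
    return m

def accuracy_from_confusion(m):
    """Overall accuracy = trace / total count."""
    correct = sum(m[i][i] for i in range(len(m)))
    total = sum(sum(row) for row in m)
    return correct / total
```

Off-diagonal cells show exactly which classes a model confuses, which is why the paper analyzes the matrices rather than accuracy alone.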
Affiliation(s)
- Heba M. Emara
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Mohamed R. Shoaib
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Mohamed Elwekeil
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Walid El-Shafai
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Security Engineering Lab, Computer Science Department, Prince Sultan University, Riyadh, Saudi Arabia
- Taha E. Taha
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Adel S. El-Fishawy
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- El-Sayed M. El-Rabaie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Saleh A. Alshebeili
- Electrical Engineering Department, KACST-TIC in Radio Frequency and Photonics for the e-Society (RFTONICS), King Saud University, Riyadh, Saudi Arabia
- Department of Electrical Engineering, King Saud University, Riyadh, Saudi Arabia
- Moawad I. Dessouky
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Fathi E. Abd El-Samie
- Department of Electronics and Electrical Communications Engineering, Faculty of Electronic Engineering, Menoufia University, Menouf, Egypt
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
|
48
|
Multiclass skin lesion classification using image augmentation technique and transfer learning models. International Journal of Intelligent Unmanned Systems 2021. [DOI: 10.1108/ijius-02-2021-0010]
Abstract
Purpose
The mortality rate due to skin cancers has been increasing over the past decades. Early detection and treatment of skin cancers can save lives. However, due to the visual resemblance between normal skin and lesions and the blurred borders of lesions, skin cancer diagnosis has become a challenging task even for skilled dermatologists. Hence, the purpose of this study is to present an image-based automatic approach for multiclass skin lesion classification and to compare the performance of various models.
Design/methodology/approach
In this paper, the authors have presented a multiclass skin lesion classification approach based on transfer learning of deep convolutional neural network. The following pre-trained models have been used: VGG16, VGG19, ResNet50, ResNet101, ResNet152, Xception, MobileNet and compared their performances on skin cancer classification.
Findings
The experiments have been performed on the HAM10000 dataset, which contains 10,015 dermoscopic images of seven skin lesion classes. A categorical accuracy of 83.69%, a Top-2 accuracy of 91.48% and a Top-3 accuracy of 96.19% have been obtained.
Originality/value
Early detection and treatment of skin cancer can save millions of lives. This work demonstrates that the transfer learning can be an effective way to classify skin cancer images, providing adequate performance with less computational complexity.
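The Top-2 and Top-3 accuracies reported in this entry follow the standard top-k definition, which can be sketched as:

```python
def top_k_accuracy(scores, y_true, k):
    """Fraction of samples whose true label is among the k classes
    with the highest predicted scores. `scores` is a list of
    {label: score} dicts, one per sample."""
    hits = 0
    for row_scores, truth in zip(scores, y_true):
        top = sorted(row_scores, key=row_scores.get, reverse=True)[:k]
        hits += truth in top
    return hits / len(y_true)
```

With k = 1 this reduces to ordinary categorical accuracy, which is why Top-2 and Top-3 values are always at least as high as the categorical figure.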
|
49
|
An Efficient Approach Based on Privacy-Preserving Deep Learning for Satellite Image Classification. Remote Sensing 2021. [DOI: 10.3390/rs13112221]
Abstract
Satellite images have drawn increasing interest from a wide variety of users, including business and government, ever since their increased usage in important fields ranging from weather, forestry and agriculture to surface changes and biodiversity monitoring. Recent updates in the field have also introduced various deep learning (DL) architectures to satellite imagery as a means of extracting useful information. However, this new approach comes with its own issues, including the fact that many users utilize ready-made cloud services (both public and private) in order to take advantage of built-in DL algorithms and thus avoid the complexity of developing their own DL architectures. However, this presents new challenges to protecting data against unauthorized access, mining and usage of sensitive information extracted from that data. Therefore, new privacy concerns regarding sensitive data in satellite images have arisen. This research proposes an efficient approach that takes advantage of privacy-preserving deep learning (PPDL)-based techniques to address privacy concerns regarding data from satellite images when applying public DL models. In this paper, we proposed a partially homomorphic encryption scheme (a Paillier scheme), which enables processing of confidential information without exposure of the underlying data. Our method achieves robust results when applied to a custom convolutional neural network (CNN) as well as to existing transfer learning methods. The proposed encryption scheme also allows for training CNN models on encrypted data directly, which requires lower computational overhead. Our experiments have been performed on a real-world dataset covering several regions across Saudi Arabia. The results demonstrate that our CNN-based models were able to retain data utility while maintaining data privacy. Security parameters such as correlation coefficient (−0.004), entropy (7.95), energy (0.01), contrast (10.57), number of pixel change rate (4.86), unified average change intensity (33.66), and more are in favor of our proposed encryption scheme. To the best of our knowledge, this research is also one of the first studies that applies PPDL-based techniques to satellite image data in any capacity.
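The additive homomorphism of the Paillier scheme this entry relies on can be demonstrated with a textbook toy implementation (small primes for illustration only; production systems use 2048-bit keys and hardened libraries, and this sketch omits all side-channel and padding concerns):

```python
import math
import random

def paillier_keypair(p=293, q=433):
    """Tiny textbook Paillier keypair from two small primes.
    g = n + 1, lambda = lcm(p-1, q-1), mu = L(g^lambda mod n^2)^-1 mod n
    with L(x) = (x - 1) // n."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)
    g = n + 1
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m, rng=random):
    """c = g^m * r^n mod n^2 with random r coprime to n."""
    n, g = pub
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c):
    """m = L(c^lambda mod n^2) * mu mod n."""
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n
```

Because multiplying two ciphertexts adds the underlying plaintexts, a server can accumulate sums (and hence linear layers) over encrypted pixel data without ever decrypting it, which is the property the paper's PPDL pipeline exploits.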
|
50
|
Alrahhal M, Supreethi KP. COVID-19 Diagnostic System Using Medical Image Classification and Retrieval: A Novel Method for Image Analysis. The Computer Journal 2021:bxab051. [PMCID: PMC8194842 DOI: 10.1093/comjnl/bxab051]
Abstract
With the rapid increase in the number of people infected with the COVID-19 disease worldwide, and with the limited medical equipment used to detect it (testing kits), it becomes necessary to provide another detection method that mainly relies on artificial intelligence and radiographic image analysis to determine the disease infection. In this study, we proposed a diagnosis system that detects COVID-19 using chest X-ray or computed tomography (CT) scan images; this system does not eliminate the reverse transcription-polymerase chain reaction test but rather complements it. The proposed system consists of the following steps: first, extracting the image's features using visual-word fusion of ResNet-50 (a deep neural network) and Histogram of Oriented Gradients descriptors based on the Bag of Visual Words methodology; then, training an Adaptive Boosting classifier to classify the image as COVID-19 or NOTCOVID-19; and finally, retrieving the most similar images. We implemented our work on X-ray and CT scan databases, and the experimental results demonstrate the effectiveness of the proposed system. The performance of the classification task in terms of accuracy was as follows: 100% for classifying the input image as X-ray or CT scan, 99.18% for classifying an X-ray image as COVID-19 or NOTCOVID-19, and 97.84% for classifying a CT scan as COVID-19 or NOTCOVID-19.
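The Bag of Visual Words step described above reduces to assigning local descriptors to their nearest codebook words and histogramming the assignments (a minimal sketch; the ResNet-50 and HOG descriptor extraction the paper fuses is omitted, and the codebook would come from k-means clustering of training descriptors):

```python
def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codebook word
    (squared Euclidean distance) and return the normalised
    word-frequency histogram for the image."""
    hist = [0] * len(codebook)
    for d in descriptors:
        dists = [sum((a - b) ** 2 for a, b in zip(d, w)) for w in codebook]
        hist[dists.index(min(dists))] += 1
    total = len(descriptors) or 1
    return [h / total for h in hist]
```

The resulting fixed-length histogram is what gets fed to the boosting classifier and compared for image retrieval, regardless of how many local descriptors each image produced.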
Affiliation(s)
- Supreethi K P
- Jawaharlal Nehru Technological University Hyderabad, College of Engineering, Computer Science Department, Hyderabad 500085, India
|