1
Musthafa MM, Mahesh TR, Vinoth Kumar V, Guluwadi S. Enhanced skin cancer diagnosis using optimized CNN architecture and checkpoints for automated dermatological lesion classification. BMC Med Imaging 2024; 24:201. [PMID: 39095688] [PMCID: PMC11295341] [DOI: 10.1186/s12880-024-01356-8]
Abstract
Skin cancer stands as one of the foremost challenges in oncology, with its early detection being crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of skin lesions, this study introduces a sophisticated CNN model tailored for the nuanced task of skin lesion classification. The model's architecture is intricately designed with multiple convolutional, pooling, and dense layers, aimed at capturing the complex visual features of skin lesions. To address the challenge of class imbalance within the dataset, an innovative data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. Furthermore, this study introduces a CNN model with optimized layer configuration and data augmentation, significantly boosting diagnostic precision in skin cancer detection. The model's learning process is optimized using the Adam optimizer, with parameters fine-tuned over 50 epochs and a batch size of 128 to enhance the model's ability to discern subtle patterns in the image data. A Model Checkpoint callback ensures the preservation of the best model iteration for future use. The proposed model demonstrates an accuracy of 97.78% with a notable precision of 97.9%, recall of 97.9%, and an F2 score of 97.8%, underscoring its potential as a robust tool in the early detection and classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
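The class-balancing augmentation strategy described above amounts to topping up each minority class until it matches the largest class. A minimal sketch of the per-class target computation (class names and counts are illustrative, not the paper's actual HAM10000 pipeline):

```python
from collections import Counter

def augmentation_plan(labels, target=None):
    """Per-class count of synthetic images needed to balance a dataset.

    `labels` is a list of class names; `target` defaults to the size of
    the largest class, so every class is augmented up to that count.
    """
    counts = Counter(labels)
    target = target or max(counts.values())
    return {cls: target - n for cls, n in counts.items()}

# Toy label distribution (illustrative only):
plan = augmentation_plan(["nv"] * 6 + ["mel"] * 3 + ["bcc"] * 1)
```

The returned dictionary then drives how many augmented copies (rotations, flips, etc.) to generate per class before training.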
Affiliation(s)
- Mahesh T R
- Department of Computer Science and Engineering, JAIN (Deemed-to-be University), Bengaluru, 562112, India
- Vinoth Kumar V
- School of Computer Science Engineering and Information Systems, Vellore Institute of Technology University, Vellore, 632014, India
- Suresh Guluwadi
- Adama Science and Technology University, Adama, 302120, Ethiopia.
2
Auñón J, Hurtado-Ramírez D, Porras-Díaz L, Irigoyen-Peña B, Rahmian S, Al-Khazraji Y, Soler-Garrido J, Kotsev A. Evaluation and utilisation of privacy enhancing technologies - A data spaces perspective. Data Brief 2024; 55:110560. [PMID: 38948408] [PMCID: PMC11214200] [DOI: 10.1016/j.dib.2024.110560]
Abstract
Data sharing has facilitated the digitisation of society. We can access our bank accounts or make an appointment with our doctor anytime and anywhere. To achieve this, we have to share certain information, whether personal, professional, etc. This may seem like a minor cost for an individual user, but in fact the data economy is the backbone of a digital transformation that is reshaping all aspects of human life. However, a major concern arises regarding what happens to such individual data; once shared, control over it is often lost. For that reason, users and companies are reluctant to share their data. The European Union, through its European Strategy for Data, is establishing a policy and legal framework for a single market for data in Europe by improving the trust and fairness of the data economy. Data spaces are a commitment to sharing data in a reliable and secure way, but this endeavour should, of course, not come at the expense of privacy rights. In recent years, Privacy-Enhancing Technologies (PETs) have emerged to reconcile data sharing with privacy preservation and can address the requirements of data spaces around sensitive citizen and business data. In this work, we review existing PETs and assess their relevance, technological maturity, and applicability in the context of common European data spaces. Finally, we illustrate the benefits of secure data sharing via Federated Learning in a healthcare use case, where the preservation of privacy is a primary requirement and must therefore be guaranteed.
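The healthcare use case above rests on Federated Learning: each site trains locally and only model parameters leave the premises, never the raw patient data. A minimal sketch of the Federated Averaging aggregation step in NumPy (a toy two-client example under stated assumptions, not the authors' implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated Averaging: aggregate client model parameters without
    ever moving the raw (e.g. patient-level) data off-site.

    `client_weights` is a list of flattened parameter vectors, one per
    client; `client_sizes` the number of local samples each trained on.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)   # shape: (clients, params)
    coeffs = sizes / sizes.sum()         # data-proportional weights
    return coeffs @ stacked              # weighted average of parameters

# Two hospitals with different amounts of local data:
global_w = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [30, 10])
```

The server repeats this aggregation each round after clients retrain on the updated global parameters; privacy can be further hardened with secure aggregation or differential privacy, which are among the PETs the paper surveys.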
Affiliation(s)
- J.M. Auñón
- GMV, Department of Artificial Intelligence and Big Data, Isaac Newton 11, Tres Cantos, Madrid 28760, Spain
- D. Hurtado-Ramírez
- GMV, Department of Artificial Intelligence and Big Data, Isaac Newton 11, Tres Cantos, Madrid 28760, Spain
- L. Porras-Díaz
- GMV, Department of Artificial Intelligence and Big Data, Isaac Newton 11, Tres Cantos, Madrid 28760, Spain
- B. Irigoyen-Peña
- GMV, Department of Artificial Intelligence and Big Data, Isaac Newton 11, Tres Cantos, Madrid 28760, Spain
- S. Rahmian
- GMV, Department of Artificial Intelligence and Big Data, Isaac Newton 11, Tres Cantos, Madrid 28760, Spain
- Y. Al-Khazraji
- GMV, Department of Artificial Intelligence and Big Data, Isaac Newton 11, Tres Cantos, Madrid 28760, Spain
- J. Soler-Garrido
- European Commission, Joint Research Centre (JRC), Algorithmic Transparency Unit, Inca Garcilaso 3, Seville 41092, Spain
- A. Kotsev
- European Commission, Joint Research Centre (JRC), Digital Economy Unit, Via Enrico Fermi 2749, Ispra 21027, Italy
3
Hoffmann L, Runkel CB, Künzel S, Kabiri P, Rübsam A, Bonaventura T, Marquardt P, Haas V, Biniaminov N, Biniaminov S, Joussen AM, Zeitz O. Using Deep Learning to Distinguish Highly Malignant Uveal Melanoma from Benign Choroidal Nevi. J Clin Med 2024; 13:4141. [PMID: 39064181] [PMCID: PMC11277885] [DOI: 10.3390/jcm13144141]
Abstract
Background: This study aimed to evaluate the potential of human-machine interaction (HMI) in a deep learning software for discerning the malignancy of choroidal melanocytic lesions based on fundus photographs. Methods: The study enrolled individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023, resulting in a cohort of 762 eligible cases. A deep learning-based assistant integrated into the software underwent training using a dataset comprising 762 color fundus photographs (CFPs) of choroidal lesions captured by various fundus cameras. The dataset was categorized into benign nevi, untreated choroidal melanomas, and irradiated choroidal melanomas. The reference standard for evaluation was established by retinal specialists using multimodal imaging. Trinary and binary models were trained, and their classification performance was evaluated on a test set consisting of 100 independent images. The discriminative performance of deep learning models was evaluated based on accuracy, recall, and specificity. Results: The final accuracy rates on the independent test set for multi-class and binary (benign vs. malignant) classification were 84.8% and 90.9%, respectively. Recall and specificity ranged from 0.85 to 0.90 and 0.91 to 0.92, respectively. The mean area under the curve (AUC) values were 0.96 and 0.99, respectively. Optimal discriminative performance was observed in binary classification with the incorporation of a single imaging modality, achieving an accuracy of 95.8%. Conclusions: The deep learning models demonstrated commendable performance in distinguishing the malignancy of choroidal lesions. The software exhibits promise for resource-efficient and cost-effective pre-stratification.
Affiliation(s)
- Laura Hoffmann
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Constance B. Runkel
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Steffen Künzel
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Payam Kabiri
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Anne Rübsam
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Theresa Bonaventura
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Antonia M. Joussen
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
- Oliver Zeitz
- Department of Ophthalmology, Charité University Hospital Berlin, 12203 Berlin, Germany
4
De Souza J, Viswanath VK, Echterhoff JM, Chamberlain K, Wang EJ. Augmenting Telepostpartum Care With Vision-Based Detection of Breastfeeding-Related Conditions: Algorithm Development and Validation. JMIR AI 2024; 3:e54798. [PMID: 38913995] [PMCID: PMC11231616] [DOI: 10.2196/54798]
Abstract
BACKGROUND Breastfeeding benefits both the mother and infant and is a topic of attention in public health. After childbirth, untreated medical conditions or lack of support lead many mothers to discontinue breastfeeding. For instance, nipple damage and mastitis affect 80% and 20% of US mothers, respectively. Lactation consultants (LCs) help mothers with breastfeeding, providing in-person, remote, and hybrid lactation support. LCs guide, encourage, and find ways for mothers to have a better breastfeeding experience. Current telehealth services help mothers seek LCs for breastfeeding support, where images help them identify and address many issues. Due to the disproportionate ratio of LCs to mothers in need, these professionals are often overloaded and burned out. OBJECTIVE This study aims to investigate the effectiveness of 5 distinct convolutional neural networks in detecting healthy lactating breasts and 6 breastfeeding-related issues using only red, green, and blue (RGB) images. Our goal was to assess the applicability of this algorithm as an auxiliary resource for LCs to identify painful breast conditions quickly, better manage their patients through triage, respond promptly to patient needs, and enhance the overall experience and care for breastfeeding mothers. METHODS We evaluated the potential of 5 classification models to detect breastfeeding-related conditions using 1078 breast and nipple images gathered from web-based and physical educational resources. We used the convolutional neural networks ResNet50, Visual Geometry Group model with 16 layers (VGG16), InceptionV3, EfficientNetV2, and DenseNet169 to classify the images across 7 classes: healthy, abscess, mastitis, nipple blebs, dermatosis, engorgement, and nipple damage by improper feeding or misuse of breast pumps. We also evaluated the models' ability to distinguish between healthy and unhealthy images. We present an analysis of the classification challenges, identifying image traits that may confound the detection model. RESULTS The best model achieves an average area under the receiver operating characteristic curve of 0.93 for all conditions after data augmentation for multiclass classification. For binary classification, the best model achieved an average area under the curve of 0.96 for all conditions after data augmentation. Several factors contributed to the misclassification of images, including similar visual features among conditions that precede other conditions (such as the mastitis spectrum disorder), partially covered breasts or nipples, and images depicting multiple conditions in the same breast. CONCLUSIONS This vision-based automated detection technique offers an opportunity to enhance postpartum care for mothers and can potentially help alleviate the workload of LCs by expediting decision-making processes.
Affiliation(s)
- Jessica De Souza
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, United States
- Varun Kumar Viswanath
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, United States
- Jessica Maria Echterhoff
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, United States
- Kristina Chamberlain
- Division of Extended Studies, University of California, San Diego, La Jolla, CA, United States
- Edward Jay Wang
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, United States
5
Ramamurthy K, Thayumanaswamy I, Radhakrishnan M, Won D, Lingaswamy S. Integration of Localized, Contextual, and Hierarchical Features in Deep Learning for Improved Skin Lesion Classification. Diagnostics (Basel) 2024; 14:1338. [PMID: 39001229] [PMCID: PMC11241006] [DOI: 10.3390/diagnostics14131338]
Abstract
Skin lesion classification is vital for the early detection and diagnosis of skin diseases, facilitating timely intervention and treatment. However, existing classification methods face challenges in managing complex information and long-range dependencies in dermoscopic images. Therefore, this research aims to enhance the feature representation by incorporating local, global, and hierarchical features to improve the performance of skin lesion classification. We introduce a novel dual-track deep learning (DL) model for skin lesion classification. The first track utilizes a modified DenseNet-169 architecture that incorporates a Coordinate Attention Module (CoAM). The second track employs a customized convolutional neural network (CNN) comprising a Feature Pyramid Network (FPN) and a Global Context Network (GCN) to capture multiscale features and global contextual information. The local features from the first track and the global features from the second track are used for precise localization and modeling of long-range dependencies. By leveraging these architectural advancements within the DenseNet framework, the proposed network achieved better performance than previous approaches. The network was trained and validated using the HAM10000 dataset, achieving a classification accuracy of 93.2%.
Affiliation(s)
- Karthik Ramamurthy
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India
- Illakiya Thayumanaswamy
- Department of Computational Intelligence, School of Computing, SRM Institute of Science and Technology, Kattankulathur 603203, India
- Menaka Radhakrishnan
- Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India
- Daehan Won
- System Sciences and Industrial Engineering, Binghamton University, Binghamton, NY 13902, USA
- Sindhia Lingaswamy
- Department of Computer Applications, National Institute of Technology, Tiruchirappalli 620015, India
6
Li B, Chen H, Duan H. Artificial intelligence-driven prognostic system for conception prediction and management in intrauterine adhesions following hysteroscopic adhesiolysis: a diagnostic study using hysteroscopic images. Front Bioeng Biotechnol 2024; 12:1327207. [PMID: 38638324] [PMCID: PMC11024240] [DOI: 10.3389/fbioe.2024.1327207]
Abstract
Introduction Intrauterine adhesions (IUAs) caused by endometrial injury, commonly occurring in developing countries, can lead to subfertility. This study aimed to develop and evaluate a DeepSurv architecture-based artificial intelligence (AI) system for predicting fertility outcomes after hysteroscopic adhesiolysis. Methods This diagnostic study included 555 patients with intrauterine adhesions (IUAs) treated with hysteroscopic adhesiolysis, with 4,922 second-look hysteroscopic images from a prospective clinical database (IUADB, NCT05381376) and a minimum of 2 years of follow-up. These patients were randomly divided into training, validation, and test groups for model development, tuning, and external validation. Four transfer learning models were built using the DeepSurv architecture, and a code-free AI application for pregnancy prediction was also developed. The primary outcome was the model's ability to predict pregnancy within a year after adhesiolysis. Secondary outcomes were model performance, evaluated using time-dependent areas under the curve (AUCs) and the C-index, and the benefit of assisted reproductive technology (ART), evaluated by hazard ratio (HR) among different risk groups. Results External validation revealed that, using the DeepSurv architecture, InceptionV3+DeepSurv, InceptionResNetV2+DeepSurv, and ResNet50+DeepSurv achieved AUCs of 0.94, 0.95, and 0.93, respectively, for one-year pregnancy prediction, outperforming other models and clinical scoring systems. A code-free AI application was developed to identify candidates for ART. Patients with a lower natural conception probability indicated by the application had a higher ART benefit hazard ratio (HR) of 3.13 (95% CI: 1.22-8.02, p = 0.017). Conclusion InceptionV3+DeepSurv, InceptionResNetV2+DeepSurv, and ResNet50+DeepSurv show potential in predicting the fertility outcomes of IUAs after hysteroscopic adhesiolysis. The code-free AI application based on the DeepSurv architecture facilitates personalized therapy following hysteroscopic adhesiolysis.
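The C-index used above measures, over all comparable patient pairs, how often the patient with the shorter time-to-event also received the higher predicted risk. A minimal pure-Python sketch of Harrell's concordance index on toy data (not the study's evaluation code):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for survival predictions.

    A pair (i, j) is comparable when subject i had the event and a
    shorter follow-up time than j; the pair is concordant when i also
    has the higher predicted risk. Ties in predicted risk count as 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:  # pair (i, j) is comparable
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ordered toy data: shorter time always gets higher risk.
score = c_index(times=[2, 4, 6], events=[1, 1, 1], risks=[0.9, 0.5, 0.1])
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ranking; in practice a library such as lifelines would be used rather than this quadratic loop.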
Affiliation(s)
- Bohan Li
- Department of Minimally Invasive Gynecologic Center, Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Healthcare Hospital, Beijing, China
- Hui Chen
- School of Biomedical Engineering, Capital Medical University, Beijing, China
- Beijing Advanced Innovation Center for Big Data-based Precision Medicine, Capital Medical University, Beijing, China
- Hua Duan
- Department of Minimally Invasive Gynecologic Center, Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Healthcare Hospital, Beijing, China
7
Gayatri E, Aarthy SL. Reduction of overfitting on the highly imbalanced ISIC-2019 skin dataset using deep learning frameworks. J Xray Sci Technol 2024; 32:53-68. [PMID: 38189730] [DOI: 10.3233/xst-230204]
Abstract
BACKGROUND With the rapid growth of Deep Neural Networks (DNNs) and Computer-Aided Diagnosis (CAD), a growing body of work has analysed cancer-related diseases. Skin cancer is among the most hazardous types of cancer and is difficult to diagnose in its early stages. OBJECTIVE The diagnosis of skin cancer is a challenge for dermatologists, as an abnormal lesion looks like an ordinary nevus in the initial stages. Therefore, early identification of lesions (the origin of skin cancer) is essential and helpful for treating skin cancer patients effectively. The rapid development of automated skin cancer diagnosis systems significantly supports dermatologists. METHODS This paper performs skin cancer classification utilising various deep learning frameworks after resolving the class imbalance problem in the ISIC-2019 dataset. A fine-tuned ResNet-50 model is used to evaluate performance on the original data, on augmented data, and after adding focal loss. Focal loss mitigates overfitting by assigning higher weights to hard, misclassified images. RESULTS Finally, augmented data with focal loss gave good classification performance, with 98.85% accuracy, 95.52% precision, and 95.93% recall. The Matthews correlation coefficient (MCC), a well-suited metric for evaluating multi-class image classification quality, also indicated outstanding performance when using augmented data and focal loss.
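Focal loss, as used above, rescales cross-entropy by a factor of (1 - p_t)^gamma so that confident, correct predictions contribute almost nothing and training concentrates on hard, misclassified examples. A minimal binary sketch in NumPy (the paper applies it in a multi-class setting; the gamma and alpha values here are the common defaults from Lin et al., assumed rather than taken from the paper):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy down-weighted for easy examples.

    `p` are predicted probabilities of the positive class, `y` the 0/1
    labels, `gamma` the focusing parameter, `alpha` the class weight.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)           # numerical safety
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    at = np.where(y == 1, alpha, 1 - alpha)  # class weighting
    return np.mean(-at * (1 - pt) ** gamma * np.log(pt))

easy = focal_loss(np.array([0.95]), np.array([1]))  # confident and correct
hard = focal_loss(np.array([0.30]), np.array([1]))  # misclassified
```

With gamma = 0 this reduces to weighted cross-entropy; increasing gamma widens the gap between the contribution of easy and hard examples.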
Affiliation(s)
- S L Aarthy
- SCOPE, Vellore Institute of Technology, Vellore, Tamil Nadu, India
8
Kushimo OO, Salau AO, Adeleke OJ, Olaoye DS. Deep learning model to improve melanoma detection in people of color. Arab J Basic Appl Sci 2023. [DOI: 10.1080/25765299.2023.2170066]
Affiliation(s)
- Oluwatobi O. Kushimo
- Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
- Ayodeji Olalekan Salau
- Department of Electrical/Electronics and Computer Engineering, Afe Babalola University, Ado-Ekiti, Nigeria
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
- Oladapo J. Adeleke
- Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
- Doyinsola S. Olaoye
- Department of Electronic and Electrical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
9
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. [PMID: 38201399] [PMCID: PMC10795598] [DOI: 10.3390/diagnostics14010089]
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway for further enhancing diagnostic accuracy. This study introduces an approach employing the Max Voting Ensemble Technique for robust skin cancer classification on the ISIC 2018: Task 1-2 dataset. We incorporate a range of state-of-the-art, pre-trained deep neural networks, including MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages the synergistic capabilities of these models by combining their complementary features to elevate classification performance further. In our approach, input images undergo preprocessing for model compatibility. The ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction. These are subsequently aggregated using the max voting ensemble technique to yield the final classification, with the majority-voted class serving as the conclusive prediction. Through comprehensive testing on a diverse dataset, our ensemble outperformed individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, thus demonstrating superior diagnostic reliability and accuracy. We evaluated the effectiveness of our proposed method on the HAM10000 dataset to ensure its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. By utilizing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
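The max-voting step described above reduces to a per-image majority over the labels cast by each model. A minimal sketch (class labels illustrative; the study ensembles ten CNNs rather than the three shown):

```python
from collections import Counter

def max_vote(predictions):
    """Hard-voting ensemble: each model casts one class label per image;
    the majority label wins (ties broken by first occurrence).

    `predictions` is a list of per-model label lists, all the same length.
    """
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Three models' labels for four lesions (illustrative class names):
final = max_vote([
    ["mel", "nv", "bcc", "nv"],   # model A
    ["mel", "nv", "nv",  "nv"],   # model B
    ["nv",  "nv", "bcc", "mel"],  # model C
])
```

Soft voting (averaging predicted probabilities before taking the argmax) is the usual alternative when the individual models expose calibrated scores.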
Affiliation(s)
- Md. Mamun Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Md. Moazzem Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Most. Binoee Arefin
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- Fahima Akhtar
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
- John Blake
- School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
10
Li M, Hu Z, Qiu S, Zhou C, Weng J, Dong Q, Sheng X, Ren N, Zhou M. Dual-branch hybrid encoding embedded network for histopathology image classification. Phys Med Biol 2023; 68:195002. [PMID: 37647919] [DOI: 10.1088/1361-6560/acf556]
Abstract
Objective. Learning-based histopathology image (HI) classification methods serve as important tools for auxiliary diagnosis in the prognosis stage. However, most existing methods focus on a single target cancer due to inter-domain differences among cancer types, limiting their applicability across cancers. To overcome these limitations, this paper presents a high-performance HI classification method that addresses inter-domain differences and provides a reliable and practical solution. Approach. First, we collect a high-quality hepatocellular carcinoma (HCC) dataset with enough data to verify the stability and practicability of the method. Second, a novel dual-branch hybrid encoding embedded network is proposed, which integrates the feature extraction capabilities of convolutional neural networks and Transformers. This well-designed structure enables the network to extract diverse features while minimizing the redundancy of a single complex network. Lastly, we develop a salient-area constraint loss function tailored to the unique characteristics of HIs to address inter-domain differences and enhance the robustness and universality of the method. Main results. Extensive experiments were conducted on the proposed HCC dataset and two other publicly available datasets. The proposed method demonstrates outstanding performance, with an impressive accuracy of 99.09% on the HCC dataset, and achieves state-of-the-art results on the two public datasets. These remarkable outcomes underscore the superior performance and versatility of our approach across multiple HI classification tasks. Significance. The advancements presented in this study contribute to the field of HI analysis by providing a reliable and practical solution for classification across multiple cancer types, potentially improving diagnostic accuracy and patient outcomes. Our code is available at https://github.com/lms-design/DHEE-net.
Affiliation(s)
- Mingshuai Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
- Zhiqiu Hu
- Department of Hepatobiliary and Pancreatic Surgery, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Song Qiu
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
- MOE Engineering Research Center of Software/Hardware Co-design Technology and Application, East China Normal University, Shanghai, 200241, People's Republic of China
- Chenhao Zhou
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Jialei Weng
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Qiongzhu Dong
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Xia Sheng
- Department of Pathology, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Ning Ren
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Mei Zhou
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
11
Mehmood A, Gulzar Y, Ilyas QM, Jabbari A, Ahmad M, Iqbal S. SBXception: A Shallower and Broader Xception Architecture for Efficient Classification of Skin Lesions. Cancers (Basel) 2023; 15:3604. [PMID: 37509267] [PMCID: PMC10377736] [DOI: 10.3390/cancers15143604]
Abstract
Skin cancer is a major public health concern around the world. Skin cancer identification is critical for effective treatment and improved outcomes. Deep learning models have shown considerable promise in assisting dermatologists in skin cancer diagnosis. This study proposes SBXception: a shallower and broader variant of the Xception network. It uses Xception as the base model for skin cancer classification and increases its performance by reducing the depth and expanding the breadth of the architecture. We used the HAM10000 dataset, which contains 10,015 dermatoscopic images of skin lesions classified into seven categories, for training and testing the proposed model. Using the HAM10000 dataset, we fine-tuned the new model and reached an accuracy of 96.97% on a holdout test set. SBXception also achieved significant performance enhancement with 54.27% fewer training parameters and reduced training time compared to the base model. Our findings show that reducing the depth and expanding the breadth of the Xception architecture can greatly improve its performance in skin cancer categorization.
Affiliation(s)
- Abid Mehmood
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Yonis Gulzar
- Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Qazi Mudassar Ilyas
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
- Abdoh Jabbari
- College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia
- Muneer Ahmad
- Department of Human and Digital Interface, Woosong University, Daejeon 34606, Republic of Korea
- Sajid Iqbal
- Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia
12
Hasan MK, Ahamad MA, Yap CH, Yang G. A survey, review, and future trends of skin lesion segmentation and classification. Comput Biol Med 2023; 155:106624. [PMID: 36774890; DOI: 10.1016/j.compbiomed.2023.106624]
Abstract
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing a user-friendly tool to dermatologists to reduce the challenges associated with manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized along several dimensions to contribute vital information on the development of CAD systems: relevant and essential definitions and theories; input data (dataset utilization, preprocessing, augmentation, and class-imbalance handling); method configuration (techniques, architectures, module frameworks, and losses); training tactics (hyperparameter settings); and evaluation criteria. We also examine a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss current trends in each dimension based on utilization frequency. In addition, we highlight the primary difficulties of evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
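One of the imbalance remedies this survey catalogs is re-weighting the loss by inverse class frequency. A minimal sketch of that idea (the normalization choice here is one common convention, not the survey's prescription):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency class weights, normalised so the weights of a
    perfectly balanced dataset would all equal 1. Rare classes get
    proportionally larger weights in the training loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Toy imbalanced label set: 8 nevi vs 2 melanomas
w = class_weights(["nev"] * 8 + ["mel"] * 2)
print(w)  # {'nev': 0.625, 'mel': 2.5}
```

Frameworks typically accept such a mapping directly (e.g. as per-class loss weights), so the minority class contributes as much gradient signal per epoch as the majority class.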
Affiliation(s)
- Md Kamrul Hasan
- Department of Bioengineering, Imperial College London, UK; Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh.
- Md Asif Ahamad
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh.
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, UK.
- Guang Yang
- National Heart and Lung Institute, Imperial College London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, UK.
13
Ding Y, Yi Z, Li M, Long J, Lei S, Guo Y, Fan P, Zuo C, Wang Y. HI-MViT: A lightweight model for explainable skin disease classification based on modified MobileViT. Digit Health 2023; 9:20552076231207197. [PMID: 37846401; PMCID: PMC10576942; DOI: 10.1177/20552076231207197]
Abstract
Objective: To develop an explainable, lightweight, high-precision skin disease classification model that can be deployed on mobile devices.
Methods: In this study, we present HI-MViT, a lightweight network for explainable skin disease classification based on a modified MobileViT. HI-MViT is mainly composed of ordinary convolution, Improved-MV2, MobileViT blocks, global pooling, and fully connected layers. Improved-MV2 combines shortcut connections with depthwise separable convolution to substantially decrease the amount of computation while ensuring efficient information interaction and memory use. The MobileViT block can efficiently encode both local and global information. In addition, semantic feature dimensionality-reduction visualization and class activation mapping are used to further understand the regions HI-MViT attends to when learning skin lesion images.
Results: The International Skin Imaging Collaboration has assembled and made available the ISIC series datasets. Experiments with HI-MViT on the ISIC-2018 dataset achieved scores of 0.931, 0.932, 0.961, and 0.977 for F1-Score, Accuracy, Average Precision (AP), and area under the curve (AUC). Compared with the top five algorithms of ISIC-2018 Task 3, its macro-average F1-Score, AP, and AUC are 6.9%, 6.8%, and 0.8% higher than those of the second-best model. Compared with ConvNeXt, the most competitive convolutional neural network architecture, our model is 5.0%, 3.4%, 2.3%, and 2.2% higher in F1-Score, Accuracy, AP, and AUC, respectively. Experiments on the ISIC-2017 dataset also achieved excellent results, with all indicators better than the top five algorithms of ISIC-2017 Task 3. Testing the trained model on the PH2 dataset likewise yielded excellent scores, indicating good generalization performance.
Conclusions: The skin disease classification model HI-MViT proposed in this article shows excellent classification and generalization performance in experiments. It demonstrates how the classification outcomes can support dermatologists' computer-assisted diagnostics, enabling medical professionals to classify various dermoscopic images more rapidly and reliably.
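The class activation mapping (CAM) the abstract mentions for explainability has a simple core: weight the final convolutional feature maps by the fully connected weights of the target class and sum them. A numpy sketch of that computation (shapes and random values are illustrative, not HI-MViT's actual tensors):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weight each final-stage feature map by the FC weight for the
    target class and sum; high values mark image regions that drove the
    class score. feature_maps: (K, H, W); fc_weights: (num_classes, K)."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalise to [0, 1] for heat-map overlay
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((4, 7, 7))    # 4 feature maps from the last conv stage
weights = rng.random((7, 4))     # weights for 7 lesion classes
cam = class_activation_map(fmaps, weights, class_idx=2)
print(cam.shape, cam.min(), cam.max())  # (7, 7) 0.0 1.0
```

The low-resolution map is then upsampled to the input image size and overlaid, which is how such models visualize their attention area on the lesion.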
Affiliation(s)
- Yuhan Ding
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- School of Computer Science and Engineering, Central South University, Changsha, Hunan, China
- Zhenglin Yi
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Department of Urology, Xiangya Hospital, Central South University, Changsha, China
- Mengjuan Li
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Jianhong Long
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shaorong Lei
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Yu Guo
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Pengju Fan
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Chenchen Zuo
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Yongjie Wang
- Department of Burns and Plastic Surgery, Xiangya Hospital, Central South University, Changsha, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
14
Magdy A, Hussein H, Abdel-Kader RF, Salam KAE. Performance Enhancement of Skin Cancer Classification Using Computer Vision. IEEE Access 2023; 11:72120-72133. [DOI: 10.1109/access.2023.3294974]
Affiliation(s)
- Ahmed Magdy
- Electrical Engineering Department, Suez Canal University, Ismailia, Egypt
- Hadeer Hussein
- Electrical Engineering Department, Suez Canal University, Ismailia, Egypt
15
Shinde RK, Alam MS, Hossain MB, Md Imtiaz S, Kim J, Padwal AA, Kim N. Squeeze-MNet: Precise Skin Cancer Detection Model for Low Computing IoT Devices Using Transfer Learning. Cancers (Basel) 2022; 15:12. [PMID: 36612010; PMCID: PMC9817940; DOI: 10.3390/cancers15010012]
Abstract
Cancer remains a deadly disease. We developed a lightweight, accurate, general-purpose deep learning algorithm for skin cancer classification. Squeeze-MNet combines a Squeeze algorithm for digital hair removal during preprocessing with a MobileNet deep learning model using predefined weights. The Squeeze algorithm extracts important features from the image, and the black-hat filter operation removes noise. The MobileNet model (with a dense neural network) was fine-tuned using the International Skin Imaging Collaboration (ISIC) dataset. The proposed model is lightweight; the prototype was tested on a Raspberry Pi 4 Internet of Things device with a NeoPixel 8-bit LED ring, and a medical doctor validated the device. The average precision (AP) for benign and malignant diagnoses was 99.76% and 98.02%, respectively. Using our approach, the required dataset size decreased by 66%. The hair removal algorithm increased the accuracy of skin cancer detection to 99.36% with the ISIC dataset. The area under the receiver operating curve was 98.9%.
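The black-hat filter used here for digital hair removal is morphological closing minus the original image, which responds strongly to thin dark structures (hairs) on a brighter background. A minimal numpy sketch with a square structuring element (real pipelines typically use an optimized library such as OpenCV rather than this loop):

```python
import numpy as np

def _window_op(img, size, op):
    """Apply op (max for dilation, min for erosion) over a size x size
    sliding window, with edge padding to preserve shape."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = op(p[i:i + size, j:j + size])
    return out

def black_hat(img, size=3):
    """Black-hat = closing(img) - img, where closing = erode(dilate)."""
    closed = _window_op(_window_op(img, size, np.max), size, np.min)
    return closed - img

skin = np.full((9, 9), 0.8)
skin[4, :] = 0.1               # a thin dark "hair" across the patch
response = black_hat(skin, size=3)
print(response[4, 4], response[0, 0])  # ~0.7 on the hair, 0.0 elsewhere
```

Thresholding the response gives a hair mask, which can then be inpainted before the image is fed to the classifier.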
Affiliation(s)
- Rupali Kiran Shinde
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Md. Biddut Hossain
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Shariar Md Imtiaz
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- JoonHyun Kim
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Nam Kim
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
16
Li Z, Koban KC, Schenck TL, Giunta RE, Li Q, Sun Y. Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. J Clin Med 2022; 11:6826. [PMID: 36431301; PMCID: PMC9693628; DOI: 10.3390/jcm11226826]
Abstract
BACKGROUND: Thanks to the rapid development of computer-based systems and deep-learning-based algorithms, artificial intelligence (AI) has long been integrated into the healthcare field. AI is also particularly helpful in image recognition, surgical assistance and basic research. Due to the unique nature of dermatology, AI-aided dermatological diagnosis based on image recognition has become a modern focus and future trend.
KEY SCIENTIFIC CONCEPTS OF REVIEW: The use of 3D imaging systems allows clinicians to screen and label pigmented skin lesions and distributed disorders, which can provide an objective assessment and image documentation of lesion sites. Dermatoscopes combined with intelligent software help the dermatologist easily correlate each close-up image with the corresponding marked lesion in the 3D body map. In addition, AI in the field of prosthetics can assist in the rehabilitation of patients and help to restore limb function after amputation in patients with skin tumors.
AIM OF THE STUDY: For the benefit of patients, dermatologists have an obligation to explore the opportunities, risks and limitations of AI applications. This study focuses on the application of emerging AI in dermatology to aid clinical diagnosis and treatment, analyzes the current state of the field and summarizes its future trends and prospects, so as to help dermatologists realize the impact of new technological innovations on traditional practices and embrace AI-based medical approaches more quickly.
Affiliation(s)
- Zhouxiao Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Thilo Ludwig Schenck
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Riccardo Enzo Giunta
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
- Qingfeng Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Yangbai Sun
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
17
Foahom Gouabou AC, Collenne J, Monnier J, Iguernaissi R, Damoiseaux JL, Moudafi A, Merad D. Computer Aided Diagnosis of Melanoma Using Deep Neural Networks and Game Theory: Application on Dermoscopic Images of Skin Lesions. Int J Mol Sci 2022; 23:13838. [PMID: 36430315; PMCID: PMC9696950; DOI: 10.3390/ijms232213838]
Abstract
Early detection of melanoma remains a daily challenge due to the increasing number of cases and the lack of dermatologists. Thus, AI-assisted diagnosis is considered a possible solution. Despite the great advances brought by deep learning, and especially convolutional neural networks (CNNs), computer-aided diagnosis (CAD) systems are still not used in clinical practice. This may be explained by dermatologists' fear of being misled by a false negative and the perception of CNNs as a "black box", making their decision process difficult for a non-expert to understand. Decision theory, especially game theory, is a potential solution as it focuses on identifying the best decision option that maximizes the decision-maker's expected utility. This study presents a new framework for automated melanoma diagnosis. Pursuing the goal of improving the performance of existing systems, our approach also attempts to bring more transparency to the decision process. The proposed framework includes a multi-class CNN and six binary CNNs treated as players. The players' first strategy is to cluster the pigmented lesions (melanoma, nevus, and benign keratosis) into confidence levels (confident, medium, uncertain), using an introduced method for evaluating the confidence of the predictions. A subset of players then refines the diagnosis for difficult lesions with medium or uncertain predictions. We used EfficientNetB5 as the backbone of our networks and evaluated our approach on the public ISIC dataset consisting of 8917 lesions: melanoma (1113), nevi (6705) and benign keratosis (1099). The proposed framework achieved an area under the receiver operating curve (AUROC) of 0.93 for melanoma, 0.96 for nevus and 0.97 for benign keratosis. Furthermore, our approach outperformed existing methods on this task, improving the balanced accuracy (BACC) of the best compared method from 77% to 86%.
These results suggest that our framework provides an effective and explainable decision-making strategy. This approach could help dermatologists in their clinical practice with patients who have atypical and difficult-to-diagnose pigmented lesions. We also believe that our system could serve as a didactic tool for less experienced dermatologists.
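The routing idea described above, accepting confident predictions and sending the rest to refiner models, can be sketched as follows. The threshold values and function names here are illustrative assumptions, not the paper's actual confidence-evaluation method:

```python
def confidence_level(probs, confident=0.8, medium=0.5):
    """Bucket a softmax output by its top probability into the
    confident / medium / uncertain split described in the framework.
    Thresholds are illustrative, not the paper's."""
    top = max(probs)
    if top >= confident:
        return "confident"
    return "medium" if top >= medium else "uncertain"

def diagnose(probs, classes, refine):
    """Accept confident predictions outright; hand medium/uncertain
    cases to a refiner (e.g. a set of binary CNN 'players')."""
    pred = classes[probs.index(max(probs))]
    if confidence_level(probs) == "confident":
        return pred
    return refine(pred, probs)  # hypothetical refiner callable

classes = ["melanoma", "nevus", "benign_keratosis"]
print(confidence_level([0.91, 0.06, 0.03]))  # confident
print(confidence_level([0.45, 0.40, 0.15]))  # uncertain
```

Making the accept/refine decision explicit in this way is what gives the framework its transparency: each diagnosis carries a traceable confidence rationale.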
Affiliation(s)
- Jules Collenne
- LIS, CNRS, Aix Marseille University, 13288 Marseille, France
- Jilliana Monnier
- LIS, CNRS, Aix Marseille University, 13288 Marseille, France
- Research Cancer Centre of Marseille, Inserm, CNRS, Aix-Marseille University, 13273 Marseille, France
- Dermatology and Skin Cancer Department, La Timone Hospital, AP-HM, Aix-Marseille University, 13385 Marseille, France
- Djamal Merad
- LIS, CNRS, Aix Marseille University, 13288 Marseille, France
18
Fraiwan M, Faouri E. On the Automatic Detection and Classification of Skin Cancer Using Deep Transfer Learning. Sensors 2022; 22:4963. [PMID: 35808463; PMCID: PMC9269808; DOI: 10.3390/s22134963]
Abstract
Skin cancer (melanoma and non-melanoma) is one of the most common cancer types and leads to hundreds of thousands of deaths worldwide every year. It manifests itself through abnormal growth of skin cells. Early diagnosis drastically increases the chances of recovery. Moreover, it may render surgical, radiographic, or chemical therapies unnecessary or lessen their overall usage, reducing healthcare costs. The process of diagnosing skin cancer starts with dermoscopy, which inspects the general shape, size, and color characteristics of skin lesions; suspected lesions then undergo further sampling and lab tests for confirmation. Image-based diagnosis has undergone great advances recently due to the rise of deep learning artificial intelligence. The work in this paper examines the applicability of raw deep transfer learning in classifying images of skin lesions into seven possible categories. Using the HAM10000 dataset of dermoscopy images, a system that accepts these images as input without explicit feature extraction or preprocessing was developed using 13 deep transfer learning models. Extensive evaluation revealed the advantages and shortcomings of such a method. Although some cancer types were correctly classified with high accuracy, the imbalance of the dataset, the small number of images in some categories, and the large number of classes reduced the best overall accuracy to 82.9%.
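The essence of the transfer-learning setup examined here is keeping a pretrained backbone frozen and training only a new classification head on its features. A minimal numpy sketch of that idea using a logistic-regression head on toy "frozen features" (the data and hyperparameters are illustrative, not the paper's):

```python
import numpy as np

def train_head(features, labels, lr=0.5, steps=500):
    """Train only a new binary classification head (logistic regression)
    on frozen backbone features -- the core of deep transfer learning."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid scores
        grad = p - labels                              # logistic-loss gradient
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy "frozen features": the two classes separate along dimension 0
X = np.array([[0.1, 1.0], [0.2, 0.8], [0.9, 1.1], [1.0, 0.9]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b = train_head(X, y)
preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
print(preds)  # [0. 0. 1. 1.]
```

In practice the same pattern appears as freezing a pretrained network's convolutional layers and fitting only the replaced final layer(s), which is what makes such systems trainable on modest dermoscopy datasets.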
19
An Effective Skin Cancer Classification Mechanism via Medical Vision Transformer. Sensors 2022; 22:4008. [PMID: 35684627; PMCID: PMC9182815; DOI: 10.3390/s22114008]
Abstract
Skin Cancer (SC) is considered the deadliest disease in the world, killing thousands of people every year. Early SC detection can increase the survival rate for patients up to 70%, hence it is highly recommended that regular head-to-toe skin examinations are conducted to determine whether there are any signs or symptoms of SC. The use of Machine Learning (ML)-based methods is having a significant impact on the classification and detection of SC diseases. However, there are certain challenges associated with the accurate classification of these diseases, such as lower detection accuracy, poor generalization of the models, and an insufficient amount of labeled data for training. To address these challenges, in this work we developed a two-tier framework for the accurate classification of SC. During the first stage of the framework, we applied different data augmentation methods to increase the number of image samples for effective training. In the second tier of the framework, taking into consideration the promising performance of the Medical Vision Transformer (MVT) in the analysis of medical images, we developed an MVT-based classification model for SC. This MVT splits the input image into image patches and then feeds these patches to the transformer as a sequence, analogous to word embeddings. Finally, a Multi-Layer Perceptron (MLP) is used to classify the input image into the corresponding class. Based on the experimental results achieved on the Human Against Machine (HAM10000) dataset, we concluded that the proposed MVT-based model achieves better results than current state-of-the-art techniques for SC classification.
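The patch-splitting step the abstract describes, turning an image into a token sequence before the transformer, can be sketched in a few lines of numpy (the sizes are illustrative; a real ViT-style model would also project each flattened patch through a learned linear embedding and add position encodings):

```python
import numpy as np

def image_to_patches(img, patch):
    """Split an (H, W, C) image into a sequence of flattened patches,
    the ViT-style tokenisation applied before the transformer."""
    h, w, c = img.shape
    assert h % patch == 0 and w % patch == 0
    # (h_blocks, patch, w_blocks, patch, c) -> (h_blocks, w_blocks, ...)
    rows = img.reshape(h // patch, patch, w // patch, patch, c)
    seq = rows.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * c)
    return seq  # shape: (num_patches, patch * patch * channels)

img = np.arange(6 * 6 * 3, dtype=float).reshape(6, 6, 3)
tokens = image_to_patches(img, patch=3)
print(tokens.shape)  # (4, 27)
```

Each row of `tokens` then plays the role a word embedding plays in NLP transformers, which is exactly the analogy the authors draw.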
20
Bhimavarapu U, Battineni G. Skin Lesion Analysis for Melanoma Detection Using the Novel Deep Learning Model Fuzzy GC-SCNN. Healthcare (Basel) 2022; 10:962. [PMID: 35628098; PMCID: PMC9141659; DOI: 10.3390/healthcare10050962]
Abstract
Melanoma is easily detectable by visual examination since it occurs on the skin’s surface. In melanomas, which are the most severe type of skin cancer, the cells that make melanin are affected. However, the lack of expert opinion increases the processing time and cost of computer-aided skin cancer detection. As such, we aimed to incorporate deep learning algorithms to conduct automatic melanoma detection from dermoscopic images. The fuzzy-based GrabCut-stacked convolutional neural networks (GC-SCNN) model was applied for image training. Image feature extraction and lesion classification were performed on different publicly available datasets. The fuzzy GC-SCNN coupled with support vector machines (SVM) produced 99.75% classification accuracy, with 100% sensitivity and specificity. Additionally, model performance was compared with existing techniques, and the outcomes suggest the proposed model could detect and classify lesion segments with higher accuracy and lower processing time than other techniques.
Affiliation(s)
- Usharani Bhimavarapu
- School of Competitive Coding, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Vijayawada 522502, India
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Correspondence: Tel. +39-3331728206