1
Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024; 178:108742. [PMID: 38875908] [DOI: 10.1016/j.compbiomed.2024.108742]
Abstract
In recent years, the accuracy of pigmented skin lesion classification using artificial intelligence algorithms has improved significantly. Intelligent analysis and classification systems significantly outperform the visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited by a lack of generalizability and the risk of potential misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as of promising directions for further research. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. For the study, 10,589 scientific research and review articles were retrieved from electronic scientific publishers, of which 171 articles were included in the presented systematic review. The selected articles are grouped by the class of neural network algorithm they propose, from machine learning to multimodal intelligent architectures, and are described in the corresponding sections of the manuscript. The review covers automated skin cancer recognition systems ranging from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. Finally, future research directions, prospects, and the potential for further development of automated neural network systems for classifying pigmented skin lesions are discussed.
Affiliation(s)
- U A Lyakhova: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia.
- P A Lyakhov: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia; North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia.
2
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. [PMID: 38446274] [DOI: 10.1007/s00403-024-02828-1]
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods to skin cancer diagnostics. Skin cancer is one of the most common types of cancer. In the USA alone, an estimated one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both the identification and the prevention of skin cancer. Early diagnosis and treatment are needed in order to decrease the number of deaths due to this disease. This paper also contains information regarding the most common skin cancer types, along with mortality and epidemiological data for Poland, Europe, Canada, and the USA. It also covers the most efficient and modern image recognition methods based on artificial intelligence currently applied for diagnostic purposes. Both sophisticated professional solutions and inexpensive ones are presented. This is a review paper covering solutions and statistics from the period 2017-2022. The authors decided to focus on the latest data, mostly due to rapid technological development and the increased number of new methods, which positively affects diagnosis and prognosis.
Affiliation(s)
- Maria Myslicka: Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345, Wroclaw, Poland.
- Aleksandra Kawala-Sterniuk: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland.
- Anna Bryniarska: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland.
- Adam Sudol: Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368, Opole, Poland.
- Michal Podpora: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland.
- Rafal Gasz: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland.
- Radek Martinek: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic.
- Radana Kahankova Vilimkova: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland; Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic.
- Dominik Vilimek: Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic.
- Mariusz Pelc: Institute of Computer Science, University of Opole, Oleska 48, 45-052, Opole, Poland; School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK.
- Dariusz Mikolajewski: Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074, Bydgoszcz, Poland; Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439, Lublin, Poland.
3
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. [PMID: 38201399] [PMCID: PMC10795598] [DOI: 10.3390/diagnostics14010089]
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway for further enhancing diagnostic accuracy. This study introduces an approach employing the max voting ensemble technique for robust skin cancer classification on the ISIC 2018: Task 1-2 dataset. We incorporate a range of state-of-the-art pre-trained deep neural networks: MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models were extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages their synergistic capabilities by combining their complementary features to further elevate classification performance. In our approach, input images undergo preprocessing for model compatibility, and the ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction; these are aggregated using the max voting ensemble technique, with the majority-voted class serving as the final classification. Through comprehensive testing on a diverse dataset, our ensemble outperformed the individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, demonstrating superior diagnostic reliability and accuracy. We also evaluated the effectiveness of the proposed method on the HAM10000 dataset to ensure its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. By harnessing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
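The max-voting step described in this abstract can be sketched in a few lines. This is a minimal illustration with hypothetical per-model class predictions, not the authors' code:

```python
from collections import Counter

def max_voting(per_model_preds):
    """Majority (max) vote across models.

    per_model_preds: one list per model, each holding the class index
    that model assigns to every image.
    """
    final = []
    for votes in zip(*per_model_preds):  # all models' votes for one image
        final.append(Counter(votes).most_common(1)[0][0])
    return final

# Three hypothetical models voting on four lesion images
preds = [[0, 1, 1, 2],
         [0, 1, 2, 2],
         [1, 1, 2, 0]]
print(max_voting(preds))  # [0, 1, 2, 2]
```

In a real pipeline each inner list would come from `argmax` over a network's softmax output; the vote itself is model-agnostic, which is why heterogeneous architectures can be combined this way.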
Affiliation(s)
- Md. Mamun Hossain: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh.
- Md. Moazzem Hossain: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh.
- Most. Binoee Arefin: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh.
- Fahima Akhtar: Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh.
- John Blake: School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan.
4
Jiang L, Huang S, Luo C, Zhang J, Chen W, Liu Z. An improved multi-scale gradient generative adversarial network for enhancing classification of colorectal cancer histological images. Front Oncol 2023; 13:1240645. [PMID: 38023227] [PMCID: PMC10679330] [DOI: 10.3389/fonc.2023.1240645]
Abstract
Introduction: Deep learning-based solutions for histological image classification have gained attention in recent years due to their potential for objective evaluation of histological images. However, these methods often require a large number of expert annotations, which are both time-consuming and labor-intensive to obtain. Several scholars have proposed generative models to augment labeled data, but these often result in label uncertainty due to incomplete learning of the data distribution.
Methods: To alleviate these issues, a method called InceptionV3-SMSG-GAN is proposed to enhance classification performance by generating high-quality images. Specifically, images synthesized by a Multi-Scale Gradients Generative Adversarial Network (MSG-GAN) are selectively added to the training set through a selection mechanism that uses a trained model to choose generated images with higher class probabilities. The selection mechanism filters out synthetic images that contain ambiguous category information, thus alleviating label uncertainty.
Results: Experimental results show that, compared with the baseline method using InceptionV3, the proposed method significantly improves pathological image classification, raising overall accuracy from 86.87% to 89.54%. Additionally, the quality of the generated images is evaluated quantitatively using several commonly used metrics.
Discussion: The proposed InceptionV3-SMSG-GAN method exhibited good classification ability, dividing histological images into nine categories. Future work could focus on further refining the image generation and selection processes to optimize classification performance.
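The selection mechanism described in the Methods section, keeping only synthetic images that a trained classifier assigns to the intended class with high probability, can be sketched as follows. The probabilities, image names, and threshold are illustrative assumptions, not values from the paper:

```python
def select_confident(images, class_probs, target_class, threshold=0.9):
    """Keep synthetic images whose classifier probability for the intended
    class is high; ambiguous ones are discarded to reduce label uncertainty.

    class_probs: one softmax vector (list of per-class probabilities)
    per synthetic image, from the trained selection model.
    """
    return [img for img, probs in zip(images, class_probs)
            if probs[target_class] >= threshold]

# Four synthetic images generated for class 2; two are ambiguous
probs = [[0.05, 0.03, 0.92],
         [0.40, 0.35, 0.25],
         [0.02, 0.02, 0.96],
         [0.30, 0.30, 0.40]]
images = ["synthetic_0", "synthetic_1", "synthetic_2", "synthetic_3"]
print(select_confident(images, probs, target_class=2))
# ['synthetic_0', 'synthetic_2']
```

Only the confidently classified images are then merged into the real training set, which is what lets the augmentation help rather than inject noisy labels.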
Affiliation(s)
- Liwen Jiang: Department of Pathology, Affiliated Cancer Hospital and Institution of Guangzhou Medical University, Guangzhou, China.
- Shuting Huang: School of Information Engineering, Guangdong University of Technology, Guangzhou, China.
- Chaofan Luo: School of Information Engineering, Guangdong University of Technology, Guangzhou, China.
- Jiangyu Zhang: Department of Pathology, Affiliated Cancer Hospital and Institution of Guangzhou Medical University, Guangzhou, China.
- Wenjing Chen: Department of Pathology, Guangdong Women and Children Hospital, Guangzhou, China.
- Zhenyu Liu: School of Information Engineering, Guangdong University of Technology, Guangzhou, China.
5
Mehmood A, Gulzar Y, Ilyas QM, Jabbari A, Ahmad M, Iqbal S. SBXception: A Shallower and Broader Xception Architecture for Efficient Classification of Skin Lesions. Cancers (Basel) 2023; 15:3604. [PMID: 37509267] [PMCID: PMC10377736] [DOI: 10.3390/cancers15143604]
Abstract
Skin cancer is a major public health concern around the world, and its identification is critical for effective treatment and improved outcomes. Deep learning models have shown considerable promise in assisting dermatologists with skin cancer diagnosis. This study proposes SBXception, a shallower and broader variant of the Xception network. It uses Xception as the base model for skin cancer classification and improves its performance by reducing the depth and expanding the breadth of the architecture. We used the HAM10000 dataset, which contains 10,015 dermatoscopic images of skin lesions classified into seven categories, for training and testing the proposed model. Fine-tuned on HAM10000, the new model reached an accuracy of 96.97% on a holdout test set. SBXception also achieved this with 54.27% fewer training parameters and reduced training time compared to the base model. Our findings show that making the Xception architecture shallower and broader can greatly improve its performance in skin cancer classification.
Affiliation(s)
- Abid Mehmood: Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia.
- Yonis Gulzar: Department of Management Information Systems, College of Business Administration, King Faisal University, Al Ahsa 31982, Saudi Arabia.
- Qazi Mudassar Ilyas: Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia.
- Abdoh Jabbari: College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia.
- Muneer Ahmad: Department of Human and Digital Interface, Woosong University, Daejeon 34606, Republic of Korea.
- Sajid Iqbal: Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Al Ahsa 31982, Saudi Arabia.
6
Sistaninejhad B, Rasi H, Nayeri P. A Review Paper about Deep Learning for Medical Image Analysis. Comput Math Methods Med 2023; 2023:7091301. [PMID: 37284172] [PMCID: PMC10241570] [DOI: 10.1155/2023/7091301]
Abstract
Medical imaging refers to the process of obtaining images of internal organs for therapeutic purposes such as discovering or studying diseases. The primary objective of medical image analysis is to improve the efficacy of clinical research and treatment options. Deep learning has revamped medical image analysis, yielding excellent results in image processing tasks such as registration, segmentation, feature extraction, and classification. The prime motivations for this are the availability of computational resources and the resurgence of deep convolutional neural networks. Deep learning techniques are good at observing hidden patterns in images and supporting clinicians in achieving diagnostic perfection. They have proven to be the most effective methods for organ segmentation, cancer detection, disease categorization, and computer-assisted diagnosis. Many deep learning approaches have been published to analyze medical images for various diagnostic purposes. In this paper, we review work exploiting current state-of-the-art deep learning approaches in medical image processing. We begin the survey by providing a synopsis of research works in medical imaging based on convolutional neural networks. Second, we discuss popular pretrained models and generative adversarial networks that aid in improving convolutional networks' performance. Finally, to ease direct evaluation, we compile the performance metrics of deep learning models focusing on COVID-19 detection and child bone age prediction.
Affiliation(s)
- Habib Rasi: Sahand University of Technology, East Azerbaijan, New City of Sahand, Iran.
- Parisa Nayeri: Khoy University of Medical Sciences, West Azerbaijan, Khoy, Iran.
7
Gao C, Killeen BD, Hu Y, Grupp RB, Taylor RH, Armand M, Unberath M. Synthetic data accelerates the development of generalizable learning-based algorithms for X-ray image analysis. Nat Mach Intell 2023; 5:294-308. [PMID: 38523605] [PMCID: PMC10959504] [DOI: 10.1038/s42256-023-00629-1]
Abstract
Artificial intelligence (AI) now enables automated interpretation of medical images. However, AI's potential use for interventional image analysis remains largely untapped. This is because the post hoc analysis of data collected during live procedures has fundamental and practical limitations, including ethical considerations, expense, scalability, data integrity and a lack of ground truth. Here we demonstrate that creating realistic simulated images from human models is a viable alternative and complement to large-scale in situ data collection. We show that training AI image analysis models on realistically synthesized data, combined with contemporary domain generalization techniques, results in machine learning models that on real data perform comparably to models trained on a precisely matched real data training set. We find that our model transfer paradigm for X-ray image analysis, which we refer to as SyntheX, can even outperform real-data-trained models due to the effectiveness of training on a larger dataset. SyntheX provides an opportunity to markedly accelerate the conception, design and evaluation of X-ray-based intelligent systems. In addition, SyntheX provides the opportunity to test novel instrumentation, design complementary surgical approaches, and envision novel techniques that improve outcomes, save time or mitigate human error, free from the ethical and practical considerations of live human data collection.
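A core ingredient of this transfer paradigm is pairing synthetic images with strong appearance randomization so that models trained on them generalize to real data. A minimal sketch of such randomization follows; the specific transforms and parameter ranges are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)

def domain_randomize(image):
    """Apply random appearance perturbations to a synthetic radiograph so a
    model trained on it does not overfit to simulator-specific appearance."""
    gain = rng.uniform(0.7, 1.3)              # random contrast scaling
    bias = rng.uniform(-0.1, 0.1)             # random brightness shift
    noise = rng.normal(0.0, 0.05, image.shape)  # sensor-like noise
    return np.clip(gain * image + bias + noise, 0.0, 1.0)

# A toy batch of simulated images with intensities in [0, 1]
synthetic_batch = rng.random((8, 32, 32))
augmented = np.stack([domain_randomize(im) for im in synthetic_batch])
print(augmented.shape)  # (8, 32, 32)
```

Each training epoch then sees a differently perturbed version of every synthetic image, which is what pushes the learned features toward content rather than simulator appearance.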
Affiliation(s)
- Cong Gao: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Benjamin D. Killeen: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Yicheng Hu: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Robert B. Grupp: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Russell H. Taylor: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
- Mehran Armand: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA; Department of Orthopaedic Surgery, Johns Hopkins Applied Physics Laboratory, Baltimore, MD, USA.
- Mathias Unberath: Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
8
Cui W, Bai L, Yang X, Liang J. A new contrastive learning framework for reducing the effect of hard negatives. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.110121]
9
Song X, Guo S, Han L, Wang L, Yang W, Wang G, Anil Baris C. Research on hair removal algorithm of dermatoscopic images based on maximum variance fuzzy clustering and optimization Criminisi algorithm. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103967]
10
Wu Y, Chen B, Zeng A, Pan D, Wang R, Zhao S. Skin Cancer Classification With Deep Learning: A Systematic Review. Front Oncol 2022; 12:893972. [PMID: 35912265] [PMCID: PMC9327733] [DOI: 10.3389/fonc.2022.893972]
Abstract
Skin cancer is one of the most dangerous diseases in the world. Correctly classifying skin lesions at an early stage could aid clinical decision-making by providing an accurate disease diagnosis, potentially increasing the chances of cure before the cancer spreads. However, achieving automatic skin cancer classification is difficult because the majority of skin disease images used for training are imbalanced and in short supply; meanwhile, the model's cross-domain adaptability and robustness are also critical challenges. Recently, many deep learning-based methods have been widely used in skin cancer classification to solve these issues and have achieved satisfactory results. Nonetheless, reviews covering the abovementioned frontier problems in skin cancer classification are still scarce. Therefore, in this article, we provide a comprehensive overview of the latest deep learning-based algorithms for skin cancer classification. We begin with an overview of three types of dermatological images, followed by a list of publicly available datasets relating to skin cancers. After that, we review successful applications of typical convolutional neural networks for skin cancer classification. As a highlight of this paper, we then summarize several frontier problems, including data imbalance, data limitation, domain adaptation, model robustness, and model efficiency, followed by corresponding solutions in the skin cancer classification task. Finally, by summarizing the different deep learning-based methods that address these frontier challenges, we conclude that the general development direction of these approaches is structured, lightweight, and multimodal. In addition, for readers' convenience, we have summarized our findings in figures and tables. Considering the growing popularity of deep learning, many issues remain to be overcome, as well as opportunities to pursue, in the future.
Affiliation(s)
- Yinhao Wu: School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China.
- Bin Chen: Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Zhejiang, China.
- An Zeng: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou, China.
- Dan Pan: School of Electronics and Information, Guangdong Polytechnic Normal University, Guangzhou, China.
- Ruixuan Wang: School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, China.
- Shen Zhao (corresponding author): School of Intelligent Systems Engineering, Sun Yat-Sen University, Guangzhou, China.
11
Ahmad B, Sun J, You Q, Palade V, Mao Z. Brain Tumor Classification Using a Combination of Variational Autoencoders and Generative Adversarial Networks. Biomedicines 2022; 10:223. [PMID: 35203433] [PMCID: PMC8869455] [DOI: 10.3390/biomedicines10020223]
Abstract
Brain tumors are a pernicious cancer with one of the lowest five-year survival rates. Neurologists often use magnetic resonance imaging (MRI) to diagnose the type of brain tumor. Automated computer-assisted tools can help them speed up the diagnosis process and reduce the burden on health care systems. Recent advances in deep learning for medical imaging have shown remarkable results, especially in the automatic and rapid diagnosis of various cancers. However, a large amount of data (images) is needed to train deep learning models to obtain good results, and large public datasets are rare in medicine. This paper proposes a framework based on unsupervised deep generative neural networks to address this limitation. We combine two generative models in the proposed framework: variational autoencoders (VAEs) and generative adversarial networks (GANs). We swap the encoder–decoder network after initially training it on the training set of available MR images. The output of this swapped network is a noise vector that carries information about the image manifold, and the cascaded generative adversarial network samples its input from this informative noise vector instead of from random Gaussian noise. The proposed method helps the GAN avoid mode collapse and generate realistic-looking brain tumor magnetic resonance images. These artificially generated images could alleviate the shortage of medical data to a reasonable extent and help deep learning models perform acceptably. We used ResNet50 as a classifier, and the artificially generated brain tumor images were used to augment the real, available images during classifier training. We compared the classification results with several existing studies and state-of-the-art machine learning models; our proposed methodology achieved noticeably better results. Using brain tumor images generated artificially by our proposed method, the average classification accuracy improved from 72.63% to 96.25%. For the most severe class of brain tumor, glioma, we achieved recall, specificity, precision, and F1-score values of 0.769, 0.837, 0.833, and 0.80, respectively. The proposed generative framework could be used to generate medical images in any domain, including PET (positron emission tomography) and MRI scans of various parts of the body, and the results show that it could be a useful clinical tool for medical experts.
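The key data flow in this abstract, feeding the GAN generator the encoder's informative latent codes instead of random Gaussian noise, can be sketched as follows. Fixed random linear maps stand in for the trained encoder and generator here, purely to show the wiring; all shapes and names are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for the trained VAE encoder and GAN generator: fixed
# random linear maps (the real models are trained neural networks).
W_enc = rng.normal(size=(64, 16))       # flattened 8x8 image -> latent
W_gen = rng.normal(size=(16, 64))       # latent -> generated image

real_images = rng.normal(size=(5, 64))  # five flattened toy MR slices

# Conventional GAN: generator input drawn from random Gaussian noise.
z_random = rng.normal(size=(5, 16))

# Proposed framework: the generator instead consumes the encoder's
# latent codes, which carry information about the image manifold.
z_informed = real_images @ W_enc
fake_images = z_informed @ W_gen
print(fake_images.shape)  # (5, 64)
```

Because `z_informed` is derived from real images rather than pure noise, the generator's inputs already lie near the data manifold, which is the mechanism the abstract credits for avoiding mode collapse.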
Affiliation(s)
- Bilal Ahmad: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
- Jun Sun (corresponding author): School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
- Qi You: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
- Vasile Palade: Centre for Computational Science and Mathematical Modelling, Coventry University, Coventry CV1 5FB, UK.
- Zhongjie Mao: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China.
12
Deep Learning and Machine Learning Techniques of Diagnosis Dermoscopy Images for Early Detection of Skin Diseases. Electronics 2021. [DOI: 10.3390/electronics10243158]
Abstract
With the increasing incidence of severe skin diseases, such as skin cancer, endoscopic medical imaging has become essential for revealing the internal and hidden tissues under the skin. Endoscopy devices provide diagnostic information that helps doctors make an accurate diagnosis. Nonetheless, most skin diseases have similar features, which makes it challenging for dermatologists to diagnose patients accurately. Therefore, machine and deep learning techniques can play a critical role in diagnosing dermatoscopy images and in the accurate early detection of skin diseases. In this study, systems for the early detection of skin lesions were developed, and the performance of machine learning and deep learning was evaluated on two datasets: the International Skin Imaging Collaboration (ISIC 2018) and Pedro Hispano (PH2). First, the proposed system was based on hybrid features extracted by three algorithms: local binary pattern (LBP), gray level co-occurrence matrix (GLCM), and discrete wavelet transform (DWT). These features were then concatenated into a feature vector and classified using artificial neural network (ANN) and feedforward neural network (FFNN) classifiers. The FFNN and ANN classifiers achieved superior results compared to the other methods: accuracy rates of 95.24% on the ISIC 2018 dataset and 97.91% on the PH2 dataset were achieved using the FFNN algorithm. Second, convolutional neural networks (CNNs), namely ResNet-50 and AlexNet, were applied to diagnose skin diseases using transfer learning. The ResNet-50 model fared better than AlexNet, reaching accuracy rates of 90% on the ISIC 2018 dataset and 95.8% on the PH2 dataset.
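The hybrid-feature idea above, concatenating several texture descriptors into one vector before classification, can be sketched with simplified GLCM and LBP computations. These are toy reimplementations run on random data, not the authors' code; a DWT sub-band energy term would be appended to the same vector in the same way:

```python
import numpy as np

def glcm_contrast(img, levels=4):
    """Contrast of a grey-level co-occurrence matrix (horizontal offset 1)."""
    q = (img * (levels - 1)).round().astype(int)   # quantise to `levels` greys
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                            # count co-occurring pairs
    glcm /= glcm.sum()                             # normalise to probabilities
    i, j = np.indices(glcm.shape)
    return ((i - j) ** 2 * glcm).sum()

def lbp_histogram(img):
    """Histogram of a simplified 4-neighbour local binary pattern."""
    c = img[1:-1, 1:-1]                            # interior pixels
    codes = ((img[:-2, 1:-1] >= c) * 1 + (img[2:, 1:-1] >= c) * 2 +
             (img[1:-1, :-2] >= c) * 4 + (img[1:-1, 2:] >= c) * 8)
    return np.bincount(codes.ravel(), minlength=16) / codes.size

rng = np.random.default_rng(0)
lesion = rng.random((16, 16))                      # toy grayscale patch

# Concatenate the descriptors into one hybrid feature vector, which
# would then be fed to an ANN/FFNN classifier.
features = np.concatenate([[glcm_contrast(lesion)], lbp_histogram(lesion)])
print(features.shape)  # (17,)
```

The appeal of this pipeline is that each descriptor captures a different texture cue (pairwise grey-level statistics vs. local binary micro-patterns), and the classifier sees them jointly.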