1. Islam S, Wishart GC, Walls J, Hall P, Seco de Herrera AG, Gan JQ, Raza H. Leveraging AI and patient metadata to develop a novel risk score for skin cancer detection. Sci Rep 2024; 14:20842. PMID: 39242690; PMCID: PMC11379912; DOI: 10.1038/s41598-024-71244-2.
Abstract
Melanoma of the skin is the 17th most common cancer worldwide. Early detection of suspicious skin lesions (melanoma) can increase 5-year survival rates by 20%. The 7-point checklist (7PCL) has been extensively used to suggest urgent referrals for patients with a possible melanoma. However, the 7PCL method only considers seven meta-features to calculate a risk score and is only relevant for patients with suspected melanoma. There are limited studies on the extensive use of patient metadata for the detection of all skin cancer subtypes. This study investigates artificial intelligence (AI) models that utilise patient metadata consisting of 23 attributes for suspicious skin lesion detection. We have identified a new set of most important risk factors, namely the "C4C risk factors", which apply not just to melanoma but to all types of skin cancer. The performance of the C4C risk factors for suspicious skin lesion detection is compared with that of the 7PCL and the Williams risk factors, which predict the lifetime risk of melanoma. Our proposed AI framework ensembles five machine learning models and identifies seven new skin cancer risk factors: lesion pink, lesion size, lesion colour, lesion inflamed, lesion shape, lesion age, and natural hair colour. Evaluated on the metadata of 53,601 skin lesions collected from different skin cancer diagnostic clinics across the UK, these factors achieved a sensitivity of 80.46 ± 2.50% and a specificity of 62.09 ± 1.90% in detecting suspicious skin lesions, significantly outperforming the 7PCL-based method (sensitivity 68.09 ± 2.10%, specificity 61.07 ± 0.90%) and the Williams risk factors (sensitivity 66.32 ± 1.90%, specificity 61.71 ± 0.60%). Furthermore, by weighting the seven new risk factors we derived a new risk score, the "C4C risk score", which alone achieved a sensitivity of 76.09 ± 1.20% and a specificity of 61.71 ± 0.50%, significantly outperforming the 7PCL-based risk score (sensitivity 73.91 ± 1.10%, specificity 49.49 ± 0.50%) and the Williams risk score (sensitivity 60.68 ± 1.30%, specificity 60.87 ± 0.80%). Finally, fusing the C4C risk factors with the 7PCL and Williams risk factors achieved the best performance, with a sensitivity of 85.24 ± 2.20% and a specificity of 61.12 ± 0.90%. We believe that fusing these newly found risk factors and the new risk score with image data will further boost AI model performance for suspicious skin lesion detection. Hence, the new set of skin cancer risk factors has the potential to be used to modify current skin cancer referral guidelines for all skin cancer subtypes, including melanoma.
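To make the modelling idea above concrete, the sketch below shows a soft-voting ensemble of five metadata classifiers together with a simple weighted risk score over the seven named C4C factors. The model choices, feature encoding, weights, and referral threshold are illustrative placeholders, not the values learned in the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import VotingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical metadata table: one row per lesion, binary/ordinal-encoded attributes.
C4C_FACTORS = ["lesion_pink", "lesion_size", "lesion_colour", "lesion_inflamed",
               "lesion_shape", "lesion_age", "natural_hair_colour"]

def build_ensemble():
    """Soft-voting ensemble of five metadata classifiers (illustrative choice of models)."""
    return VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=200)),
            ("gb", GradientBoostingClassifier()),
            ("svm", SVC(probability=True)),
            ("knn", KNeighborsClassifier()),
        ],
        voting="soft",
    )

def c4c_risk_score(row: pd.Series, weights: dict) -> float:
    """Weighted sum over the seven risk factors; the weights here are placeholders."""
    return sum(weights[f] * row[f] for f in C4C_FACTORS)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.integers(0, 2, size=(100, len(C4C_FACTORS))), columns=C4C_FACTORS)
    y = rng.integers(0, 2, size=100)                  # 1 = suspicious lesion (synthetic labels)
    model = build_ensemble().fit(X, y)
    weights = {f: 1.0 for f in C4C_FACTORS}           # placeholder weights
    scores = X.apply(c4c_risk_score, axis=1, weights=weights)
    flagged = scores >= 3.0                           # placeholder referral threshold
    print(model.predict_proba(X)[:3], flagged.head())
```

In practice the factor weights would be fitted (for example from model coefficients or feature importances) and the threshold chosen to trade sensitivity against specificity, as the abstract reports.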
Affiliation(s)
- Shafiqul Islam: School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
- Gordon C Wishart: Check4Cancer Ltd., Cambridge, UK; School of Medicine, Anglia Ruskin University, Chelmsford, UK
- Joseph Walls: Check4Cancer Ltd., Cambridge, UK; Fitzwilliam Hospital, Peterborough, UK
- Per Hall: Check4Cancer Ltd., Cambridge, UK; Addenbrookes Hospital NHS Foundation Trust, Cambridge, UK
- Alba G Seco de Herrera: School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
- John Q Gan: School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
- Haider Raza: School of Computer Science and Electronic Engineering, University of Essex, Colchester, UK
2. Kalidindi S. The Role of Artificial Intelligence in the Diagnosis of Melanoma. Cureus 2024; 16:e69818. PMID: 39308840; PMCID: PMC11415605; DOI: 10.7759/cureus.69818.
Abstract
The incidence of melanoma, the most aggressive form of skin cancer, continues to rise globally, particularly among fair-skinned populations (Fitzpatrick skin types I and II). Early detection is crucial for improving patient outcomes, and recent advances in artificial intelligence (AI) have shown promise in enhancing the accuracy and efficiency of melanoma diagnosis and management. This review examines the role of AI in skin lesion diagnostics, highlighting two main approaches: machine learning, particularly convolutional neural networks (CNNs), and expert systems. AI techniques have demonstrated high accuracy in classifying dermoscopic images, often matching or surpassing dermatologists' performance. Integrating AI into dermatology has improved tasks such as lesion classification, segmentation, and risk prediction, facilitating earlier and more accurate interventions. Despite these advances, challenges remain, including biases in training data, interpretability issues, and integration of AI into clinical workflows. Ensuring diverse data representation and maintaining high standards of image quality are essential for reliable AI performance. Future directions involve the development of more sophisticated models, such as vision-language and multimodal models, and federated learning to address data privacy and generalizability concerns. Continuous validation and ethical integration of AI into clinical practice are vital to realizing its full potential to improve melanoma diagnosis and patient care.
Affiliation(s)
- Sadhana Kalidindi: Clinical Research, Apollo Radiology International Academy, Hyderabad, IND
3. Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024; 178:108742. PMID: 38875908; DOI: 10.1016/j.compbiomed.2024.108742.
Abstract
In recent years, there has been a significant improvement in the accuracy of the classification of pigmented skin lesions using artificial intelligence algorithms. Intelligent analysis and classification systems are significantly superior to visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited due to a lack of generalizability and risks of potential misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as of promising areas for further research development. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. For the study, 10,589 scientific research and review articles were selected from electronic scientific publishers, of which 171 articles were included in the presented systematic review. All selected articles are grouped according to the proposed neural network algorithms, from classical machine learning to multimodal intelligent architectures, and are described in the corresponding sections of the manuscript. This research aims to explore automated skin cancer recognition systems, from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. In addition, as a result of the analysis, future directions of research, prospects, and potential for further development of automated neural network systems for classifying pigmented skin lesions are discussed.
Affiliation(s)
- U A Lyakhova: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia
- P A Lyakhov: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia; North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia
4. Agrawal R, Jurel P, Deshmukh R, Harwansh RK, Garg A, Kumar A, Singh S, Guru A, Kumar A, Kumarasamy V. Emerging Trends in the Treatment of Skin Disorders by Herbal Drugs: Traditional and Nanotechnological Approach. Pharmaceutics 2024; 16:869. PMID: 39065566; PMCID: PMC11279890; DOI: 10.3390/pharmaceutics16070869.
Abstract
Since the earliest days, people have been employing herbal treatments extensively around the world. The development of phytochemical and phytopharmacological sciences has made it possible to understand the chemical composition and biological properties of a number of medicinal plant products. Due to certain challenges like large molecular weight and low bioavailability, some components of herbal extracts are not utilized for therapeutic purposes. It has been suggested that herbal medicine and nanotechnology can be combined to enhance the benefits of plant extracts by lowering dosage requirements and adverse effects and increasing therapeutic activity. Using nanotechnology, the active ingredient can be delivered in an adequate concentration and transported to the targeted site of action. Conventional therapy does not fulfill these requirements. This review focuses on different skin diseases and nanotechnology-based herbal medicines that have been utilized to treat them.
Affiliation(s)
- Rutvi Agrawal: Rajiv Academy for Pharmacy, Mathura 281001, Uttar Pradesh, India
- Priyanka Jurel: Institute of Pharmaceutical Research, GLA University, Mathura 281406, Uttar Pradesh, India
- Rohitas Deshmukh: Institute of Pharmaceutical Research, GLA University, Mathura 281406, Uttar Pradesh, India
- Ranjit Kumar Harwansh: Institute of Pharmaceutical Research, GLA University, Mathura 281406, Uttar Pradesh, India
- Akash Garg: Rajiv Academy for Pharmacy, Mathura 281001, Uttar Pradesh, India
- Ashwini Kumar: Research and Development Cell, Department of Mechanical Engineering, School of Engineering and Technology, Manav Rachna International Institute of Research and Studies, Faridabad 121003, Haryana, India
- Sudarshan Singh: Faculty of Pharmacy, Chiang Mai University, Chiang Mai 50200, Thailand; Office of Research Administration, Chiang Mai University, Chiang Mai 50200, Thailand
- Ajay Guru: Department of Cariology, Saveetha Dental College and Hospital, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai 600077, Tamil Nadu, India
- Arun Kumar: School of Pharmacy, Sharda University, Greater Noida 201306, Uttar Pradesh, India
- Vinoth Kumarasamy: Department of Parasitology and Medical Entomology, Faculty of Medicine, Universiti Kebangsaan Malaysia, Jalan Yaacob Latif, Cheras, Kuala Lumpur 56000, Malaysia
5. Wen D, Soltan A, Trucco E, Matin RN. From data to diagnosis: skin cancer image datasets for artificial intelligence. Clin Exp Dermatol 2024; 49:675-685. PMID: 38549552; DOI: 10.1093/ced/llae112.
Abstract
Artificial intelligence (AI) solutions for skin cancer diagnosis continue to gain momentum, edging closer towards broad clinical use. These AI models, particularly deep-learning architectures, require large digital image datasets for development. This review provides an overview of the datasets used to develop AI algorithms and highlights the importance of dataset transparency for the evaluation of algorithm generalizability across varying populations and settings. Current challenges for curation of clinically valuable datasets are detailed, which include dataset shifts arising from demographic variations and differences in data collection methodologies, along with inconsistencies in labelling. These shifts can lead to differential algorithm performance, compromise of clinical utility, and the propagation of discriminatory biases when developed algorithms are implemented in mismatched populations. The limited representation of rare skin cancers and minoritized groups in existing datasets is highlighted, as it can further skew algorithm performance. Strategies to address these challenges are presented, which include improving transparency, representation and interoperability. Federated learning and generative methods, which may improve dataset size and diversity without compromising privacy, are also examined. Lastly, we discuss model-level techniques that may address biases entrained through the use of datasets derived from routine clinical care. As the role of AI in skin cancer diagnosis becomes more prominent, ensuring the robustness of underlying datasets is increasingly important.
Affiliation(s)
- David Wen: Department of Dermatology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK
- Andrew Soltan: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Oxford Cancer and Haematology Centre, Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Department of Oncology, University of Oxford, Oxford, UK
- Emanuele Trucco: VAMPIRE Project, Computing, School of Science and Engineering, University of Dundee, Dundee, UK
- Rubeta N Matin: Department of Dermatology, Oxford University Hospitals NHS Foundation Trust, Oxford, UK; Artificial Intelligence Working Party Group, British Association of Dermatologists, London, UK
6. Lin Q, Guo X, Feng B, Guo J, Ni S, Dong H. A novel multi-task learning network for skin lesion classification based on multi-modal clues and label-level fusion. Comput Biol Med 2024; 175:108549. PMID: 38704901; DOI: 10.1016/j.compbiomed.2024.108549.
Abstract
In this paper, we propose a multi-task learning (MTL) network based on the label-level fusion of metadata and hand-crafted features, using unsupervised clustering to generate new cluster labels as an optimization goal. We propose an MTL module (MTLM) that incorporates an attention mechanism to enable the model to learn more integrated, variable information. We also propose a dynamic strategy to adjust the loss weights of different tasks and trade off the contributions of multiple branches. Instead of feature-level fusion, we adopt label-level fusion and combine the results of the proposed MTLM with those of the image classification network to achieve better lesion prediction on multiple dermatological datasets. We verify the effectiveness of the proposed model by quantitative and qualitative measures. The MTL network using multi-modal clues and label-level fusion yields significant performance improvements for skin lesion classification.
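The dynamic loss-weighting idea mentioned above can be sketched as follows. This is a generic uncertainty-based weighting of two task losses in PyTorch, a common way to balance multi-task objectives, and is only a stand-in for the paper's actual strategy.

```python
import torch
import torch.nn as nn

class DynamicWeightedLoss(nn.Module):
    """Weights several task losses with learned log-variance terms so that the
    trade-off between branches is adjusted during training."""
    def __init__(self, num_tasks: int):
        super().__init__()
        # One log-variance per task, learned jointly with the network.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            precision = torch.exp(-self.log_vars[i])
            total = total + precision * loss + self.log_vars[i]
        return total

# Usage: combine an image-classification loss with a metadata/cluster-label loss.
criterion = DynamicWeightedLoss(num_tasks=2)
loss_img = torch.tensor(0.7, requires_grad=True)    # placeholder task losses
loss_meta = torch.tensor(1.2, requires_grad=True)
total_loss = criterion([loss_img, loss_meta])
total_loss.backward()
```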
Affiliation(s)
- Qifeng Lin: College of Software, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Xiaoxin Guo: Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, 2699 Qianjin Street, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Bo Feng: College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Juntong Guo: College of Software, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Shuang Ni: College of Software, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Hongliang Dong: College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
7. Kandhro IA, Manickam S, Fatima K, Uddin M, Malik U, Naz A, Dandoush A. Performance evaluation of E-VGG19 model: Enhancing real-time skin cancer detection and classification. Heliyon 2024; 10:e31488. PMID: 38826726; PMCID: PMC11141372; DOI: 10.1016/j.heliyon.2024.e31488.
Abstract
Skin cancer is a pervasive and potentially life-threatening disease. Early detection plays a crucial role in improving patient outcomes. Machine learning (ML) techniques, particularly when combined with pre-trained deep learning models, have shown promise in enhancing the accuracy of skin cancer detection. In this paper, we enhanced the pre-trained VGG19 model with max pooling and dense layers for the prediction of skin cancer (E-VGG19). We also explored other pre-trained models, namely Visual Geometry Group 19 (VGG19), Residual Network 152 version 2 (ResNet152V2), Inception-Residual Network version 2 (InceptionResNetV2), Dense Convolutional Network 201 (DenseNet201), Residual Network 50 (ResNet50), and Inception version 3 (InceptionV3). For training, a skin lesion dataset with malignant and benign cases is used. The models extract features and divide skin lesions into two categories: malignant and benign. The features are then fed into machine learning methods, including linear Support Vector Machine (SVM), k-Nearest Neighbours (KNN), Decision Tree (DT), and Logistic Regression (LR). Our results demonstrate that combining the E-VGG19 model with traditional classifiers significantly improves the overall classification accuracy for skin cancer detection and classification. We have also compared the performance of baseline classifiers and pre-trained models using standard metrics (recall, F1 score, precision, sensitivity, and accuracy). The experimental results provide valuable insights into the effectiveness of various models and classifiers for accurate and efficient skin cancer detection. This research contributes to the ongoing efforts to create automated technologies for detecting skin cancer that can help healthcare professionals and individuals identify potential skin cancer cases at an early stage, ultimately leading to more timely and effective treatments.
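A minimal sketch of the deep-feature-plus-classical-classifier pipeline described above, assuming PyTorch/torchvision for the VGG19 backbone and scikit-learn for the downstream model; the max-pooling head and the linear SVM are illustrative stand-ins for the exact E-VGG19 configuration, and the dataset variables in the commented usage are assumed to be prepared elsewhere.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.svm import SVC

# Load VGG19 pre-trained on ImageNet and keep only the convolutional backbone,
# followed by global max pooling to obtain one 512-dimensional vector per image.
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
backbone = nn.Sequential(vgg.features, nn.AdaptiveMaxPool2d(1), nn.Flatten())
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# The deep features then feed a classical classifier, e.g. a linear SVM:
# (X_train_imgs / y_train are assumed loaded elsewhere; benign = 0, malignant = 1)
# clf = SVC(kernel="linear").fit(extract_features(X_train_imgs), y_train)
# preds = clf.predict(extract_features(X_test_imgs))
```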
Affiliation(s)
- Irfan Ali Kandhro: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Selvakumar Manickam: National Advanced IPv6 Centre (NAv6), Universiti Sains Malaysia, Gelugor, Penang, 11800, Malaysia
- Kanwal Fatima: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Mueen Uddin: College of Computing and Information Technology, University of Doha for Science & Technology, 24449, Doha, Qatar
- Urooj Malik: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Anum Naz: Department of Computer Science, Sindh Madressatul Islam University, Karachi, 74000, Pakistan
- Abdulhalim Dandoush: College of Computing and Information Technology, University of Doha for Science & Technology, 24449, Doha, Qatar
8. Kumar A, Kumar M, Bhardwaj VP, Kumar S, Selvarajan S. A novel skin cancer detection model using modified finch deep CNN classifier model. Sci Rep 2024; 14:11235. PMID: 38755202; PMCID: PMC11099129; DOI: 10.1038/s41598-024-60954-2.
Abstract
Skin cancer is one of the most life-threatening diseases, caused by the abnormal growth of skin cells exposed to ultraviolet radiation. Early detection is crucial for reducing aberrant cell proliferation because the mortality rate is rising rapidly. Although multiple studies on skin cancer detection are available, challenges remain in improving accuracy, reducing computational time, and so on. In this research, a novel skin cancer detection method is presented using a modified falcon finch deep convolutional neural network classifier (modified falcon finch deep CNN) that detects the disease with higher efficiency. The modified falcon finch deep CNN classifier effectively analysed the information relevant to skin cancer, and errors were also minimized. The inclusion of falcon finch optimization in the deep CNN classifier is necessary for efficient parameter tuning; this tuning enhanced the robustness and boosted the convergence of the classifier, which detects skin cancer in less time. The modified falcon finch deep CNN classifier achieved accuracy, sensitivity, and specificity values of 93.59%, 92.14%, and 95.22% under k-fold validation, and 96.52%, 96.69%, and 96.54% under the training-percentage split, proving more effective than existing works in the literature.
Affiliation(s)
- Ashwani Kumar: Department of Computer Science and Engineering, School of Engineering and Technology, Sharda University, Greater Noida, India
- Mohit Kumar: Department of Information Technology, School of Engineering, MIT-ADT University, Pune, 412201, India
- Sunil Kumar: Department of CSE, Galgotias College of Engineering & Technology, 1, Knowledge Park-II, Greater Noida, 201310, India
9. Strzelecki M, Kociołek M, Strąkowska M, Kozłowski M, Grzybowski A, Szczypiński PM. Artificial intelligence in the detection of skin cancer: State of the art. Clin Dermatol 2024; 42:280-295. PMID: 38181888; DOI: 10.1016/j.clindermatol.2023.12.022.
Abstract
The incidence of melanoma is increasing rapidly. This cancer has a good prognosis if detected early. For this reason, various systems of skin lesion image analysis, which support imaging diagnostics of this neoplasm, are developing very dynamically. To detect and recognize neoplastic lesions, such systems use various artificial intelligence (AI) algorithms. This area of computer science has recently undergone dynamic development, producing several solutions that are effective tools supporting diagnosticians in many medical specialties. In this contribution, a number of applications of different classes of AI algorithms for the detection of skin melanoma are presented and evaluated. Both classic systems based on the analysis of dermatoscopic images and total-body systems, enabling the analysis of the patient's whole body to detect moles and pathologic changes, are discussed. Increasingly popular applications that allow the analysis of lesion images using smartphones are also described. A quantitative evaluation of the discussed systems is presented, with particular emphasis on the method of validation of the implemented algorithms. The advantages and limitations of AI in the analysis of lesion images are also discussed, and problems requiring a solution for more effective use of AI in dermatology are identified.
Affiliation(s)
- Michał Strzelecki: Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Marcin Kociołek: Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Maria Strąkowska: Institute of Electronics, Lodz University of Technology, Łódź, Poland
- Michał Kozłowski: Department of Mechatronics and Technical and IT Education, Faculty of Technical Science, University of Warmia and Mazury, Olsztyn, Poland
- Andrzej Grzybowski: Institute for Research in Ophthalmology, Foundation for Ophthalmology Development, Poznan, Poland
10. Mahmoud NM, Soliman AM. Early automated detection system for skin cancer diagnosis using artificial intelligent techniques. Sci Rep 2024; 14:9749. PMID: 38679633; PMCID: PMC11056372; DOI: 10.1038/s41598-024-59783-0.
Abstract
Skin cancer is one of the most widespread and dangerous cancers around the world. Early detection of skin cancer can reduce mortality. Traditional methods for skin cancer detection are painful, time-consuming, and expensive, and may cause the disease to spread. Dermoscopy is used for noninvasive diagnosis of skin cancer. Artificial intelligence (AI) plays a vital role in disease diagnosis, especially in the biomedical engineering field. Automated detection systems based on AI reduce the complications of the traditional methods and can improve the skin cancer diagnosis rate. In this paper, an automated early detection system for skin cancer dermoscopic images using artificial intelligence is presented. Adaptive snake (AS) and region growing (RG) algorithms are used for automated segmentation and compared with each other. The results show that AS is more accurate and efficient (accuracy = 96%) than the RG algorithm (accuracy = 90%). Artificial neural network (ANN) and support vector machine (SVM) algorithms are used for automated classification and compared with each other. The proposed system with the ANN algorithm shows high accuracy (94%), precision (96%), specificity (95.83%), sensitivity (recall) (92.30%), and F1-score (0.94). The proposed system is easy to use and time-efficient, enables patients to obtain early detection of skin cancer, and has high efficiency.
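To illustrate the region-growing segmentation mentioned above, here is a minimal seeded region-growing sketch on a grayscale lesion image using NumPy. The seed point and intensity tolerance are arbitrary, and the paper's adaptive snake variant is not reproduced.

```python
import numpy as np
from collections import deque

def region_grow(gray: np.ndarray, seed: tuple, tol: float = 0.12) -> np.ndarray:
    """Grow a binary mask from `seed`, adding 4-connected pixels whose intensity
    stays within `tol` of the running region mean. `gray` is a float image in [0, 1]."""
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    region_sum, region_count = float(gray[seed]), 1

    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(gray[ny, nx] - region_sum / region_count) <= tol:
                    mask[ny, nx] = True
                    region_sum += float(gray[ny, nx])
                    region_count += 1
                    queue.append((ny, nx))
    return mask

# Example on a synthetic image: a dark "lesion" disc on a brighter background.
yy, xx = np.mgrid[:128, :128]
image = np.where((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2, 0.2, 0.8)
lesion_mask = region_grow(image, seed=(64, 64))
print(lesion_mask.sum(), "pixels segmented")
```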
Affiliation(s)
- Nourelhoda M Mahmoud: Biomedical Engineering Department, Faculty of Engineering, Minia University, Minya, Egypt
- Ahmed M Soliman: Biomedical Engineering Department, Faculty of Engineering, Helwan University, Cairo, Egypt
11. Paramasivam GB, Ramasamy Rajammal R. Modelling a self-defined CNN for effectual classification of PCOS from ultrasound images. Technol Health Care 2024; 32:2893-2909. PMID: 39177615; DOI: 10.3233/thc-230935.
Abstract
BACKGROUND: Polycystic ovary syndrome (PCOS) is a medical condition that causes hormonal disorders in women in their childbearing years. The hormonal imbalance leads to a delayed or even absent menstrual cycle. Women with PCOS mainly suffer from excessive weight gain, facial hair growth, acne, hair loss, skin darkening, and irregular periods, leading to infertility in rare cases. Doctors usually examine ultrasound images and identify the affected ovary, but cannot easily decide manually whether it shows a normal cyst, PCOS, or a cancerous cyst.
OBJECTIVE: To detect high-risk PCOS early and guide treatment aimed at mitigating health hazards such as endometrial hyperplasia/cancer, infertility, pregnancy complications, and the long-term burden of chronic diseases such as cardiometabolic disorders linked with PCOS.
METHODS: The proposed self-defined convolutional neural network method (SD_CNN) is used to extract features, and machine learning models such as SVM, Random Forest, and Logistic Regression are used to classify PCOS images. Parameter tuning is done with fewer parameters in order to overcome over-fitting issues. The self-defined model predicts the occurrence of the cyst based on the analysed features and classifies the class labels effectively.
RESULTS: The Random Forest classifier was found to be the most reliable and accurate compared with the Support Vector Machine (SVM) and Logistic Regression (LR), with an accuracy of 96.43%.
CONCLUSION: The proposed model establishes a better trade-off compared with various other approaches and works effectively for PCOS prediction.
12. Nazari S, Garcia R. Automatic Skin Cancer Detection Using Clinical Images: A Comprehensive Review. Life (Basel) 2023; 13:2123. PMID: 38004263; PMCID: PMC10672549; DOI: 10.3390/life13112123.
Abstract
Skin cancer has become increasingly common over the past decade, with melanoma being the most aggressive type. Hence, early detection of skin cancer and melanoma is essential in dermatology. Computational methods can be a valuable tool for assisting dermatologists in identifying skin cancer. Most research in machine learning for skin cancer detection has focused on dermoscopy images due to the existence of larger image datasets. However, general practitioners typically do not have access to a dermoscope and must rely on naked-eye examinations or standard clinical images. By using standard, off-the-shelf cameras to detect high-risk moles, machine learning has also proven to be an effective tool. The objective of this paper is to provide a comprehensive review of image-processing techniques for skin cancer detection using clinical images. In this study, we evaluate 51 state-of-the-art articles that have used machine learning methods to detect skin cancer over the past decade, focusing on clinical datasets. Even though several studies have been conducted in this field, there are still few publicly available clinical datasets with sufficient data that can be used as a benchmark, especially when compared to the existing dermoscopy databases. In addition, we observed that the available artifact removal approaches are not quite adequate in some cases and may also have a negative impact on the models. Moreover, the majority of the reviewed articles are working with single-lesion images and do not consider typical mole patterns and temporal changes in the lesions of each patient.
13. Kommoss KS, Haenssle HA. Response to letter: Re: Observational study investigating the level of support from a convolutional neural network in face and scalp lesions deemed diagnostically 'unclear' by dermatologists. Eur J Cancer 2023; 195:113395. PMID: 39492291; DOI: 10.1016/j.ejca.2023.113395.
Affiliation(s)
- Holger A Haenssle: Department of Dermatology, University of Heidelberg, Heidelberg, Germany
14. Riaz S, Naeem A, Malik H, Naqvi RA, Loh WK. Federated and Transfer Learning Methods for the Classification of Melanoma and Nonmelanoma Skin Cancers: A Prospective Study. Sensors (Basel) 2023; 23:8457. PMID: 37896548; PMCID: PMC10611214; DOI: 10.3390/s23208457.
Abstract
Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is challenging and time-consuming due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer and enabling the treatment of patients at an early stage, this systematic literature review (SLR) presents various federated learning (FL) and transfer learning (TL) techniques that have been widely applied. This study explores FL and TL classifiers by evaluating them in terms of the performance metrics reported in research studies, which include true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). The review was assembled and systematized from studies published in reputable venues between January 2018 and July 2023; the literature was compiled through a systematic search of seven well-reputed databases, and a total of 86 articles were included in this SLR. This SLR contains the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research. Consequently, future directions and opportunities are established to help interested researchers advance the automated classification of melanoma and nonmelanoma skin cancers.
Affiliation(s)
- Shafia Riaz: Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Hassaan Malik: Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan; Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
15. Derekas P, Spyridonos P, Likas A, Zampeta A, Gaitanis G, Bassukas I. The Promise of Semantic Segmentation in Detecting Actinic Keratosis Using Clinical Photography in the Wild. Cancers (Basel) 2023; 15:4861. PMID: 37835555; PMCID: PMC10571759; DOI: 10.3390/cancers15194861.
Abstract
Actinic keratosis (AK) is a common precancerous skin condition that requires effective detection and treatment monitoring. To improve the monitoring of the AK burden in clinical settings with enhanced automation and precision, the present study evaluates the application of semantic segmentation based on the U-Net architecture (i.e., AKU-Net). AKU-Net employs transfer learning to compensate for the relatively small dataset of annotated images and integrates a recurrent process based on convLSTM to exploit contextual information and address the challenges related to the low contrast and ambiguous boundaries of AK-affected skin regions. We used an annotated dataset of 569 clinical photographs from 115 patients with actinic keratosis to train and evaluate the model. From each photograph, patches of 512 × 512 pixels were extracted using translated lesion boxes that encompassed lesions in different positions and captured different contexts of perilesional skin. In total, 16,488 translation-augmented crops were used for training the model, and 403 lesion-centre crops were used for testing. To demonstrate the improvements in AK detection, AKU-Net was compared with plain U-Net and U-Net++ architectures. The experimental results highlighted the effectiveness of AKU-Net, improving upon both automation and precision over existing approaches, paving the way for more effective and reliable evaluation of actinic keratosis in clinical settings.
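The translation-augmented patch extraction described above can be sketched as follows. The 512-pixel crop size and the idea of shifting the crop around the lesion box come from the abstract; the array layout, shift offsets, clamping, and zero-padding policy are assumptions.

```python
import numpy as np

PATCH = 512

def crop_patch(image: np.ndarray, cy: int, cx: int, size: int = PATCH) -> np.ndarray:
    """Crop a size x size patch centred on (cy, cx), zero-padding at the borders."""
    h, w = image.shape[:2]
    cy = int(np.clip(cy, 0, h - 1))
    cx = int(np.clip(cx, 0, w - 1))
    half = size // 2
    padded = np.pad(image, ((half, half), (half, half), (0, 0)))
    return padded[cy:cy + size, cx:cx + size]

def translation_augmented_patches(image, lesion_box, shifts=(-128, 0, 128)):
    """Yield patches whose centres are shifted around the lesion-box centre,
    so the lesion appears at different positions with different perilesional context."""
    y0, x0, y1, x1 = lesion_box
    cy, cx = (y0 + y1) // 2, (x0 + x1) // 2
    for dy in shifts:
        for dx in shifts:
            yield crop_patch(image, cy + dy, cx + dx)

# Example with a synthetic clinical photo and one annotated lesion box (y0, x0, y1, x1).
photo = np.zeros((1024, 1365, 3), dtype=np.uint8)
patches = list(translation_augmented_patches(photo, lesion_box=(300, 500, 420, 640)))
print(len(patches), patches[0].shape)   # 9 patches of shape (512, 512, 3)
```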
Affiliation(s)
- Panagiotis Derekas: Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Panagiota Spyridonos: Department of Medical Physics, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Aristidis Likas: Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece
- Athanasia Zampeta: Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Georgios Gaitanis: Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Ioannis Bassukas: Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
16. Josphineleela R, Raja Rao PBV, Shaikh A, Sudhakar K. A Multi-Stage Faster RCNN-Based iSPLInception for Skin Disease Classification Using Novel Optimization. J Digit Imaging 2023; 36:2210-2226. PMID: 37322306; PMCID: PMC10502001; DOI: 10.1007/s10278-023-00848-3.
Abstract
Nowadays, skin cancer is considered a serious disorder in which early identification and treatment are essential to ensure the stability of patients. Several existing skin cancer detection methods employ deep learning (DL) to perform skin disease classification. Convolutional neural networks (CNNs) can classify melanoma skin cancer images but suffer from overfitting. Therefore, to overcome this problem and to classify both benign and malignant tumors efficiently, the multi-stage faster RCNN-based iSPLInception (MFRCNN-iSPLI) method is proposed, and a test dataset is used to evaluate the proposed model's performance. Employing the faster RCNN directly for image classification may heavily increase computation time and network complexity, so the iSPLInception model is applied in the multi-stage classification. The iSPLInception model is formulated using the Inception-ResNet design, and the prairie dog optimization algorithm is utilized for candidate box deletion. Two skin disease datasets, the ISIC 2019 skin lesion image classification dataset and the HAM10000 dataset, are used for the experiments. The methods' accuracy, precision, recall, and F1 score values are calculated, and the results are compared with existing methods such as CNN, hybrid DL, Inception V3, and VGG19. With 95.82% accuracy, 96.85% precision, 96.52% recall, and an F1 score of 0.95, the output analysis of each measure verified the prediction and classification effectiveness of the method.
Affiliation(s)
- R Josphineleela: Department of Computer Science and Engineering, Panimalar Engineering College, Poonamallee, Chennai, Tamil Nadu, India
- P B V Raja Rao: Department of Computer Science and Engineering, Shri Vishnu Engineering College for Women (A), JNTUK, Bhimavaram, Kakinada, Andhra Pradesh, India
- Amir Shaikh: Department of Mechanical Engineering, Graphic Era Deemed to be University, Dehradun, India
- K Sudhakar: Department of Computer Science & Engineering, Madanapalle Institute of Technology & Science, Madanapalle, Andhra Pradesh, India
17. Bibi S, Khan MA, Shah JH, Damaševičius R, Alasiry A, Marzougui M, Alhaisoni M, Masood A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics (Basel) 2023; 13:3063. PMID: 37835807; PMCID: PMC10572512; DOI: 10.3390/diagnostics13193063.
Abstract
Cancer is one of the leading causes of illness and chronic disease worldwide. Skin cancer, particularly melanoma, is becoming a severe health problem due to its rising prevalence. The considerable death rate linked with melanoma requires early detection so that patients can receive immediate and successful treatment. Lesion detection and classification are challenging due to many forms of artifacts, such as hair, noise, irregularity of lesion shape and color, and irrelevant features and textures. In this work, we propose a deep-learning architecture for multiclass skin cancer classification and melanoma detection. The proposed architecture consists of four core steps: image preprocessing, feature extraction and fusion, feature selection, and classification. A novel contrast enhancement technique is proposed based on the image luminance information. After that, two pre-trained deep models, DarkNet-53 and DenseNet-201, are modified by adding a residual block at the end and trained through transfer learning; in the learning process, a genetic algorithm is applied to select hyperparameters. The resultant features are fused using a two-step approach named serial-harmonic mean. This step increases the accuracy of correct classification, but some irrelevant information is also observed. Therefore, an algorithm called marine predator optimization (MPA)-controlled Rényi entropy is developed to select the best features. The selected features are finally classified using machine learning classifiers. Two datasets, ISIC2018 and ISIC2019, have been selected for the experimental process, on which maximum accuracies of 85.4% and 98.80%, respectively, were obtained. To prove the effectiveness of the proposed methods, a detailed comparison with several recent techniques is conducted and shows that the proposed framework outperforms them.
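A rough sketch of the fusion-and-selection stage described above: serial (concatenation) fusion of two deep feature matrices followed by an entropy-based ranking that keeps the most informative columns. The harmonic-mean weighting and the MPA-controlled Rényi entropy criterion are only loosely approximated here with plain Shannon entropy; this is not the paper's exact algorithm.

```python
import numpy as np

def serial_fusion(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Serial fusion: column-wise concatenation of two feature matrices (n_samples x d)."""
    return np.concatenate([feat_a, feat_b], axis=1)

def entropy_rank(features: np.ndarray, keep: int, bins: int = 16) -> np.ndarray:
    """Rank feature columns by the Shannon entropy of their histogram and keep the top `keep`.
    Used here as a crude stand-in for the Rényi-entropy / MPA selection step."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        scores.append(-(p * np.log2(p)).sum())
    top = np.argsort(scores)[::-1][:keep]
    return features[:, top]

# Example with made-up DarkNet-53 / DenseNet-201 feature matrices.
rng = np.random.default_rng(1)
darknet_feats = rng.normal(size=(200, 1024))
densenet_feats = rng.normal(size=(200, 1920))
fused = serial_fusion(darknet_feats, densenet_feats)       # shape (200, 2944)
selected = entropy_rank(fused, keep=500)                   # shape (200, 500)
print(fused.shape, selected.shape)
```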
Affiliation(s)
- Sobia Bibi: Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Muhammad Attique Khan: Department of Computer Science and Mathematics, Lebanese American University, Beirut 1102-2801, Lebanon; Department of CS, HITEC University, Taxila 47080, Pakistan
- Jamal Hussain Shah: Department of CS, COMSATS University Islamabad, Wah Campus, Islamabad 45550, Pakistan
- Robertas Damaševičius: Center of Excellence Forest 4.0, Faculty of Informatics, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Areej Alasiry: College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Mehrez Marzougui: College of Computer Science, King Khalid University, Abha 61413, Saudi Arabia
- Majed Alhaisoni: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11564, Saudi Arabia
- Anum Masood: Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology (NTNU), 7034 Trondheim, Norway
18. Jin T, Pan S, Li X, Chen S. Metadata and Image Features Co-Aware Personalized Federated Learning for Smart Healthcare. IEEE J Biomed Health Inform 2023; 27:4110-4119. PMID: 37220032; DOI: 10.1109/jbhi.2023.3279096.
Abstract
Recently, artificial intelligence has been widely used in intelligent disease diagnosis and has achieved great success. However, most works rely mainly on the extraction of image features and ignore patients' clinical text information, which may fundamentally limit diagnostic accuracy. In this paper, we propose a metadata and image features co-aware personalized federated learning scheme for smart healthcare. Specifically, we construct an intelligent diagnosis model through which users can obtain fast and accurate diagnosis services. Meanwhile, a personalized federated learning scheme is designed to utilize the knowledge learned from other edge nodes with larger contributions and to customize high-quality personalized classification models for each edge node. Subsequently, a Naïve Bayes classifier is devised for classifying patient metadata, and the image and metadata diagnosis results are aggregated with different weights to improve the accuracy of intelligent diagnosis. Finally, the simulation results illustrate that, compared with existing methods, our proposed algorithm achieves better classification accuracy, reaching about 97.16% on the PAD-UFES-20 dataset.
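The weighted aggregation of image and metadata predictions can be illustrated with a small scikit-learn sketch: a Gaussian Naïve Bayes model on metadata, image-model probabilities assumed to come from a separately trained CNN, and a fixed fusion weight. The weight value, class count, and feature layout are placeholders, not the scheme's tuned settings.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fuse_predictions(p_image: np.ndarray, p_meta: np.ndarray, w_image: float = 0.7) -> np.ndarray:
    """Late (label-level) fusion: weighted average of two class-probability matrices."""
    return w_image * p_image + (1.0 - w_image) * p_meta

# Toy data: 6 metadata attributes per lesion, 3 diagnostic classes.
rng = np.random.default_rng(0)
meta_train, y_train = rng.normal(size=(300, 6)), rng.integers(0, 3, size=300)
meta_test = rng.normal(size=(50, 6))

nb = GaussianNB().fit(meta_train, y_train)
p_meta = nb.predict_proba(meta_test)                   # metadata branch (Naïve Bayes)
p_image = rng.dirichlet(np.ones(3), size=50)           # stand-in for the CNN's softmax outputs

final = fuse_predictions(p_image, p_meta, w_image=0.7)  # placeholder fusion weight
print(final.argmax(axis=1)[:10])
```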
19. Ain QU, Khan MA, Yaqoob MM, Khattak UF, Sajid Z, Khan MI, Al-Rasheed A. Privacy-Aware Collaborative Learning for Skin Cancer Prediction. Diagnostics (Basel) 2023; 13:2264. PMID: 37443658; DOI: 10.3390/diagnostics13132264.
Abstract
Cancer, including the highly dangerous melanoma, is marked by uncontrolled cell growth and the possibility of spreading to other parts of the body. However, the conventional approach to machine learning relies on centralized training data, posing challenges for data privacy in healthcare systems driven by artificial intelligence. The collection of data from diverse sensors leads to increased computing costs, while privacy restrictions make it challenging to employ traditional machine learning methods. Researchers are therefore confronted with the formidable task of developing a skin cancer prediction technique that takes privacy concerns into account while simultaneously improving accuracy. In this work, we propose a decentralized privacy-aware learning mechanism to accurately predict melanoma skin cancer, analysing federated learning on a skin cancer database. The results show that the proposed method achieved 92% accuracy, higher than that of the baseline algorithms.
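The decentralized training idea can be sketched with a minimal federated-averaging (FedAvg) loop in PyTorch. The tiny model, the two simulated clients, and the single aggregation round are illustrative and do not reflect the paper's actual setup.

```python
import copy
import torch
import torch.nn as nn

def local_update(model: nn.Module, data, epochs: int = 1, lr: float = 0.01):
    """Train a copy of the global model on one client's private data and return its weights."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss_fn(local(x).squeeze(1), y).backward()
            opt.step()
    return local.state_dict()

def fed_avg(state_dicts):
    """Average client weights parameter-by-parameter (equal client weighting)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
    return avg

# Two simulated clients, each holding private (features, label) batches.
torch.manual_seed(0)
global_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
clients = [[(torch.randn(32, 8), torch.randint(0, 2, (32,)).float())] for _ in range(2)]

client_states = [local_update(global_model, data) for data in clients]
global_model.load_state_dict(fed_avg(client_states))   # one communication round
```

Raw patient data never leaves the simulated clients; only model weights are exchanged and averaged, which is the privacy argument made in the abstract.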
Affiliation(s)
- Qurat Ul Ain: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Amir Khan: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Mateen Yaqoob: Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Umar Farooq Khattak: School of Information Technology, UNITAR International University, Kelana Jaya, Petaling Jaya 47301, Selangor, Malaysia
- Zohaib Sajid: Computer Science Department, Faculty of Computer Sciences, ILMA University, Karachi 75190, Pakistan
- Muhammad Ijaz Khan: Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
- Amal Al-Rasheed: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
20. Yan R, Qu L, Wei Q, Huang SC, Shen L, Rubin DL, Xing L, Zhou Y. Label-Efficient Self-Supervised Federated Learning for Tackling Data Heterogeneity in Medical Imaging. IEEE Trans Med Imaging 2023; 42:1932-1943. PMID: 37018314; PMCID: PMC10880587; DOI: 10.1109/tmi.2022.3233574.
Abstract
The collection and curation of large-scale medical datasets from multiple institutions is essential for training accurate deep learning models, but privacy concerns often hinder data sharing. Federated learning (FL) is a promising solution that enables privacy-preserving collaborative learning among different institutions, but it generally suffers from performance deterioration due to heterogeneous data distributions and a lack of quality labeled data. In this paper, we present a robust and label-efficient self-supervised FL framework for medical image analysis. Our method introduces a novel Transformer-based self-supervised pre-training paradigm that pre-trains models directly on decentralized target task datasets using masked image modeling, to facilitate more robust representation learning on heterogeneous data and effective knowledge transfer to downstream models. Extensive empirical results on simulated and real-world medical imaging non-IID federated datasets show that masked image modeling with Transformers significantly improves the robustness of models against various degrees of data heterogeneity. Notably, under severe data heterogeneity, our method, without relying on any additional pre-training data, achieves an improvement of 5.06%, 1.53% and 4.58% in test accuracy on retinal, dermatology and chest X-ray classification compared to the supervised baseline with ImageNet pre-training. In addition, we show that our federated self-supervised pre-training methods yield models that generalize better to out-of-distribution data and perform more effectively when fine-tuning with limited labeled data, compared to existing FL algorithms. The code is available at https://github.com/rui-yan/SSL-FL.
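The masked image modeling used for self-supervised pre-training can be illustrated with a short patch-masking sketch in PyTorch; the 16-pixel patches, 75% mask ratio, and pixel-space zeroing are generic ViT-style assumptions rather than the exact procedure of this paper.

```python
import torch

def random_patch_mask(images: torch.Tensor, patch: int = 16, mask_ratio: float = 0.75):
    """Split images into non-overlapping patches and zero out a random subset.
    Returns the masked images and the boolean mask (True = hidden patch) that a
    masked-image-modeling model would be trained to reconstruct."""
    b, c, h, w = images.shape
    gh, gw = h // patch, w // patch
    num_patches = gh * gw
    num_masked = int(mask_ratio * num_patches)

    # Pick which patches to hide, independently per image.
    scores = torch.rand(b, num_patches)
    masked_idx = scores.argsort(dim=1)[:, :num_masked]
    mask = torch.zeros(b, num_patches, dtype=torch.bool)
    mask.scatter_(1, masked_idx, True)

    # Zero the masked patches in pixel space (a simple stand-in for token masking).
    patches = images.unfold(2, patch, patch).unfold(3, patch, patch)   # B, C, gh, gw, p, p
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(b, num_patches, -1).clone()
    patches[mask] = 0.0
    restored = patches.reshape(b, gh, gw, c, patch, patch).permute(0, 3, 1, 4, 2, 5)
    return restored.reshape(b, c, h, w), mask

x = torch.randn(4, 3, 224, 224)              # a mini-batch of dermatology-sized images
x_masked, mask = random_patch_mask(x)
print(x_masked.shape, mask.float().mean().item())   # roughly 0.75 of patches hidden
```

In the federated setting described above, each client would run this masking locally during pre-training, so no labels or raw images need to be shared.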
21. Grossarth S, Mosley D, Madden C, Ike J, Smith I, Huo Y, Wheless L. Recent Advances in Melanoma Diagnosis and Prognosis Using Machine Learning Methods. Curr Oncol Rep 2023; 25:635-645. PMID: 37000340; PMCID: PMC10339689; DOI: 10.1007/s11912-023-01407-3.
Abstract
PURPOSE OF REVIEW: To summarize the current role and state of artificial intelligence and machine learning in the diagnosis and management of melanoma.
RECENT FINDINGS: Deep learning algorithms can identify melanoma from clinical, dermoscopic, and whole-slide pathology images with increasing accuracy. Efforts to provide more granular annotation of datasets and to identify new predictors are ongoing. There have been many incremental advances in both melanoma diagnostics and prognostic tools using artificial intelligence and machine learning; higher-quality input data will further improve these models' capabilities.
Affiliation(s)
- Sarah Grossarth: Quillen College of Medicine, East Tennessee State University, Johnson City, TN, USA
- Christopher Madden: Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA; State University of New York Downstate College of Medicine, Brooklyn, NY, USA
- Jacqueline Ike: Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA; Meharry Medical College, Nashville, TN, USA
- Isabelle Smith: Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University, Nashville, TN, USA
- Yuankai Huo: Department of Computer Science and Electrical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Lee Wheless: Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Medicine, Division of Epidemiology, Vanderbilt University Medical Center, Nashville, TN, USA; Tennessee Valley Healthcare System VA Medical Center, Nashville, TN, USA
22. A Deep Learning Fusion Approach to Diagnosis the Polycystic Ovary Syndrome (PCOS). Appl Comput Intell Soft Comput 2023. DOI: 10.1155/2023/9686697.
Abstract
One of the leading causes of female infertility is PCOS, a hormonal disorder affecting women of childbearing age. Common symptoms of PCOS include increased acne, irregular periods, increased body hair, and excess weight. Early diagnosis of PCOS is essential to manage the symptoms and reduce the associated health risks. The diagnosis is based on the Rotterdam criteria, which include a high level of androgen hormones, ovulation failure, and polycystic ovarian morphology on the ultrasound image (PCOM). At present, doctors and radiologists perform PCOM detection manually on ovary ultrasound by counting the number of follicles and determining their volume in the ovaries, which is one of the challenging PCOS diagnostic criteria. Moreover, physicians require additional tests and checks for biochemical/clinical signs, in addition to the patient's symptoms, in order to decide on the PCOS diagnosis, and clinicians do not utilize a single diagnostic test or specific method to examine patients. This paper introduces a dataset that includes ovary ultrasound images together with clinical data for patients classified as PCOS and non-PCOS. We then propose a deep learning model that diagnoses PCOM from the ultrasound image, achieving 84.81% accuracy with the Inception model. Finally, we propose a fusion model that combines the ultrasound image with clinical data to diagnose whether a patient has PCOS; the best model achieved 82.46% accuracy by extracting image features with the MobileNet architecture and combining them with clinical features.
23. Spyridonos P, Gaitanis G, Likas A, Bassukas ID. A convolutional neural network based system for detection of actinic keratosis in clinical images of cutaneous field cancerization. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104059.
24. Yin W, Huang J, Chen J, Ji Y. A study on skin tumor classification based on dense convolutional networks with fused metadata. Front Oncol 2022; 12:989894. PMID: 36601473; PMCID: PMC9806866; DOI: 10.3389/fonc.2022.989894.
Abstract
Skin cancer is among the most common cancers in humans. Statistics show that competent dermatologists have a diagnostic accuracy rate of less than 80%, while inexperienced dermatologists have a diagnostic accuracy rate of less than 60%. The high rate of misdiagnosis causes many patients to miss the most effective treatment window, putting their safety at risk. However, the majority of current research on neural network-based skin cancer diagnosis remains at the image level, without patient clinical data. A deep convolutional network incorporating clinical patient metadata is presented to realize a skin cancer classification model and further increase the accuracy of skin cancer diagnosis. There are three basic steps in the approach. First, the high-level features implied by the image (edge, colour, texture, and shape features, etc.) were retrieved using a DenseNet-169 model pre-trained on the ImageNet dataset. Second, the MetaNet module is introduced, which uses metadata to control a certain portion of each feature channel in the DenseNet-169 network in order to produce weighted features. A MetaBlock module was added at the same time to enhance the features retrieved from photos using metadata, choosing the most pertinent characteristics in accordance with the metadata. Finally, the features of the MetaNet and MetaBlock modules were combined to create the MD-Net module, which was then used as input to the classifier to obtain the classification results for skin cancers. The proposed methodology was assessed on the PAD-UFES-20 and ISIC 2019 datasets. According to the experimental data, the DenseNet-169 network combined with this module achieves 81.4% balanced accuracy, and its diagnostic accuracy is improved by between 8% and 15.6% compared with earlier efforts. Additionally, it addresses the previously poor classification of actinic keratosis and skin fibromas.
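The metadata-controlled channel weighting described here can be sketched as a small gating module in PyTorch: an MLP maps the metadata vector to per-channel weights that scale the CNN feature maps. The dimensions and the sigmoid gate are illustrative assumptions, not the exact MetaNet/MetaBlock design.

```python
import torch
import torch.nn as nn

class MetadataGate(nn.Module):
    """Scale CNN feature channels with weights predicted from patient metadata."""
    def __init__(self, meta_dim: int, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(meta_dim, channels // 4),
            nn.ReLU(),
            nn.Linear(channels // 4, channels),
            nn.Sigmoid(),                 # per-channel weights in (0, 1)
        )

    def forward(self, feat_maps: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        # feat_maps: (B, C, H, W) from e.g. a DenseNet backbone; metadata: (B, meta_dim)
        weights = self.mlp(metadata).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return feat_maps * weights

# Example: 1664-channel DenseNet-169 feature maps gated by 10 metadata attributes.
gate = MetadataGate(meta_dim=10, channels=1664)
feats = torch.randn(4, 1664, 7, 7)
meta = torch.randn(4, 10)
print(gate(feats, meta).shape)   # torch.Size([4, 1664, 7, 7])
```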
Affiliation(s)
- Wenjun Yin
- School of Information and Communication, Guilin University Of Electronic Technology, Guilin, China
| | - Jianhua Huang
- School of Information and Communication, Guilin University Of Electronic Technology, Guilin, China,*Correspondence: Jianhua Huang, ; Jianlin Chen,
| | - Jianlin Chen
- Reproductive Endocrinology Clinic, Second Xiangya Hospital of Central South University, Changsha, China,*Correspondence: Jianhua Huang, ; Jianlin Chen,
| | - Yuanfa Ji
- School of Information and Communication, Guilin University Of Electronic Technology, Guilin, China
| |
25
Kani MAJM, Parvathy MS, Banu SM, Kareem MSA. Classification of skin lesion images using modified Inception V3 model with transfer learning and augmentation techniques. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-221386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
In this article, a methodological approach to classifying malignant melanoma in dermoscopy images is presented. Early treatment of skin cancer increases the patient's survival rate. Dermatologists decide how to classify melanoma skin cancer in its early stages so that the patient can be treated appropriately, but they need considerable time to diagnose affected skin lesions because of the high resemblance between melanoma and benign lesions. In this paper, a deep learning-based Computer-Aided Diagnosis (CAD) system is developed to classify skin lesions accurately with a high classification rate. A new architecture is framed to classify skin lesion diseases using the Inception v3 model as the baseline. The features extracted from the Inception network are flattened and passed to a DenseNet block to extract finer-grained lesion features. The International Skin Imaging Collaboration (ISIC) archive dataset contains 3307 dermoscopy images, including both benign and malignant lesions. The model is trained on this dataset with a learning rate of 0.0001 and a batch size of 64, using various optimizers. The performance of the proposed model is evaluated using the confusion matrix and ROC-AUC curves. The experimental results show that the proposed model attains the highest accuracy of 91.29%, compared with other state-of-the-art methods such as ResNet, VGG-16, DenseNet, and MobileNet. The classification accuracy, sensitivity, specificity, testing accuracy, and AUC values were 90.33%, 82.87%, 91.29%, 87.12%, and 87.40%, respectively.
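A transfer-learning setup of this general kind, an Inception v3 backbone with a replaced classification head and standard augmentation, might look like the sketch below. The augmentation choices, the two-class head, and the frozen-backbone strategy are assumptions for illustration; the paper's exact DenseNet-block extension is not reproduced, and in practice ImageNet weights would be loaded rather than `weights=None`.

```python
import torch.nn as nn
from torchvision import models, transforms

# Illustrative augmentation pipeline for dermoscopy images.
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(299),   # Inception v3 expects 299x299 inputs
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Inception v3 backbone; weights=None keeps the sketch self-contained/offline.
model = models.inception_v3(weights=None, aux_logits=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs malignant head (assumed)

# Freeze the backbone and fine-tune only the new head (one common strategy).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc")
```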
Affiliation(s)
- Mohamed Ali Jinna Mathina Kani
- Computer Science and Engineering, Sethu Institute of Technology Affiliated to Anna University, Pulloor, Kariyapatti, Tamilnadu, India
| | - Meenakshi Sundaram Parvathy
- Computer Science and Engineering, Sethu Institute of Technology Affiliated to Anna University, Pulloor, Kariyapatti, Tamilnadu, India
26
Alwakid G, Gouda W, Humayun M, Sama NU. Melanoma Detection Using Deep Learning-Based Classifications. Healthcare (Basel) 2022; 10:healthcare10122481. [PMID: 36554004 PMCID: PMC9777935 DOI: 10.3390/healthcare10122481] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/02/2022] [Accepted: 12/05/2022] [Indexed: 12/13/2022] Open
Abstract
One of the most prevalent cancers worldwide is skin cancer, and it is becoming more common as the population ages. As a general rule, the earlier skin cancer can be diagnosed, the better. Following the success of deep learning (DL) algorithms in other industries, there has been a substantial increase in automated diagnosis systems in healthcare. This work proposes DL as a method for extracting a lesion zone with precision. First, the image is enhanced using Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN) to improve its quality. Then, Regions of Interest (ROI) are segmented from the full image. We employed data augmentation to rectify the class imbalance. The image is then analyzed with a convolutional neural network (CNN) and a modified version of ResNet-50 to classify skin lesions. This analysis used an imbalanced sample of seven kinds of skin lesions from the HAM10000 dataset. With an accuracy of 0.86, a precision of 0.84, a recall of 0.86, and an F-score of 0.86, the proposed CNN-based model outperformed the earlier study's results by a significant margin. The study culminates in an improved automated method for diagnosing skin cancer that benefits medical professionals and patients.
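The HAM10000 classes are heavily imbalanced, which the study above addresses with data augmentation. As a related but simpler illustration (not the authors' ESRGAN-based pipeline), class disparity can also be countered by oversampling rare classes with a weighted sampler; the class counts below are invented for the example.

```python
import torch
from torch.utils.data import WeightedRandomSampler

# Hypothetical per-sample labels for a 7-class, imbalanced lesion dataset.
labels = torch.tensor([0] * 500 + [1] * 60 + [2] * 30 + [3] * 20 +
                      [4] * 15 + [5] * 10 + [6] * 5)

# Weight each sample by the inverse frequency of its class.
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]

# Draw a balanced stream of indices for a DataLoader (replacement allows oversampling).
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
balanced_indices = list(sampler)[:16]
print(balanced_indices)  # minority-class indices now appear far more often
```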
Affiliation(s)
- Ghadah Alwakid
- Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
- Correspondence:
| | - Walaa Gouda
- Department of Computer Engineering and Networks, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
| | - Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Al Jouf, Saudi Arabia
| | - Najm Us Sama
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Sarawak, Malaysia
| |
27
Ahmadi Mehr R, Ameri A. Skin Cancer Detection Based on Deep Learning. J Biomed Phys Eng 2022; 12:559-568. [PMID: 36569567 PMCID: PMC9759648 DOI: 10.31661/jbpe.v0i0.2207-1517] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Accepted: 10/28/2022] [Indexed: 06/17/2023]
Abstract
Background The conventional procedure for detecting skin-related disease is visual inspection by a dermatologist or a primary care clinician using a dermatoscope. Patients with suspected early signs of skin cancer are referred for biopsy and histopathological examination to ensure the correct diagnosis and the best treatment. Recent advances in deep convolutional neural networks (CNNs) have achieved excellent performance in automated skin cancer classification, with accuracy similar to that of dermatologists. However, such improvements have yet to produce a clinically trusted and widely adopted system for skin cancer detection. Objective This study aimed to propose a viable deep learning (DL)-based method for the detection of skin cancer in lesion images, to help physicians in diagnosis. Material and Methods In this analytical study, a novel DL-based model was proposed in which, in addition to the lesion image, the patient's data, including the anatomical site of the lesion, age, and gender, were used as model inputs to predict the type of lesion. An Inception-ResNet-v2 CNN pretrained for object recognition was employed in the proposed model. Results The proposed method achieved promising performance for various skin conditions, and using the patient's metadata in addition to the lesion image improved the classification accuracy by at least 5% in all cases investigated. On a dataset of 57,536 dermoscopic images, the proposed approach achieved an accuracy of 89.3%±1.1% in discriminating four major skin conditions and 94.5%±0.9% in classifying benign vs. malignant lesions. Conclusion The promising results highlight the efficacy of the proposed approach and indicate that including the patient's metadata with the lesion image can enhance skin cancer detection performance.
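Before metadata such as anatomical site, age, and gender can be fed into a network alongside image features, it has to be encoded numerically. A common recipe (an illustration, not this paper's exact preprocessing) one-hot encodes the categorical fields and scales age; the column names and categories below are assumptions.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy patient metadata; column names and category values are assumptions.
meta = pd.DataFrame({
    "age": [35, 62, 48],
    "sex": ["male", "female", "female"],
    "site": ["back", "face", "lower limb"],
})

encoder = ColumnTransformer([
    ("age", StandardScaler(), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["sex", "site"]),
])

# The encoded matrix can then be concatenated with CNN image features downstream.
encoded = encoder.fit_transform(meta)
print(encoded.shape)  # (3, 1 + number_of_one_hot_columns)
```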
Affiliation(s)
- Reza Ahmadi Mehr
- MSc, Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Ali Ameri
- PhD, Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
28
Attention Cost-Sensitive Deep Learning-Based Approach for Skin Cancer Detection and Classification. Cancers (Basel) 2022; 14:cancers14235872. [PMID: 36497355 PMCID: PMC9735681 DOI: 10.3390/cancers14235872] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2022] [Revised: 11/21/2022] [Accepted: 11/23/2022] [Indexed: 12/03/2022] Open
Abstract
Deep learning-based models have been employed for the detection and classification of skin diseases through medical imaging. However, such models are not effective for detecting and classifying rare skin diseases, mainly because rare skin diseases have very few data samples. The resulting datasets are highly imbalanced, and the bias this introduces into learning inflates the apparent performance of most models. Deep learning models are also not effective at detecting the tiny affected portions of skin disease within the overall image. This paper presents an attention- and cost-sensitive deep learning-based feature-fusion ensemble meta-classifier approach for skin cancer detection and classification. Cost weights are included in the deep learning models to handle the data imbalance during training. To learn the optimal features from the tiny affected portions of the skin image samples, attention is integrated into the deep learning models. The features from the fine-tuned models are extracted, and their dimensionality is further reduced using kernel principal component analysis (KPCA). The reduced features of the fine-tuned deep learning models are fused and passed to ensemble meta-classifiers for skin disease detection and classification. The ensemble meta-classifier is a two-stage model: the first stage predicts the presence of skin disease, and the second stage performs classification using the first-stage predictions as features. A detailed analysis of the proposed approach is presented for both skin disease detection and classification. The proposed approach achieved an accuracy of 99% on skin disease detection and 99% on skin disease classification. In all experimental settings, it outperformed existing methods, with an improvement of 4% accuracy for skin disease detection and 9% for skin disease classification. The proposed approach can be used as a computer-aided diagnosis (CAD) tool for the early detection and classification of skin cancer in healthcare and medical environments. The tool can accurately detect skin diseases and classify them into their respective disease families.
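The two-stage idea, reducing fused deep features with kernel PCA and then letting a meta-classifier combine base predictions, can be sketched with scikit-learn. Here synthetic features stand in for the fused CNN features, and the base and meta learners are assumptions rather than the paper's exact models.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for fused deep features extracted from skin images.
X, y = make_classification(n_samples=600, n_features=256, n_informative=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    KernelPCA(n_components=50, kernel="rbf"),   # dimensionality reduction step
    StackingClassifier(                         # base learners + meta-classifier
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svc", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000),
    ),
)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```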
29
Dubuc A, Zitouni A, Thomas C, Kémoun P, Cousty S, Monsarrat P, Laurencin S. Improvement of Mucosal Lesion Diagnosis with Machine Learning Based on Medical and Semiological Data: An Observational Study. J Clin Med 2022; 11:jcm11216596. [PMID: 36362822 PMCID: PMC9654969 DOI: 10.3390/jcm11216596] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Revised: 11/01/2022] [Accepted: 11/02/2022] [Indexed: 11/09/2022] Open
Abstract
Although the use of artificial intelligence in skin disease diagnosis is booming, its application in oral pathology remains to be developed. Early diagnosis, and therefore early management, remains a key point in the successful management of oral mucosa cancers. The objective was to develop and evaluate a machine learning algorithm that predicts the diagnosis of oral mucosa lesions. This cohort study included patients followed between January 2015 and December 2020 in the oral mucosal pathology consultation of the Toulouse University Hospital. Photographs and demographic and medical data were collected from each patient to constitute clinical cases. A machine learning model was then developed and optimized and compared to 5 models classically used in the field. A total of 299 patients representing 1242 records of oral mucosa lesions were used to train and evaluate the machine learning models. Our model reached a mean accuracy of 0.84 for diagnostic prediction, with specificity and sensitivity ranging from 0.89 to 1.00 and from 0.72 to 0.92, respectively. The other models proved less efficient at this task. These results suggest the utility of machine learning-based tools for diagnosing oral mucosal lesions with high accuracy. Moreover, they confirm that considering clinical data and medical history, in addition to the lesion itself, appears to play an important role.
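Benchmarking one model against several classical learners on clinical/tabular data, as done in this study, usually reduces to a cross-validation loop. A generic sketch on synthetic data is shown below; the candidate models and settings are assumptions, not the study's exact configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for encoded clinical/semiological records.
X, y = make_classification(n_samples=1000, n_features=30, n_classes=2, random_state=42)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "svc": SVC(),
    "knn": KNeighborsClassifier(),
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(random_state=42),
    "grad_boost": GradientBoostingClassifier(random_state=42),
}

for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    print(f"{name:>13}: {scores.mean():.3f} +/- {scores.std():.3f}")
```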
Affiliation(s)
- Antoine Dubuc
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- Center for Epidemiology and Research in POPulation Health (CERPOP), UMR 1295, Paul Sabatier University, 31062 Toulouse, France
| | - Anissa Zitouni
- Oral Surgery and Oral Medicine Department, CHU Limoges, 87000 Limoges, France
| | - Charlotte Thomas
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- InCOMM, I2MC, UMR 1297, Paul Sabatier University, 31062 Toulouse, France
| | - Philippe Kémoun
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- RESTORE Research Center, Université de Toulouse, INSERM, CNRS, EFS, ENVT, Université P. Sabatier, CHU de Toulouse, 31300 Toulouse, France
| | - Sarah Cousty
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- LAPLACE, UMR 5213 CNRS, Paul Sabatier University, 31062 Toulouse, France
| | - Paul Monsarrat
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- RESTORE Research Center, Université de Toulouse, INSERM, CNRS, EFS, ENVT, Université P. Sabatier, CHU de Toulouse, 31300 Toulouse, France
- Artificial and Natural Intelligence Toulouse Institute ANITI, 31013 Toulouse, France
| | - Sara Laurencin
- School of Dental Medicine and CHU de Toulouse—Toulouse Institute of Oral Medicine and Science, 31062 Toulouse, France
- Center for Epidemiology and Research in POPulation Health (CERPOP), UMR 1295, Paul Sabatier University, 31062 Toulouse, France
- Correspondence:
| |
30
Khristoforova Y, Bratchenko I, Bratchenko L, Moryatov A, Kozlov S, Kaganov O, Zakharov V. Combination of Optical Biopsy with Patient Data for Improvement of Skin Tumor Identification. Diagnostics (Basel) 2022; 12:diagnostics12102503. [PMID: 36292192 PMCID: PMC9600416 DOI: 10.3390/diagnostics12102503] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 10/11/2022] [Accepted: 10/12/2022] [Indexed: 11/29/2022] Open
Abstract
In this study, patient data were combined with Raman and autofluorescence spectral parameters for more accurate identification of skin tumors. The spectral and patient data of skin tumors were classified by projection on latent structures and discriminant analysis. The importance of patient risk factors was determined from the statistical improvement in ROC AUCs when spectral parameters were combined with risk factors. Gender, age and tumor localization were found to be significant for the classification of malignant versus benign neoplasms, improving the ROC AUC from 0.610 to 0.818 (p < 0.05). For distinguishing melanoma from pigmented skin tumors, the same factors significantly improved the ROC AUC from 0.709 to 0.810 (p < 0.05) when analyzed together with the spectral data, but not significantly (p > 0.05) when analyzed individually. For the classification of melanoma versus seborrheic keratosis, no statistical improvement in ROC AUC was observed when the patient data were added to the spectral data. In all three classification models, additional risk factors such as occupational hazards, family history, sun exposure, size, and personal history did not statistically improve the ROC AUCs. In summary, combined analysis of spectral and patient data can be significant for certain diagnostic tasks: the patient data capture the distribution of skin tumor incidence across demographic groups, whereas tumors within each group are distinguished by their spectral differences.
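The core evaluation question here, whether adding patient factors to spectral features raises the ROC AUC, can be reproduced in outline with scikit-learn. The spectral features and risk factors below are synthetic placeholders, and logistic regression is used instead of the study's projection on latent structures.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
spectral = rng.normal(size=(n, 40))   # stand-in for Raman/autofluorescence parameters
risk = rng.normal(size=(n, 3))        # stand-in for gender, age, tumor localization
y = (spectral[:, 0] + 0.8 * risk[:, 0] + rng.normal(scale=1.5, size=n) > 0).astype(int)

def auc_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("spectra only      :", round(auc_for(spectral), 3))
print("spectra + factors :", round(auc_for(np.hstack([spectral, risk])), 3))
```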
Affiliation(s)
- Yulia Khristoforova
- Laser and Biotechnical Systems Department, Samara National Research University, 34 Moskovskoe Shosse, 443086 Samara, Russia
- Correspondence:
| | - Ivan Bratchenko
- Laser and Biotechnical Systems Department, Samara National Research University, 34 Moskovskoe Shosse, 443086 Samara, Russia
| | - Lyudmila Bratchenko
- Laser and Biotechnical Systems Department, Samara National Research University, 34 Moskovskoe Shosse, 443086 Samara, Russia
| | - Alexander Moryatov
- Department of Oncology, Samara State Medical University, 89 Chapaevskaya Str., 443099 Samara, Russia
| | - Sergey Kozlov
- Department of Oncology, Samara State Medical University, 89 Chapaevskaya Str., 443099 Samara, Russia
| | - Oleg Kaganov
- Department of Oncology, Samara State Medical University, 89 Chapaevskaya Str., 443099 Samara, Russia
| | - Valery Zakharov
- Laser and Biotechnical Systems Department, Samara National Research University, 34 Moskovskoe Shosse, 443086 Samara, Russia
| |
31
Wang Y, Fariah Haq N, Cai J, Kalia S, Lui H, Jane Wang Z, Lee TK. Multi-channel content based image retrieval method for skin diseases using similarity network fusion and deep community analysis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103893] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
32
Yilmaz A, Gencoglan G, Varol R, Demircali AA, Keshavarz M, Uvet H. MobileSkin: Classification of Skin Lesion Images Acquired Using Mobile Phone-Attached Hand-Held Dermoscopes. J Clin Med 2022; 11:5102. [PMID: 36079042 PMCID: PMC9457478 DOI: 10.3390/jcm11175102] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/17/2022] [Accepted: 08/26/2022] [Indexed: 11/16/2022] Open
Abstract
Dermoscopy is the visual examination of the skin under a polarized or non-polarized light source. With dermoscopic equipment, many lesion patterns that are invisible under visible light can be clearly distinguished, so more accurate decisions can be made regarding the treatment of skin lesions. Images collected with a dermoscope have both increased the performance of human examiners and enabled the development of deep learning models, and the availability of large-scale dermoscopic datasets has allowed such models to classify skin lesions with high accuracy. However, most dermoscopic datasets contain images collected with digital dermoscopic devices, as these are frequently used for clinical examination, whereas dermatologists also often use non-digital hand-held (optomechanical) dermoscopes. This study presents a dataset of dermoscopic images taken with a mobile phone-attached hand-held dermoscope. Four deep learning models based on the MobileNetV1, MobileNetV2, NASNetMobile, and Xception architectures were developed to classify eight lesion types using this dataset. The number of images in the dataset was increased with different data augmentation methods. The models were initialized with weights pre-trained on the ImageNet dataset and then fine-tuned on the presented dataset. The most successful models on the unseen test data, MobileNetV2 and Xception, achieved performances of 89.18% and 89.64%, respectively. The results were evaluated and compared using the 5-fold cross-validation method. Our method allows automated examination of dermoscopic images taken with mobile phone-attached hand-held dermoscopes.
Affiliation(s)
- Abdurrahim Yilmaz
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
| | - Gulsum Gencoglan
- Department of Dermatology, Liv Hospital Vadistanbul, Istinye University, 34396 Istanbul, Turkey
| | - Rahmetullah Varol
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
- Department of Business Administration, Bundeswehr University Munich, 85579 Munich, Germany
| | - Ali Anil Demircali
- Department of Metabolism, Digestion and Reproduction, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
| | - Meysam Keshavarz
- Department of Electrical and Electronic Engineering, The Hamlyn Centre, Imperial College London, Bessemer Building, London SW7 2AZ, UK
| | - Huseyin Uvet
- Mechatronics Engineering, Yildiz Technical University, 34349 Istanbul, Turkey
| |
33
MDFNet: application of multimodal fusion method based on skin image and clinical data to skin cancer classification. J Cancer Res Clin Oncol 2022:10.1007/s00432-022-04180-1. [PMID: 35918465 DOI: 10.1007/s00432-022-04180-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2022] [Accepted: 06/27/2022] [Indexed: 10/16/2022]
Abstract
PURPOSE Skin cancer is one of the ten most common cancer types in the world. Early diagnosis and treatment can effectively reduce patient mortality, so it is of great significance to develop an intelligent diagnosis system for skin cancer. According to our survey, most current intelligent skin cancer diagnosis systems use only skin image data, and multimodal cross-fusion analysis combining image data with patient clinical data remains limited. To further explore the complementary relationship between image data and patient clinical data, we propose the multimodal data fusion diagnosis network (MDFNet), a framework for skin cancer based on a data fusion strategy. METHODS MDFNet establishes an effective mapping among heterogeneous data features and fuses clinical skin images with patient clinical data, addressing the feature paucity and insufficient feature richness that arise when only single-modality data are used. RESULTS The experimental results show that our proposed smart skin cancer diagnosis model achieves an accuracy of 80.42%, an improvement of about 9% over the model using only medical images, effectively confirming the fusion advantages of MDFNet. CONCLUSIONS MDFNet can be applied as an effective auxiliary diagnostic tool for skin cancer, helping physicians improve clinical decision-making and the efficiency of clinical diagnosis; moreover, its data fusion method exploits the advantages of information convergence and offers a useful reference for the intelligent diagnosis of many other clinical diseases.
34
Bratchenko IA, Bratchenko LA, Khristoforova YA, Moryatov AA, Kozlov SV, Zakharov VP. Classification of skin cancer using convolutional neural networks analysis of Raman spectra. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 219:106755. [PMID: 35349907 DOI: 10.1016/j.cmpb.2022.106755] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 01/21/2022] [Accepted: 03/11/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Skin cancer is the most common malignancy in whites accounting for about one third of all cancers diagnosed per year. Portable Raman spectroscopy setups for skin cancer "optical biopsy" are utilized to detect tumors based on their spectral features caused by the comparative presence of different chemical components. However, low signal-to-noise ratio in such systems may prevent accurate tumors classification. Thus, there is a challenge to develop methods for efficient skin tumors classification. METHODS We compare the performance of convolutional neural networks and the projection on latent structures with discriminant analysis for discriminating skin cancer using the analysis of Raman spectra with a high autofluorescence background stimulated by a 785 nm laser. We have registered the spectra of 617 cases of skin neoplasms (615 patients, 70 melanomas, 122 basal cell carcinomas, 12 squamous cell carcinomas and 413 benign tumors) in vivo with a portable Raman setup and created classification models both for convolutional neural networks and projection on latent structures approaches. To check the classification models stability, a 10-fold cross-validation was performed for all created models. To avoid models overfitting, the data was divided into a training set (80% of spectral dataset) and a test set (20% of spectral dataset). RESULTS The results for different classification tasks demonstrate that the convolutional neural networks significantly (p<0.01) outperforms the projection on latent structures. For the convolutional neural networks implementation we obtained ROC AUCs of 0.96 (0.94 - 0.97; 95% CI), 0.90 (0.85-0.94; 95% CI), and 0.92 (0.87 - 0.97; 95% CI) for classifying a) malignant vs benign tumors, b) melanomas vs pigmented tumors and c) melanomas vs seborrheic keratosis respectively. CONCLUSIONS The performance of the convolutional neural networks classification of skin tumors based on Raman spectra analysis is higher or comparable to the accuracy provided by trained dermatologists. The increased accuracy with the convolutional neural networks implementation is due to a more precise accounting of low intensity Raman bands in the intense autofluorescence background. The achieved high performance of skin tumors classifications with convolutional neural networks analysis opens a possibility for wide implementation of Raman setups in clinical setting.
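A convolutional network over one-dimensional Raman spectra is conceptually simple: stacked Conv1d blocks followed by pooling and a small classifier. The layer sizes and spectrum length below are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class RamanCNN(nn.Module):
    """Minimal 1D CNN sketch for classifying Raman spectra (sizes are illustrative)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):  # x: (batch, 1, spectrum_length)
        return self.classifier(self.features(x).flatten(1))

# Toy batch: 8 spectra, each with 1024 wavenumber bins (length is an assumption).
model = RamanCNN()
print(model(torch.randn(8, 1, 1024)).shape)  # torch.Size([8, 2])
```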
Affiliation(s)
- Ivan A Bratchenko
- Department of Laser and Biotechnical Systems, Samara University, 34 Moskovskoe Shosse, Samara, 443086, Russian Federation.
| | - Lyudmila A Bratchenko
- Department of Laser and Biotechnical Systems, Samara University, 34 Moskovskoe Shosse, Samara, 443086, Russian Federation
| | - Yulia A Khristoforova
- Department of Laser and Biotechnical Systems, Samara University, 34 Moskovskoe Shosse, Samara, 443086, Russian Federation
| | - Alexander A Moryatov
- Department of Oncology, Samara State Medical University, 159 Tashkentskaya Street, Samara, 443095, Russian Federation; Department of Visual Localization Tumors, Samara Regional Clinical Oncology Dispensary, 50 Solnechnaya Street, Samara, 443095, Russian Federation
| | - Sergey V Kozlov
- Department of Oncology, Samara State Medical University, 159 Tashkentskaya Street, Samara, 443095, Russian Federation; Department of Visual Localization Tumors, Samara Regional Clinical Oncology Dispensary, 50 Solnechnaya Street, Samara, 443095, Russian Federation
| | - Valery P Zakharov
- Department of Laser and Biotechnical Systems, Samara University, 34 Moskovskoe Shosse, Samara, 443086, Russian Federation
| |
35
Bhimavarapu U, Battineni G. Skin Lesion Analysis for Melanoma Detection Using the Novel Deep Learning Model Fuzzy GC-SCNN. Healthcare (Basel) 2022; 10:healthcare10050962. [PMID: 35628098 PMCID: PMC9141659 DOI: 10.3390/healthcare10050962] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Revised: 05/19/2022] [Accepted: 05/21/2022] [Indexed: 02/01/2023] Open
Abstract
Melanoma is easily detectable by visual examination since it occurs on the skin's surface. In melanoma, the most severe type of skin cancer, the cells that make melanin are affected. However, the lack of expert opinion increases the processing time and cost of computer-aided skin cancer detection. We therefore aimed to incorporate deep learning algorithms to perform automatic melanoma detection from dermoscopic images. A fuzzy-based GrabCut-stacked convolutional neural network (GC-SCNN) model was applied for image training, and image feature extraction and lesion classification were performed on different publicly available datasets. The fuzzy GC-SCNN coupled with support vector machines (SVM) produced 99.75% classification accuracy, with 100% sensitivity and specificity. Model performance was also compared with existing techniques; the outcomes suggest the proposed model can detect and classify lesion segments with higher accuracy and lower processing time than other techniques.
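The GrabCut step used for lesion extraction is available directly in OpenCV; below is a stripped-down GrabCut-then-classify sketch on a synthetic image. The fuzzy initialisation and the stacked CNN of the paper are not reproduced, and the rectangle and image are invented for the example.

```python
import cv2
import numpy as np

# Synthetic "lesion" image: a dark blob on a lighter background.
img = np.full((128, 128, 3), 200, np.uint8)
cv2.circle(img, (64, 64), 30, (60, 40, 30), -1)

# Standard OpenCV GrabCut initialised with a rectangle around the lesion.
mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
rect = (20, 20, 90, 90)  # (x, y, w, h) region assumed to contain the lesion
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite/probable foreground pixels as the lesion segment.
lesion = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)
print("segmented lesion pixels:", int(lesion.sum()))
# Features computed on this segment could then feed an SVM classifier.
```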
Affiliation(s)
- Usharani Bhimavarapu
- School of Competitive Coding, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Vijayawada 522502, India;
| | - Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Correspondence: ; Tel.: +39-3331728206
| |
36
The Application of Differing Machine Learning Algorithms and Their Related Performance in Detecting Skin Cancers and Melanomas. J Skin Cancer 2022; 2022:2839162. [PMID: 35573163 PMCID: PMC9095410 DOI: 10.1155/2022/2839162] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Revised: 03/04/2022] [Accepted: 03/15/2022] [Indexed: 11/17/2022] Open
Abstract
Skin cancer, and its less common form melanoma, is a disease affecting a wide variety of people. Since it is usually detected initially by visual inspection, it is a good candidate for the application of machine learning. With early detection being key to good outcomes, any method that can enhance the diagnostic accuracy of dermatologists and oncologists is of significant interest. By comparing different existing machine learning implementations on public datasets and on several datasets we created, we attempted to build a more accurate model that can be readily adapted for use in clinical settings. We tested combinations of models, including convolutional neural networks (CNNs), and various layers of data manipulation, such as applying Gaussian functions and trimming images, to improve accuracy. We also created more traditional data models, including support vector classification, K-nearest neighbor, Naïve Bayes, random forest, and gradient boosting algorithms, and compared them to the CNN-based models. The results indicated that the CNN-based algorithms significantly outperformed the other data models we created. Partial results of this work were presented at the CSET Presentations for Research Month at Minnesota State University, Mankato.
37
DİMİLİLER K, SEKEROGLU B. Skin Lesion Classification Using CNN-based Transfer Learning Model. GAZI UNIVERSITY JOURNAL OF SCIENCE 2022. [DOI: 10.35378/gujs.1063289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
38
Muhaba KA, Dese K, Aga TM, Zewdu FT, Simegn GL. Automatic skin disease diagnosis using deep learning from clinical image and patient information. SKIN HEALTH AND DISEASE 2022; 2:e81. [PMID: 35665205 PMCID: PMC9060152 DOI: 10.1002/ski2.81] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/14/2021] [Revised: 11/08/2021] [Accepted: 11/09/2021] [Indexed: 01/15/2023]
Abstract
Background Skin diseases are the fourth most common cause of human illness and result in an enormous non-fatal burden on daily life activities. They are caused by chemical, physical and biological factors. Visual assessment in combination with clinical information is the common diagnostic procedure for these diseases, but it is manual, time-consuming, and requires experience and excellent visual perception. Objectives In this study, an automated system is proposed for the diagnosis of five common skin diseases using clinical images and patient information, based on a pre-trained MobileNet-v2 deep learning model. Methods Clinical images were acquired with different smartphone cameras, and patient information was collected during patient registration. Different data preprocessing and augmentation techniques were applied to boost the performance of the model prior to training. Results A multiclass classification accuracy of 97.5%, sensitivity of 97.7% and precision of 97.7% were achieved using the proposed technique for the five common skin diseases. The results demonstrate that the developed system provides excellent diagnostic performance for these diseases. Conclusion The system has been designed as a smartphone application and has the potential to be used as a decision support system in low-resource settings, where both expert dermatologists and resources are limited.
Affiliation(s)
- K. A. Muhaba
- Biomedical Imaging Unit, School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
- Department of Biomedical Engineering, Kombolcha Institute of Technology, Wollo University, Dessie, Ethiopia
| | - K. Dese
- Biomedical Imaging Unit, School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
| | - T. M. Aga
- Department of Dermatology and Venereology, Jimma Institute of Health Sciences, Jimma University, Jimma, Ethiopia
| | - F. T. Zewdu
- Department of Dermatovenereology, Boru‐meda Hospital, Dessie, Ethiopia
| | - G. L. Simegn
- Biomedical Imaging Unit, School of Biomedical Engineering, Jimma Institute of Technology, Jimma University, Jimma, Ethiopia
| |
39
InSiNet: a deep convolutional approach to skin cancer detection and segmentation. Med Biol Eng Comput 2022; 60:643-662. [PMID: 35028864 DOI: 10.1007/s11517-021-02473-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 11/08/2021] [Indexed: 12/29/2022]
Abstract
Cancer is among the most common causes of death around the world, and skin cancer is one of its most lethal types, so early diagnosis and treatment are vital. In addition to traditional methods, approaches such as deep learning are frequently used to diagnose and classify the disease. Expert experience plays a major role in diagnosing skin cancer, and deep learning algorithms can therefore help provide more reliable diagnoses of skin lesions. In this study, we propose InSiNet, a deep learning-based convolutional neural network to detect benign and malignant lesions. The performance of the method is tested on International Skin Imaging Collaboration HAM10000 images (ISIC 2018), ISIC 2019, and ISIC 2020 under the same conditions. Computation time and accuracy were compared between the proposed algorithm and other machine learning techniques (GoogleNet, DenseNet-201, ResNet152V2, EfficientNetB0, RBF-support vector machine, logistic regression, and random forest). The results show that the developed InSiNet architecture outperforms the other methods, achieving accuracies of 94.59%, 91.89%, and 90.54% on the ISIC 2018, 2019, and 2020 datasets, respectively. Since deep learning algorithms eliminate the human factor during diagnosis, they can give reliable results alongside traditional methods.
40
Diagnosis of Skin Cancer Using Hierarchical Neural Networks and Metadata. PATTERN RECOGNITION AND IMAGE ANALYSIS 2022. [DOI: 10.1007/978-3-031-04881-4_6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
41
Rajeshwari J, Sughasiny M. Modified PNN classifier for diagnosing skin cancer severity condition using SMO optimization technique. AIMS ELECTRONICS AND ELECTRICAL ENGINEERING 2022. [DOI: 10.3934/electreng.2023005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Skin cancer is now widespread worldwide and is responsible for numerous deaths. Early-phase detection is paramount for controlling the spread of tumours throughout the body. However, existing algorithms for detecting skin cancer severity still have drawbacks: the analysis of skin lesions is non-trivial, slightly worse than that of dermatologists, and costly and time-consuming. Various machine learning algorithms have been used to assess disease severity, but detection remains complex. To overcome these issues, a modified Probabilistic Neural Network (MPNN) classifier is proposed to determine the severity of skin cancer. The proposed method comprises two phases, training and testing. Features collected from patient data are used as input to the modified PNN classifier, and the neural network is trained using the Spider Monkey Optimization (SMO) approach. The classifier predicts four severity classes, and the degree of skin cancer is determined from the classification. According to the findings, the system achieved a False Positive Rate (FPR) of 0.10, an error of 0.03 and an accuracy of 0.98, whereas previous methods such as KNN, NB, RF and SVM achieve accuracies of 0.90, 0.70, 0.803 and 0.86, respectively, all lower than the proposed approach.
42
Jaya J, Sasi A, Paulchamy B, Sabareesaan K, Rajagopal S, Balakrishnan N. Neural Network Based Filtering Method for Cancer Detection. Open Biomed Eng J 2021. [DOI: 10.2174/1874120702115010163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Objective:
The growth of anomalous cells in the human body in an uncontrolled manner is characterized as cancer. The detection of cancer is a multi-stage process in the clinical examination.
Methods:
It relies mainly on the assistance of radiological imaging, which is used to identify the spread of cancer in the human body. This imaging-based detection can be improved by incorporating image processing methodologies. In image processing, preprocessing is applied at the lowest level of abstraction: it removes unwanted noise pixels from the image and redistributes pixel values according to a specific distribution method.
Results:
A neural network is a learning and processing engine mainly used to create cognitive intelligence in various domains. In this work, a Neural Network (NN)-based filtering approach is developed to improve the preprocessing step of the cancer detection process.
Conclusion:
The performance of the proposed filtering method is compared with the existing linear and non-linear filters in terms of Mean Squared Error (MSE), Peak Signal to Noise Ratio (PSNR) and Image Enhancement Factor (IEF).
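The three evaluation metrics mentioned, MSE, PSNR and IEF, are straightforward to compute; a small NumPy sketch with a synthetic image, additive noise, and a trivial stand-in "filter" is given below. The IEF definition used here (noisy-to-filtered squared-error ratio) is the commonly used one and is an assumption about the paper's exact formula.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(reference, test, max_val=255.0):
    m = mse(reference, test)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def ief(reference, noisy, filtered):
    # Image Enhancement Factor: error before filtering / error after filtering.
    return mse(reference, noisy) / mse(reference, filtered)

rng = np.random.default_rng(1)
clean = rng.integers(0, 256, (64, 64)).astype(np.float64)      # stand-in image
noisy = np.clip(clean + rng.normal(0, 20, clean.shape), 0, 255)
filtered = 0.5 * (clean + noisy)                               # stand-in for a denoising filter

print("MSE :", round(mse(clean, filtered), 2))
print("PSNR:", round(psnr(clean, filtered), 2), "dB")
print("IEF :", round(ief(clean, noisy, filtered), 2))
```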
43
A survey on artificial intelligence techniques for chronic diseases: open issues and challenges. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10084-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
44
Wen D, Khan SM, Ji Xu A, Ibrahim H, Smith L, Caballero J, Zepeda L, de Blas Perez C, Denniston AK, Liu X, Matin RN. Characteristics of publicly available skin cancer image datasets: a systematic review. LANCET DIGITAL HEALTH 2021; 4:e64-e74. [PMID: 34772649 DOI: 10.1016/s2589-7500(21)00252-1] [Citation(s) in RCA: 63] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 08/26/2021] [Accepted: 10/21/2021] [Indexed: 12/17/2022]
Abstract
Publicly available skin image datasets are increasingly used to develop machine learning algorithms for skin cancer diagnosis. However, the total number of datasets and their respective content is currently unclear. This systematic review aimed to identify and evaluate all publicly available skin image datasets used for skin cancer diagnosis by exploring their characteristics, data access requirements, and associated image metadata. A combined MEDLINE, Google, and Google Dataset search identified 21 open access datasets containing 106 950 skin lesion images, 17 open access atlases, eight regulated access datasets, and three regulated access atlases. Images and accompanying data from open access datasets were evaluated by two independent reviewers. Among the 14 datasets that reported country of origin, most (11 [79%]) originated from Europe, North America, and Oceania exclusively. Most datasets (19 [91%]) contained dermoscopic images or macroscopic photographs only. Clinical information was available regarding age for 81 662 images (76·4%), sex for 82 848 (77·5%), and body site for 79 561 (74·4%). Subject ethnicity data were available for 1415 images (1·3%), and Fitzpatrick skin type data for 2236 (2·1%). There was limited and variable reporting of characteristics and metadata among datasets, with substantial under-representation of darker skin types. This is the first systematic review to characterise publicly available skin image datasets, highlighting limited applicability to real-life clinical settings and restricted population representation, precluding generalisability. Quality standards for characteristics and metadata reporting for skin image datasets are needed.
Affiliation(s)
- David Wen
- Oxford University Clinical Academic Graduate School, University of Oxford, Oxford, UK; Institute of Clinical Sciences, University of Birmingham, Birmingham, UK; Royal Berkshire Hospital, Royal Berkshire NHS Foundation Trust, Reading, UK
| | - Saad M Khan
- Royal Berkshire Hospital, Royal Berkshire NHS Foundation Trust, Reading, UK
| | - Antonio Ji Xu
- Department of Dermatology, Churchill Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
| | - Hussein Ibrahim
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Centre for Regulatory Science and Innovation, Birmingham Health Partners, Birmingham, UK
| | - Alastair K Denniston
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Centre for Regulatory Science and Innovation, Birmingham Health Partners, Birmingham, UK; Health Data Research UK, London, UK; National Institute for Health Research Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust, London, UK; UCL Institute of Ophthalmology, London, UK
| | - Xiaoxuan Liu
- University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; Centre for Regulatory Science and Innovation, Birmingham Health Partners, Birmingham, UK; Health Data Research UK, London, UK
| | - Rubeta N Matin
- Department of Dermatology, Churchill Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK.
| |
45
Decision and feature level fusion of deep features extracted from public COVID-19 data-sets. APPL INTELL 2021; 52:8551-8571. [PMID: 34764623 PMCID: PMC8556802 DOI: 10.1007/s10489-021-02945-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/19/2021] [Indexed: 10/26/2022]
Abstract
The Coronavirus disease (COVID-19), an infectious pulmonary disorder, has affected millions of people and has been declared a global pandemic by the WHO. Because COVID-19 is highly contagious and can cause severe illness, the development of rapid and accurate diagnostic tools has gained importance. Real-time reverse transcription-polymerase chain reaction (RT-PCR) detects the presence of coronavirus RNA in mucus and saliva samples collected with a nasopharyngeal swab, but it suffers from low sensitivity, especially in the early stage. Therefore, chest radiography is increasingly used in the early diagnosis of COVID-19 owing to its fast imaging speed, significantly lower cost and low radiation dose. In our study, a computer-aided diagnosis system for X-ray images based on convolutional neural networks (CNNs) and ensemble learning is proposed, which radiologists can use as a supporting tool for COVID-19 detection. Deep feature sets extracted with seven CNN architectures were concatenated for feature-level fusion and fed to multiple classifiers for decision-level fusion, with the aim of discriminating the COVID-19, pneumonia and no-finding classes. For decision-level fusion, a majority voting scheme was applied to the classifiers' decisions. The resulting accuracy values and confusion-matrix-based evaluation criteria are presented for three progressively created datasets. The aspects of the proposed method that are superior to existing COVID-19 detection studies are discussed, and the fusion performance of the proposed approach was validated visually using the Class Activation Mapping technique. The experimental results show that the proposed approach attains high COVID-19 detection performance, as shown by its comparable accuracy and superior precision/recall values relative to existing studies.
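The decision-level fusion described here, a majority vote over classifiers trained on concatenated deep features, can be sketched with scikit-learn's VotingClassifier. The features and base classifiers below are synthetic placeholders rather than the paper's CNN feature sets.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for concatenated (feature-level fused) deep feature vectors.
X, y = make_classification(n_samples=900, n_features=128, n_classes=3,
                           n_informative=30, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# Hard voting = majority vote over the individual classifiers' predicted labels.
fusion = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=2000)),
                ("svc", SVC()),
                ("rf", RandomForestClassifier(random_state=7))],
    voting="hard",
)
fusion.fit(X_tr, y_tr)
print("majority-vote accuracy:", round(fusion.score(X_te, y_te), 3))
```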
46
Wang Y, Cai J, Louie DC, Wang ZJ, Lee TK. Incorporating clinical knowledge with constrained classifier chain into a multimodal deep network for melanoma detection. Comput Biol Med 2021; 137:104812. [PMID: 34507158 DOI: 10.1016/j.compbiomed.2021.104812] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 08/25/2021] [Accepted: 08/25/2021] [Indexed: 10/20/2022]
Abstract
In recent years, vast developments in Computer-Aided Diagnosis (CAD) for skin diseases have generated much interest from clinicians and other eventual end-users of this technology. Introducing clinical domain knowledge into these machine learning strategies can help dispel the black-box nature of these tools, strengthening clinician trust, and also provides new information channels that can improve CAD diagnostic performance. In this paper, we propose a novel framework for malignant melanoma (MM) detection that fuses clinical images and dermoscopic images. The proposed method combines a multi-labeled deep feature extractor with a clinically constrained classifier chain (CC). This allows the 7-point checklist, a clinician diagnostic algorithm, to be included at the decision level while maintaining the clinical importance of the major and minor criteria in the checklist. Our proposed framework achieved an average accuracy of 81.3% for detecting all criteria and melanoma when tested on a publicly available 7-point checklist dataset. These are the highest reported results, outperforming state-of-the-art methods in the literature by 6.4% or more. Analyses also show that the proposed system surpasses single-modality systems using either clinical or dermoscopic images alone, as well as systems that do not adopt the multi-label and clinically constrained classifier chain approach. Our carefully designed system demonstrates a substantial improvement in melanoma detection. By keeping the familiar major and minor criteria of the 7-point checklist and their corresponding weights, the proposed system may be more readily accepted by physicians as a human-interpretable CAD tool for automated melanoma detection.
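The classifier-chain idea can be approximated with scikit-learn's ClassifierChain, treating each 7-point criterion as one binary label and melanoma as the final label in the chain so that earlier predictions feed later ones. The synthetic data, the logistic-regression base learner, and the fixed chain order are assumptions; the paper additionally enforces the checklist's clinical weighting, which is not reproduced here.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multioutput import ClassifierChain

# Synthetic stand-in: features from fused clinical + dermoscopic images,
# 8 binary labels = 7 checklist criteria + a final melanoma label.
X, Y = make_multilabel_classification(n_samples=800, n_features=64,
                                      n_classes=8, random_state=3)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=3)

# Chain order places melanoma (label 7) last, so it sees the criteria predictions.
chain = ClassifierChain(LogisticRegression(max_iter=2000),
                        order=list(range(8)), random_state=3)
chain.fit(X_tr, Y_tr)

pred = chain.predict(X_te)
per_label_acc = (pred == Y_te).mean(axis=0)
print("melanoma-label accuracy:", round(float(per_label_acc[-1]), 3))
```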
Affiliation(s)
- Yuheng Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
| | - Jiayue Cai
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada.
| | - Daniel C Louie
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
| | - Z Jane Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
| | - Tim K Lee
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
| |
47
Pacheco AGC, Krohling RA. An Attention-Based Mechanism to Combine Images and Metadata in Deep Learning Models Applied to Skin Cancer Classification. IEEE J Biomed Health Inform 2021; 25:3554-3563. [DOI: 10.1109/jbhi.2021.3062002] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
48
Yamamoto N, Sukegawa S, Yamashita K, Manabe M, Nakano K, Takabatake K, Kawai H, Ozaki T, Kawasaki K, Nagatsuka H, Furuki Y, Yorifuji T. Effect of Patient Clinical Variables in Osteoporosis Classification Using Hip X-rays in Deep Learning Analysis. Medicina (Kaunas) 2021; 57:846. [PMID: 34441052 PMCID: PMC8398956 DOI: 10.3390/medicina57080846] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2021] [Revised: 08/09/2021] [Accepted: 08/18/2021] [Indexed: 01/08/2023]
Abstract
Background and Objectives: A few deep learning studies have reported that combining image features with patient variables enhances identification accuracy compared with image-only models. However, previous studies have not statistically reported the additional effect of patient variables on image-only models. This study aimed to statistically evaluate the osteoporosis identification ability of deep learning models that combine hip radiographs with patient variables. Materials and Methods: We collected a dataset of 1699 images from patients who underwent bone mineral density measurement and hip radiography at a general hospital from 2014 to 2021. Osteoporosis was assessed from hip radiographs using convolutional neural network (CNN) models (ResNet18, 34, 50, 101, and 152). We also investigated ensemble models with patient clinical variables added to each CNN. Accuracy, precision, recall, specificity, F1 score, and area under the curve (AUC) were calculated as performance metrics. Furthermore, we statistically compared the accuracy of the image-only model with that of an ensemble model that included images plus patient factors, including the effect size for each performance metric. Results: All metrics were improved in the ResNet34 ensemble model compared with the image-only model. The AUC score of the ensemble model was significantly improved compared with the image-only model (difference 0.004; 95% CI 0.002–0.0007; p = 0.0004, effect size: 0.871). Conclusions: This study revealed the additional effect of patient variables in identifying osteoporosis using deep CNNs with hip radiographs. Our results provide evidence that the patient variables had additive synergistic effects on the image in osteoporosis identification.
Affiliation(s)
- Norio Yamamoto
- Department of Epidemiology, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (N.Y.); (T.Y.)
- Department of Orthopedic Surgery, Kagawa Prefectural Central Hospital, Kagawa 760-8557, Japan; (K.Y.); (K.K.)
- Systematic Review Workshop Peer Support Group (SRWS-PSG), Osaka 530-000, Japan
| | - Shintaro Sukegawa
- Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, Kagawa 760-8557, Japan;
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (K.N.); (K.T.); (H.K.); (H.N.)
- Correspondence: ; Tel.: +81-878-113-333
| | - Kazutaka Yamashita
- Department of Orthopedic Surgery, Kagawa Prefectural Central Hospital, Kagawa 760-8557, Japan; (K.Y.); (K.K.)
| | - Masaki Manabe
- Department of Radiation Technology, Kagawa Prefectural Central Hospital, Kagawa 760-8557, Japan;
| | - Keisuke Nakano
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (K.N.); (K.T.); (H.K.); (H.N.)
| | - Kiyofumi Takabatake
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (K.N.); (K.T.); (H.K.); (H.N.)
| | - Hotaka Kawai
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (K.N.); (K.T.); (H.K.); (H.N.)
| | - Toshifumi Ozaki
- Department of Orthopaedic Surgery, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan;
| | - Keisuke Kawasaki
- Department of Orthopedic Surgery, Kagawa Prefectural Central Hospital, Kagawa 760-8557, Japan; (K.Y.); (K.K.)
| | - Hitoshi Nagatsuka
- Department of Oral Pathology and Medicine, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (K.N.); (K.T.); (H.K.); (H.N.)
| | - Yoshihiko Furuki
- Department of Oral and Maxillofacial Surgery, Kagawa Prefectural Central Hospital, Kagawa 760-8557, Japan;
| | - Takashi Yorifuji
- Department of Epidemiology, Graduate School of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama 700-8558, Japan; (N.Y.); (T.Y.)
| |
49
Höhn J, Hekler A, Krieghoff-Henning E, Kather JN, Utikal JS, Meier F, Gellrich FF, Hauschild A, French L, Schlager JG, Ghoreschi K, Wilhelm T, Kutzner H, Heppt M, Haferkamp S, Sondermann W, Schadendorf D, Schilling B, Maron RC, Schmitt M, Jutzi T, Fröhling S, Lipka DB, Brinker TJ. Integrating Patient Data Into Skin Cancer Classification Using Convolutional Neural Networks: Systematic Review. J Med Internet Res 2021; 23:e20708. [PMID: 34255646 PMCID: PMC8285747 DOI: 10.2196/20708] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 10/29/2020] [Accepted: 04/13/2021] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND Recent years have witnessed a substantial improvement in the accuracy of skin cancer classification using convolutional neural networks (CNNs). CNNs perform on par with or better than dermatologists on single-image classification tasks. However, in clinical practice, dermatologists also use patient data beyond the visual aspects present in a digitized image, further increasing their diagnostic accuracy. Several pilot studies have recently investigated the effects of integrating different subtypes of patient data into CNN-based skin cancer classifiers. OBJECTIVE This systematic review focuses on current research investigating the impact of merging information from image features and patient data on the performance of CNN-based skin cancer image classification. This study aims to explore the potential of this field of research by evaluating the types of patient data used, the ways in which the nonimage data are encoded and merged with the image features, and the impact of the integration on classifier performance. METHODS Google Scholar, PubMed, MEDLINE, and ScienceDirect were screened for peer-reviewed studies published in English that dealt with the integration of patient data into CNN-based skin cancer classification. The search terms skin cancer classification, convolutional neural network(s), deep learning, lesions, melanoma, metadata, clinical information, and patient data were combined. RESULTS A total of 11 publications fulfilled the inclusion criteria. All of them reported an overall improvement in different skin lesion classification tasks with patient data integration. The most commonly used patient data were age, sex, and lesion location. The patient data were mostly one-hot encoded. The studies differed in the complexity of the deep learning methods used to process the encoded patient data before and after fusing them with the image features in a combined classifier. CONCLUSIONS This study indicates the potential benefits of integrating patient data into CNN-based diagnostic algorithms. However, how exactly the individual patient data enhance classification performance, especially in multiclass classification problems, remains unclear. Moreover, a substantial fraction of the patient data used by dermatologists remains to be analyzed in the context of CNN-based skin cancer classification. Further exploratory analyses in this promising field may optimize patient data integration into CNN-based skin cancer diagnostics for patients' benefit.
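As a concrete illustration of the fusion pattern this review reports most often, the short sketch below one-hot encodes three commonly used metadata fields (age group, sex, lesion location) and concatenates them with a pooled CNN feature vector. The field names, category values, and 1280-dimensional feature size are assumptions for illustration only, not taken from any specific reviewed study.

```python
# Minimal sketch (assumed details, not a specific study's pipeline): one-hot
# encode patient metadata and concatenate it with CNN image features to feed
# a combined classifier, mirroring the fusion approach most reviewed papers used.
import numpy as np
from sklearn.preprocessing import OneHotEncoder  # `sparse_output` needs scikit-learn >= 1.2

# Hypothetical metadata for three lesions: [age group, sex, lesion location]
metadata = np.array([
    ["60-70", "male",   "back"],
    ["30-40", "female", "face"],
    ["50-60", "female", "lower limb"],
])
encoder = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
meta_onehot = encoder.fit_transform(metadata)          # shape: (3, n_categories)

# Stand-in for pooled image features from a CNN backbone (dimension is an assumption)
rng = np.random.default_rng(0)
image_features = rng.random((3, 1280))

# Feature-level fusion: concatenate before a downstream classification layer
fused = np.concatenate([image_features, meta_onehot], axis=1)
print(fused.shape)                                     # (3, 1280 + n_categories)
```

In practice, the reviewed studies vary in how much processing the one-hot vector receives (from direct concatenation, as here, to separate dense sub-networks) before and after fusion with the image features.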
Collapse
Affiliation(s)
- Julia Höhn
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Achim Hekler
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jakob Nikolas Kather
- Department of Medicine III, RWTH University Hospital Aachen, Aachen, Germany
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jochen Sven Utikal
- Department of Dermatology, University Hospital of Mannheim, Mannheim, Germany
- Skin Cancer Unit, German Cancer Research Center, Heidelberg, Germany
| | - Friedegund Meier
- Skin Cancer Center at the University Cancer Centre and National Center for Tumor Diseases Dresden, Department of Dermatology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Frank Friedrich Gellrich
- Skin Cancer Center at the University Cancer Centre and National Center for Tumor Diseases Dresden, Department of Dermatology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
| | - Axel Hauschild
- Department of Dermatology, University Hospital of Kiel, Kiel, Germany
| | - Lars French
- Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
| | - Justin Gabriel Schlager
- Department of Dermatology and Allergology, Ludwig Maximilian University of Munich, Munich, Germany
| | - Kamran Ghoreschi
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Berlin, Germany
| | - Tabea Wilhelm
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Berlin, Germany
| | - Heinz Kutzner
- Dermatopathology Laboratory, Friedrichshafen, Germany
| | - Markus Heppt
- Department of Dermatology, University Hospital Erlangen, Erlangen, Germany
| | - Sebastian Haferkamp
- Department of Dermatology, University Hospital of Regensburg, Regensburg, Germany
| | - Wiebke Sondermann
- Department of Dermatology, University Hospital Essen, Essen, Germany
| | - Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, Essen, Germany
| | - Bastian Schilling
- Department of Dermatology, University Hospital Würzburg, Würzburg, Germany
| | - Roman C Maron
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Max Schmitt
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tanja Jutzi
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Stefan Fröhling
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Translational Cancer Epigenomics, Division of Translational Medical Oncology, German Cancer Research Center, Heidelberg, Germany
| | - Daniel B Lipka
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Translational Cancer Epigenomics, Division of Translational Medical Oncology, German Cancer Research Center, Heidelberg, Germany
- Faculty of Medicine, Medical Center, Otto-von-Guericke-University, Magdeburg, Germany
| | - Titus Josef Brinker
- Digital Biomarkers for Oncology Group (DBO), National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| |
Collapse
|
50
|
Cheong KH, Tang KJW, Zhao X, Koh JEW, Faust O, Gururajan R, Ciaccio EJ, Rajinikanth V, Acharya UR. An automated skin melanoma detection system with melanoma-index based on entropy features. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.05.010] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
|