1
Rajinikanth V, Kadry S, Mohan R, Rama A, Khan MA, Kim J. Colon histology slide classification with deep-learning framework using individual and fused features. Math Biosci Eng 2023; 20:19454-19467. [PMID: 38052609] [DOI: 10.3934/mbe.2023861]
Abstract
Cancer occurrence rates are gradually rising in the population, which creates a heavy diagnostic burden globally. The rate of colorectal (bowel) cancer (CC) is gradually rising, and it is currently the third most common cancer globally. Therefore, early screening and treatment following a recommended clinical protocol are necessary. The aim of this paper is to develop a Deep-Learning Framework (DLF) to classify colon histology slides into normal/cancer classes using deep-learning-based features. The stages of the framework include the following: (i) image collection, resizing, and pre-processing; (ii) Deep-Feature (DF) extraction with a chosen scheme; (iii) binary classification with 5-fold cross-validation; and (iv) verification of the clinical significance. This work classifies the considered image database using the following: (i) individual DF, (ii) fused DF, and (iii) ensemble DF. The achieved results are separately verified using binary classifiers. The proposed work considered 4000 (2000 normal and 2000 cancer) histology slides for the examination. The results confirm that the fused DF helps achieve a detection accuracy of 99% with the K-Nearest Neighbor (KNN) classifier, while the individual and ensemble DF provide classification accuracies of 93.25% and 97.25%, respectively.
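The fused-feature pipeline described in this abstract can be sketched as below: two deep-feature sets are concatenated and scored with a KNN classifier under 5-fold cross-validation. Everything here (feature dimensions, k value, synthetic data) is an illustrative assumption, not the paper's actual configuration.

```python
# Illustrative sketch: serial fusion of two deep-feature sets, then
# KNN with 5-fold cross-validation (synthetic stand-in data).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200                                   # stand-in for the 4000 slides
feat_a = rng.normal(size=(n, 128))        # deep features from network A
feat_b = rng.normal(size=(n, 128))        # deep features from network B
fused = np.hstack([feat_a, feat_b])       # serial feature fusion
labels = rng.integers(0, 2, size=n)       # normal vs. cancer

knn = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(knn, fused, labels, cv=5)
print("mean 5-fold accuracy:", scores.mean())
```

With real discriminative features in place of the random arrays, the same three lines of evaluation code reproduce the cross-validated accuracy figures reported above.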
Affiliation(s)
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 1401, Lebanon
- Ramya Mohan
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Arunmozhi Rama
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Jungeun Kim
- Department of Software, Kongju National University, Cheonan 31080, Korea
2
Ajmal M, Khan MA, Akram T, Alqahtani A, Alhaisoni M, Armghan A, Althubiti SA, Alenezi F. BF2SkNet: best deep learning features fusion-assisted framework for multiclass skin lesion classification. Neural Comput Appl 2023; 35:22115-22131. [DOI: 10.1007/s00521-022-08084-6]
3
Patel RH, Foltz EA, Witkowski A, Ludzik J. Analysis of Artificial Intelligence-Based Approaches Applied to Non-Invasive Imaging for Early Detection of Melanoma: A Systematic Review. Cancers (Basel) 2023; 15:4694. [PMID: 37835388] [PMCID: PMC10571810] [DOI: 10.3390/cancers15194694]
Abstract
BACKGROUND Melanoma, the deadliest form of skin cancer, poses a significant public health challenge worldwide. Early detection is crucial for improved patient outcomes. Non-invasive skin imaging techniques allow for improved diagnostic accuracy; however, their use is often limited due to the need for skilled practitioners trained to interpret images in a standardized fashion. Recent innovations in artificial intelligence (AI)-based techniques for skin lesion image interpretation show potential for the use of AI in the early detection of melanoma. OBJECTIVE The aim of this study was to evaluate the current state of AI-based techniques used in combination with non-invasive diagnostic imaging modalities including reflectance confocal microscopy (RCM), optical coherence tomography (OCT), and dermoscopy. We also aimed to determine whether the application of AI-based techniques can lead to improved diagnostic accuracy of melanoma. METHODS A systematic search was conducted via the Medline/PubMed, Cochrane, and Embase databases for eligible publications between 2018 and 2022. Screening methods adhered to the 2020 version of the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Included studies utilized AI-based algorithms for melanoma detection and directly addressed the review objectives. RESULTS We retrieved 40 papers from the three databases. All studies directly comparing the performance of AI-based techniques with dermatologists reported superior or equivalent performance of AI-based techniques in improving the detection of melanoma. In studies directly comparing algorithm performance on dermoscopy images to dermatologists, AI-based algorithms achieved a higher area under the ROC curve (>80%) in the detection of melanoma. In these comparative studies using dermoscopic images, the mean algorithm sensitivity was 83.01% and the mean algorithm specificity was 85.58%. Studies evaluating machine learning in conjunction with OCT reported an accuracy of 95%, while studies evaluating RCM reported a mean accuracy of 82.72%. CONCLUSIONS Our results demonstrate the robust potential of AI-based techniques to improve diagnostic accuracy and patient outcomes through the early identification of melanoma. Further studies are needed to assess the generalizability of these AI-based techniques across different populations and skin types, improve standardization in image processing, and further compare the performance of AI-based techniques with board-certified dermatologists to evaluate clinical applicability.
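The sensitivity and specificity figures pooled above come from standard confusion-matrix arithmetic, which can be sketched as follows; the labels and predictions here are made up for illustration.

```python
# Sensitivity (true positive rate) and specificity (true negative rate)
# from binary melanoma predictions -- toy data, not study results.
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # 1 = melanoma present
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])  # classifier output

tp = np.sum((y_true == 1) & (y_pred == 1))   # correctly flagged melanomas
fn = np.sum((y_true == 1) & (y_pred == 0))   # missed melanomas
tn = np.sum((y_true == 0) & (y_pred == 0))   # correctly cleared benign
fp = np.sum((y_true == 0) & (y_pred == 1))   # false alarms

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```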
Affiliation(s)
- Raj H. Patel
- Edward Via College of Osteopathic Medicine, VCOM-Louisiana, 4408 Bon Aire Dr, Monroe, LA 71203, USA
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Emilie A. Foltz
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Elson S. Floyd College of Medicine, Washington State University, Spokane, WA 99202, USA
- Alexander Witkowski
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
- Joanna Ludzik
- Department of Dermatology, Oregon Health & Science University, Portland, OR 97239, USA
4
Radhika V, Chandana BS. MSCDNet-based multi-class classification of skin cancer using dermoscopy images. PeerJ Comput Sci 2023; 9:e1520. [PMID: 37705664] [PMCID: PMC10495937] [DOI: 10.7717/peerj-cs.1520]
Abstract
Background Skin cancer is a life-threatening disease, and early detection improves the chances of recovery. Skin cancer detection based on deep learning algorithms has recently grown popular. In this research, a new deep learning-based network model for multi-class skin cancer classification, covering melanoma, benign keratosis, melanocytic nevi, and basal cell carcinoma, is presented. We propose an automatic Multi-class Skin Cancer Detection Network (MSCD-Net) model. Methods The study proposes an efficient semantic segmentation deep learning model, "DenseUNet", for skin lesion segmentation. Skin lesions are segmented using the DenseUNet model, which has a substantially deeper network and fewer trainable parameters. The most relevant features are then selected using the Binary Dragonfly Algorithm (BDA), and a SqueezeNet-based classifier operates on the selected features. Results The performance of the proposed model is evaluated using the ISIC 2019 dataset. The proposed DenseUNet segmentation model uses DenseNet connections and UNet links, which produce low-level features and provide better segmentation results. The performance results of the proposed MSCD-Net model are superior to previous research in terms of effectiveness and efficiency on the standard ISIC 2019 dataset.
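The select-then-classify stage of this pipeline can be sketched as below. The Binary Dragonfly Algorithm and SqueezeNet are replaced here with simple stand-ins (a fixed binary feature mask and a linear classifier), and all data is synthetic; the point is only the shape of the pipeline.

```python
# Feature selection via a binary mask (stand-in for a BDA solution),
# followed by classification of the selected features (stand-in for
# SqueezeNet). Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 64))     # deep features per lesion
labels = rng.integers(0, 4, size=300)     # 4 skin cancer classes

mask = rng.random(64) > 0.5               # one candidate binary solution
selected = features[:, mask]              # keep only the chosen features

x_tr, x_te, y_tr, y_te = train_test_split(selected, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
acc = clf.score(x_te, y_te)               # fitness of this mask
```

In the real algorithm, many such masks are generated and iteratively updated, with each mask's classification accuracy serving as its fitness score.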
Affiliation(s)
- B. Sai Chandana
- School of Computer Science Engineering, VIT-AP University, Amaravathi, India
5
Abd El-Fattah I, Ali AM, El-Shafai W, Taha TE, Abd El-Samie FE. Deep-learning-based super-resolution and classification framework for skin disease detection applications. Opt Quantum Electron 2023; 55:427. [DOI: 10.1007/s11082-022-04432-x]
6
Baskaran D, Nagamani Y, Merugula S, Premnath SP. MSRFNet for skin lesion segmentation and deep learning with hybrid optimization for skin cancer detection. Imaging Sci J 2023. [DOI: 10.1080/13682199.2023.2187518]
7
Hasan MK, Ahamad MA, Yap CH, Yang G. A survey, review, and future trends of skin lesion segmentation and classification. Comput Biol Med 2023; 155:106624. [PMID: 36774890] [DOI: 10.1016/j.compbiomed.2023.106624]
Abstract
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently indicated increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges of manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 for skin lesion segmentation and 238 for skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding methods for the development of CAD systems. These include: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentation, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal their current trends based on utilization frequencies. In addition, we highlight the primary difficulties associated with evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
Affiliation(s)
- Md Kamrul Hasan
- Department of Bioengineering, Imperial College London, UK; Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
- Md Asif Ahamad
- Department of Electrical and Electronic Engineering (EEE), Khulna University of Engineering & Technology (KUET), Khulna 9203, Bangladesh
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, UK
- Guang Yang
- National Heart and Lung Institute, Imperial College London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, UK
8
Liu Z, Xiong R, Jiang T. CI-Net: Clinical-Inspired Network for Automated Skin Lesion Recognition. IEEE Trans Med Imaging 2023; 42:619-632. [PMID: 36279355] [DOI: 10.1109/tmi.2022.3215547]
Abstract
Lesion recognition in dermoscopy images is significant for automated skin cancer diagnosis. Most existing methods ignore the medical perspective, which is crucial since this task requires a large amount of medical knowledge. A few methods are designed according to medical knowledge, but they do not fully follow doctors' entire learning and diagnosis process, in which certain strategies and steps are carried out in practice. Thus, we put forward the Clinical-Inspired Network (CI-Net), which incorporates doctors' learning strategy and diagnosis process for better analysis. The diagnostic process contains three main steps: the zoom step, the observe step, and the compare step. To simulate these, we introduce three corresponding modules: a lesion area attention module, a feature extraction module, and a lesion feature attention module. To simulate the distinguish strategy commonly used by doctors, we introduce a distinguish module. We evaluate the proposed CI-Net on six challenging datasets (ISIC 2016, ISIC 2017, ISIC 2018, ISIC 2019, ISIC 2020, and PH2), and the results indicate that CI-Net outperforms existing work. The code is publicly available at https://github.com/lzh19961031/Dermoscopy_classification.
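The general "attend then pool" pattern behind such attention modules can be sketched as follows. The shapes and the scoring function here are illustrative assumptions, not CI-Net's actual design.

```python
# Generic spatial attention: score each location of a feature map,
# normalize with softmax, and pool a weighted descriptor.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
feat = rng.normal(size=(16, 10, 10))           # C x H x W feature map

scores = feat.mean(axis=0).ravel()             # one score per location
attn = softmax(scores).reshape(10, 10)         # spatial attention map
pooled = (feat * attn[None, :, :]).sum(axis=(1, 2))  # attended descriptor
```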
9
A comprehensive analysis of dermoscopy images for melanoma detection via deep CNN features. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104186]
10
Irkham I, Ibrahim AU, Nwekwo CW, Al-Turjman F, Hartati YW. Current Technologies for Detection of COVID-19: Biosensors, Artificial Intelligence and Internet of Medical Things (IoMT): Review. Sensors (Basel) 2022; 23:426. [PMID: 36617023] [PMCID: PMC9824404] [DOI: 10.3390/s23010426]
Abstract
Although COVID-19 is no longer a global pandemic, thanks to the development and integration of different technologies for the diagnosis and treatment of the disease, technological advancement in the fields of molecular biology, electronics, computer science, artificial intelligence, the Internet of Things, nanotechnology, etc., has led to the development of molecular approaches and computer-aided diagnosis for the detection of COVID-19. This study provides a holistic view of COVID-19 detection based on (1) molecular diagnosis, which includes RT-PCR, antigen-antibody, and CRISPR-based biosensors, and (2) computer-aided detection based on AI-driven models, which include deep learning and transfer learning approaches. The review also provides a comparison between these two emerging technologies and discusses open research issues for the development of smart IoMT-enabled platforms for the detection of COVID-19.
Affiliation(s)
- Irkham Irkham
- Department of Chemistry, Faculty of Mathematics and Natural Sciences, Padjadjaran University, Bandung 40173, Indonesia
- Chidi Wilson Nwekwo
- Department of Biomedical Engineering, Near East University, Mersin 99138, Turkey
- Fadi Al-Turjman
- Research Center for AI and IoT, Faculty of Engineering, University of Kyrenia, Mersin 99138, Turkey
- Artificial Intelligence Engineering Department, AI and Robotics Institute, Near East University, Mersin 99138, Turkey
- Yeni Wahyuni Hartati
- Department of Chemistry, Faculty of Mathematics and Natural Sciences, Padjadjaran University, Bandung 40173, Indonesia
11
Nawaz M, Nazir T, Javed A, Malik KM, Saudagar AKJ, Khan MB, Abul Hasanat MH, AlTameem A, AlKhathami M. Efficient-ECGNet framework for COVID-19 classification and correlation prediction with the cardio disease through electrocardiogram medical imaging. Front Med (Lausanne) 2022; 9:1005920. [PMID: 36405585] [PMCID: PMC9672089] [DOI: 10.3389/fmed.2022.1005920]
Abstract
In the last two years, we have witnessed multiple waves of coronavirus that affected millions of people around the globe. No definitive cure for COVID-19 has been found, as vaccinated people have also become infected with the disease. Precise and timely detection of COVID-19 can save human lives and protect patients from complicated treatment procedures. Researchers have employed several medical imaging modalities such as CT scans and X-rays for COVID-19 detection; however, little attention has been paid to ECG imaging analysis. ECGs are a more readily available imaging modality than CT scans and X-rays, so we use them for diagnosing COVID-19. Efficient and effective detection of COVID-19 from ECG signals is a complex and time-consuming task, as researchers usually convert them into numeric values before applying any method, which increases the computational burden. In this work, we overcome these challenges by directly employing ECG images in a deep-learning (DL)-based approach. More specifically, we introduce the Efficient-ECGNet method, an improved version of the EfficientNetV2-B4 model with additional dense layers, capable of accurately classifying ECG images into healthy, COVID-19, myocardial infarction (MI), abnormal heartbeat (AHB), and previous history of myocardial infarction (PMI) classes. Moreover, we introduce a module to measure the similarity of COVID-19-affected ECG images to those of the other diseases. To the best of our knowledge, this is the first effort to approximate the correlation of COVID-19 patients with those having any previous or current history of cardiac or respiratory disease. Further, we generate heatmaps to demonstrate the accurate key-point computation ability of our method. We have performed extensive experimentation on a publicly available dataset to show the robustness of the proposed approach and confirmed that the Efficient-ECGNet framework reliably classifies ECG-based COVID-19 samples.
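One simple way to approximate a cross-class similarity module like the one described above is cosine similarity between mean class embeddings; the vectors below are random placeholders for real ECG feature embeddings, and this is an assumption about the general technique, not the paper's implementation.

```python
# Cosine similarity between mean embeddings of two disease classes
# (random placeholder vectors, not real ECG features).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
covid_mean = rng.normal(size=128)   # mean embedding of COVID-19 ECGs
mi_mean = rng.normal(size=128)      # mean embedding of MI ECGs

sim = cosine(covid_mean, mi_mean)   # in [-1, 1]; higher = more alike
```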
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Department of Computer Science, Faculty of Computing, Riphah International University Gulberg Green Campus, Islamabad, Pakistan
- Ali Javed
- Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Khalid Mahmood Malik
- Department of Computer Science and Engineering, Oakland University, Rochester, NY, United States
- Abdul Khader Jilani Saudagar (corresponding author)
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Muhammad Badruddin Khan
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mozaherul Hoque Abul Hasanat
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Abdullah AlTameem
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mohammed AlKhathami
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
12
Wang D, Chen X, Wu Y, Tang H, Deng P. Artificial intelligence for assessing the severity of microtia via deep convolutional neural networks. Front Surg 2022; 9:929110. [PMID: 36157410] [PMCID: PMC9492961] [DOI: 10.3389/fsurg.2022.929110]
Abstract
Background Microtia is a congenital abnormality varying from slight structural abnormalities to the complete absence of the external ear. However, there is no gold standard for assessing the severity of microtia. Objectives The purpose of this study was to develop and test artificial intelligence models that assess the severity of microtia using clinical photographs. Methods A total of 800 ear images were included and randomly divided into training, validation, and test sets. Nine convolutional neural networks (CNNs) were trained to classify the severity of microtia. The evaluation metrics, including accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) curve, and area under the curve (AUC) values, were used to evaluate the performance of the models. Results Eight CNNs achieved an accuracy greater than 0.8. Among them, AlexNet and MobileNet achieved the highest accuracy of 0.9. Except for MnasNet, all CNNs achieved AUC values higher than 0.9 for each grade of microtia. In most CNNs, grade I microtia had the lowest AUC values and the normal ear had the highest. Conclusion CNNs can classify the severity of microtia with high accuracy. Artificial intelligence is expected to provide an objective, automated assessment of the severity of microtia.
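The evaluation metrics listed in this abstract can be computed directly with scikit-learn, as in the toy multi-class example below (the labels are made up for illustration; macro averaging is one common choice for multi-class scoring).

```python
# Accuracy, macro-averaged precision/recall/F1 on toy multi-class
# predictions (e.g., three microtia severity grades).
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 0])

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
```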
Affiliation(s)
- Pei Deng
- Correspondence: Pei Deng; Hongbo Tang
13
An improved transformer network for skin cancer classification. Comput Biol Med 2022; 149:105939. [PMID: 36037629] [DOI: 10.1016/j.compbiomed.2022.105939]
Abstract
BACKGROUND The use of artificial intelligence to identify dermoscopic images has brought major breakthroughs in recent years to the early diagnosis and early treatment of skin cancer, whose incidence is increasing worldwide year by year and which poses a great threat to human health. Achievements have been made in skin cancer image classification using deep convolutional neural network (CNN) backbones. This approach, however, only extracts the features of small objects in the image and cannot locate the important parts. OBJECTIVES As a result, the authors turn to vision transformers (ViT), which have demonstrated powerful performance in traditional classification tasks. Self-attention increases the weight of important features and suppresses features that cause noise. Specifically, an improved transformer network named SkinTrans is proposed. INNOVATIONS To verify its efficiency, a three-step procedure is followed. Firstly, a ViT network is established to verify the effectiveness of SkinTrans in skin cancer classification. Then multi-scale and overlapping sliding windows are used to serialize the image, and multi-scale patch embedding is carried out, which pays more attention to multi-scale features. Finally, contrastive learning is used so that similar skin cancer samples are encoded similarly while the encodings of dissimilar samples differ as much as possible. MAIN RESULTS The experiments are carried out on two datasets: (1) HAM10000, a large dataset of multi-source dermatoscopic images of common skin cancers, and (2) a clinical dataset of skin cancer collected by dermoscopy. The proposed model achieved 94.3% accuracy on HAM10000 and 94.1% accuracy on our dataset, which verifies the efficiency of SkinTrans. CONCLUSIONS The transformer network has not only achieved good results in natural language but also achieved ideal results in the field of vision, which lays a good foundation for skin cancer classification based on multimodal data. We believe this work will be of interest to dermatologists, clinical researchers, computer scientists, and researchers in other related fields, and will provide greater convenience for patients.
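The overlapping sliding-window serialization step described above can be sketched with NumPy: a stride smaller than the patch size yields overlapping patches, each flattened into one token. The patch size and stride here are illustrative choices, not the paper's settings.

```python
# Serialize a grayscale image into overlapping patch tokens.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

img = np.arange(32 * 32, dtype=float).reshape(32, 32)  # toy 32x32 image

patch, stride = 8, 4                      # stride < patch -> overlap
windows = sliding_window_view(img, (patch, patch))[::stride, ::stride]
tokens = windows.reshape(-1, patch * patch)  # one flat token per patch
```

Repeating this at several patch sizes and embedding each token set separately gives the multi-scale patch embedding the abstract refers to.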
14
Naeem A, Anees T, Fiza M, Naqvi RA, Lee SW. SCDNet: A Deep Learning-Based Framework for the Multiclassification of Skin Cancer Using Dermoscopy Images. Sensors (Basel) 2022; 22:5652. [PMID: 35957209] [PMCID: PMC9371071] [DOI: 10.3390/s22155652]
Abstract
Skin cancer is a deadly disease, and its early diagnosis enhances the chances of survival. Deep learning algorithms for skin cancer detection have become popular in recent years. A novel deep-learning framework is proposed in this study for the multi-class classification of skin cancer types, namely melanoma, melanocytic nevi, basal cell carcinoma, and benign keratosis. The proposed model, named SCDNet, combines VGG16 with convolutional neural networks (CNN) for the classification of different types of skin cancer. Moreover, the accuracy of the proposed method is compared with four state-of-the-art pre-trained classifiers used in the medical domain: ResNet50, Inception v3, AlexNet, and VGG19. The performance of the proposed SCDNet classifier, as well as the four state-of-the-art classifiers, is evaluated using the ISIC 2019 dataset. The accuracy rate of the proposed SCDNet is 96.91% for the multi-class classification of skin cancer, whereas the accuracy rates for ResNet50, AlexNet, VGG19, and Inception v3 are 95.21%, 93.14%, 94.25%, and 92.54%, respectively. The results show that the proposed SCDNet performed better than the competing classifiers.
Affiliation(s)
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Makhmoor Fiza
- Department of Management Sciences and Technology, Begum Nusrat Bhutto Women University, Sukkur 65200, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Seung-Won Lee
- Department of Data Science, College of Software Convergence, Sejong University, Seoul 05006, Korea
- School of Medicine, Sungkyunkwan University, Suwon 16419, Korea
15
Classification of multi-differentiated liver cancer pathological images based on deep learning attention mechanism. BMC Med Inform Decis Mak 2022; 22:176. [PMID: 35787805] [PMCID: PMC9254605] [DOI: 10.1186/s12911-022-01919-1]
Abstract
PURPOSE Liver cancer is one of the most common malignant tumors in the world, ranking fifth among malignant tumors. The degree of differentiation reflects the degree of malignancy, and liver cancer can be divided into three types: poorly differentiated, moderately differentiated, and well differentiated. Diagnosis and treatment at different levels of differentiation are crucial to the survival rate and survival time of patients. As the gold standard for liver cancer diagnosis, histopathological images can accurately distinguish liver cancers of different levels of differentiation. Therefore, the study of intelligent classification of histopathological images is of great significance to patients with liver cancer. At present, classifying histopathological images of liver cancer with different degrees of differentiation is time-consuming, labor-intensive, and requires a large manual investment. In this context, the importance of intelligent classification is obvious. METHODS Based on a complete data acquisition scheme, this paper applies the SENet deep learning model to the intelligent classification of histopathological images of all differentiation types of liver cancer for the first time, and compares it with four deep learning models: VGG16, ResNet50, ResNet_CBAM, and SKNet. The evaluation indexes adopted in this paper include the confusion matrix, precision, recall, and F1 score, which allow the models to be evaluated comprehensively and accurately. RESULTS Five different deep learning classification models were applied to the collected dataset and evaluated. The experimental results show that the SENet model achieved the best classification effect, with an accuracy of 95.27%, and that the model has good reliability and generalization ability. The experiments prove that the SENet deep learning model has a good application prospect in the intelligent classification of histopathological images. CONCLUSIONS This study also proves that deep learning has great application value in solving the time-consuming and laborious problems of traditional manual slide reading, and it has practical significance for the intelligent classification of other cancer histopathological images.
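The channel-attention mechanism that distinguishes SENet can be sketched in a few lines of NumPy: globally pool each channel ("squeeze"), pass through a small bottleneck ("excitation"), and use sigmoid gates to reweight the channels. The sizes below are arbitrary illustrative choices.

```python
# Minimal squeeze-and-excitation (SE) gate on a C x H x W feature map.
import numpy as np

def se_gate(x, w1, w2):
    """x: (C, H, W) feature map; returns the channel-reweighted map."""
    s = x.mean(axis=(1, 2))                 # squeeze: global average pool
    z = np.maximum(w1 @ s, 0.0)             # excitation: FC + ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # FC + sigmoid -> gates in (0, 1)
    return x * g[:, None, None]             # rescale each channel

rng = np.random.default_rng(3)
x = rng.normal(size=(8, 4, 4))
w1 = rng.normal(size=(2, 8))                # bottleneck (reduction ratio 4)
w2 = rng.normal(size=(8, 2))
out = se_gate(x, w1, w2)
```

Because every gate lies in (0, 1), the block can only attenuate channels, letting the network learn which channels matter for each input.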
16
Li S, Wang H, Xiao Y, Zhang M, Yu N, Zeng A, Wang X. A Workflow for Computer-Aided Evaluation of Keloid Based on Laser Speckle Contrast Imaging and Deep Learning. J Pers Med 2022; 12:981. [PMID: 35743764] [PMCID: PMC9224605] [DOI: 10.3390/jpm12060981]
Abstract
A keloid results from abnormal wound healing, and blood perfusion and growth state differ among patients. Active monitoring and treatment of actively growing keloids at the initial stage can effectively inhibit keloid enlargement and has important medical and aesthetic implications. Laser speckle contrast imaging (LSCI) has been developed to measure the blood perfusion of the keloid, which correlates strongly with severity and prognosis. However, the LSCI-based method requires manual annotation and evaluation of the keloid, which is time-consuming. Although many studies have designed deep-learning networks for the detection and classification of skin lesions, assessing keloid growth status remains challenging, especially from small samples. This retrospective study included 150 untreated keloid patients, with intensity images and blood perfusion images obtained from LSCI. A workflow based on a cascaded vision transformer architecture was proposed, reaching a Dice coefficient of 0.895 for keloid segmentation (a 2% improvement over baseline), an error of 8.6 ± 5.4 perfusion units and a relative error of 7.8% ± 6.6% for blood perfusion calculation, and an accuracy of 0.927 for growth-state prediction (a 1.4% improvement over baseline).
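The Dice coefficient reported for keloid segmentation measures the overlap between a predicted and a reference mask; a generic illustration (not the paper's code, mask values chosen for the example):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 masks with 4 foreground pixels each, overlapping on 2 of them.
a = np.zeros((4, 4), dtype=int)
a[0, :4] = 1                       # row 0: 4 pixels
b = np.zeros((4, 4), dtype=int)
b[0, :2] = 1                       # shares (0,0) and (0,1) with `a`
b[1, :2] = 1
d = dice_coefficient(a, b)         # 2*2 / (4 + 4) ≈ 0.5
```

A Dice value of 0.895, as reported, means the predicted keloid region and the manual annotation agree on roughly 90% of their combined foreground.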
Affiliation(s)
- Shuo Li
- Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China; (S.L.); (Y.X.); (M.Z.); (N.Y.); (A.Z.)
- He Wang
- Department of Neurological Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China;
- Yiding Xiao
- Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China; (S.L.); (Y.X.); (M.Z.); (N.Y.); (A.Z.)
- Mingzi Zhang
- Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China; (S.L.); (Y.X.); (M.Z.); (N.Y.); (A.Z.)
- Nanze Yu
- Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China; (S.L.); (Y.X.); (M.Z.); (N.Y.); (A.Z.)
- Ang Zeng
- Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China; (S.L.); (Y.X.); (M.Z.); (N.Y.); (A.Z.)
- Xiaojun Wang
- Department of Plastic Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China; (S.L.); (Y.X.); (M.Z.); (N.Y.); (A.Z.)
- Correspondence:
17
Cat Swarm Optimization-Based Computer-Aided Diagnosis Model for Lung Cancer Classification in Computed Tomography Images. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12115491] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Lung cancer is the cancer that contributes most heavily to cancer-related mortality, owing to its aggressive nature and late diagnosis at advanced stages. Early identification of lung cancer is essential for improving the survival rate. Various imaging modalities, including X-rays and computed tomography (CT) scans, are employed to diagnose lung cancer. Computer-aided diagnosis (CAD) models are necessary to minimize the burden on radiologists and enhance detection efficiency. Currently, computer vision (CV) and deep learning (DL) models are employed to detect and classify lung cancer precisely. Against this background, the current study presents a cat swarm optimization-based computer-aided diagnosis model for lung cancer classification (CSO-CADLCC). The proposed CSO-CADLCC technique first pre-processes the data using a Gabor filtering-based noise removal technique. Feature extraction from the pre-processed images is then performed with the NASNetLarge model, and a weighted extreme learning machine (WELM) model is exploited for lung nodule classification. Finally, the CSO algorithm is utilized for optimal parameter tuning of the WELM model, resulting in improved classification performance. The experimental validation of the proposed CSO-CADLCC technique was conducted against a benchmark dataset, and the results were assessed under several aspects. The experimental outcomes established the promising performance of the CSO-CADLCC approach over recent approaches under different measures.
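The weighted extreme learning machine used here for nodule classification can be sketched as a random hidden layer followed by a class-weighted ridge solve. The sketch below is a generic illustration under assumptions: the function names and toy data are invented, the regularization constant is fixed rather than tuned, and the CSO parameter-tuning step is omitted:

```python
import numpy as np

def train_welm(X, y, n_hidden=32, C=1.0, seed=0):
    """Weighted ELM sketch: random hidden layer + class-weighted ridge solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                                  # hidden activations
    # Weight each sample by the inverse frequency of its class (handles imbalance).
    classes, counts = np.unique(y, return_counts=True)
    freq = dict(zip(classes, counts))
    w = np.array([1.0 / freq[c] for c in y])
    T = np.eye(len(classes))[np.searchsorted(classes, y)]   # one-hot targets
    beta = np.linalg.solve(H.T @ (H * w[:, None]) + np.eye(n_hidden) / C,
                           H.T @ (T * w[:, None]))
    return W, b, beta, classes

def predict_welm(model, X):
    W, b, beta, classes = model
    return classes[np.argmax(np.tanh(X @ W + b) @ beta, axis=1)]

# Toy two-class data, separable along the first feature.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 4))
y = (X[:, 0] > 0).astype(int)
model = train_welm(X, y)
acc = (predict_welm(model, X) == y).mean()
```

Unlike backpropagation, only the output weights `beta` are solved for in closed form; in the paper's pipeline, CSO would additionally search over the WELM hyperparameters.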
18
Sharif MI, Li JP, Khan MA, Kadry S, Tariq U. M3BTCNet: multi model brain tumor classification using metaheuristic deep neural network features optimization. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07204-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
19
Afza F, Sharif M, Khan MA, Tariq U, Yong HS, Cha J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. SENSORS (BASEL, SWITZERLAND) 2022; 22:799. [PMID: 35161553 PMCID: PMC8838278 DOI: 10.3390/s22030799] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 01/13/2022] [Accepted: 01/17/2022] [Indexed: 01/27/2023]
Abstract
The wide variation in skin textures and injuries makes the detection and classification of skin cancer a difficult task, and manually detecting skin lesions from dermoscopy images is a difficult and time-consuming process. Recent advancements in the internet of things (IoT) and artificial intelligence for medical applications have demonstrated improvements in both accuracy and computational time. In this paper, a new method for multiclass skin lesion classification using best deep learning feature fusion and an extreme learning machine is proposed. The proposed method includes five primary steps: image acquisition and contrast enhancement; deep learning feature extraction using transfer learning; best feature selection using a hybrid whale optimization and entropy-mutual information (EMI) approach; fusion of the selected features using a modified canonical correlation-based approach; and, finally, extreme learning machine-based classification. The feature selection step improves the system's computational efficiency and accuracy. The experiment is carried out on two publicly available datasets, HAM10000 and ISIC2018, achieving accuracies of 93.40% and 94.36%, respectively. Compared to state-of-the-art (SOTA) techniques, the proposed method achieves higher accuracy and is computationally efficient.
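The entropy side of the entropy-mutual information (EMI) selection step can be approximated by ranking feature columns by their histogram entropy and keeping the top-scoring ones. This is a simplified proxy, not the authors' hybrid whale-optimization method; the bin count, toy data, and function names are assumptions:

```python
import numpy as np

def entropy_scores(features, bins=16):
    """Shannon entropy (bits) of each feature column, from a histogram estimate."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
        scores.append(-(p * np.log2(p)).sum())
    return np.array(scores)

def select_top_k(features, k, bins=16):
    """Keep the k highest-entropy columns (a crude 'informativeness' proxy)."""
    order = np.argsort(entropy_scores(features, bins))[::-1]
    idx = np.sort(order[:k])
    return features[:, idx], idx

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 5))
X[:, 3] = 0.0                              # a constant, zero-entropy column
reduced, kept = select_top_k(X, k=4)       # the constant column is dropped
```

A constant feature carries no information, so its single-bin histogram yields zero entropy and it is ranked last; mutual information with the class labels would refine this ranking further.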
Affiliation(s)
- Farhat Afza
- Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantt 47040, Pakistan;
- Muhammad Sharif
- Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantt 47040, Pakistan;
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia;
- Hwan-Seung Yong
- Department of Computer Science & Engineering, Ewha Womans University, Seoul 03760, Korea;
- Jaehyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Korea;
20
Nawaz M, Nazir T, Javed A, Tariq U, Yong HS, Khan MA, Cha J. An Efficient Deep Learning Approach to Automatic Glaucoma Detection Using Optic Disc and Optic Cup Localization. SENSORS (BASEL, SWITZERLAND) 2022; 22:434. [PMID: 35062405 PMCID: PMC8780798 DOI: 10.3390/s22020434] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 12/29/2021] [Accepted: 01/05/2022] [Indexed: 02/04/2023]
Abstract
Glaucoma is an eye disease initiated by excessive intraocular pressure, and it causes complete blindness at its advanced stage, whereas timely screening-based treatment can save the patient from complete vision loss. Accurate screening procedures depend on the availability of human experts who perform manual analysis of retinal samples to identify the glaucomatous regions. However, due to complex screening procedures and a shortage of human resources, delays often occur, which can increase the rate of vision loss around the globe. To cope with the challenges of manual systems, there is an urgent demand for an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is complicated by the wide variations in the size, shade, orientation, and shape of lesions. Furthermore, the extensive color similarity between lesions and the rest of the eye further complicates the classification process. To overcome these challenges, we present a Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, deep features are computed from the suspected samples with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features and performs top-down and bottom-up keypoint fusion several times. In the last step, the localized area containing the glaucoma lesion, together with its associated class, is predicted. We confirmed the robustness of our work by evaluating it on a challenging dataset, namely the Online Retinal Fundus Image Database for Glaucoma Analysis (ORIGA).
Furthermore, we performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal Image Database for Optic Nerve Evaluation (RIM-ONE DL) datasets to show the generalization ability of our work. Both numeric and visual evaluations confirm that EfficientDet-D0 outperforms recent frameworks and is more proficient in glaucoma classification.
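BiFPN's repeated top-down and bottom-up passes combine feature maps from different pyramid levels using learned non-negative scalar weights ("fast normalized fusion"). A single fusion step might be sketched as follows; this is a strong simplification of the real module, which also applies depthwise convolutions after each fusion, and the shapes and weights are invented:

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def weighted_fusion(maps, weights, eps=1e-4):
    """Fast normalized fusion: clamp weights to be non-negative,
    normalize them to (approximately) sum to 1, then blend the maps."""
    w = np.maximum(0.0, np.asarray(weights, dtype=float))
    w = w / (w.sum() + eps)
    return sum(wi * m for wi, m in zip(w, maps))

rng = np.random.default_rng(3)
fine = rng.standard_normal((8, 16, 16))    # higher-resolution pyramid level
coarse = rng.standard_normal((8, 8, 8))    # lower-resolution pyramid level
fused = weighted_fusion([fine, upsample2x(coarse)], weights=[1.0, 0.5])
```

The normalization keeps the fused activations on the same scale as the inputs regardless of how many levels feed into a node, which is what lets BiFPN stack many such fusions.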
Affiliation(s)
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology Taxila, Rawalpindi 47050, Pakistan; (M.N.); (T.N.); (A.J.)
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology Taxila, Rawalpindi 47050, Pakistan; (M.N.); (T.N.); (A.J.)
- Ali Javed
- Department of Computer Science, University of Engineering and Technology Taxila, Rawalpindi 47050, Pakistan; (M.N.); (T.N.); (A.J.)
- Usman Tariq
- Information Systems Department, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al Khraj 11942, Saudi Arabia;
- Hwan-Seung Yong
- Department of Computer Science and Engineering, Ewha Womans University, Seoul 03760, Korea;
- Jaehyuk Cha
- Department of Computer Science, Hanyang University, Seoul 04763, Korea;
21
Ren Z, Zhang Y, Wang S. LCDAE: Data Augmented Ensemble Framework for Lung Cancer Classification. Technol Cancer Res Treat 2022; 21:15330338221124372. [PMID: 36148908 PMCID: PMC9511553 DOI: 10.1177/15330338221124372] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 07/15/2022] [Accepted: 08/02/2022] [Indexed: 11/15/2022] Open
Abstract
Objective: Early-stage detection is the only viable way to reduce the fatality rate of lung cancer patients. Recently, deep learning techniques have become the most promising methods in medical image analysis compared with numerous other computer-aided diagnostic techniques. However, deep learning models perform poorly when they overfit. Methods: We present a Lung Cancer Data Augmented Ensemble (LCDAE) framework to solve the overfitting and low-performance problems in lung cancer classification tasks. The LCDAE has three parts: the Lung Cancer Deep Convolutional GAN, which can synthesize images of lung cancer; a Data Augmented Ensemble model (DA-ENM), which ensembles six fine-tuned transfer learning models for training, testing, and validation on a lung cancer dataset; and a Hybrid Data Augmentation (HDA) component, which combines all the data augmentation techniques in the LCDAE. Results: Compared with existing state-of-the-art methods, the LCDAE obtains the best accuracy of 99.99%, a precision of 99.99%, and an F1-score of 99.99%. Conclusion: The proposed LCDAE overcomes the overfitting issue in lung cancer classification tasks by applying different data augmentation techniques, and it achieves the best performance compared to state-of-the-art approaches.
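An ensemble like DA-ENM typically combines its member models by averaging their predicted class probabilities (soft voting). The sketch below is a generic illustration with toy probabilities for three hypothetical models, not the six fine-tuned networks used in the paper:

```python
import numpy as np

def soft_vote(prob_list):
    """Average class-probability matrices from several models, then argmax.

    Each matrix has shape (n_samples, n_classes); rows sum to 1.
    Returns the predicted labels and the averaged probabilities.
    """
    avg = np.mean(np.stack(prob_list), axis=0)
    return avg.argmax(axis=1), avg

# Three toy models scoring two samples over three classes.
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
m2 = np.array([[0.5, 0.4, 0.1], [0.2, 0.5, 0.3]])
m3 = np.array([[0.6, 0.3, 0.1], [0.1, 0.2, 0.7]])
labels, avg = soft_vote([m1, m2, m3])   # sample 1 -> class 0, sample 2 -> class 2
```

Averaging smooths out the idiosyncratic errors of individual fine-tuned models, which is one reason ensembles help against overfitting.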
Affiliation(s)
- Zeyu Ren
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
22
Naeem Akbar M, Riaz F, Bilal Awan A, Attique Khan M, Tariq U, Rehman S. A Hybrid Duo-Deep Learning and Best Features Based Framework for Action Recognition. COMPUTERS, MATERIALS & CONTINUA 2022; 73:2555-2576. [DOI: 10.32604/cmc.2022.028696] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/25/2024]