1
Lee WK, Hong JS, Lin YH, Lu YF, Hsu YY, Lee CC, Yang HC, Wu CC, Lu CF, Sun MH, Pan HC, Wu HM, Chung WY, Guo WY, You WC, Wu YT. Federated Learning: A Cross-Institutional Feasibility Study of Deep Learning Based Intracranial Tumor Delineation Framework for Stereotactic Radiosurgery. J Magn Reson Imaging 2024; 59:1967-1975. [PMID: 37572087] [DOI: 10.1002/jmri.28950]
Abstract
BACKGROUND Deep learning-based segmentation algorithms usually require large or multi-institutional data sets to improve performance and generalization ability. However, protecting patient privacy is a key concern in multi-institutional studies when conventional centralized learning (CL) is used. PURPOSE To explore the feasibility of a proposed lesion-delineation scheme for stereotactic radiosurgery (SRS) under federated learning (FL), which addresses both decentralization and privacy protection. STUDY TYPE Retrospective. SUBJECTS 506 and 118 vestibular schwannoma patients (aged 15-88 and 22-85) from two institutes, respectively; 1069 and 256 meningioma patients (aged 12-91 and 23-85), respectively; 574 and 705 brain metastasis patients (aged 26-92 and 28-89), respectively. FIELD STRENGTH/SEQUENCE 1.5T, spin-echo, and gradient-echo [Correction added after first online publication on 21 August 2023. Field Strength has been changed to "1.5T" from "5T" in this sentence.]. ASSESSMENT The proposed lesion-delineation method was integrated into an FL framework, with CL models established as the baseline. The effect of image-standardization strategies was also explored. The Dice coefficient was used to evaluate the agreement between the predicted delineation and the ground truth, which was manually delineated by neurosurgeons and a neuroradiologist. STATISTICAL TESTS The paired t-test was applied to compare the means of the evaluated Dice scores (p < 0.05). RESULTS FL achieved a mean Dice coefficient comparable to CL on the Taipei Veterans General Hospital testing set regardless of standardization and parameter setting; on the Taichung Veterans General Hospital data, CL significantly (p < 0.05) outperformed FL with bi-parametric input but was comparable with single-parameter input. On the non-SRS data, FL achieved applicability comparable to CL, with a mean Dice of 0.78 versus 0.78 (without standardization), and outperformed the baseline models of the two institutes. DATA CONCLUSION The proposed lesion delineation was successfully implemented in an FL framework. The FL models were applicable to the SRS data of each participating institute, and FL exhibited a mean Dice coefficient comparable to CL on the non-SRS dataset. Standardization strategies are recommended when FL is used. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY Stage 1.
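The Dice coefficient used throughout the study above is the standard overlap measure for comparing a predicted segmentation with a manual ground truth. A minimal sketch of how it is computed on binary masks (the helper name and toy masks are ours, not the paper's):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between two binary masks: 2*|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 4x4 masks: the prediction recovers 2 of the 3 ground-truth voxels.
pred = np.zeros((4, 4), dtype=int)
truth = np.zeros((4, 4), dtype=int)
truth[0, 0:3] = 1          # |truth| = 3
pred[0, 0:2] = 1           # |pred| = 2, intersection = 2
print(round(dice_coefficient(pred, truth), 3))  # 2*2/(2+3) = 0.8
```

A Dice of 1.0 indicates perfect overlap; the paper's reported mean of 0.78 corresponds to substantial but imperfect agreement with the manual delineations.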
Affiliation(s)
- Wei-Kai Lee: Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Jia-Sheng Hong: Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Yi-Hui Lin: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Yung-Fa Lu: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Ying-Yi Hsu: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Cheng-Chia Lee: Department of Neurosurgery, Taipei Veterans General Hospital, Taipei City, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Huai-Che Yang: Department of Neurosurgery, Taipei Veterans General Hospital, Taipei City, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Chih-Chun Wu: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Chia-Feng Lu: Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, Taiwan
- Ming-His Sun: Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan
- Hung-Chuan Pan: Department of Neurosurgery, Taichung Veterans General Hospital, Taichung, Taiwan
- Hsiu-Mei Wu: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Wen-Yuh Chung: Department of Neurosurgery, Taipei Veterans General Hospital, Taipei City, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Taipei Neuroscience Institute, Taipei Medical University, Shuang Ho Hospital, New Taipei City, Taiwan
- Wan-Yuo Guo: School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Radiology, Taipei Veterans General Hospital, Taipei City, Taiwan
- Weir-Chiang You: Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung, Taiwan
- Yu-Te Wu: Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei City, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taipei City, Taiwan; College Medical Device Innovation and Translation Center, National Yang Ming Chiao Tung University, Taipei City, Taiwan
2
Batool A, Byun YC. Brain tumor detection with integrating traditional and computational intelligence approaches across diverse imaging modalities - Challenges and future directions. Comput Biol Med 2024; 175:108412. [PMID: 38691914] [DOI: 10.1016/j.compbiomed.2024.108412]
Abstract
Brain tumor segmentation and classification play a crucial role in the diagnosis and treatment planning of brain tumors. Accurate and efficient methods for identifying tumor regions and classifying different tumor types are essential for guiding medical interventions. This study comprehensively reviews brain tumor segmentation and classification techniques, exploring approaches based on image processing, machine learning, and deep learning. It discusses their advantages and limitations and highlights recent advancements in the field. The impact of existing segmentation and classification techniques for automated brain tumor detection is also critically examined using various open-source datasets of Magnetic Resonance Images (MRI) of different modalities. Moreover, this study highlights the challenges related to segmentation and classification techniques and to datasets spanning various MRI modalities, to enable researchers to develop innovative and robust solutions for automated brain tumor detection. The results of this study contribute to the development of automated and robust solutions for analyzing brain tumors, ultimately aiding medical professionals in making informed decisions and providing better patient care.
Affiliation(s)
- Amreen Batool: Department of Electronic Engineering, Institute of Information Science & Technology, Jeju National University, Jeju 63243, South Korea
- Yung-Cheol Byun: Department of Computer Engineering, Major of Electronic Engineering, Jeju National University, Institute of Information Science Technology, Jeju 63243, South Korea
3
Naeem A, Anees T. DVFNet: A deep feature fusion-based model for the multiclassification of skin cancer utilizing dermoscopy images. PLoS One 2024; 19:e0297667. [PMID: 38507348] [PMCID: PMC10954125] [DOI: 10.1371/journal.pone.0297667]
Abstract
Skin cancer is a common cancer affecting millions of people annually. Skin cells that grow in unusual patterns are a sign of this invasive disease. The cells then spread to other organs and tissues through the lymph nodes and destroy them. Lifestyle changes and increased solar exposure contribute to the rising incidence of skin cancer. Early identification and staging are essential due to the high mortality rate associated with skin cancer. In this study, we present a deep learning-based method named DVFNet for the detection of skin cancer from dermoscopy images. Images are pre-processed using anisotropic diffusion to remove artifacts and noise, which enhances image quality. A combination of the VGG19 architecture and the Histogram of Oriented Gradients (HOG) is used for discriminative feature extraction. SMOTE Tomek is used to resolve the problem of imbalanced images across the multiple classes of the publicly available ISIC 2019 dataset. This study utilizes segmentation to pinpoint areas of significantly damaged skin cells. A feature-vector map is created by combining the HOG and VGG19 features. Multiclass classification is accomplished by a CNN using the feature-vector maps. DVFNet achieves an accuracy of 98.32% on the ISIC 2019 dataset. An analysis of variance (ANOVA) statistical test is used to validate the model's accuracy. The DVFNet model can help healthcare experts detect skin cancer at an early clinical stage.
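The anisotropic-diffusion pre-processing mentioned above is typically the Perona-Malik scheme, which smooths noise while preserving edges. A simplified NumPy sketch (the parameter values are illustrative defaults, not the paper's settings):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=30.0, gamma=0.1):
    """Simplified Perona-Malik diffusion: iteratively smooths a 2-D image,
    attenuating the update across strong gradients so edges are preserved."""
    out = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours (periodic borders).
        dn = np.roll(out, -1, axis=0) - out
        ds = np.roll(out, 1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Edge-stopping conductance g(|∇I|) = exp(-(|∇I|/kappa)^2).
        out += gamma * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return out
```

On a noisy image, repeated iterations reduce the pixel variance in flat regions far more than across lesion boundaries, which is why it is favoured over plain Gaussian blurring for dermoscopy pre-processing.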
Affiliation(s)
- Ahmad Naeem: Department of Computer Science, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
- Tayyaba Anees: Department of Software Engineering, School of Systems and Technology, University of Management and Technology, Lahore, Pakistan
4
Zhao YJ, Shen PF, Fu JH, Yang FR, Chen ZP, Yu RQ. A target-triggered fluorescence-SERS dual-signal nano-system for real-time imaging of intracellular telomerase activity. Talanta 2024; 269:125469. [PMID: 38043337] [DOI: 10.1016/j.talanta.2023.125469]
Abstract
Telomerase (TE) is a promising diagnostic and prognostic biomarker for many cancers. Quantification of TE activity in living cells is of great significance in biomedical and clinical research. Conventional fluorescence-based sensors for quantification of intracellular TE may suffer from fast photobleaching and auto-fluorescence of some endogenous molecules, and hence are liable to produce false negative or positive results. To address this issue, a fluorescence-SERS dual-signal nano-system for real-time imaging of intracellular TE was designed by functionalizing a bimetallic Au@Ag nanostructure with 4-p-mercaptobenzoic acid (internal-standard SERS tag) and a DNA hybrid complex consisting of a telomerase primer strand and its partially complementary strand modified with Rhodamine 6G. The bimetallic Au@Ag nanostructure serves as an excellent SERS-enhancing and fluorescence-quenching substrate. Intracellular TE triggers the extension of the primer strand and causes the shedding of the Rhodamine 6G-modified complementary strand from the nano-system through intramolecular DNA strand displacement, resulting in the recovery of the fluorescence of Rhodamine 6G and a decrease in its SERS signal. Both the fluorescence of R6G and the ratio between the SERS signals of 4-p-mercaptobenzoic acid and Rhodamine 6G can be used for in situ imaging of intracellular TE. Experimental results showed that the proposed nano-system featured low background, excellent cell-internalization efficiency, good biocompatibility, high sensitivity, good selectivity, and robustness to false positive results. It can be used to distinguish cancer cells from normal ones, identify different types of cancer cells, and perform absolute quantification of intracellular TE, which endows it with great potential in the clinical diagnosis, targeted therapy, and prognosis of cancer patients.
Affiliation(s)
- Yu-Jie Zhao: State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha, Hunan 410082, PR China
- Ping-Fan Shen: State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha, Hunan 410082, PR China
- Jing-Hao Fu: State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha, Hunan 410082, PR China
- Feng-Rui Yang: State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha, Hunan 410082, PR China
- Zeng-Ping Chen: State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha, Hunan 410082, PR China
- Ru-Qin Yu: State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha, Hunan 410082, PR China
5
Felefly T, Roukoz C, Fares G, Achkar S, Yazbeck S, Meyer P, Kordahi M, Azoury F, Nasr DN, Nasr E, Noël G, Francis Z. An Explainable MRI-Radiomic Quantum Neural Network to Differentiate Between Large Brain Metastases and High-Grade Glioma Using Quantum Annealing for Feature Selection. J Digit Imaging 2023; 36:2335-2346. [PMID: 37507581] [PMCID: PMC10584786] [DOI: 10.1007/s10278-023-00886-x]
Abstract
Solitary large brain metastases (LBM) and high-grade gliomas (HGG) are sometimes hard to differentiate on MRI. The management differs significantly between these two entities, and non-invasive methods that help differentiate between them are urgently needed to avoid potentially morbid biopsies and surgical procedures. We explore herein the performance and interpretability of an MRI-radiomics variational quantum neural network (QNN) using a quantum-annealing mutual-information (MI) feature-selection approach. We retrospectively included 423 patients with HGG and LBM (> 2 cm) who had a contrast-enhanced T1-weighted (CE-T1) MRI between 2012 and 2019. After exclusion, 72 HGG and 129 LBM were kept. Tumors were manually segmented, and a 5-mm peri-tumoral ring was created. MRI images were pre-processed, and 1813 radiomic features were extracted. A set of best features based on MI was selected. MI and conditional MI were embedded into a quadratic unconstrained binary optimization (QUBO) formulation that was mapped to an Ising model and submitted to D-Wave's quantum annealer to solve for the best combination of 10 features. The 10 selected features were embedded into a 2-qubit QNN using the PennyLane library. The model was evaluated for balanced accuracy (bACC) and area under the receiver operating characteristic curve (ROC-AUC) on the test set. The model performance was benchmarked against two classical models: dense neural networks (DNN) and extreme gradient boosting (XGB). Shapley values were calculated to interpret sample-wise predictions on the test set. The best 10-feature combination included 6 tumor and 4 ring features. For QNN, DNN, and XGB, respectively, training ROC-AUC was 0.86, 0.95, and 0.94; test ROC-AUC was 0.76, 0.75, and 0.79; and test bACC was 0.74, 0.73, and 0.72. The two most influential features were tumor Laplacian-of-Gaussian-GLRLM-Entropy and sphericity. We developed an accurate, interpretable QNN model with quantum-informed feature selection to differentiate between LBM and HGG on CE-T1 brain MRI. Its performance is comparable to state-of-the-art classical models.
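The QUBO formulation described above can be illustrated without a quantum annealer: for a small feature pool, the same energy function (reward per-feature relevance, penalise pairwise redundancy) can be minimised by brute force. A toy sketch with invented relevance/redundancy values; the paper's actual MI terms and D-Wave submission are not reproduced here:

```python
import itertools
import numpy as np

def qubo_feature_selection(relevance, redundancy, k, alpha=1.0):
    """Pick k features by brute-force minimising a QUBO energy:
       E(x) = -sum_i r_i x_i + alpha * sum_{i<j} R_ij x_i x_j.
    A quantum annealer searches the same energy landscape over binary x."""
    n = len(relevance)
    best_set, best_e = None, np.inf
    for combo in itertools.combinations(range(n), k):
        e = -sum(relevance[i] for i in combo)
        e += alpha * sum(redundancy[i][j] for i, j in itertools.combinations(combo, 2))
        if e < best_e:
            best_set, best_e = combo, e
    return best_set

# Toy pool: features 0 and 1 are both relevant but redundant with each other.
r = [1.0, 0.9, 0.5, 0.1]
R = np.zeros((4, 4)); R[0, 1] = R[1, 0] = 2.0
print(qubo_feature_selection(r, R, k=2))  # -> (0, 2): avoids the redundant pair
```

Brute force scales as C(n, k) and is only feasible for toy problems; the annealer (or classical QUBO heuristics) is what makes the 1813-feature setting tractable.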
Affiliation(s)
- Tony Felefly: Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon; ICube Laboratory, University of Strasbourg, Strasbourg, France; Radiation Oncology Department, Hôtel-Dieu de Lévis, Lévis, QC, Canada
- Camille Roukoz: Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Georges Fares: Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon; Physics Department, Saint Joseph University, Beirut, Lebanon
- Samir Achkar: Radiation Oncology Department, Gustave Roussy Cancer Campus, 94805 Villejuif, France
- Sandrine Yazbeck: Department of Radiology, University of Maryland School of Medicine, 655 W Baltimore St S, Baltimore, MD 21201, USA
- Philippe Meyer: Medical Physics Department, Institut de Cancérologie de Strasbourg (ICANS), 67200 Strasbourg, France; IMAGeS Unit, IRIS Platform, ICube, University of Strasbourg, 67085 Strasbourg Cedex, France
- Fares Azoury: Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Dolly Nehme Nasr: Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Elie Nasr: Radiation Oncology Department, Hôtel-Dieu de France Hospital, Saint Joseph University, Beirut, Lebanon
- Georges Noël: Radiotherapy Department, Institut de Cancérologie de Strasbourg (ICANS), 67200 Strasbourg, France; Radiobiology Department, IMIS Unit, IRIS Platform, ICube, University of Strasbourg, 67085 Strasbourg Cedex, France; Faculty of Medicine, University of Strasbourg, 67000 Strasbourg, France
- Ziad Francis: Physics Department, Saint Joseph University, Beirut, Lebanon
6
Riaz S, Naeem A, Malik H, Naqvi RA, Loh WK. Federated and Transfer Learning Methods for the Classification of Melanoma and Nonmelanoma Skin Cancers: A Prospective Study. Sensors (Basel) 2023; 23:8457. [PMID: 37896548] [PMCID: PMC10611214] [DOI: 10.3390/s23208457]
Abstract
Skin cancer is considered a dangerous type of cancer with a high global mortality rate. Manual skin cancer diagnosis is a challenging and time-consuming process due to the complexity of the disease. Recently, deep learning and transfer learning have been the most effective methods for diagnosing this deadly cancer. To aid dermatologists and other healthcare professionals in classifying images into melanoma and nonmelanoma cancer and enabling the treatment of patients at an early stage, this systematic literature review (SLR) presents various federated learning (FL) and transfer learning (TL) techniques that have been widely applied. This study explores FL and TL classifiers by evaluating them on the performance metrics reported in research studies, including true positive rate (TPR), true negative rate (TNR), area under the curve (AUC), and accuracy (ACC). The review was assembled and systemized from well-reputed studies published in eminent fora between January 2018 and July 2023, compiled through a systematic search of seven well-reputed databases. A total of 86 articles were included in this SLR, which contains the most recent research on FL and TL algorithms for classifying malignant skin cancer. In addition, a taxonomy is presented that summarizes the many malignant and non-malignant cancer classes. The results of this SLR highlight the limitations and challenges of recent research. Consequently, future directions and opportunities are established to help interested researchers automate the classification of melanoma and nonmelanoma skin cancers.
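Of the metrics surveyed above, TPR, TNR, and ACC all derive directly from confusion-matrix counts (AUC additionally requires ranked prediction scores, so it is omitted from this sketch). A minimal illustration with invented counts:

```python
def binary_metrics(tp, fn, tn, fp):
    """TPR (sensitivity), TNR (specificity), accuracy, and balanced
    accuracy from the four confusion-matrix counts of a binary classifier."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return {"TPR": tpr, "TNR": tnr, "ACC": acc, "bACC": (tpr + tnr) / 2}

m = binary_metrics(tp=80, fn=20, tn=90, fp=10)
print(m)  # TPR 0.8, TNR 0.9, ACC 0.85, bACC 0.85
```

Reporting TPR and TNR alongside ACC matters for skin-cancer datasets because class imbalance can make raw accuracy misleading.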
Affiliation(s)
- Shafia Riaz: Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Hassaan Malik: Department of Computer Science, National College of Business Administration & Economics Sub Campus Multan, Multan 60000, Pakistan; Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Woong-Kee Loh: School of Computing, Gachon University, Seongnam 13120, Republic of Korea
7
Abbas Q, Malik KM, Saudagar AKJ, Khan MB. Context-aggregator: An approach of loss- and class imbalance-aware aggregation in federated learning. Comput Biol Med 2023; 163:107167. [PMID: 37421740] [DOI: 10.1016/j.compbiomed.2023.107167]
Abstract
Federated Learning (FL) is an emerging distributed learning paradigm that offers data privacy to contributing nodes in a collaborative environment. In an FL setting, the individual datasets of different hospitals can be exploited to develop reliable predictive models for screening, diagnosis, and treatment, helping tackle major challenges such as pandemics. FL can draw on very diverse medical imaging datasets and thus provide more reliable models for all participating nodes, including those with low-quality data. However, the traditional FL paradigm suffers degraded generalization power due to poorly trained local models at the client nodes. This generalization power can be improved by considering the relative learning contribution of each client node. Simple aggregation of learning parameters in the standard FL model faces a diversity issue and results in higher validation loss during learning; this can be resolved by weighting the relative contribution of each client node participating in the learning process. Class imbalance at each site is another significant challenge that greatly impacts the performance of the aggregated model. This work proposes a Context Aggregator for FL that addresses the loss-factor and class-imbalance issues by incorporating the relative contribution of the collaborating nodes, through a Validation-Loss-based Context Aggregator (CAVL) and a Class-Imbalance-based Context Aggregator (CACI). The proposed Context Aggregator is evaluated on several COVID-19 imaging classification datasets held by the participating nodes. The evaluation results show that the Context Aggregator outperforms the standard federated averaging algorithm and the FedProx algorithm on COVID-19 image classification problems.
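The idea behind a validation-loss-aware aggregator such as CAVL can be sketched as a weighted variant of federated averaging in which clients with lower validation loss receive larger aggregation coefficients. This is a simplified reading of the approach, not the paper's exact weighting rule:

```python
import numpy as np

def loss_aware_aggregate(client_weights, val_losses):
    """Aggregate client parameter vectors, weighting each client by the
    normalised inverse of its validation loss, so poorly fitting local
    models contribute less than under plain (uniform) FedAvg."""
    inv = np.array([1.0 / loss for loss in val_losses])
    coeffs = inv / inv.sum()                       # coefficients sum to 1
    return sum(c * w for c, w in zip(coeffs, client_weights))

# Two clients with flat parameter vectors; client 0 validates better.
clients = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
losses = [0.5, 1.0]                                # -> weights 2/3 and 1/3
print(loss_aware_aggregate(clients, losses))       # ≈ [1.667, 1.667]
```

Under uniform FedAvg the result would be [2.0, 2.0]; the loss-aware weighting pulls the global model toward the better-validated client.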
Affiliation(s)
- Qamar Abbas: Department of Computer Science, Faculty of Computing and Information Technology, International Islamic University, Islamabad 44000, Pakistan
- Khalid Mahmood Malik: Department of Computer Science and Engineering, Oakland University, Rochester, MI 48309-4401, USA
- Abdul Khader Jilani Saudagar: Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia
- Muhammad Badruddin Khan: Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia
8
Tahir M, Naeem A, Malik H, Tanveer J, Naqvi RA, Lee SW. DSCC_Net: Multi-Classification Deep Learning Models for Diagnosing of Skin Cancer Using Dermoscopic Images. Cancers (Basel) 2023; 15:2179. [PMID: 37046840] [PMCID: PMC10093058] [DOI: 10.3390/cancers15072179]
Abstract
Skin cancer is one of the most lethal kinds of human illness. In the present state of the healthcare system, skin cancer identification is a time-consuming procedure, and if it is not diagnosed early it can be life-threatening. To attain a high prospect of complete recovery, early detection of skin cancer is crucial. In the last several years, the application of deep learning (DL) algorithms for the detection of skin cancer has grown in popularity. Based on a DL model, this work aimed to build a multi-classification technique for diagnosing skin cancers such as melanoma (MEL), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanocytic nevi (MN). In this paper, we propose a novel model, a deep learning-based skin cancer classification network (DSCC_Net) based on a convolutional neural network (CNN), and evaluate it on three publicly available benchmark datasets (ISIC 2020, HAM10000, and DermIS). The classification performance of the proposed DSCC_Net model is compared with six baseline deep networks: ResNet-152, VGG-16, VGG-19, Inception-V3, EfficientNet-B0, and MobileNet. In addition, we used SMOTE Tomek to handle the minority-class imbalance present in these datasets. The proposed DSCC_Net obtained a 99.43% AUC, along with an accuracy of 94.17%, a recall of 93.76%, a precision of 94.28%, and an F1-score of 93.93% in categorizing the four distinct types of skin cancer. The accuracies of ResNet-152, VGG-19, MobileNet, VGG-16, EfficientNet-B0, and Inception-V3 are 89.32%, 91.68%, 92.51%, 91.12%, 89.46%, and 91.82%, respectively. The results show that our proposed DSCC_Net model outperforms the baseline models, thus offering significant support to dermatologists and health experts in diagnosing skin cancer.
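SMOTE Tomek, used above for class balancing, combines SMOTE's interpolation-based oversampling with Tomek-link cleaning. A simplified sketch of the oversampling half only, with invented toy data (in practice one would reach for `imblearn.combine.SMOTETomek` rather than hand-rolling this):

```python
import numpy as np

def smote_like_oversample(minority, n_new, k=2, seed=0):
    """SMOTE-style oversampling (simplified; no Tomek-link cleaning):
    synthesise points by interpolating each sampled minority point
    toward one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        dists = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(dists)[1:k + 1]     # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                          # position on the segment
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

# Three minority-class samples in 2-D; synthesise four new ones.
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like_oversample(minority, n_new=4)
print(synthetic.shape)  # (4, 2)
```

Every synthetic point lies on a segment between two real minority samples, so the oversampled class stays inside its original feature-space region.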
Affiliation(s)
- Maryam Tahir: Department of Computer Science, National College of Business Administration & Economics Lahore, Multan Sub Campus, Multan 60000, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Hassaan Malik: Department of Computer Science, National College of Business Administration & Economics Lahore, Multan Sub Campus, Multan 60000, Pakistan; Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Jawad Tanveer: Department of Computer Science and Engineering, Sejong University, Seoul 05006, Republic of Korea
- Rizwan Ali Naqvi: Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung-Won Lee: School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
9
Malik H, Anees T, Naeem A, Naqvi RA, Loh WK. Blockchain-Federated and Deep-Learning-Based Ensembling of Capsule Network with Incremental Extreme Learning Machines for Classification of COVID-19 Using CT Scans. Bioengineering (Basel) 2023; 10:203. [PMID: 36829697] [PMCID: PMC9952069] [DOI: 10.3390/bioengineering10020203]
Abstract
Due to the rapid rate of SARS-CoV-2 dissemination, an informed and effective strategy must be employed to isolate COVID-19. One of the most significant obstacles researchers must overcome in identifying COVID-19 is the rapid propagation of the virus, in addition to the dearth of trustworthy testing models. This problem continues to be the most difficult one for clinicians to deal with. The use of AI in image processing has made the formerly insurmountable challenge of detecting COVID-19 cases more manageable. A real-world problem that must also be handled is sharing data between hospitals while honoring the privacy concerns of the organizations. When training a global deep learning (DL) model, it is crucial to address fundamental concerns such as user privacy and collaborative model development. For this study, a novel framework is designed that compiles information from five different databases (several hospitals) and trains a global model using blockchain-based federated learning (FL). The data is validated through blockchain technology (BCT), and FL trains the model on a global scale while maintaining the secrecy of the organizations. The proposed framework has three parts. First, we provide a data-normalization method that can handle the diversity of data collected from five different sources using several computed tomography (CT) scanners. Second, to categorize COVID-19 patients, we ensemble the capsule network (CapsNet) with incremental extreme learning machines (IELMs). Third, we provide a strategy for interactively training a global model using BCT and FL while maintaining anonymity. Extensive tests employing chest CT scans were undertaken, comparing the classification performance of the proposed model with five DL algorithms for predicting COVID-19 while protecting data privacy for a variety of users. Our findings indicate improved effectiveness in identifying COVID-19 patients, with an accuracy of 98.99%. Thus, our model provides substantial aid to medical practitioners in their diagnosis of COVID-19.
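The blockchain validation layer described above can be illustrated with a minimal hash chain over model-update records: each block's hash covers both its payload and the previous block's hash, so any later tampering with a recorded update is detectable. This is a toy sketch of the general mechanism, not the paper's BCT implementation:

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose SHA-256 hash covers the payload and the
    previous block's hash, chaining the records together."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def chain_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        body = json.dumps({"payload": block["payload"], "prev": block["prev"]},
                          sort_keys=True)
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        if i and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_block(ledger, {"round": 1, "update_digest": "abc123"})
add_block(ledger, {"round": 2, "update_digest": "def456"})
print(chain_valid(ledger))           # True
ledger[0]["payload"]["round"] = 99   # tamper with recorded history
print(chain_valid(ledger))           # False
```

In the paper's setting the payload would be a digest of a client's model update, letting participants audit the federated training history without exposing the underlying patient data.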
Affiliation(s)
- Hassaan Malik: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees: Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem: Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi: Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea (corresponding author)
- Woong-Kee Loh: School of Computing, Gachon University, Seongnam 13120, Republic of Korea (corresponding author)
10
The Role of Machine Learning and Deep Learning Approaches for the Detection of Skin Cancer. Healthcare (Basel) 2023; 11:415. [PMID: 36766989] [PMCID: PMC9914395] [DOI: 10.3390/healthcare11030415]
Abstract
Machine learning (ML) can enhance a dermatologist's work, from diagnosis to customized care. The development of ML algorithms in dermatology has recently been supported by advances in digital data processing (e.g., electronic medical records, image archives, omics), faster computing, and cheaper data storage. This article describes the fundamentals of ML-based implementations, as well as future limits and concerns for the production of skin cancer detection and classification systems. We also explored three fields of dermatology using deep learning applications: (1) the classification of diseases from clinical photos, (2) the dermatopathological visual classification of cancer, and (3) the measurement of skin diseases by smartphone applications and personal tracking systems. This analysis aims to provide dermatologists with a guide that helps demystify the basics of ML and its different applications, so that they can correctly identify the possible challenges. This paper surveyed studies on skin cancer detection using deep learning to assess the features and advantages of the various techniques. Moreover, this paper also defined the basic requirements for creating a skin cancer detection application, which revolve around two main issues: full image segmentation and the tracking of the lesion on the skin using deep learning. Most of the techniques found in this survey address these two problems, and some of the methods also categorize the type of cancer.
11
Malik H, Naeem A, Naqvi RA, Loh WK. DMFL_Net: A Federated Learning-Based Framework for the Classification of COVID-19 from Multiple Chest Diseases Using X-rays. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23020743. [PMID: 36679541 PMCID: PMC9864925 DOI: 10.3390/s23020743] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 01/02/2023] [Accepted: 01/03/2023] [Indexed: 05/14/2023]
Abstract
Coronavirus Disease 2019 (COVID-19) is still a threat to global health and safety, and it is anticipated that deep learning (DL) will be the most effective way of detecting COVID-19 and other chest diseases such as lung cancer (LC), tuberculosis (TB), pneumothorax (PneuTh), and pneumonia (Pneu). However, data sharing across hospitals is hampered by patients' right to privacy, leading to unexpected results from deep neural network (DNN) models. Federated learning (FL) is a game-changing concept, since it allows clients to train models together without sharing their source data with anybody else. However, few studies focus on improving the model's accuracy and stability; most existing FL-based COVID-19 detection techniques instead optimize secondary objectives such as latency, energy usage, and privacy. In this work, we design a novel model named the decision-making-based federated learning network (DMFL_Net) for medical diagnostic image analysis, to distinguish COVID-19 from four distinct chest disorders: LC, TB, PneuTh, and Pneu. The proposed DMFL_Net model gathers data from a variety of hospitals, constructs the model using DenseNet-169, and produces accurate predictions from information that is kept secure and released only to authorized individuals. Extensive experiments were carried out with chest X-rays (CXR), and the performance of the proposed model was compared with two transfer learning (TL) models, i.e., VGG-19 and VGG-16, in terms of accuracy (ACC), precision (PRE), recall (REC), specificity (SPF), and F1-measure. Additionally, the DMFL_Net model was also compared with the default FL configurations. The proposed DMFL_Net + DenseNet-169 model achieves an accuracy of 98.45%, outperforms the other approaches in classifying COVID-19 among the four chest diseases, and successfully protects the privacy of the data among diverse clients.
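The five evaluation metrics this abstract reports (ACC, PRE, REC, SPF, F1) all derive from a binary confusion matrix. The sketch below computes them from illustrative labels, treating COVID-19 as the positive class; the label vectors are invented for the example, not the paper's data.

```python
# Standard binary classification metrics from true/predicted labels
# (1 = COVID-19, 0 = other chest disease in this toy example).

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    pre = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0  # sensitivity / recall
    return {
        "ACC": (tp + tn) / len(y_true),
        "PRE": pre,
        "REC": rec,
        "SPF": tn / (tn + fp) if tn + fp else 0.0,  # specificity
        "F1": 2 * pre * rec / (pre + rec) if pre + rec else 0.0,
    }

# Toy ground truth vs. predictions
m = binary_metrics([1, 1, 1, 0, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1, 1, 0])
print(m["ACC"], m["REC"])  # 0.75 0.75
```

For the multi-class setting the abstract actually evaluates (COVID-19 vs. four other diseases), these metrics would typically be computed one-vs-rest per class and then averaged.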
Affiliation(s)
- Hassaan Malik
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (R.A.N.); (W.-K.L.)
- Woong-Kee Loh
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Correspondence: (R.A.N.); (W.-K.L.)
12
Naeem A, Anees T, Fiza M, Naqvi RA, Lee SW. SCDNet: A Deep Learning-Based Framework for the Multiclassification of Skin Cancer Using Dermoscopy Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22155652. [PMID: 35957209 PMCID: PMC9371071 DOI: 10.3390/s22155652] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/19/2022] [Accepted: 07/25/2022] [Indexed: 05/27/2023]
Abstract
Skin cancer is a deadly disease, and its early diagnosis enhances the chances of survival. Deep learning algorithms for skin cancer detection have become popular in recent years. A novel framework based on deep learning is proposed in this study for the multiclassification of skin cancer types such as Melanoma, Melanocytic Nevi, Basal Cell Carcinoma, and Benign Keratosis. The proposed model, named SCDNet, combines Vgg16 with convolutional neural networks (CNN) for the classification of different types of skin cancer. Moreover, the accuracy of the proposed method is compared with four state-of-the-art pre-trained classifiers in the medical domain: Resnet 50, Inception v3, AlexNet, and Vgg19. The performance of the proposed SCDNet classifier, as well as that of the four state-of-the-art classifiers, is evaluated using the ISIC 2019 dataset. The accuracy rate of the proposed SCDNet is 96.91% for the multiclassification of skin cancer, whereas the accuracy rates for Resnet 50, Alexnet, Vgg19, and Inception-v3 are 95.21%, 93.14%, 94.25%, and 92.54%, respectively. The results showed that the proposed SCDNet performed better than the competing classifiers.
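The Vgg16-plus-CNN combination described here follows the standard transfer-learning pattern: a frozen pretrained backbone extracting features, feeding a small trainable classification head. The sketch below imitates that pattern only schematically; a fixed random projection stands in for Vgg16, the softmax head is trained on one toy sample per class, and every name and number is illustrative rather than taken from the paper.

```python
import math
import random

# Frozen "backbone": a fixed random projection standing in for a
# pretrained feature extractor such as Vgg16 (weights never updated).
random.seed(0)
BACKBONE = [[random.gauss(0, 1) for _ in range(4)] for _ in range(8)]

CLASSES = ["Melanoma", "Melanocytic Nevi",
           "Basal Cell Carcinoma", "Benign Keratosis"]

def features(x):
    """Project a raw 4-d input to a frozen 8-d feature vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in BACKBONE]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    return [v / sum(e) for v in e]

# One toy "image" per class (one-hot stand-ins for dermoscopy inputs).
data = [([1.0 if i == c else 0.0 for i in range(4)], c) for c in range(4)]

# Trainable classification head: 4-way softmax regression on the features.
W = [[0.0] * 8 for _ in range(4)]
b = [0.0] * 4
for _ in range(200):                      # train only the head; backbone frozen
    for x, c in data:
        f = features(x)
        p = softmax([sum(wi * fi for wi, fi in zip(row, f)) + bi
                     for row, bi in zip(W, b)])
        for i in range(4):                # cross-entropy gradient step
            err = p[i] - (1.0 if i == c else 0.0)
            b[i] -= 0.1 * err
            for j in range(8):
                W[i][j] -= 0.1 * err * f[j]

def predict(x):
    f = features(x)
    logits = [sum(wi * fi for wi, fi in zip(row, f)) + bi
              for row, bi in zip(W, b)]
    return CLASSES[max(range(4), key=lambda i: logits[i])]
```

Freezing the backbone is what makes this practical on small medical datasets: only the head's parameters are fit, so far less labeled data is needed than for end-to-end training.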
Collapse
Affiliation(s)
- Ahmad Naeem
- Department of Computer Science, University of Management and Technology, Lahore 54000, Pakistan
- Tayyaba Anees
- Department of Software Engineering, University of Management and Technology, Lahore 54000, Pakistan
- Makhmoor Fiza
- Department of Management Sciences and Technology, Begum Nusrat Bhutto Women University, Sukkur 65200, Pakistan
- Rizwan Ali Naqvi
- Department of Unmanned Vehicle Engineering, Sejong University, Seoul 05006, Korea
- Seung-Won Lee
- Department of Data Science, College of Software Convergence, Sejong University, Seoul 05006, Korea
- School of Medicine, Sungkyunkwan University, Suwon 16419, Korea