51
Pedrosa M, Zuquete A, Costa C. A Pseudonymisation Protocol With Implicit and Explicit Consent Routes for Health Records in Federated Ledgers. IEEE J Biomed Health Inform 2021;25:2172-2183. [PMID: 33006933] [DOI: 10.1109/jbhi.2020.3028454]
Abstract
Healthcare data for primary use (diagnosis) may be encrypted for confidentiality; however, secondary uses such as feeding machine learning algorithms require open access. Full anonymity leaves no traceable identifiers through which diagnosis results can be reported. Moreover, implicit and explicit consent routes are of practical importance under recent data protection regulations (GDPR), translating directly into break-the-glass requirements. Pseudonymisation is an acceptable compromise between such orthogonal requirements and an advisable measure for protecting data. Our work presents a pseudonymisation protocol that is compliant with both implicit and explicit consent routes. The protocol is built on a (t,n)-threshold secret sharing scheme and public-key cryptography. The pseudonym is safely derived from a fragment of public information without requiring any secret from the data subject. The method is proven secure under reasonable cryptographic assumptions and shown to be scalable in experiments.
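The two primitives named in the abstract are standard. As an illustrative sketch only (not the authors' construction; the prime field, the hash-based derivation, and all names here are assumptions), a (t,n)-threshold split in the style of Shamir plus a pseudonym derived from a public fragment could look like:

```python
import hashlib
import random

P = 2**127 - 1  # prime field for the shares (a Mersenne prime)

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

def pseudonym(public_fragment: bytes, domain_secret: int) -> str:
    """Derive a stable pseudonym from a fragment of public information
    plus a jointly held secret; no data-subject secret is involved."""
    h = hashlib.sha256(domain_secret.to_bytes(16, "big") + public_fragment)
    return h.hexdigest()[:16]
```

Any t of the n custodians can jointly recover the domain secret that drives derivation; fewer than t learn nothing. `pow(den, -1, P)` (modular inverse) requires Python 3.8+.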
52
53
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021;69:101985. [PMID: 33588117] [DOI: 10.1016/j.media.2021.101985]
Abstract
Although deep learning models like CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond current available medical datasets. Traditional approaches generally leverage the information from natural images via transfer learning. More recent works utilize the domain knowledge from medical doctors, to create networks that resemble how medical doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis, lesion, organ and abnormality detection, lesion and organ segmentation. For each task, we systematically categorize different kinds of medical domain knowledge that have been utilized and their corresponding integrating methods. We also provide current challenges and directions for future research.
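The transfer-learning baseline the survey contrasts with domain-knowledge approaches can be caricatured in a few lines: freeze a feature extractor trained elsewhere and fit only a small head on the scarce target data. Everything below is a toy stand-in (a random projection with ReLU plays the role of pretrained natural-image features, and the dataset is synthetic):

```python
import math
import random

random.seed(7)

D_IN, D_FEAT = 8, 4
# "Pretrained" extractor: a frozen random projection standing in for
# convolutional features learned on natural images.
W_frozen = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def extract(x):
    # Frozen: never updated during fine-tuning.
    return [max(0.0, sum(w * v for w, v in zip(row, x))) for row in W_frozen]

# Small trainable head, fitted on the (small) target dataset.
w_head = [0.0] * D_FEAT
b_head = 0.0

def predict(x):
    z = sum(w * f for w, f in zip(w_head, extract(x))) + b_head
    z = max(-30.0, min(30.0, z))  # guard against exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=300):
    global b_head
    for _ in range(epochs):
        for x, y in data:
            f = extract(x)
            err = predict(x) - y  # logistic-loss gradient factor
            for i in range(D_FEAT):
                w_head[i] -= lr * err * f[i]
            b_head -= lr * err

# Toy dataset whose label is expressible in the frozen features,
# so only the head needs training.
data = []
for _ in range(60):
    x = [random.gauss(0, 1) for _ in range(D_IN)]
    data.append((x, 1 if extract(x)[0] > extract(x)[1] else 0))

train(data)
accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
```

The point of the sketch is the split of responsibilities: `W_frozen` carries knowledge from the source domain, while only `w_head`/`b_head` are adapted to the small target dataset.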
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
54
Diagnosing of Diabetic Retinopathy with Image Dehazing and Capsule Network. In: Deep Learning for Medical Decision Support Systems. 2021. [PMCID: PMC7298988] [DOI: 10.1007/978-981-15-6325-6_9]
Abstract
As discussed earlier (Chap. 10.1007/978-981-15-6325-6_4), diabetic retinopathy (DR) can lead to devastating outcomes such as blindness, and it has become a prominent medical problem in recent research. Retinal pathologies in particular are among the leading contributors to the millions of blindness cases reported worldwide [1]. When all blindness cases are examined in detail, roughly 2 million are attributed to diabetic retinopathy, so early diagnosis has taken the highest priority for eliminating, or at least slowing, the disease factors that cause blindness and thereby reducing blindness rates [2, 3].
55
Ramzan M, Raza M, Sharif M, Attique Khan M, Nam Y. Gastrointestinal Tract Infections Classification Using Deep Learning. Comput Mater Contin 2021;69:3239-3257. [DOI: 10.32604/cmc.2021.015920]
56
Khadidos A, Khadidos AO, Kannan S, Natarajan Y, Mohanty SN, Tsaramirsis G. Analysis of COVID-19 Infections on a CT Image Using DeepSense Model. Front Public Health 2020;8:599550. [PMID: 33330341] [PMCID: PMC7714903] [DOI: 10.3389/fpubh.2020.599550]
Abstract
In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with the coronavirus disease 2019 (COVID-19) virus. The hybrid deep learning model, named the DeepSense method, combines a convolutional neural network (CNN) with a recurrent neural network (RNN). It is designed as a series of layers that extract and classify the features of COVID-19 infection in the lungs. A computed tomography image serves as the input, and the classifier eases the classification process by learning the multidimensional input data through expert hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier offers improved accuracy over conventional deep learning and machine learning classifiers. The proposed method is validated on three different datasets with training splits of 70%, 80%, and 90%, which specifically characterises the quality of the diagnostic method adopted for predicting COVID-19 infection in a patient.
Affiliation(s)
- Adil Khadidos
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Alaa O Khadidos
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Srihari Kannan
- Department of Computer Science and Engineering, SNS College of Engineering, Coimbatore, India
- Yuvaraj Natarajan
- Research and Development, Information Communication Technology Academy, Chennai, India
- Sachi Nandan Mohanty
- Department of Computer Science and Engineering, Institute of Chartered Financial Analysts of India Foundation of Higher Education, Hyderabad, India
57
Shankar K, Perumal E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. Complex Intell Syst 2020;7:1277-1293. [PMID: 34777955] [PMCID: PMC7659408] [DOI: 10.1007/s40747-020-00216-6]
Abstract
The COVID-19 pandemic is growing at an exponential rate, while access to rapid test kits remains restricted, so the design and implementation of COVID-19 testing kits remain an open research problem. Several findings obtained using radio-imaging approaches suggest that the images contain important information related to the coronavirus. The application of recently developed artificial intelligence (AI) techniques, integrated with radiological imaging, is helpful for the precise diagnosis and classification of the disease. In this view, the current paper presents a novel fusion model of hand-crafted and deep learning features, called the FM-HCF-DLF model, for the diagnosis and classification of COVID-19. The proposed FM-HCF-DLF model comprises three major processes: Gaussian-filtering-based preprocessing, fusion-model feature extraction, and classification. The fusion model combines handcrafted features obtained with local binary patterns (LBP) and deep learning (DL) features extracted by a convolutional neural network (CNN) based on the Inception v3 architecture. To further improve the Inception v3 model, a learning-rate scheduler with the Adam optimizer is applied. Finally, a multilayer perceptron (MLP) carries out the classification. The proposed FM-HCF-DLF model was experimentally validated on a chest X-ray dataset. The experimental outcomes show that the proposed model yielded superior performance, with a maximum sensitivity of 93.61%, specificity of 94.56%, precision of 94.85%, accuracy of 94.08%, F score of 93.2% and kappa value of 93.5%.
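Local binary patterns supply the handcrafted half of the fusion described above. A minimal radius-1, 8-neighbour LBP over a plain 2-D grid (the basic variant; the paper does not specify its exact configuration, so treat this as a sketch):

```python
def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern codes for the interior
    pixels of a 2-D grayscale grid (list of lists)."""
    # neighbours enumerated clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = len(img), len(img[0])
    out = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offs):
                if img[r + dr][c + dc] >= center:  # threshold at the center
                    code |= 1 << bit
            row.append(code)
        out.append(row)
    return out

def lbp_histogram(img, bins=256):
    """The texture descriptor: a histogram of LBP codes."""
    hist = [0] * bins
    for row in lbp_codes(img):
        for code in row:
            hist[code] += 1
    return hist
```

A flat region yields code 255 everywhere (all neighbours tie with the center), while an isolated bright pixel yields 0; the histogram of such codes is what would be fused with the CNN features.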
Affiliation(s)
- K Shankar
- Department of Computer Applications, Alagappa University, Karaikudi, India
- Eswaran Perumal
- Department of Computer Applications, Alagappa University, Karaikudi, India
58
Pérez E, Reyes O, Ventura S. Convolutional neural networks for the automatic diagnosis of melanoma: An extensive experimental study. Med Image Anal 2020;67:101858. [PMID: 33129155] [DOI: 10.1016/j.media.2020.101858]
Abstract
Melanoma is the type of skin cancer with the highest mortality, and it is all the more dangerous because it can spread to other parts of the body if not caught and treated early. Melanoma diagnosis is a complex task, even for expert dermatologists, mainly due to the great variety of morphologies in patients' moles. Accordingly, the automatic diagnosis of melanoma poses the challenge of developing efficient computational methods that ease the diagnosis and, therefore, aid dermatologists in decision-making. In this work, an extensive analysis was conducted to assess and illustrate the effectiveness of convolutional neural networks in coping with this complex task. To achieve this objective, twelve well-known convolutional network models were evaluated on eleven public image datasets. The experimental study comprised five phases: first, the sensitivity of the models to the optimization algorithm used for training was analyzed, and then the impact on performance of techniques such as cost-sensitive learning, data augmentation and transfer learning was assessed. The study confirmed the usefulness, effectiveness and robustness of different convolutional architectures in solving the melanoma diagnosis problem. Important guidelines were also provided for researchers working in this area, easing the selection of both the proper convolutional model and technique according to the characteristics of the data.
Affiliation(s)
- Eduardo Pérez
- Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain; Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain
- Oscar Reyes
- Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain; Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain
- Sebastián Ventura
- Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain; Department of Information Systems, King Abdulaziz University, Kingdom of Saudi Arabia; Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain
59
Mahbod A, Schaefer G, Wang C, Dorffner G, Ecker R, Ellinger I. Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Comput Methods Programs Biomed 2020;193:105475. [PMID: 32268255] [DOI: 10.1016/j.cmpb.2020.105475]
Abstract
BACKGROUND AND OBJECTIVE Skin cancer is among the most common cancer types in the white population and consequently computer aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach for this uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This however may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. METHODS We investigate the effect of image size for skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50 is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. RESULTS Our results show that image cropping is a better strategy compared to image resizing delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2% making it the currently second ranked algorithm on the live leaderboard. CONCLUSIONS We confirm that the image size has an effect on skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance compared to image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
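At its core, the fusion step of such an ensemble is an average of class-probability vectors over networks and crop scales, evaluated with balanced multi-class accuracy. A simplified sketch (the paper's three-level strategy is collapsed into a single mean here, which is an assumption):

```python
def fuse_predictions(prob_vectors):
    """Average class-probability vectors coming from several networks
    and several crop scales into one fused prediction."""
    n, k = len(prob_vectors), len(prob_vectors[0])
    return [sum(p[i] for p in prob_vectors) / n for i in range(k)]

def argmax(v):
    """Index of the winning class in a probability vector."""
    return max(range(len(v)), key=v.__getitem__)

def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall, the metric the challenge ranks by."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)
```

Balanced accuracy weights every class equally, so a model cannot score well by favoring the majority lesion type; that is why it, rather than plain accuracy, ranks the leaderboard.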
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria; Research and Development Department of TissueGnostics GmbH, Vienna, Austria
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough, United Kingdom
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
- Georg Dorffner
- Section for Artificial Intelligence and Decision Support, Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Rupert Ecker
- Research and Development Department of TissueGnostics GmbH, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
60
Hussain AA, Bouachir O, Al-Turjman F, Aloqaily M. AI Techniques for COVID-19. IEEE Access 2020;8:128776-128795. [PMID: 34976554] [PMCID: PMC8545328] [DOI: 10.1109/access.2020.3007939]
Abstract
Artificial Intelligence (AI) aims to extend human capabilities. It is gaining ground in healthcare services, fueled by the growing availability of clinical data and the rapid progress of intelligent techniques. Motivated by the need to employ AI in battling the COVID-19 crisis, this survey summarizes the current state of AI applications in clinical services during the fight against COVID-19. Furthermore, we highlight the application of Big Data in understanding this virus. We also review various intelligence techniques and methods that can be applied to different types of medical information during a pandemic. We classify the existing AI techniques for clinical data analysis, including neural networks, classical SVMs, and deep learning at the edge. An emphasis is also placed on work that utilizes AI-oriented cloud computing in combating viruses similar to COVID-19. This survey is an attempt to help medical practitioners and researchers overcome the difficulties they face in handling COVID-19 big data. The investigated techniques advance medical data analysis with an accuracy of up to 90%. We end with a detailed discussion of how AI implementation can be a significant advantage in combating similar viruses.
Affiliation(s)
- Adedoyin Ahmed Hussain
- Department of Computer Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Ouns Bouachir
- Department of Computer Engineering, Zayed University, Dubai, United Arab Emirates
- College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
- Fadi Al-Turjman
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Moayad Aloqaily
- College of Engineering, Al Ain University, Al Ain, United Arab Emirates
61
Xie Y, Zhang J, Xia Y, Shen C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Trans Med Imaging 2020;39:2482-2493. [PMID: 32070946] [DOI: 10.1109/tmi.2020.2972964]
Abstract
Automated skin lesion segmentation and classification are two of the most essential and closely related tasks in the computer-aided diagnosis of skin cancer. Despite their prevalence, deep learning models are usually designed for only one task, ignoring the potential benefits of performing both jointly. In this paper, we propose the mutual bootstrapping deep convolutional neural networks (MB-DCNN) model for simultaneous skin lesion segmentation and classification. This model consists of a coarse segmentation network (coarse-SN), a mask-guided classification network (mask-CN), and an enhanced segmentation network (enhanced-SN). On one hand, the coarse-SN generates coarse lesion masks that provide a prior bootstrapping for mask-CN to help it locate and classify skin lesions accurately. On the other hand, the lesion localization maps produced by mask-CN are then fed into enhanced-SN, aiming to transfer the localization information learned by mask-CN to enhanced-SN for accurate lesion segmentation. In this way, both segmentation and classification networks mutually transfer knowledge between each other and facilitate each other in a bootstrapping way. Meanwhile, we also design a novel rank loss and jointly use it with the Dice loss in the segmentation networks to address the issues caused by class imbalance and hard-easy pixel imbalance. We evaluate the proposed MB-DCNN model on the ISIC-2017 and PH2 datasets, and achieve a Jaccard index of 80.4% and 89.4% in skin lesion segmentation and an average AUC of 93.8% and 97.7% in skin lesion classification, which are superior to the performance of representative state-of-the-art skin lesion segmentation and classification methods. Our results suggest that it is possible to boost the performance of skin lesion segmentation and classification simultaneously via training a unified model to perform both tasks in a mutually bootstrapping way.
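The segmentation networks combine a Dice loss with a rank loss. The Dice term below is standard; the rank term is only one plausible reading (a hinge between the hardest foreground and background pixels; the paper's exact formulation may differ):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened predicted probabilities and a
    binary ground-truth mask; robust to foreground/background imbalance
    because it is ratio-based rather than pixel-count-based."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def rank_loss(fg_probs, bg_probs, k=2, margin=0.3):
    """Hinge penalty between the k hardest foreground pixels (lowest
    predicted probability) and the k hardest background pixels (highest
    predicted probability), pushing them apart by `margin`."""
    hard_fg = sorted(fg_probs)[:k]
    hard_bg = sorted(bg_probs, reverse=True)[:k]
    loss = 0.0
    for p in hard_fg:
        for q in hard_bg:
            loss += max(0.0, margin - (p - q))
    return loss / (len(hard_fg) * len(hard_bg))
```

A perfect prediction drives the Dice term to 0, while the rank term only activates when a hard background pixel scores within `margin` of a hard foreground pixel, which is exactly the hard-easy imbalance the abstract mentions.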
62
Al-Masni MA, Kim DH, Kim TS. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput Methods Programs Biomed 2020;190:105351. [PMID: 32028084] [DOI: 10.1016/j.cmpb.2020.105351]
Abstract
BACKGROUND AND OBJECTIVE Computer automated diagnosis of various skin lesions through medical dermoscopy images remains a challenging task. METHODS In this work, we propose an integrated diagnostic framework that combines a skin lesion boundary segmentation stage and a multiple skin lesions classification stage. Firstly, we segment the skin lesion boundaries from the entire dermoscopy images using deep learning full resolution convolutional network (FrCN). Then, a convolutional neural network classifier (i.e., Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201) is applied on the segmented skin lesions for classification. The former stage is a critical prerequisite step for skin lesion diagnosis since it extracts prominent features of various types of skin lesions. A promising classifier is selected by testing well-established classification convolutional neural networks. The proposed integrated deep learning model has been evaluated using three independent datasets (i.e., International Skin Imaging Collaboration (ISIC) 2016, 2017, and 2018, which contain two, three, and seven types of skin lesions, respectively) with proper balancing, segmentation, and augmentation. RESULTS In the integrated diagnostic system, segmented lesions improve the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in terms of the F1-score for benign and malignant cases of the ISIC 2016 test dataset, respectively. The classifiers of Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 exhibit their capability with overall weighted prediction accuracies of 77.04%, 79.95%, 81.79%, and 81.27% for two classes of ISIC 2016, 81.29%, 81.57%, 81.34%, and 73.44% for three classes of ISIC 2017, and 88.05%, 89.28%, 87.74%, and 88.70% for seven classes of ISIC 2018, respectively, demonstrating the superior performance of ResNet-50. CONCLUSIONS The proposed integrated diagnostic networks could be used to support and aid dermatologists for further improvement in skin cancer diagnosis.
Affiliation(s)
- Mohammed A Al-Masni
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
63
Evaluation of Robust Spatial Pyramid Pooling Based on Convolutional Neural Network for Traffic Sign Recognition System. Electronics 2020. [DOI: 10.3390/electronics9060889]
Abstract
Traffic sign recognition (TSR) is a noteworthy issue for real-world applications such as systems for autonomous driving as it has the main role in guiding the driver. This paper focuses on Taiwan’s prohibitory sign due to the lack of a database or research system for Taiwan’s traffic sign recognition. This paper investigates the state-of-the-art of various object detection systems (Yolo V3, Resnet 50, Densenet, and Tiny Yolo V3) combined with spatial pyramid pooling (SPP). We adopt the concept of SPP to improve the backbone network of Yolo V3, Resnet 50, Densenet, and Tiny Yolo V3 for building feature extraction. Furthermore, we use a spatial pyramid pooling to study multi-scale object features thoroughly. The observation and evaluation of certain models include vital metrics measurements, such as the mean average precision (mAP), workspace size, detection time, intersection over union (IoU), and the number of billion floating-point operations (BFLOPS). Our findings show that Yolo V3 SPP strikes the best total BFLOPS (65.69), and mAP (98.88%). Besides, the highest average accuracy is Yolo V3 SPP at 99%, followed by Densenet SPP at 87%, Resnet 50 SPP at 70%, and Tiny Yolo V3 SPP at 50%. Hence, SPP can improve the performance of all models in the experiment.
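Spatial pyramid pooling, the component added to each backbone above, pools a feature map at several grid resolutions and concatenates the results, yielding a fixed-length vector regardless of input size. A minimal sketch over plain nested lists:

```python
def spp(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D map into n-by-n bins for each pyramid level and
    concatenate. The output length depends only on `levels`, not on the
    input size (the input must be at least as large as the biggest level)."""
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                rows = range(i * h // n, (i + 1) * h // n)
                cols = range(j * w // n, (j + 1) * w // n)
                pooled.append(max(feature_map[r][c]
                                  for r in rows for c in cols))
    return pooled
```

With levels (1, 2, 4) the output always has 1 + 4 + 16 = 21 entries, whether the map is 4 × 4 or 8 × 8, which is what lets SPP-equipped backbones accept multi-scale inputs.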
64
Pathak Y, Shukla PK, Tiwari A, Stalin S, Singh S, Shukla PK. Deep Transfer Learning Based Classification Model for COVID-19 Disease. Ing Rech Biomed 2020;43:87-92. [PMID: 32837678] [PMCID: PMC7238986] [DOI: 10.1016/j.irbm.2020.05.003]
Abstract
The COVID-19 infection is increasing at a rapid rate, with only a limited number of testing kits available. Therefore, the development of COVID-19 testing kits is still an open area of research. Recently, many studies have shown that chest Computed Tomography (CT) images can be used for COVID-19 testing, as chest CT images show a bilateral change in COVID-19 infected patients. However, classifying COVID-19 patients from chest CT images is not an easy task, as predicting the bilateral change is an ill-posed problem. Therefore, in this paper, a deep transfer learning technique is used to classify COVID-19 infected patients. Additionally, a top-2 smooth loss function with cost-sensitive attributes is utilized to handle problems of noisy and imbalanced COVID-19 datasets. Experimental results reveal that the proposed deep transfer learning-based COVID-19 classification model provides more efficient results than other supervised learning models.
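The top-2 smooth loss with cost-sensitive attributes is not fully specified in the abstract. One plausible reading (assumed here, not the authors' exact definition) keeps most of the target mass on the true class, smooths the remainder onto the strongest competing class, and scales by a per-class cost weight:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def top2_smooth_loss(logits, true_idx, class_weights, smooth=0.1):
    """Cross-entropy against a smoothed target: (1 - smooth) on the true
    class, `smooth` on the strongest competitor, scaled by a per-class
    cost weight (a sketch of a top-2 smooth, cost-sensitive loss)."""
    probs = softmax(logits)
    runner_up = max((i for i in range(len(logits)) if i != true_idx),
                    key=lambda i: logits[i])
    target = [0.0] * len(logits)
    target[true_idx] = 1.0 - smooth
    target[runner_up] = smooth
    ce = -sum(t * math.log(max(p, 1e-12)) for t, p in zip(target, probs))
    return class_weights[true_idx] * ce
```

The cost weights let rare (or costly-to-miss) classes contribute more to the gradient, which is how a loss of this shape copes with an imbalanced COVID/non-COVID dataset.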
Affiliation(s)
- Y Pathak
- Department of Information Technology, Indian Institute of Information Technology (IIIT-Bhopal), Bhopal (MP), 462003, India
- P K Shukla
- Department of Computer Science & Engineering, School of Engineering & Technology, Jagran Lake City University (JLU), Bhopal-462044 (MP), India
- A Tiwari
- Department of CSE & IT, Madhav Institute of Technology and Science, Gwalior-474005 (MP), India
- S Stalin
- Department of CSE, Maulana Azad National Institute of Technology (MANIT), Bhopal, MP, 462003, India
- S Singh
- Department of Computer Science & Engineering, Jabalpur Engineering College, Jabalpur-482001 (MP), India
- P K Shukla
- Department of Computer Science & Engineering, University Institute of Technology, RGPV, Bhopal (MP), 462033, India
65
Melanoma and Nevus Skin Lesion Classification Using Handcraft and Deep Learning Feature Fusion via Mutual Information Measures. Entropy 2020;22:e22040484. [PMID: 33286257] [PMCID: PMC7516968] [DOI: 10.3390/e22040484]
Abstract
In this paper, a new Computer-Aided Detection (CAD) system for the detection and classification of dangerous skin lesions (melanoma type) is presented, through a fusion of handcraft features related to the medical algorithm ABCD rule (Asymmetry, Borders, Colors, Dermatoscopic structures) and deep learning features, employing Mutual Information (MI) measurements. The steps of a CAD system can be summarized as preprocessing, feature extraction, feature fusion, and classification. During the preprocessing step, a lesion image is enhanced, filtered, and segmented, with the aim of obtaining the Region of Interest (ROI); in the next step, feature extraction is performed. Handcraft features such as shape, color, and texture represent the ABCD rule, while deep learning features are extracted using a Convolutional Neural Network (CNN) architecture pre-trained on ImageNet (an ILSVRC ImageNet task). MI measurement is used as the fusion rule, gathering the most important information from both types of features. Finally, at the classification step, several methods are employed, such as Linear Regression (LR), Support Vector Machines (SVMs), and Relevant Vector Machines (RVMs). The designed framework was tested using the ISIC 2018 public dataset. The proposed framework appears to demonstrate improved performance in comparison with other state-of-the-art methods in terms of the accuracy, specificity, and sensitivity obtained in the training and test stages. Additionally, we propose and justify a novel procedure for adjusting evaluation metrics on the imbalanced datasets that are common for different kinds of skin lesions.
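Mutual information as a fusion rule can be sketched by scoring every feature column, handcrafted or deep, against the labels and keeping the most informative ones. A minimal version for discrete features (the paper's exact estimator is an assumption here):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI between two discrete sequences, in nats:
    sum over (x, y) of p(x, y) * log(p(x, y) / (p(x) * p(y)))."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

def select_by_mi(feature_columns, labels, k):
    """Rank feature columns (name -> discrete values) by MI with the
    labels and keep the top k, scoring handcrafted and deep columns
    on the same footing."""
    ranked = sorted(feature_columns,
                    key=lambda name: mutual_information(
                        feature_columns[name], labels),
                    reverse=True)
    return ranked[:k]
```

A column that perfectly predicts a balanced binary label scores ln 2 ≈ 0.693 nats, while a constant column scores 0, so the selection naturally discards uninformative features from either source.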
66