51
Oukil S, Kasmi R, Mokrani K, García-Zapirain B. Automatic segmentation and melanoma detection based on color and texture features in dermoscopic images. Skin Res Technol 2021; 28:203-211. [PMID: 34779062 PMCID: PMC9907597 DOI: 10.1111/srt.13111]
Abstract
PURPOSE Melanoma is the most aggressive form of skin cancer and one of the fastest growing malignant tumors worldwide. Several computer-aided diagnosis systems for melanoma have been proposed, yet these algorithms still struggle with early-stage lesions. This paper aims to discriminate between melanoma and benign skin lesions in dermoscopic images. METHODS The proposed algorithm is based on the color and texture of skin lesions and introduces a novel feature extraction technique. It uses an automatic k-means-based segmentation that generates a fairly accurate mask for each lesion. The feature extraction combines existing and novel color and texture attributes that measure how color and texture vary inside the lesion. To find the optimal results, all attributes are extracted from lesions in five different color spaces (RGB, HSV, Lab, XYZ, and YCbCr) and used as inputs to three classifiers (k-nearest neighbors, support vector machine, and artificial neural network). RESULTS The PH2 dataset is used to assess the performance of the proposed algorithm. Its results are compared to those of published articles that used the same dataset, showing that the proposed method outperforms the state of the art with a sensitivity of 99.25%, specificity of 99.58%, and accuracy of 99.51%. CONCLUSION The final results show that color combined with texture yields powerful and relevant attributes for melanoma detection and improves on the state of the art.
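The color-clustering segmentation step described above can be sketched with a plain NumPy k-means over pixel colours. This is an illustrative toy only, not the authors' implementation: the deterministic brightness-based initialisation and the darkest-cluster-is-lesion heuristic are assumptions made here for the sketch.

```python
import numpy as np

def kmeans_segment(image, k=2, iters=10):
    """Cluster pixel colours with k-means and return a boolean lesion mask."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    # deterministic init: spread centres across the brightness range
    order = pixels.sum(axis=1).argsort()
    centers = pixels[order[np.linspace(0, len(pixels) - 1, k).astype(int)]]
    for _ in range(iters):
        # assign every pixel to its nearest centre, then re-estimate centres
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    # heuristic: the darkest cluster is the lesion, the rest is skin
    lesion = centers.sum(axis=1).argmin()
    return (labels == lesion).reshape(h, w)

# toy image: a dark 10x10 "lesion" on a light background
img = np.full((20, 20, 3), 200, dtype=np.uint8)
img[5:15, 5:15] = 40
mask = kmeans_segment(img, k=2)
```

On this toy image the mask recovers the dark square exactly; the paper's contribution lies in the colour/texture features computed inside such a mask, which are not reproduced here.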
Affiliation(s)
- S Oukil
- LTII Laboratory University of Bejaia-Algeria, Faculty of Technology, University of Bejaia, Bejaia, Algeria
- R Kasmi
- LTII Laboratory University of Bejaia-Algeria, Faculty of Technology, University of Bejaia, Bejaia, Algeria; Electrical Engineering Department, University of Bouira, Bouira, Algeria
- K Mokrani
- LTII Laboratory University of Bejaia-Algeria, Faculty of Technology, University of Bejaia, Bejaia, Algeria
52
Pereira PMM, Thomaz LA, Tavora LMN, Assuncao PAA, Fonseca-Pinto RM, Paiva RP, Faria SMMD. Melanoma classification using light-fields with Morlet scattering transform and CNN: Surface depth as a valuable tool to increase detection rate. Med Image Anal 2021; 75:102254. [PMID: 34649195 DOI: 10.1016/j.media.2021.102254]
Abstract
Medical image classification through learning-based approaches is increasingly used, namely in the discrimination of melanoma. However, for skin lesion classification in general, such methods commonly rely on dermoscopic or other 2D macro RGB images. This work proposes to exploit more than conventional 2D image characteristics by considering a third dimension (depth) that characterises the skin surface rugosity, which can be obtained from light-field images such as those available in the SKINL2 dataset. To achieve this goal, a processing pipeline was deployed using a Morlet scattering transform and a CNN model, allowing a comparison between using only 2D information, only 3D information, or both. Results show that discrimination between melanoma and nevus reaches an accuracy of 84.00%, 74.00%, or 94.00% when using only 2D, only 3D, or both, respectively. An increase of 14.29 pp in sensitivity and 8.33 pp in specificity is achieved when expanding beyond conventional 2D information by also using depth. When discriminating between melanoma and all other types of lesions (a further imbalanced setting), an increase of 28.57 pp in sensitivity and a decrease of 1.19 pp in specificity is achieved under the same test conditions. Overall, the results of this work demonstrate significant improvements over conventional approaches.
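The idea of feeding depth alongside colour can be illustrated by stacking a normalised depth map as a fourth input channel. The helper below is a hypothetical sketch of the fused input only; the Morlet scattering transform and CNN stages of the actual pipeline are omitted.

```python
import numpy as np

def fuse_rgb_depth(rgb, depth):
    """Stack a [0, 1]-normalised surface-depth map as a fourth channel."""
    span = depth.max() - depth.min()
    depth01 = (depth - depth.min()) / (span if span else 1.0)
    # H x W x 4 tensor: three colour channels plus one rugosity channel
    return np.dstack([rgb.astype(float) / 255.0, depth01])

rgb = np.zeros((8, 8, 3), dtype=np.uint8)
depth = np.arange(64, dtype=float).reshape(8, 8)
x = fuse_rgb_depth(rgb, depth)
```

A downstream classifier then sees colour and rugosity jointly, which is the 2D+3D setting the abstract reports as the strongest configuration.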
Affiliation(s)
- Pedro M M Pereira
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, Pinhal de Marrocos, Coimbra 3030-290, Portugal.
| | - Lucas A Thomaz
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Luis M N Tavora
- ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Pedro A A Assuncao
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Rui M Fonseca-Pinto
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| | - Rui Pedro Paiva
- University of Coimbra, Centre for Informatics and Systems of the University of Coimbra, Department of Informatics Engineering, Pinhal de Marrocos, Coimbra 3030-290, Portugal
| | - Sergio M M de Faria
- Instituto de Telecomunicações, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal; ESTG, Polytechnic of Leiria, Morro do Lena - Alto do Vieiro, Leiria 2411-901, Portugal
| |
Collapse
|
53
Wang Y, Cai J, Louie DC, Wang ZJ, Lee TK. Incorporating clinical knowledge with constrained classifier chain into a multimodal deep network for melanoma detection. Comput Biol Med 2021; 137:104812. [PMID: 34507158 DOI: 10.1016/j.compbiomed.2021.104812]
Abstract
In recent years, vast developments in computer-aided diagnosis (CAD) for skin diseases have generated much interest from clinicians and other eventual end-users of this technology. Introducing clinical domain knowledge into these machine learning strategies can help dispel the black-box nature of these tools, strengthening clinician trust. Clinical domain knowledge also provides new information channels that can improve CAD diagnostic performance. In this paper, we propose a novel framework for malignant melanoma (MM) detection that fuses clinical and dermoscopic images. The proposed method combines a multi-labeled deep feature extractor and a clinically constrained classifier chain (CC). This allows the 7-point checklist, a clinician diagnostic algorithm, to be included at the decision level while maintaining the clinical importance of the checklist's major and minor criteria. Our framework achieved an average accuracy of 81.3% for detecting all criteria and melanoma when tested on a publicly available 7-point checklist dataset. These are the highest reported results, outperforming state-of-the-art methods in the literature by 6.4% or more. Analyses also show that the proposed system surpasses single-modality systems that use either clinical or dermoscopic images alone, as well as systems that do not adopt the multi-label, clinically constrained classifier chain. Our carefully designed system demonstrates a substantial improvement in melanoma detection. By keeping the familiar major and minor criteria of the 7-point checklist and their corresponding weights, the proposed system may be more readily accepted by physicians as a human-interpretable CAD tool for automated melanoma detection.
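The classifier-chain mechanism the framework builds on, where each label's prediction is fed forward as an extra feature for later labels, can be sketched as follows. The `ChainStep` class and its weights are purely illustrative stand-ins, not the paper's learned multimodal network.

```python
import numpy as np

class ChainStep:
    """Toy per-label classifier: a thresholded linear score."""
    def __init__(self, w, b=0.0):
        self.w, self.b = np.asarray(w, dtype=float), b
    def predict(self, feats):
        return int(feats @ self.w + self.b > 0)

def classifier_chain(x, steps):
    """Predict labels in a fixed order; each prediction is appended to
    the features seen by later steps, so earlier (e.g. major-criterion)
    outputs constrain later ones."""
    feats = np.asarray(x, dtype=float)
    labels = []
    for step in steps:
        y = step.predict(feats)
        labels.append(y)
        feats = np.append(feats, y)  # feed the decision forward
    return labels

# two-step chain: the second label fires only when the first one fired
steps = [ChainStep([1.0]), ChainStep([0.0, 2.0], b=-1.0)]
```

The "constrained" variant in the paper fixes the chain order to match the clinical checklist; here the order is simply the order of `steps`.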
Affiliation(s)
- Yuheng Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
- Jiayue Cai
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Daniel C Louie
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
- Z Jane Wang
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Tim K Lee
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada; Department of Dermatology and Skin Science, University of British Columbia, Vancouver, BC, Canada; Photomedicine Institute, Vancouver Coast Health Research Institute, Vancouver, BC, Canada; Departments of Cancer Control Research and Integrative Oncology, BC Cancer, Vancouver, BC, Canada
54
Kassem MA, Hosny KM, Damaševičius R, Eltoukhy MM. Machine Learning and Deep Learning Methods for Skin Lesion Classification and Diagnosis: A Systematic Review. Diagnostics (Basel) 2021; 11:1390. [PMID: 34441324 PMCID: PMC8391467 DOI: 10.3390/diagnostics11081390]
Abstract
Computer-aided systems for skin lesion diagnosis are a growing area of research, and researchers have shown increasing interest in developing computer-aided diagnosis systems. This paper aims to review, synthesize, and evaluate the quality of evidence for the diagnostic accuracy of such systems. It covers papers published in the last five years in the ScienceDirect, IEEE, and SpringerLink databases: 53 articles using traditional machine learning methods and 49 articles using deep learning methods. The studies are compared based on their contributions, the methods used, and the results achieved. The work identifies the main challenges in evaluating skin lesion segmentation and classification methods, such as small datasets, ad hoc image selection, and racial bias.
Affiliation(s)
- Mohamed A. Kassem
- Department of Robotics and Intelligent Machines, Faculty of Artificial Intelligence, Kaferelshiekh University, Kaferelshiekh 33511, Egypt
- Khalid M. Hosny
- Department of Information Technology, Faculty of Computers and Informatics, Zagazig University, Zagazig 44519, Egypt
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
- Mohamed Meselhy Eltoukhy
- Computer Science Department, Faculty of Computers and Informatics, Suez Canal University, Ismailia 41522, Egypt
55
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021; 11:1373. [PMID: 34441307 PMCID: PMC8393354 DOI: 10.3390/diagnostics11081373]
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes competes with the time and attention a doctor can give each patient, which has encouraged the development of deep learning (DL) models as constructive and effective support. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the development, diversification, and quality of scientific data, the development of knowledge-construction methods, and the improvement of DL models used in medical applications. Existing research papers focus on describing, highlighting, or classifying a single constituent element of the DL models used in the interpretation of medical images, and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unitary treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures, and purpose-built combinations of DL architectures, highlighting their "key" features for completing tasks in current medical image interpretation applications. Using the "key" characteristics specific to each constituent of DL models, and correctly determining their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
56
Pedrosa M, Zuquete A, Costa C. A Pseudonymisation Protocol With Implicit and Explicit Consent Routes for Health Records in Federated Ledgers. IEEE J Biomed Health Inform 2021; 25:2172-2183. [PMID: 33006933 DOI: 10.1109/jbhi.2020.3028454]
Abstract
Healthcare data for primary use (diagnosis) may be encrypted for confidentiality purposes; however, secondary uses such as feeding machine learning algorithms require open access. Full anonymity leaves no traceable identifiers through which diagnostic results can be reported back. Moreover, implicit and explicit consent routes are of practical importance under recent data protection regulations (GDPR), translating directly into break-the-glass requirements. Pseudonymisation is an acceptable compromise when dealing with such orthogonal requirements and an advisable measure for protecting data. Our work presents a pseudonymisation protocol that is compliant with both implicit and explicit consent routes. The protocol is constructed on a (t,n)-threshold secret sharing scheme and public key cryptography. The pseudonym is safely derived from a fragment of public information without requiring any data-subject secret. The method is proven secure under reasonable cryptographic assumptions, and the experimental results show that it scales.
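The (t,n)-threshold primitive such protocols build on is classically Shamir's secret sharing, sketched below over a prime field with standard-library Python. This illustrates only the share/recover primitive, under a fixed illustrative prime, not the paper's full pseudonymisation protocol.

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(P)

def split_secret(secret, t, n, rng=None):
    """Split `secret` into n shares; any t of them reconstruct it."""
    rng = rng or random.Random(42)  # fixed seed only for reproducibility
    # random degree-(t-1) polynomial with f(0) = secret
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation of the polynomial at x = 0."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # modular inverse via Fermat's little theorem (P is prime)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

Any t = 3 of the n = 5 shares recover the secret exactly; fewer than t reveal nothing about it, which is what makes the scheme usable for splitting pseudonymisation keys across federated parties.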
57
58
Xie X, Niu J, Liu X, Chen Z, Tang S, Yu S. A survey on incorporating domain knowledge into deep learning for medical image analysis. Med Image Anal 2021; 69:101985. [PMID: 33588117 DOI: 10.1016/j.media.2021.101985]
Abstract
Although deep learning models such as CNNs have achieved great success in medical image analysis, the small size of medical datasets remains a major bottleneck in this area. To address this problem, researchers have started looking for external information beyond the currently available medical datasets. Traditional approaches generally leverage information from natural images via transfer learning. More recent works utilize domain knowledge from medical doctors to create networks that resemble how doctors are trained, mimic their diagnostic patterns, or focus on the features or areas they pay particular attention to. In this survey, we summarize the current progress on integrating medical domain knowledge into deep learning models for various tasks, such as disease diagnosis; lesion, organ, and abnormality detection; and lesion and organ segmentation. For each task, we systematically categorize the kinds of medical domain knowledge that have been utilized and the corresponding integration methods. We also present current challenges and directions for future research.
Affiliation(s)
- Xiaozheng Xie
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Jianwei Niu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing (BDBC) and Hangzhou Innovation Institute of Beihang University, 18 Chuanghui Street, Binjiang District, Hangzhou 310000, China
- Xuefeng Liu
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Zhengsu Chen
- State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing 100191, China
- Shaojie Tang
- Jindal School of Management, The University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080-3021, USA
- Shui Yu
- School of Computer Science, University of Technology Sydney, 15 Broadway, Ultimo NSW 2007, Australia
59
Diagnosing of Diabetic Retinopathy with Image Dehazing and Capsule Network. In: Deep Learning for Medical Decision Support Systems. 2021. [PMCID: PMC7298988 DOI: 10.1007/978-981-15-6325-6_9]
Abstract
As discussed earlier in Chap. 4 (DOI: 10.1007/978-981-15-6325-6_4), diabetic retinopathy (DR) can lead to devastating outcomes such as blindness, and it has become a notable medical problem examined in recent research. Retinal pathologies in particular underlie millions of blindness cases seen worldwide [1]. When all cases of blindness are examined in detail, around 2 million have been attributed to diabetic retinopathy, so early diagnosis has become the highest priority for eliminating, or at least slowing down, the disease factors that cause blindness, and thereby reducing blindness rates [2, 3].
60
Ramzan M, Raza M, Sharif M, Attique Khan M, Nam Y. Gastrointestinal Tract Infections Classification Using Deep Learning. Computers, Materials & Continua 2021; 69:3239-3257. [DOI: 10.32604/cmc.2021.015920]
61
Khadidos A, Khadidos AO, Kannan S, Natarajan Y, Mohanty SN, Tsaramirsis G. Analysis of COVID-19 Infections on a CT Image Using DeepSense Model. Front Public Health 2020; 8:599550. [PMID: 33330341 PMCID: PMC7714903 DOI: 10.3389/fpubh.2020.599550]
Abstract
In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with the coronavirus disease 2019 (COVID-19) virus. The hybrid deep learning model, named the DeepSense method, combines a convolutional neural network (CNN) and a recurrent neural network (RNN). It is designed as a series of layers that extract and classify features related to COVID-19 infection in the lungs. A computed tomography (CT) image is used as the input, and the classifier eases the classification process by learning the multidimensional input data through expert hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier achieves higher accuracy than conventional deep learning and machine learning classifiers. The proposed method is validated against three different datasets with training splits of 70%, 80%, and 90%, and it specifically characterises the quality of the diagnostic method adopted for predicting COVID-19 infection in a patient.
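The CNN-then-RNN layering can be shown in miniature: a convolution summarises each slice, a recurrent cell carries state across slices, and a final weight scores the sequence. All weights, shapes, and the 1-D simplification below are illustrative assumptions for the sketch, not the DeepSense architecture itself.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """Minimal 1-D 'valid' correlation used as the convolutional stage."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def cnn_rnn_score(slices, conv_k, w_h, w_x, w_out):
    """Toy CNN->RNN pipeline: each slice is reduced to one feature by a
    convolution + tanh + mean (CNN stage); a single recurrent cell then
    folds the per-slice features into a running state (RNN stage)."""
    h = 0.0
    for s in slices:
        feat = np.tanh(conv1d_valid(s, conv_k)).mean()  # CNN stage
        h = np.tanh(w_h * h + w_x * feat)               # RNN stage
    return w_out * h
```

The real model replaces the scalar weights with learned layers and operates on 2-D CT slices, but the data flow, convolutional summarisation followed by recurrence over the scan, is the same shape.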
Affiliation(s)
- Adil Khadidos
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Alaa O Khadidos
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Srihari Kannan
- Department of Computer Science and Engineering, SNS College of Engineering, Coimbatore, India
- Yuvaraj Natarajan
- Research and Development, Information Communication Technology Academy, Chennai, India
- Sachi Nandan Mohanty
- Department of Computer Science and Engineering, Institute of Chartered Financial Analysts of India Foundation of Higher Education, Hyderabad, India
62
Shankar K, Perumal E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. Complex Intell Syst 2020; 7:1277-1293. [PMID: 34777955 PMCID: PMC7659408 DOI: 10.1007/s40747-020-00216-6]
Abstract
The COVID-19 pandemic is growing at an exponential rate, while access to rapid test kits remains restricted, so the design and implementation of COVID-19 testing kits remains an open research problem. Several findings obtained using radio-imaging approaches suggest that the images contain important information related to coronaviruses. The application of recently developed artificial intelligence (AI) techniques, integrated with radiological imaging, is helpful for the precise diagnosis and classification of the disease. In this view, the current research paper presents a novel fusion model of hand-crafted and deep learning features, called the FM-HCF-DLF model, for COVID-19 diagnosis and classification. The proposed model comprises three major processes: Gaussian filtering-based preprocessing, feature extraction by the fusion model (FM), and classification. The FM fuses hand-crafted features based on local binary patterns (LBP) with deep learning (DL) features from a convolutional neural network (CNN), the Inception v3 model. To further improve the performance of the Inception v3 model, a learning rate scheduler with the Adam optimizer is applied. Finally, a multilayer perceptron (MLP) carries out the classification. The FM-HCF-DLF model was experimentally validated using a chest X-ray dataset, and the outcomes showed superior performance with a maximum sensitivity of 93.61%, specificity of 94.56%, precision of 94.85%, accuracy of 94.08%, F score of 93.2%, and kappa value of 93.5%.
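The hand-crafted branch of such a fusion model, LBP features, can be sketched directly in NumPy; the deep Inception v3 branch and the MLP classifier are omitted here, and this basic 8-neighbour LBP is a generic formulation rather than the paper's exact variant.

```python
import numpy as np

def lbp_histogram(gray):
    """8-neighbour local binary patterns + normalised 256-bin histogram.

    Each interior pixel gets an 8-bit code: bit b is set when the b-th
    neighbour is >= the centre pixel.  The histogram of codes is the
    texture feature vector.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

In the fusion setting, this 256-dimensional vector would be concatenated with the CNN feature vector before classification.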
Affiliation(s)
- K Shankar
- Department of Computer Applications, Alagappa University, Karaikudi, India
- Eswaran Perumal
- Department of Computer Applications, Alagappa University, Karaikudi, India
63
Pérez E, Reyes O, Ventura S. Convolutional neural networks for the automatic diagnosis of melanoma: An extensive experimental study. Med Image Anal 2020; 67:101858. [PMID: 33129155 DOI: 10.1016/j.media.2020.101858]
Abstract
Melanoma is the type of skin cancer with the highest mortality, all the more dangerous because it can spread to other parts of the body if not caught and treated early. Melanoma diagnosis is a complex task, even for expert dermatologists, mainly due to the great variety of morphologies in patients' moles. Accordingly, automatic melanoma diagnosis poses the challenge of developing efficient computational methods that ease diagnosis and therefore aid dermatologists in decision-making. In this work, an extensive analysis was conducted to assess and illustrate the effectiveness of convolutional neural networks in coping with this complex task. To this end, twelve well-known convolutional network models were evaluated on eleven public image datasets. The experimental study comprised five phases: first analysing the sensitivity of the models to the optimization algorithm used for training, and then the impact on performance of techniques such as cost-sensitive learning, data augmentation, and transfer learning. The study confirmed the usefulness, effectiveness, and robustness of different convolutional architectures in solving the melanoma diagnosis problem. Important guidelines are also provided for researchers in this area, easing the selection of the proper convolutional model and technique according to the characteristics of the data.
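One of the techniques examined, cost-sensitive learning, amounts to weighting the loss by class so that errors on the rare melanoma class cost more. A minimal sketch, with illustrative weights (inverse class frequency is one common choice, though the study's exact scheme is not reproduced here):

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Cross-entropy where each sample is scaled by its class weight."""
    probs = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
    labels = np.asarray(labels)
    w = np.asarray(class_weights, dtype=float)[labels]       # weight per sample
    picked = probs[np.arange(len(labels)), labels]           # p(true class)
    return float(np.mean(-w * np.log(picked)))
```

Raising the melanoma weight makes the same prediction errors produce a larger loss, pushing training toward higher sensitivity on the minority class.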
Affiliation(s)
- Eduardo Pérez
- Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain; Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain
- Oscar Reyes
- Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain; Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain
- Sebastián Ventura
- Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain; Department of Information Systems, King Abdulaziz University, Kingdom of Saudi Arabia; Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain
64
Mahbod A, Schaefer G, Wang C, Dorffner G, Ecker R, Ellinger I. Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Comput Methods Programs Biomed 2020; 193:105475. [PMID: 32268255 DOI: 10.1016/j.cmpb.2020.105475]
Abstract
BACKGROUND AND OBJECTIVE Skin cancer is among the most common cancer types in the white population and consequently computer aided methods for skin lesion classification based on dermoscopic images are of great interest. A promising approach for this uses transfer learning to adapt pre-trained convolutional neural networks (CNNs) for skin lesion diagnosis. Since pre-training commonly occurs with natural images of a fixed image resolution and these training images are usually significantly smaller than dermoscopic images, downsampling or cropping of skin lesion images is required. This however may result in a loss of useful medical information, while the ideal resizing or cropping factor of dermoscopic images for the fine-tuning process remains unknown. METHODS We investigate the effect of image size for skin lesion classification based on pre-trained CNNs and transfer learning. Dermoscopic images from the International Skin Imaging Collaboration (ISIC) skin lesion classification challenge datasets are either resized to or cropped at six different sizes ranging from 224 × 224 to 450 × 450. The resulting classification performance of three well established CNNs, namely EfficientNetB0, EfficientNetB1 and SeReNeXt-50 is explored. We also propose and evaluate a multi-scale multi-CNN (MSM-CNN) fusion approach based on a three-level ensemble strategy that utilises the three network architectures trained on cropped dermoscopic images of various scales. RESULTS Our results show that image cropping is a better strategy compared to image resizing delivering superior classification performance at all explored image scales. Moreover, fusing the results of all three fine-tuned networks using cropped images at all six scales in the proposed MSM-CNN approach boosts the classification performance compared to a single network or a single image scale. 
On the ISIC 2018 skin lesion classification challenge test set, our MSM-CNN algorithm yields a balanced multi-class accuracy of 86.2%, making it currently the second-ranked algorithm on the live leaderboard. CONCLUSIONS We confirm that image size affects skin lesion classification performance when employing transfer learning of CNNs. We also show that image cropping results in better performance than image resizing. Finally, a straightforward ensembling approach that fuses the results from images cropped at six scales and three fine-tuned CNNs is shown to lead to the best classification performance.
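The cropping-plus-ensembling recipe above can be sketched in a few lines. This is a hedged toy illustration, not the authors' implementation: the fine-tuned CNNs are replaced by stand-in probability vectors, and `center_crop` and `msm_fuse` are illustrative names.

```python
import numpy as np

def center_crop(img: np.ndarray, size: int) -> np.ndarray:
    """Crop a square of side `size` from the centre of an H x W x C image."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def msm_fuse(prob_list) -> np.ndarray:
    """Fuse per-network, per-scale class probabilities by simple averaging."""
    return np.mean(np.stack(prob_list), axis=0)

# Toy example: one 600 x 600 "dermoscopic image", cropped at two scales.
rng = np.random.default_rng(0)
image = rng.random((600, 600, 3))
crops = [center_crop(image, s) for s in (224, 450)]  # fed to each CNN in practice

# Stand-ins for the softmax outputs of 3 networks x 2 scales (7 ISIC classes).
probs = [rng.dirichlet(np.ones(7)) for _ in range(6)]
fused = msm_fuse(probs)
predicted_class = int(np.argmax(fused))
```

Averaging softmax outputs is the simplest form of the paper's three-level ensemble; cropping rather than resizing preserves the native pixel scale of the lesion, which is the property the study credits for the gain.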
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria; Research and Development Department of TissueGnostics GmbH, Vienna, Austria.
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough, United Kingdom
- Chunliang Wang
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Stockholm, Sweden
- Georg Dorffner
- Section for Artificial Intelligence and Decision Support, Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Vienna, Austria
- Rupert Ecker
- Research and Development Department of TissueGnostics GmbH, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
65
Hussain AA, Bouachir O, Al-Turjman F, Aloqaily M. AI Techniques for COVID-19. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:128776-128795. [PMID: 34976554 PMCID: PMC8545328 DOI: 10.1109/access.2020.3007939] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Accepted: 07/04/2020] [Indexed: 05/18/2023]
Abstract
Artificial Intelligence (AI) aims to extend human capabilities. It is gaining ground in healthcare, fuelled by the growing availability of clinical data and the rapid progress of intelligent methods. Motivated by the need to employ AI in battling the COVID-19 crisis, this survey summarizes the current state of AI applications in clinical services during the COVID-19 pandemic. Furthermore, we highlight the application of Big Data in understanding this virus. We also give an overview of the various intelligence techniques and methods that can be applied to different types of medical information in a pandemic. We classify the existing AI techniques for clinical data analysis, including neural networks, classical SVMs, and deep learning. An emphasis is also placed on work that utilizes AI-oriented cloud computing in combating viruses similar to COVID-19. This survey aims to help medical practitioners and researchers overcome the difficulties they face while handling COVID-19 big data. The investigated techniques advance medical data analysis with an accuracy of up to 90%. We conclude with a detailed discussion of how AI implementation can be of great advantage in combating similar viruses.
Affiliation(s)
- Adedoyin Ahmed Hussain
- Department of Computer Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Ouns Bouachir
- Department of Computer Engineering, Zayed University, Dubai, United Arab Emirates
- College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
- Fadi Al-Turjman
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Moayad Aloqaily
- College of Engineering, Al Ain University, Al Ain, United Arab Emirates
66
Xie Y, Zhang J, Xia Y, Shen C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2482-2493. [PMID: 32070946 DOI: 10.1109/tmi.2020.2972964] [Citation(s) in RCA: 125] [Impact Index Per Article: 25.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Automated skin lesion segmentation and classification are the two most essential and closely related tasks in the computer-aided diagnosis of skin cancer. Despite this, deep learning models are usually designed for only one of the tasks, ignoring the potential benefit of performing both jointly. In this paper, we propose the mutual bootstrapping deep convolutional neural network (MB-DCNN) model for simultaneous skin lesion segmentation and classification. The model consists of a coarse segmentation network (coarse-SN), a mask-guided classification network (mask-CN), and an enhanced segmentation network (enhanced-SN). On one hand, the coarse-SN generates coarse lesion masks that provide a prior bootstrapping for mask-CN, helping it locate and classify skin lesions accurately. On the other hand, the lesion localization maps produced by mask-CN are fed into enhanced-SN, aiming to transfer the localization information learned by mask-CN to enhanced-SN for accurate lesion segmentation. In this way, the segmentation and classification networks mutually transfer knowledge and facilitate each other in a bootstrapping fashion. Meanwhile, we design a novel rank loss and use it jointly with the Dice loss in the segmentation networks to address the issues caused by class imbalance and hard-easy pixel imbalance. We evaluate the proposed MB-DCNN model on the ISIC-2017 and PH2 datasets, achieving a Jaccard index of 80.4% and 89.4% in skin lesion segmentation and an average AUC of 93.8% and 97.7% in skin lesion classification, superior to the performance of representative state-of-the-art skin lesion segmentation and classification methods. Our results suggest that it is possible to boost the performance of skin lesion segmentation and classification simultaneously by training a unified model to perform both tasks in a mutually bootstrapping way.
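The loss combination used in the segmentation networks can be illustrated as follows. This is a simplified sketch, not the paper's exact formulation: the soft Dice loss is standard, while `rank_loss` is only a plausible reading of a ranking penalty on the k hardest foreground and background pixels, with an assumed margin.

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Soft Dice loss between a probability map and a binary mask."""
    inter = float((pred * target).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum()) + float(target.sum()) + eps)

def rank_loss(pred: np.ndarray, target: np.ndarray, k: int = 5, margin: float = 0.3) -> float:
    """Hinge penalty: the k hardest (highest-scoring) background pixels should
    score at least `margin` below the k hardest (lowest-scoring) lesion pixels."""
    fg = np.sort(pred[target == 1])[:k]         # lowest-scoring lesion pixels
    bg = np.sort(pred[target == 0])[::-1][:k]   # highest-scoring background pixels
    return float(np.mean(np.maximum(0.0, bg - fg + margin)))

# A perfect prediction incurs (near-)zero total loss.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0
perfect = target.copy()
total = dice_loss(perfect, target) + rank_loss(perfect, target)
```

Pairing a region-overlap loss (Dice) with a pixel-ranking loss is one way to counter both class imbalance and the dominance of easy pixels, which is the stated motivation above.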
67
Al-Masni MA, Kim DH, Kim TS. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 190:105351. [PMID: 32028084 DOI: 10.1016/j.cmpb.2020.105351] [Citation(s) in RCA: 95] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 01/03/2020] [Accepted: 01/19/2020] [Indexed: 05/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-automated diagnosis of various skin lesions from medical dermoscopy images remains a challenging task. METHODS In this work, we propose an integrated diagnostic framework that combines a skin lesion boundary segmentation stage and a multiple skin lesions classification stage. First, we segment the skin lesion boundaries from the entire dermoscopy image using a deep full-resolution convolutional network (FrCN). Then, a convolutional neural network classifier (i.e., Inception-v3, ResNet-50, Inception-ResNet-v2, or DenseNet-201) is applied to the segmented skin lesions for classification. The former stage is a critical prerequisite for skin lesion diagnosis, since it extracts prominent features of the various types of skin lesions. A promising classifier is selected by testing well-established classification convolutional neural networks. The proposed integrated deep learning model has been evaluated using three independent datasets (i.e., International Skin Imaging Collaboration (ISIC) 2016, 2017, and 2018, which contain two, three, and seven types of skin lesions, respectively) with proper balancing, segmentation, and augmentation. RESULTS In the integrated diagnostic system, segmented lesions improve the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in terms of the F1-score for benign and malignant cases of the ISIC 2016 test dataset, respectively. The Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 classifiers achieve overall weighted prediction accuracies of 77.04%, 79.95%, 81.79%, and 81.27% for the two classes of ISIC 2016, 81.29%, 81.57%, 81.34%, and 73.44% for the three classes of ISIC 2017, and 88.05%, 89.28%, 87.74%, and 88.70% for the seven classes of ISIC 2018, respectively, demonstrating the superior performance of ResNet-50.
CONCLUSIONS The proposed integrated diagnostic networks could be used to support and aid dermatologists for further improvement in skin cancer diagnosis.
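The hand-off between the two stages, where a segmentation mask focuses the classifier on the lesion, can be sketched as below. This is an illustrative fragment under assumed names (`crop_to_lesion` is not from the paper); it stands in for feeding FrCN's output to the CNN classifiers.

```python
import numpy as np

def crop_to_lesion(image: np.ndarray, mask: np.ndarray, pad: int = 10) -> np.ndarray:
    """Crop `image` to the bounding box of the segmented lesion in `mask`,
    keeping `pad` pixels of surrounding context, before classification."""
    ys, xs = np.nonzero(mask)
    top = max(int(ys.min()) - pad, 0)
    bottom = min(int(ys.max()) + pad + 1, image.shape[0])
    left = max(int(xs.min()) - pad, 0)
    right = min(int(xs.max()) + pad + 1, image.shape[1])
    return image[top:bottom, left:right]

# A 20 x 20 lesion in a 100 x 100 image yields a 40 x 40 crop with pad=10.
image = np.zeros((100, 100, 3))
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 30:50] = True
roi = crop_to_lesion(image, mask)  # region handed to the classifier
```

Cropping away background is the reason segmentation improves the downstream F1-scores reported above: the classifier sees mostly lesion pixels rather than healthy skin.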
Affiliation(s)
- Mohammed A Al-Masni
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
68
Evaluation of Robust Spatial Pyramid Pooling Based on Convolutional Neural Network for Traffic Sign Recognition System. ELECTRONICS 2020. [DOI: 10.3390/electronics9060889] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Traffic sign recognition (TSR) is a noteworthy problem for real-world applications such as autonomous driving systems, as it plays a central role in guiding the driver. This paper focuses on Taiwan's prohibitory signs, owing to the lack of a database or research system for Taiwan's traffic sign recognition. It investigates state-of-the-art object detection systems (Yolo V3, Resnet 50, Densenet, and Tiny Yolo V3) combined with spatial pyramid pooling (SPP). We adopt the concept of SPP to improve the feature-extraction backbone of Yolo V3, Resnet 50, Densenet, and Tiny Yolo V3, using the pyramid pooling to learn multi-scale object features thoroughly. The evaluation of these models covers vital metrics such as mean average precision (mAP), workspace size, detection time, intersection over union (IoU), and the number of billion floating-point operations (BFLOPS). Our findings show that Yolo V3 SPP achieves the best total BFLOPS (65.69) and mAP (98.88%). Moreover, the highest average accuracy is achieved by Yolo V3 SPP at 99%, followed by Densenet SPP at 87%, Resnet 50 SPP at 70%, and Tiny Yolo V3 SPP at 50%. Hence, SPP can improve the performance of all models in the experiment.
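Spatial pyramid pooling, the component grafted onto each backbone above, pools a feature map over grids of several sizes and concatenates the results, so the output length is fixed regardless of input resolution. A minimal NumPy sketch (max pooling and 1/2/4 grid levels are assumed defaults, not taken from the paper):

```python
import numpy as np

def spatial_pyramid_pool(fmap: np.ndarray, levels=(1, 2, 4)) -> np.ndarray:
    """Max-pool an H x W x C feature map over an n x n grid for each level and
    concatenate, yielding a fixed-length vector regardless of H and W."""
    h, w, c = fmap.shape
    pooled = []
    for n in levels:
        for rows in np.array_split(np.arange(h), n):      # n row bins
            for cols in np.array_split(np.arange(w), n):  # n column bins
                cell = fmap[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
                pooled.append(cell.max(axis=(0, 1)))      # C values per cell
    return np.concatenate(pooled)  # length = C * (1 + 4 + 16) for levels (1, 2, 4)

# Two feature maps of different spatial sizes produce equal-length vectors.
rng = np.random.default_rng(0)
v1 = spatial_pyramid_pool(rng.random((13, 9, 8)))
v2 = spatial_pyramid_pool(rng.random((7, 7, 8)))
```

Because the vector length no longer depends on the input size, a detection head can accept signs at any scale, which is the property the evaluation above exploits.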
69
Pathak Y, Shukla PK, Tiwari A, Stalin S, Singh S, Shukla PK. Deep Transfer Learning Based Classification Model for COVID-19 Disease. Ing Rech Biomed 2020; 43:87-92. [PMID: 32837678 PMCID: PMC7238986 DOI: 10.1016/j.irbm.2020.05.003] [Citation(s) in RCA: 155] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 05/10/2020] [Accepted: 05/15/2020] [Indexed: 12/15/2022]
Abstract
The COVID-19 infection is increasing at a rapid rate, while only a limited number of testing kits is available. The development of COVID-19 testing kits therefore remains an open area of research. Recently, many studies have shown that chest Computed Tomography (CT) images can be used for COVID-19 testing, as chest CT images show bilateral changes in COVID-19 infected patients. However, classifying COVID-19 patients from chest CT images is not an easy task, as predicting the bilateral change is an ill-posed problem. Therefore, in this paper, a deep transfer learning technique is used to classify COVID-19 infected patients. Additionally, a top-2 smooth loss function with cost-sensitive attributes is utilized to handle the noisy and imbalanced COVID-19 dataset. Experimental results reveal that the proposed deep transfer learning-based COVID-19 classification model provides efficient results compared with other supervised learning models.
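A hedged reading of the loss described above: the sketch below implements a label-smoothed, cost-sensitive cross-entropy. The paper's specific "top-2" restriction (penalising only the two most confident classes) is omitted, and all names and defaults here are assumptions, not the authors' code.

```python
import numpy as np

def smooth_ce(probs, label, n_classes, smoothing=0.1, class_weight=None):
    """Label-smoothed cross-entropy with an optional per-class cost weight,
    a common recipe for noisy and imbalanced labels."""
    target = np.full(n_classes, smoothing / (n_classes - 1))
    target[label] = 1.0 - smoothing                 # soften the one-hot target
    w = 1.0 if class_weight is None else class_weight[label]
    return float(-w * np.sum(target * np.log(np.asarray(probs) + 1e-12)))

# A confident correct prediction incurs a small loss; doubling the class cost
# doubles the penalty for mistakes on that class.
probs = np.array([0.98, 0.01, 0.01])
loss_plain = smooth_ce(probs, 0, 3, smoothing=0.0)
loss_costly = smooth_ce(probs, 0, 3, smoothing=0.0, class_weight=[2.0, 1.0, 1.0])
```

Smoothing the target distribution keeps noisy labels from forcing the network to extreme confidence, while the cost weight lets rare (e.g. positive) cases count for more.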
Affiliation(s)
- Y Pathak
- Department of Information Technology, Indian Institute of Information Technology (IIIT-Bhopal), Bhopal (MP), 462003, India
- P K Shukla
- Department of Computer Science & Engineering, School of Engineering & Technology, Jagran Lake City University (JLU), Bhopal-462044 (MP), India
- A Tiwari
- Department of CSE & IT, Madhav Institute of Technology and Science, Gwalior-474005 (MP), India
- S Stalin
- Department of CSE, Maulana Azad National Institute of Technology (MANIT), Bhopal, MP, 462003, India
- S Singh
- Department of Computer Science & Engineering, Jabalpur Engineering College, Jabalpur-482001 (MP), India
- P K Shukla
- Department of Computer Science & Engineering, University Institute of Technology, RGPV, Bhopal (MP), 462033, India
70
Melanoma and Nevus Skin Lesion Classification Using Handcraft and Deep Learning Feature Fusion via Mutual Information Measures. ENTROPY 2020; 22:e22040484. [PMID: 33286257 PMCID: PMC7516968 DOI: 10.3390/e22040484] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/26/2020] [Revised: 04/16/2020] [Accepted: 04/20/2020] [Indexed: 11/17/2022]
Abstract
In this paper, a new Computer-Aided Detection (CAD) system for the detection and classification of dangerous skin lesions (melanoma type) is presented, based on a fusion of handcrafted features related to the medical ABCD rule (Asymmetry, Borders, Colors, Dermatoscopic structures) and deep learning features, employing Mutual Information (MI) measurements. The steps of a CAD system can be summarized as preprocessing, feature extraction, feature fusion, and classification. During the preprocessing step, a lesion image is enhanced, filtered, and segmented to obtain the Region of Interest (ROI); in the next step, feature extraction is performed. Handcrafted features such as shape, color, and texture represent the ABCD rule, while deep learning features are extracted using a Convolutional Neural Network (CNN) architecture pre-trained on ImageNet (the ILSVRC task). An MI measurement is used as the fusion rule, gathering the most important information from both types of features. Finally, in the classification step, several methods are employed, such as Linear Regression (LR), Support Vector Machines (SVMs), and Relevance Vector Machines (RVMs). The designed framework was tested on the ISIC 2018 public dataset. The proposed framework demonstrates improved performance compared with other state-of-the-art methods in terms of the accuracy, specificity, and sensitivity obtained in the training and test stages. Additionally, we propose and justify a novel procedure for adjusting the evaluation metrics for the imbalanced datasets that are common for different kinds of skin lesions.
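The MI fusion rule above scores how much information each feature carries about the diagnosis and keeps the most informative ones. A minimal sketch, assuming discretised feature columns; the histogram-based estimator and the function names are illustrative, not the paper's code.

```python
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray) -> float:
    """MI between two discrete variables, estimated from the joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=(len(set(x)), len(set(y))))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def select_by_mi(features: np.ndarray, labels: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k feature columns sharing the most information with labels."""
    scores = [mutual_information(col, labels) for col in features.T]
    return np.argsort(scores)[::-1][:k]

# Column 1 copies the labels (informative); column 0 is independent noise.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, 200)
features = np.stack([rng.integers(0, 2, 200), labels], axis=1)
kept = select_by_mi(features, labels, 1)
```

Ranking by MI lets handcrafted ABCD features and CNN features compete on equal terms, since MI is agnostic to how each feature was produced.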
71