51. Ningrum DNA, Yuan SP, Kung WM, Wu CC, Tzeng IS, Huang CY, Li JYC, Wang YC. Deep Learning Classifier with Patient's Metadata of Dermoscopic Images in Malignant Melanoma Detection. J Multidiscip Healthc 2021; 14:877-885. [PMID: 33907414] [PMCID: PMC8071207] [DOI: 10.2147/jmdh.s306284]
Abstract
BACKGROUND The incidence of skin cancer, one of the global burdens of malignancy, increases each year, with melanoma being the deadliest form. Imaging-based automated skin cancer detection remains challenging owing to variability in skin lesions and the limited availability of standard datasets. Recent research indicates the potential of deep convolutional neural networks (CNN) to predict outcomes from simple as well as highly complex images. However, their implementation requires high-end computational facilities, which are not feasible in low-resource and remote health care settings. Combining images with patient metadata holds promise, but studies are still lacking. OBJECTIVE We aimed to develop malignant melanoma detection based on dermoscopic images and patient metadata using an artificial intelligence (AI) model that works on low-resource devices. METHODS We used the open-access International Skin Imaging Collaboration (ISIC) Archive dermatology repository, consisting of 23,801 biopsy-proven dermoscopic images. We tested performance for the binary classification of malignant melanomas vs nonmalignant melanomas. From 1200 sample images, we split the data into training (72%), validation (18%), and testing (10%) sets. We compared a CNN using image data only (CNN model) vs a CNN for image data combined with an artificial neural network (ANN) for patient metadata (CNN+ANN model). RESULTS The balanced accuracy of the CNN+ANN model was higher (92.34%) than that of the CNN model (73.69%). Incorporating patient metadata through the ANN prevented the overfitting that occurred in the CNN model using dermoscopic images only. The small size (24 MB) of the model makes it possible to run on a medium-class computer without cloud computing, suitable for deployment on devices with limited resources.
CONCLUSION The CNN+ANN model increases classification accuracy in malignant melanoma detection even with limited data and is promising for development as a screening device in remote, low-resource health care settings.
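The image-plus-metadata fusion described in this abstract can be sketched as a two-branch forward pass. Everything below (layer sizes, feature names, weights) is illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fused_forward(img_feats, meta_feats, W_img, W_meta, W_head, b_head):
    """Two-branch late fusion: a CNN-style image embedding and an ANN-style
    metadata embedding are projected, concatenated, and fed to a sigmoid
    head for binary malignant-vs-nonmalignant classification."""
    h_img = np.maximum(0.0, img_feats @ W_img)     # ReLU projection, image branch
    h_meta = np.maximum(0.0, meta_feats @ W_meta)  # ReLU projection, metadata branch
    fused = np.concatenate([h_img, h_meta])        # fusion by concatenation
    return sigmoid(fused @ W_head + b_head)        # P(malignant)

rng = np.random.default_rng(0)
img_feats = rng.normal(size=128)                   # stand-in for a pooled CNN embedding
meta_feats = np.array([1.0, 0.46, 0.0, 1.0])       # e.g. sex, age/100, site indicators
p = fused_forward(img_feats, meta_feats,
                  0.05 * rng.normal(size=(128, 16)),
                  0.05 * rng.normal(size=(4, 4)),
                  0.05 * rng.normal(size=20), 0.0)
```

Frameworks such as Keras express the same idea as a functional two-input model; the point here is only the concatenation step that lets tabular metadata complement the image branch.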
Affiliation(s)
- Dina Nur Anggraini Ningrum
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Public Health Department, Universitas Negeri Semarang, Semarang City, Indonesia
- Sheng-Po Yuan
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Department of Otorhinolaryngology, Shuang-Ho Hospital, Taipei Medical University, New Taipei City, Taiwan
- Woon-Man Kung
- Department of Exercise and Health Promotion, College of Kinesiology and Health, Chinese Culture University, Taipei, Taiwan
- Chieh-Chen Wu
- Department of Exercise and Health Promotion, College of Kinesiology and Health, Chinese Culture University, Taipei, Taiwan
- I-Shiang Tzeng
- Department of Exercise and Health Promotion, College of Kinesiology and Health, Chinese Culture University, Taipei, Taiwan
- Department of Research, Taipei Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, New Taipei City, Taiwan
- Department of Statistics, National Taipei University, Taipei, Taiwan
- Chu-Ya Huang
- Taiwan College of Healthcare Executives, Taipei, Taiwan
- Jack Yu-Chuan Li
- Graduate Institute of Biomedical Informatics, College of Medical Science and Technology, Taipei Medical University, Taipei, Taiwan
- Department of Dermatology, Wan Fang Hospital, Taipei, Taiwan
- Taipei Medical University Research Center of Cancer Translational Medicine, Taipei, Taiwan
- Yao-Chin Wang
- Graduate Institute of Injury Prevention and Control, College of Public Health, Taipei Medical University, Taipei, Taiwan
- Department of Emergency Medicine, Min-Sheng General Hospital, Taoyuan, Taiwan
52. Eroğlu Y, Yildirim M, Çinar A. Convolutional Neural Networks based classification of breast ultrasonography images by hybrid method with respect to benign, malignant, and normal using mRMR. Comput Biol Med 2021; 133:104407. [PMID: 33901712] [DOI: 10.1016/j.compbiomed.2021.104407]
Abstract
Early diagnosis of breast lesions and differentiation of malignant from benign lesions are important for the prognosis of breast cancer. Ultrasound is an extremely important radiological imaging method in diagnosing this disease because it enables biopsy as well as lesion characterization. Since ultrasonographic diagnosis is operator-dependent, the knowledge level and experience of the user are very important. Computer-aided systems can contribute substantially, as they reduce radiologists' workload and reinforce their knowledge and experience, particularly given the dense patient populations in hospital conditions. In this paper, a hybrid CNN-based system is developed for classifying breast lesions as benign, malignant, or normal. Alexnet, MobilenetV2, and Resnet50 models form the base of the hybrid structure. The features obtained from these models are concatenated, increasing the number of features used. The most valuable of these features are then selected by the mRMR (Minimum Redundancy Maximum Relevance) feature selection method and classified with machine learning classifiers such as SVM and KNN. The highest accuracy, 95.6%, is obtained with the SVM classifier.
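The concatenate-then-select pipeline above can be illustrated with a greedy relevance-minus-redundancy loop. Note that real mRMR scores features by mutual information; the correlation-based stand-in below is only meant to show the selection mechanics on invented data:

```python
import numpy as np

def mrmr_like_select(X, y, k):
    """Greedy selection in the spirit of mRMR: at each step pick the feature
    with the highest |correlation with the label| (relevance) minus the mean
    |correlation with already-selected features| (redundancy). A
    correlation-based stand-in for the mutual-information criterion."""
    n_feats = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feats)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_feats):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Concatenated backbone features would form X's columns; here two columns are
# near-duplicates and one carries the same signal with independent noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200).astype(float)
f_dup_a = y + 0.1 * rng.normal(size=200)         # informative
f_dup_b = f_dup_a + 0.01 * rng.normal(size=200)  # near-duplicate (redundant)
f_indep = y + 0.1 * rng.normal(size=200)         # equally informative, fresh noise
X = np.column_stack([f_dup_a, f_dup_b, f_indep])
picked = mrmr_like_select(X, y, 2)  # never keeps both near-duplicates
```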
Affiliation(s)
- Yeşim Eroğlu
- Department of Radiology, Firat University School of Medicine, Elazig, Turkey.
- Ahmet Çinar
- Computer Engineering Department, Firat University, Elazig, Turkey.
53. Sevli O. A deep convolutional neural network-based pigmented skin lesion classification application and experts evaluation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05929-4]
54. Winkler JK, Sies K, Fink C, Toberer F, Enk A, Abassi MS, Fuchs T, Haenssle HA. Association between different scale bars in dermoscopic images and diagnostic performance of a market-approved deep learning convolutional neural network for melanoma recognition. Eur J Cancer 2021; 145:146-154. [PMID: 33465706] [DOI: 10.1016/j.ejca.2020.12.010]
Abstract
BACKGROUND Studies systematically unravelling possible causes of false diagnoses by deep learning convolutional neural networks (CNNs) are scarce, yet needed before broader application. OBJECTIVES The objective of the study was to investigate whether scale bars in dermoscopic images are associated with the diagnostic accuracy of a market-approved CNN. METHODS This cross-sectional analysis applied a CNN trained with more than 150,000 images (Moleanalyzer-pro®, FotoFinder Systems Inc., Bad Birnbach, Germany) to seven dermoscopic image sets depicting the same 130 melanocytic lesions (107 nevi, 23 melanomas) without or with digitally superimposed scale bars of different manufacturers. Sensitivity, specificity and area under the curve (AUC) of receiver operating characteristics (ROC) for the CNN's binary classification of images with or without superimposed scale bars were assessed. RESULTS Six dermoscopic image sets with different scale bars and one control set without scale bars (910 images overall) were submitted to CNN analysis. In images without scale bars, the CNN attained a sensitivity [95% confidence interval] of 87.0% [67.9%-95.5%] and a specificity of 87.9% [80.3%-92.8%]; the ROC AUC was 0.953 [0.914-0.992]. Scale bars were not associated with significant changes in sensitivity (range 87%-95.7%, all p ≥ 1.0). However, four scale bars induced a decrease in the CNN's specificity (range 0%-43.9%, all p < 0.001), and the ROC AUC was significantly reduced by two scale bars (range 0.520-0.848, both p ≤ 0.042). CONCLUSIONS Superimposed scale bars in dermoscopic images may impair the CNN's diagnostic accuracy, mostly by increasing the rate of false-positive diagnoses. We recommend avoiding scale bars in images intended for CNN analysis unless specific measures counteracting these effects are implemented. CLINICAL TRIAL NUMBER This study was registered at the German Clinical Trial Register (DRKS-Study-ID: DRKS00013570; URL: https://www.drks.de/drks_web/).
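The sensitivity/specificity figures above come from standard confusion-matrix arithmetic. A minimal computation, with hypothetical prediction counts chosen only to mimic a specificity collapse on 23 melanomas and 107 nevi:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity and balanced accuracy from hard binary
    predictions (1 = melanoma, 0 = nevus)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec, 0.5 * (sens + spec)

# 23 melanomas, 107 nevi; many nevi flipped to "malignant" (invented counts
# illustrating a scale-bar-style false-positive surge):
y_true = [1] * 23 + [0] * 107
y_pred = [1] * 20 + [0] * 3 + [1] * 60 + [0] * 47
sens, spec, bal_acc = binary_metrics(y_true, y_pred)
```

With these counts, sensitivity stays near the reported 87.0% (20/23) while specificity drops to roughly the 43.9% floor the study observed for some scale bars.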
Affiliation(s)
- Julia K Winkler
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Katharina Sies
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Christine Fink
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Ferdinand Toberer
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Alexander Enk
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany
- Mohamed S Abassi
- Department of Research and Development, FotoFinder Systems GmbH, Bad Birnbach, Germany
- Tobias Fuchs
- Department of Research and Development, FotoFinder Systems GmbH, Bad Birnbach, Germany
- Holger A Haenssle
- Department of Dermatology, University of Heidelberg, Heidelberg, Germany.
55. A demystifying convolutional neural networks using Grad-CAM for prediction of coronavirus disease (COVID-19) on X-ray images. Data Science for COVID-19 2021. [PMCID: PMC8137866] [DOI: 10.1016/b978-0-12-824536-1.00037-x]
Abstract
The 2019 novel coronavirus (COVID-19) has spread quickly among people living in different countries, approaching 2,627,630 cases worldwide according to the statistics of the European Centre for Disease Prevention and Control. To control the spread of COVID-19, testing large numbers of suspected cases for proper quarantine and treatment is of utmost importance. Because of the rapid spread of the virus, only a limited number of testing kits is available in hospitals. Since doctors cannot depend only on these kits, it is necessary to identify quick substitute diagnosis options for the prevention of COVID-19 among the people. In the proposed work, COVID-19 chest X-ray images and normal chest X-ray images are used to build a customized deep learning classification model. The images obtained from both data sources are combined; roughly 20% of the images were randomly selected for validation, and the remaining 80% were used for training. The dataset was carefully screened to retain only relevant chest X-ray images, removing images of other types or images that lacked sufficient resolution, to obtain the final dataset. To obtain better accuracy, a customized convolutional neural network architecture was built for this purpose. The model was compiled with the Adam optimizer along with binary cross-entropy as the loss function. Image data augmentation such as zoom, shear, normalization, and horizontal flip was used to negate the effects of using a small dataset. The highest validation accuracy obtained after a series of epochs was 98%, along with a training accuracy of 96%.
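The training recipe named here (normalization, horizontal flip, binary cross-entropy) reduces to a few lines. The snippet below is a framework-free sketch, not the chapter's actual pipeline; zoom and shear need an interpolation library and are omitted:

```python
import numpy as np

def normalize_and_maybe_flip(img, rng):
    """Two of the augmentations listed in the abstract: rescale pixel values
    to [0, 1] and apply a random horizontal flip."""
    out = img.astype(np.float64) / 255.0   # normalization
    if rng.random() < 0.5:
        out = out[:, ::-1]                 # horizontal flip
    return out

def binary_cross_entropy(y_true, p):
    """The loss the model is compiled with: mean of -[y log p + (1-y) log(1-p)]."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)       # numerical safety
    return float(np.mean(-(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))))

rng = np.random.default_rng(42)
img = np.arange(12, dtype=np.uint8).reshape(3, 4)  # toy 3x4 "X-ray"
aug = normalize_and_maybe_flip(img, rng)
loss = binary_cross_entropy(np.array([1.0, 0.0]), np.array([0.9, 0.1]))
```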
56. Narin A, Kaya C, Pamuk Z. Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks. Pattern Anal Appl 2021:1-14. [PMID: 33994847] [DOI: 10.1007/s10044-021-00984-y]
Abstract
The 2019 novel coronavirus disease (COVID-19), with a starting point in China, has spread rapidly among people living in other countries and is approaching approximately 101,917,147 cases worldwide according to the statistics of the World Health Organization. There is a limited number of COVID-19 test kits available in hospitals due to the increasing number of daily cases. Therefore, it is necessary to implement an automatic detection system as a quick alternative diagnosis option to prevent COVID-19 from spreading among people. In this study, five pre-trained convolutional neural network-based models (ResNet50, ResNet101, ResNet152, InceptionV3 and Inception-ResNetV2) are proposed for the detection of coronavirus pneumonia-infected patients using chest X-ray radiographs. We implemented three different binary classifications with four classes (COVID-19, normal (healthy), viral pneumonia and bacterial pneumonia) using five-fold cross-validation. Considering the performance results obtained, the pre-trained ResNet50 model provides the highest classification performance (96.1% accuracy for Dataset-1, 99.5% for Dataset-2 and 99.7% for Dataset-3) among the other four models.
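The five-fold cross-validation used above simply partitions the sample indices so that each sample appears in exactly one validation fold. A dependency-free version of the split (not the authors' implementation):

```python
def kfold_indices(n, k):
    """Index splits for k-fold cross-validation: each sample lands in exactly
    one validation fold; the remaining indices form the training set."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        folds.append((train, val))
        start += size
    return folds

splits = kfold_indices(10, 5)  # 10 toy samples, 5 folds of 2
```

Per-fold metrics are then averaged, which is how a single accuracy figure per model and dataset is reported.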
Affiliation(s)
- Ali Narin
- Department of Electrical and Electronics Engineering, Zonguldak Bulent Ecevit University, Zonguldak, 67100 Turkey
- Ceren Kaya
- Department of Biomedical Engineering, Zonguldak Bulent Ecevit University, Zonguldak, 67100 Turkey
- Ziynet Pamuk
- Department of Biomedical Engineering, Zonguldak Bulent Ecevit University, Zonguldak, 67100 Turkey
57. An Aggregated-Based Deep Learning Method for Leukemic B-lymphoblast Classification. Diagnostics (Basel) 2020; 10:1064. [PMID: 33302591] [PMCID: PMC7763941] [DOI: 10.3390/diagnostics10121064]
Abstract
Leukemia is a cancer of blood cells in the bone marrow that affects both children and adolescents. The rapid growth of unusual lymphocyte cells leads to bone marrow failure, which may slow down the production of new blood cells and hence increases patient morbidity and mortality. Age is a crucial clinical factor in leukemia diagnosis, since leukemia is highly curable if diagnosed in the early stages. Incidence is increasing globally: around 412,000 people worldwide are likely to be diagnosed with some type of leukemia, of which acute lymphoblastic leukemia accounts for approximately 12% of all cases. Thus, the reliable and accurate detection of normal and malignant cells is of major interest. Automatic detection with computer-aided diagnosis (CAD) models can assist medics and can be beneficial for the early detection of leukemia. In this paper, a single-center study, we aimed to build an aggregated deep learning model for Leukemic B-lymphoblast classification. To make the deep learner reliable and accurate, data augmentation techniques were applied to tackle the limited dataset size, and a transfer learning strategy was employed to accelerate the learning process and further improve the performance of the proposed network. The results show that our proposed approach was able to fuse features extracted from the best deep learning models and outperformed individual networks with a test accuracy of 96.58% in Leukemic B-lymphoblast diagnosis.
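One common reading of an "aggregated" model is late fusion of base-network outputs. The sketch below averages per-class probabilities from several classifiers — a simpler cousin of the paper's feature-level fusion, shown only to illustrate why an ensemble can beat its individual members:

```python
def average_ensemble(prob_lists):
    """Late fusion by averaging class probabilities across base models,
    then taking the argmax class."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / len(prob_lists)
           for i in range(n_classes)]
    return avg.index(max(avg)), avg

# Three hypothetical base networks scoring one cell image as
# [normal, leukemic]; two of the three lean toward "leukemic":
cls, avg = average_ensemble([[0.60, 0.40], [0.20, 0.80], [0.45, 0.55]])
```

The averaged distribution smooths out the one dissenting network, which is the intuition behind aggregation outperforming individual models.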
58. Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2020; 128:104115. [PMID: 33227578] [DOI: 10.1016/j.compbiomed.2020.104115]
Abstract
OBJECTIVE Employing transfer learning (TL) with convolutional neural networks (CNNs) well trained on the non-medical ImageNet dataset has shown promising results for medical image analysis in recent years. We aimed to conduct a scoping review to identify these studies and summarize their characteristics in terms of the problem description, input, methodology, and outcome. MATERIALS AND METHODS To identify relevant studies, the MEDLINE, IEEE, and ACM digital libraries were searched for studies published between June 1st, 2012 and January 2nd, 2020. Two investigators independently reviewed articles to determine eligibility and to extract data according to a study protocol defined a priori. RESULTS After screening of 8421 articles, 102 met the inclusion criteria. Of 22 anatomical areas, eye (18%), breast (14%), and brain (12%) were the most commonly studied. Data augmentation was performed in 72% of fine-tuning TL studies versus 15% of feature-extracting TL studies. Inception models were the most commonly used in breast-related studies (50%), while VGGNet was the most common in eye (44%), skin (50%) and tooth (57%) studies. AlexNet for brain (42%) and DenseNet for lung studies (38%) were the most frequently used models. Inception models were the most frequently used for studies that analyzed ultrasound (55%), endoscopy (57%), and skeletal-system X-rays (57%). VGGNet was the most common for fundus (42%) and optical coherence tomography images (50%). AlexNet was the most frequent model for brain MRIs (36%) and breast X-rays (50%). 35% of the studies compared their model with other well-trained CNN models, and 33% provided visualization for interpretation. DISCUSSION This study identified the most prevalent tracks of implementation in the literature for data preparation, methodology selection and output evaluation across various medical image analysis tasks. We also identified several critical research gaps in existing TL studies on medical image analysis. The findings of this scoping review can be used in future TL studies to guide the selection of appropriate research approaches, as well as to identify research gaps and opportunities for innovation.
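The review's split between "feature-extracting" and "fine-tuning" transfer learning comes down to which pretrained parameters stay trainable. A toy flag-setting sketch (layer names are invented; real frameworks expose this as a per-layer trainable attribute):

```python
def set_transfer_mode(layers, mode):
    """The two TL regimes the review contrasts: 'feature_extract' freezes
    every pretrained layer and trains only the new head; 'fine_tune' leaves
    the pretrained layers trainable too. `layers` is a list of
    (name, is_new_head) pairs."""
    trainable = {}
    for name, is_new_head in layers:
        trainable[name] = True if mode == "fine_tune" else is_new_head
    return trainable

# Hypothetical ImageNet-pretrained backbone plus a fresh classification head:
backbone = [("conv1", False), ("conv2", False), ("fc_head", True)]
fe = set_transfer_mode(backbone, "feature_extract")  # only the head trains
ft = set_transfer_mode(backbone, "fine_tune")        # everything trains
```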
Affiliation(s)
- Mohammad Amin Morid
- Department of Information Systems and Analytics, Leavey School of Business, Santa Clara University, Santa Clara, CA, USA.
- Alireza Borjali
- Department of Orthopaedic Surgery, Harvard Medical School, Boston, MA, USA; Department of Orthopaedic Surgery, Harris Orthopaedics Laboratory, Massachusetts General Hospital, Boston, MA, USA
- Guilherme Del Fiol
- Department of Biomedical Informatics, University of Utah, Salt Lake City, UT, USA
59. Pérez E, Reyes O, Ventura S. Convolutional neural networks for the automatic diagnosis of melanoma: An extensive experimental study. Med Image Anal 2020; 67:101858. [PMID: 33129155] [DOI: 10.1016/j.media.2020.101858]
Abstract
Melanoma is the type of skin cancer with the highest mortality, made more dangerous by its ability to spread to other parts of the body if not caught and treated early. Melanoma diagnosis is a complex task, even for expert dermatologists, mainly due to the great variety of morphologies in patients' moles. Accordingly, automatic diagnosis of melanoma poses the challenge of developing efficient computational methods that ease diagnosis and therefore aid dermatologists in decision-making. In this work, an extensive analysis was conducted to assess and illustrate the effectiveness of convolutional neural networks in coping with this complex task. To achieve this objective, twelve well-known convolutional network models were evaluated on eleven public image datasets. The experimental study comprised five phases: first, the sensitivity of the models to the optimization algorithm used for training was analyzed, and then the impact on performance of techniques such as cost-sensitive learning, data augmentation and transfer learning was assessed. The study confirmed the usefulness, effectiveness and robustness of different convolutional architectures in solving the melanoma diagnosis problem. It also provides important guidelines for researchers working in this area, easing the selection of both the proper convolutional model and technique according to the characteristics of the data.
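Of the techniques compared above, cost-sensitive learning is the simplest to sketch: reweight the loss inversely to class frequency so rare melanomas are not drowned out by benign nevi. Below is the "balanced" heuristic (scikit-learn's convention, used here only as one concrete instance of the idea):

```python
from collections import Counter

def balanced_class_weights(labels):
    """w_c = n_samples / (n_classes * count_c): the rarer the class
    (e.g. melanoma), the larger its weight in a cost-sensitive loss."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# 90 benign (0) vs 10 melanoma (1): melanomas get 9x the benign weight.
weights = balanced_class_weights([0] * 90 + [1] * 10)
```

During training, each sample's loss term is multiplied by the weight of its class, so misclassifying a melanoma costs the model far more than misclassifying a nevus.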
Affiliation(s)
- Eduardo Pérez
- Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain; Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain.
- Oscar Reyes
- Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain; Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain.
- Sebastián Ventura
- Knowledge Discovery and Intelligent Systems in Biomedicine Laboratory, Maimónides Biomedical Research Institute of Córdoba, Córdoba, Spain; Department of Information Systems, King Abdulaziz University, Kingdom of Saudi Arabia; Department of Computer Science and Numerical Analysis, University of Córdoba, Córdoba, Spain.
60. Deep MLP-CNN Model Using Mixed-Data to Distinguish between COVID-19 and Non-COVID-19 Patients. Symmetry (Basel) 2020. [DOI: 10.3390/sym12091526]
Abstract
The limitations and high false-negative rates (30%) of COVID-19 test kits have been a prominent challenge during the 2020 coronavirus pandemic. Manufacturing those kits and performing the tests require extensive resources and time. Recent studies show that radiological images like chest X-rays can offer a more efficient solution and faster initial screening of COVID-19 patients. In this study, we develop a COVID-19 diagnosis model using a Multilayer Perceptron and Convolutional Neural Network (MLP-CNN) for mixed data (numerical/categorical and image data). The model predicts and differentiates between COVID-19 and non-COVID-19 patients, so that early diagnosis of the virus can be initiated, leading to timely isolation and treatment to stop further spread of the disease. We also explore the benefits of using numerical/categorical data in association with chest X-ray images for screening COVID-19 patients, considering both balanced and imbalanced datasets. Three different optimization algorithms are used and tested: adaptive moment estimation (Adam), stochastic gradient descent (SGD), and root mean square propagation (RMSprop). Preliminary computational results show that, on a balanced dataset, a model trained with Adam can distinguish between COVID-19 and non-COVID-19 patients with a higher accuracy of 96.3%. On the imbalanced dataset, the model trained with RMSprop outperformed all other models by achieving an accuracy of 95.38%. Additionally, our proposed model outperformed selected existing deep learning models (considering only chest X-ray or CT scan images) by producing an overall average accuracy of 94.6% ± 3.42%.
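The optimizers compared above differ only in how they turn a gradient into a parameter update. Single SGD and RMSprop steps in schematic numpy (hyperparameters are illustrative; Adam adds a momentum term on top of RMSprop-style gradient scaling):

```python
import numpy as np

def sgd_step(w, g, lr=0.01):
    """Plain stochastic gradient descent: step along the negative gradient."""
    return w - lr * g

def rmsprop_step(w, g, cache, lr=0.001, beta=0.9, eps=1e-8):
    """RMSprop: scale each coordinate's step by a running RMS of its past
    gradients, so persistently large gradients take smaller steps."""
    cache = beta * cache + (1 - beta) * g ** 2
    return w - lr * g / (np.sqrt(cache) + eps), cache

w = np.array([1.0, -2.0])
g = np.array([0.5, -0.5])
w_sgd = sgd_step(w, g)
w_rms, cache = rmsprop_step(w, g, np.zeros(2))  # fresh (zero) cache
```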
61. Cervantes J, Garcia-Lamont F, Rodríguez-Mazahua L, Lopez A. A comprehensive survey on support vector machine classification: Applications, challenges and trends. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.118]
62. R K, H G, R S. Deep Convolutional Neural Network for Melanoma Detection using Dermoscopy Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1524-1527. [PMID: 33018281] [DOI: 10.1109/embc44109.2020.9175391]
Abstract
Developing a fast and accurate classifier is an important part of a computer-aided diagnosis system for skin cancer. Melanoma is the most dangerous form of skin cancer and has a high mortality rate. Early detection and prognosis of melanoma can improve survival rates. In this paper, we propose a deep convolutional neural network for automated melanoma detection that is scalable to accommodate a variety of hardware and software constraints. Dermoscopic skin images collected from open sources were used for training the network. The trained network was then tested on a dataset of 2150 malignant or benign images. Overall, the classifier achieved high average values for accuracy, sensitivity, and specificity of 82.95%, 82.99%, and 83.89%, respectively. It outperformed other existing networks using the same dataset.
63. Sharif MI, Li JP, Naz J, Rashid I. A comprehensive review on multi-organs tumor detection based on machine learning. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.12.006]
64. Hekler A, Utikal JS, Enk AH, Hauschild A, Weichenthal M, Maron RC, Berking C, Haferkamp S, Klode J, Schadendorf D, Schilling B, Holland-Letz T, Izar B, von Kalle C, Fröhling S, Brinker TJ, Schmitt L, Peitsch WK, Hoffmann F, Becker JC, Drusio C, Jansen P, Klode J, Lodde G, Sammet S, Schadendorf D, Sondermann W, Ugurel S, Zader J, Enk A, Salzmann M, Schäfer S, Schäkel K, Winkler J, Wölbing P, Asper H, Bohne AS, Brown V, Burba B, Deffaa S, Dietrich C, Dietrich M, Drerup KA, Egberts F, Erkens AS, Greven S, Harde V, Jost M, Kaeding M, Kosova K, Lischner S, Maagk M, Messinger AL, Metzner M, Motamedi R, Rosenthal AC, Seidl U, Stemmermann J, Torz K, Velez JG, Haiduk J, Alter M, Bär C, Bergenthal P, Gerlach A, Holtorf C, Karoglan A, Kindermann S, Kraas L, Felcht M, Gaiser MR, Klemke CD, Kurzen H, Leibing T, Müller V, Reinhard RR, Utikal J, Winter F, Berking C, Eicher L, Hartmann D, Heppt M, Kilian K, Krammer S, Lill D, Niesert AC, Oppel E, Sattler E, Senner S, Wallmichrath J, Wolff H, Gesierich A, Giner T, Glutsch V, Kerstan A, Presser D, Schrüfer P, Schummer P, Stolze I, Weber J, Drexler K, Haferkamp S, Mickler M, Stauner CT, Thiem A. Superior skin cancer classification by the combination of human and artificial intelligence. Eur J Cancer 2019; 120:114-121. [DOI: 10.1016/j.ejca.2019.07.019]
65. Maron RC, Weichenthal M, Utikal JS, Hekler A, Berking C, Hauschild A, Enk AH, Haferkamp S, Klode J, Schadendorf D, Jansen P, Holland-Letz T, Schilling B, von Kalle C, Fröhling S, Gaiser MR, Hartmann D, Gesierich A, Kähler KC, Wehkamp U, Karoglan A, Bär C, Brinker TJ, Schmitt L, Peitsch WK, Hoffmann F, Becker JC, Drusio C, Jansen P, Klode J, Lodde G, Sammet S, Schadendorf D, Sondermann W, Ugurel S, Zader J, Enk A, Salzmann M, Schäfer S, Schäkel K, Winkler J, Wölbing P, Asper H, Bohne AS, Brown V, Burba B, Deffaa S, Dietrich C, Dietrich M, Drerup KA, Egberts F, Erkens AS, Greven S, Harde V, Jost M, Kaeding M, Kosova K, Lischner S, Maagk M, Messinger AL, Metzner M, Motamedi R, Rosenthal AC, Seidl U, Stemmermann J, Torz K, Velez JG, Haiduk J, Alter M, Bär C, Bergenthal P, Gerlach A, Holtorf C, Karoglan A, Kindermann S, Kraas L, Felcht M, Gaiser MR, Klemke CD, Kurzen H, Leibing T, Müller V, Reinhard RR, Utikal J, Winter F, Berking C, Eicher L, Hartmann D, Heppt M, Kilian K, Krammer S, Lill D, Niesert AC, Oppel E, Sattler E, Senner S, Wallmichrath J, Wolff H, Giner T, Glutsch V, Kerstan A, Presser D, Schrüfer P, Schummer P, Stolze I, Weber J, Drexler K, Haferkamp S, Mickler M, Stauner CT, Thiem A. Systematic outperformance of 112 dermatologists in multiclass skin cancer image classification by convolutional neural networks. Eur J Cancer 2019; 119:57-65. [DOI: 10.1016/j.ejca.2019.06.013]