1
Harini P, Madhavi NB, Latha SB, Sasikumar AN. Optimized self-attention based cycle-consistent generative adversarial network adopted melanoma classification from dermoscopic images. Microsc Res Tech 2024; 87:1271-1285. PMID: 38353334; DOI: 10.1002/jemt.24506.
Abstract
Skin is the exposed part of the human body that constantly protects it from UV rays, heat, light, dust, and other hazardous radiation. One of the most dangerous illnesses that affect people is skin cancer. A type of skin cancer called melanoma starts in the melanocytes, which regulate the colour in human skin. Reducing the fatality rate from skin cancer requires early detection and diagnosis of conditions like melanoma. In this article, a self-attention based cycle-consistent generative adversarial network optimized with the Archerfish Hunting Optimization Algorithm for melanoma classification (SACCGAN-AHOA-MC-DI) from dermoscopic images is proposed. Primarily, the input skin dermoscopic images are gathered from the ISIC 2019 dataset. Then, the input images are pre-processed using adjusted quick shift phase preserving dynamic range compression (AQSP-DRC) to remove noise and increase the quality of the skin dermoscopic images. These pre-processed images are fed to piecewise fuzzy C-means clustering (PF-CMC) for ROI segmentation. The segmented ROI is supplied to the Hexadecimal Local Adaptive Binary Pattern (HLABP) to extract radiomic features, namely grayscale statistic features (standard deviation, mean, kurtosis, and skewness) together with Haralick texture features (contrast, energy, entropy, homogeneity, and inverse difference moments). The extracted features are fed to the self-attention based cycle-consistent generative adversarial network (SACCGAN), which classifies the skin cancers as melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, vascular lesion, squamous cell carcinoma, and melanoma. In general, SACCGAN does not adopt any optimization mode to define the ideal parameters that assure accurate classification of skin cancer. Hence, the Archerfish Hunting Optimization Algorithm (AHOA) is employed to optimize the SACCGAN classifier so that it categorizes the skin cancer accurately.
The proposed method attains 23.01%, 14.96%, and 45.31% higher accuracy and 32.16%, 11.32%, and 24.56% lower computational time compared with existing methods: a melanoma prediction method for unbalanced data utilizing an optimized SqueezeNet through bald eagle search optimization (CNN-BES-MC-DI), a hyper-parameter optimized CNN based on the grey wolf optimization algorithm (CNN-GWOA-MC-DI), and DEANN-based skin cancer detection using fuzzy C-means clustering (DEANN-MC-DI). RESEARCH HIGHLIGHTS: A self-attention based cycle-consistent GAN for melanoma classification (SACCGAN-AHOA-MC-DI) from dermoscopic images is proposed. The SACCGAN-AHOA-MC-DI method is implemented in Python. Adjusted quick shift phase preserving dynamic range compression (AQSP-DRC) removes noise and increases the quality of skin dermoscopic images.
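The grayscale statistic features listed in the abstract (mean, standard deviation, skewness, kurtosis) are standard moment-based descriptors; a minimal NumPy sketch, with the function name illustrative rather than taken from the paper:

```python
import numpy as np

def grayscale_stats(patch: np.ndarray) -> dict:
    """Compute the four grayscale statistic features named in the abstract
    (mean, standard deviation, skewness, kurtosis) for one image patch."""
    x = patch.astype(np.float64).ravel()
    mu = x.mean()
    sigma = x.std()
    z = (x - mu) / (sigma + 1e-12)  # standardized intensities (eps guards flat patches)
    return {
        "mean": mu,
        "std": sigma,
        "skewness": (z ** 3).mean(),  # third standardized moment
        "kurtosis": (z ** 4).mean(),  # fourth standardized moment (non-excess)
    }
```

A symmetric intensity distribution yields skewness near zero, which makes the feature a cheap asymmetry check on the segmented ROI.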
Affiliation(s)
- P Harini
- Professor and HoD, Department of Computer Science and Engineering, St. Ann's College of Engineering and Technology, Chirala, Andhra Pradesh, India
- N Bindu Madhavi
- Department of Management Programmes, KLEF Centre for Distance & Online Education, Koneru Lakshmaiah Education Foundation (Deemed to be University), Guntur, India
- S Bhargavi Latha
- Associate Professor, School of Computer Science and Engineering, REVA University, Bengaluru, Karnataka, India
- A N Sasikumar
- Department of Computer Science and Engineering, Panimalar Engineering College, Chennai, India
2
Choukali MA, Amirani MC, Valizadeh M, Abbasi A, Komeili M. Pseudo-class part prototype networks for interpretable breast cancer classification. Sci Rep 2024; 14:10341. PMID: 38710757; DOI: 10.1038/s41598-024-60743-x.
Abstract
Interpretability in machine learning has become increasingly important as machine learning is used in more and more applications, including those with high-stakes consequences such as healthcare, where interpretability has been regarded as a key to the successful adoption of machine learning models. However, deep learning models, even interpretable ones, may rely on confounding or irrelevant information when making predictions, which poses critical challenges to their clinical acceptance. This has recently drawn researchers' attention to issues beyond the mere interpretation of deep learning models. In this paper, we first investigate the application of an inherently interpretable prototype-based architecture, known as ProtoPNet, to breast cancer classification in digital pathology and highlight its shortcomings in this application. Then, we propose a new method that uses more medically relevant information and makes more accurate and interpretable predictions. Our method leverages the clustering concept and implicitly increases the number of classes in the training dataset. The proposed method learns more relevant prototypes without any pixel-level annotated data. For a more holistic assessment, in addition to classification accuracy, we define a new metric for assessing the degree of interpretability based on the comments of a group of skilled pathologists. Experimental results on the BreakHis dataset show that the proposed method improves the classification accuracy and interpretability by 8% and 18%, respectively. The proposed method can therefore be seen as a step toward implementing interpretable deep learning models for the detection of breast cancer using histopathology images.
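ProtoPNet-style scoring, which this paper builds on, compares latent image patches to learned prototypes and max-pools the similarities into class logits. A minimal NumPy sketch of that scoring step, with all array names hypothetical:

```python
import numpy as np

def prototype_logits(patch_embeddings, prototypes, class_weights):
    """
    patch_embeddings: (num_patches, d) latent patches from one image
    prototypes:       (num_prototypes, d) learned prototype vectors
    class_weights:    (num_classes, num_prototypes) final linear layer
    Returns class logits from max-pooled prototype similarities,
    mirroring ProtoPNet's 'this looks like that' scoring scheme.
    """
    # squared L2 distance between every patch and every prototype
    d2 = ((patch_embeddings[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    # ProtoPNet-style log similarity: large when the distance is small
    sim = np.log((d2 + 1.0) / (d2 + 1e-4))
    # max-pool over patches: keep the best-matching patch per prototype
    per_proto = sim.max(axis=0)
    return class_weights @ per_proto
```

The pseudo-class idea in the paper effectively enlarges the label set so these prototypes specialize further, but the forward scoring stays of this shape.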
Affiliation(s)
- Mehdi Chehel Amirani
- Department of Electrical and Computer Engineering, Urmia University, Urmia, Iran
- Morteza Valizadeh
- Department of Electrical and Computer Engineering, Urmia University, Urmia, Iran
- Ata Abbasi
- Cellular and Molecular Research Center, Cellular and Molecular Medicine Research Institute, Urmia University of Medical Sciences, Urmia, Iran
- Department of Pathology, Faculty of Medicine, Urmia University of Medical Sciences, Urmia, Iran
- Majid Komeili
- School of Computer Science, Carleton University, Ottawa, Canada
3
Imran M, Islam Tiwana M, Mohsan MM, Alghamdi NS, Akram MU. Transformer-based framework for multi-class segmentation of skin cancer from histopathology images. Front Med (Lausanne) 2024; 11:1380405. PMID: 38741771; PMCID: PMC11089103; DOI: 10.3389/fmed.2024.1380405.
Abstract
Introduction: Non-melanoma skin cancer, comprising basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and intraepidermal carcinoma (IEC), has the highest incidence rate among skin cancers. Intelligent decision support systems may address the issue of the limited number of subject experts and help mitigate the disparity in health services between urban centers and remote areas. Methods: In this research, we propose a transformer-based model for the segmentation of histopathology images not only into inflammation and cancers such as BCC, SCC, and IEC, but also to identify skin tissues and boundaries that are important in decision-making. Accurate segmentation of these tissue types will eventually lead to accurate detection and classification of non-melanoma skin cancer. Segmentation according to tissue types, and its visual representation before classification, enhances the trust of pathologists and doctors because it is relatable to how most pathologists approach this problem. The visualization of the model's confidence in its predictions through uncertainty maps also distinguishes this study from most deep learning methods. Results: The evaluation of the proposed system was carried out using a publicly available dataset. The proposed segmentation system demonstrated good performance, with an F1 score of 0.908, a mean intersection over union (mIoU) of 0.653, and an average accuracy of 83.1%, suggesting that it can be used successfully as a decision support system and has the potential to mature into a fully automated system. Discussion: This study is an attempt to automate the segmentation of the most common non-melanoma skin cancers using a transformer-based deep learning technique applied to histopathology skin images. Highly accurate segmentation and visual representation of histopathology images according to tissue types implies that the system can be used for routine skin pathology tasks, including cancer and other anomaly detection, their classification, and measurement of surgical margins in cancer cases.
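The reported F1 score, mIoU, and average accuracy can all be derived from a multi-class confusion matrix; a small NumPy sketch, assuming macro (per-class) averaging:

```python
import numpy as np

def segmentation_metrics(conf: np.ndarray) -> dict:
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp        # predicted as this class but wrong
    fn = conf.sum(axis=1) - tp        # pixels of this class that were missed
    f1 = 2 * tp / (2 * tp + fp + fn)  # per-class Dice/F1
    iou = tp / (tp + fp + fn)         # per-class intersection over union
    return {
        "macro_f1": f1.mean(),
        "mIoU": iou.mean(),
        "pixel_accuracy": tp.sum() / conf.sum(),
    }
```

mIoU is always the strictest of the three, which is why it sits well below F1 and accuracy in results like the ones quoted above.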
Affiliation(s)
- Muhammad Imran
- Department of Mechatronics Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Mohsin Islam Tiwana
- Department of Mechatronics Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Mashood Mohammad Mohsan
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Norah Saleh Alghamdi
- Department of Computer Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Muhammad Usman Akram
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
4
Myslicka M, Kawala-Sterniuk A, Bryniarska A, Sudol A, Podpora M, Gasz R, Martinek R, Kahankova Vilimkova R, Vilimek D, Pelc M, Mikolajewski D. Review of the application of the most current sophisticated image processing methods for the skin cancer diagnostics purposes. Arch Dermatol Res 2024; 316:99. PMID: 38446274; DOI: 10.1007/s00403-024-02828-1.
Abstract
This paper presents the most current and innovative solutions applying modern digital image processing methods for the purpose of skin cancer diagnostics. Skin cancer is one of the most common types of cancer. In the USA alone, one in five people will develop skin cancer, and this trend is constantly increasing. Implementation of new, non-invasive methods plays a crucial role in both identification and prevention of skin cancer. Early diagnosis and treatment are needed in order to decrease the number of deaths due to this disease. This paper also contains information regarding the most common skin cancer types, mortality, and epidemiological data for Poland, Europe, Canada, and the USA. It also covers the most efficient and modern image recognition methods based on artificial intelligence that are currently applied for diagnostic purposes. Both professional, sophisticated solutions and inexpensive ones are presented. This review covers the period from 2017 to 2022 in terms of solutions and statistics. The authors decided to focus on the latest data, mostly due to rapid technological development and the increased number of new methods, which positively affects diagnosis and prognosis.
Affiliation(s)
- Maria Myslicka
- Faculty of Medicine, Wroclaw Medical University, J. Mikulicza-Radeckiego 5, 50-345, Wroclaw, Poland
- Aleksandra Kawala-Sterniuk
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Anna Bryniarska
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Adam Sudol
- Faculty of Natural Sciences and Technology, University of Opole, Dmowskiego 7-9, 45-368, Opole, Poland
- Michal Podpora
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Rafal Gasz
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Radek Martinek
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Radana Kahankova Vilimkova
- Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Proszkowska 76, 45-758, Opole, Poland
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, Ostrava, 70800, Czech Republic
- Mariusz Pelc
- Institute of Computer Science, University of Opole, Oleska 48, 45-052, Opole, Poland
- School of Computing and Mathematical Sciences, University of Greenwich, Old Royal Naval College, Park Row, SE10 9LS, London, UK
- Dariusz Mikolajewski
- Institute of Computer Science, Kazimierz Wielki University in Bydgoszcz, ul. Kopernika 1, 85-074, Bydgoszcz, Poland
- Neuropsychological Research Unit, 2nd Clinic of the Psychiatry and Psychiatric Rehabilitation, Medical University in Lublin, Gluska 1, 20-439, Lublin, Poland
5
Petracchi B, Torti E, Marenzi E, Leporati F. Acceleration of Hyperspectral Skin Cancer Image Classification through Parallel Machine-Learning Methods. Sensors (Basel) 2024; 24:1399. PMID: 38474935; DOI: 10.3390/s24051399.
Abstract
Hyperspectral imaging (HSI) has become a very compelling technique in different scientific areas; indeed, many researchers use it in the fields of remote sensing, agriculture, forensics, and medicine. In the latter, HSI plays a crucial role as a diagnostic support and for surgery guidance. However, the computational effort involved in elaborating hyperspectral data is not trivial. Furthermore, the demand for detecting diseases in a short time is undeniable. In this paper, we take up this challenge by parallelizing three of the most intensively used machine-learning methods, the Support Vector Machine (SVM), Random Forest (RF), and eXtreme Gradient Boosting (XGB) algorithms, using the Compute Unified Device Architecture (CUDA) to accelerate the classification of hyperspectral skin cancer images. All three have shown good performance in HS image classification, in particular when the size of the dataset is limited, as demonstrated in the literature. We illustrate the parallelization techniques adopted for each approach, highlighting the suitability of Graphical Processing Units (GPUs) to this aim. Experimental results show that the parallel SVM and XGB algorithms significantly improve classification times in comparison with their serial counterparts.
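The key property such parallelization exploits is that hyperspectral pixels are classified independently, so the (H, W, B) cube flattens into a spectra matrix that can be split across workers. A CPU-side sketch with Python threads standing in for the paper's CUDA kernels (function and parameter names are illustrative, not from the paper):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def classify_cube(cube, predict, workers=4, chunk=1024):
    """
    cube:    (H, W, B) hyperspectral image, B spectral bands per pixel
    predict: function mapping an (n, B) block of spectra to (n,) labels
    Pixels are independent, so the flattened spectra matrix is split
    into chunks and classified concurrently; results are reassembled
    into an (H, W) label map.
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)
    blocks = [pixels[i:i + chunk] for i in range(0, len(pixels), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(predict, blocks))  # map preserves block order
    return np.concatenate(parts).reshape(h, w)
```

Any per-pixel classifier (an SVM, a tree ensemble, a lookup) can be dropped in as `predict`; on a GPU the same decomposition maps chunks to thread blocks instead.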
Affiliation(s)
- Bernardo Petracchi
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, I-27100 Pavia, Italy
- Emanuele Torti
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, I-27100 Pavia, Italy
- Elisa Marenzi
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, I-27100 Pavia, Italy
- Francesco Leporati
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, I-27100 Pavia, Italy
6
Behara K, Bhero E, Agee JT. Grid-Based Structural and Dimensional Skin Cancer Classification with Self-Featured Optimized Explainable Deep Convolutional Neural Networks. Int J Mol Sci 2024; 25:1546. PMID: 38338828; PMCID: PMC10855492; DOI: 10.3390/ijms25031546.
Abstract
Skin cancer is a severe and potentially lethal disease, and early detection is critical for successful treatment. Traditional procedures for diagnosing skin cancer are expensive, time-intensive, and necessitate the expertise of a medical practitioner. In recent years, many researchers have developed artificial intelligence (AI) tools, including shallow and deep machine learning-based approaches, to diagnose skin cancer. However, AI-based skin cancer diagnosis faces challenges of complexity, low reproducibility, and limited explainability. To address these problems, we propose a novel Grid-Based Structural and Dimensional Explainable Deep Convolutional Neural Network for accurate and interpretable skin cancer classification. This model employs adaptive thresholding for extracting the region of interest (ROI), using its dynamic capabilities to enhance the accuracy of identifying cancerous regions. The VGG-16 architecture extracts the hierarchical characteristics of skin lesion images, leveraging its recognized capabilities for deep feature extraction. The proposed model leverages a grid structure to capture spatial relationships within lesions, while the dimensional features extract relevant information from the various image channels. An Adaptive Intelligent Coney Optimization (AICO) algorithm is employed for self-feature-selected optimization and hyperparameter fine-tuning, dynamically adapting the model architecture to optimize feature extraction and classification. The model was trained and tested using the ISIC dataset of 10,015 dermoscope images and the MNIST dataset of 2357 images of malignant and benign oncological diseases.
The experimental results demonstrated that the model achieved accuracy and CSI values of 0.96 and 0.97 for TP 80 using the ISIC dataset, which is 17.70% and 16.49% more than lightweight CNN, 20.83% and 19.59% more than DenseNet, 18.75% and 17.53% more than CNN, 6.25% and 6.18% more than Efficient Net-B0, 5.21% and 5.15% over ECNN, 2.08% and 2.06% over COA-CAN, and 5.21% and 5.15% more than ARO-ECNN. Additionally, the AICO self-feature selected ECNN model exhibited minimal FPR and FNR of 0.03 and 0.02, respectively. The model attained a loss of 0.09 for ISIC and 0.18 for the MNIST dataset, indicating that the model proposed in this research outperforms existing techniques. The proposed model improves accuracy, interpretability, and robustness for skin cancer classification, ultimately aiding clinicians in early diagnosis and treatment.
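One common form of the adaptive thresholding mentioned for ROI extraction is a local-mean rule; a slow-but-clear NumPy sketch, assuming lesions are darker than the surrounding skin (window size and offset are hypothetical parameters, not the paper's):

```python
import numpy as np

def adaptive_threshold(img, win=3, offset=0.0):
    """
    Local-mean adaptive threshold: a pixel is marked foreground when it
    is darker than the mean of its win x win neighbourhood minus an
    offset. Unlike a single global threshold, this adapts to uneven
    illumination across the skin image.
    """
    img = img.astype(float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    local_mean = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + win, j:j + win].mean()
    return img < (local_mean - offset)
```

A production version would compute `local_mean` with an integral image or a separable box filter instead of the explicit double loop.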
Affiliation(s)
- Kavita Behara
- Department of Electrical Engineering, Mangosuthu University of Technology, Durban 4031, South Africa
- Ernest Bhero
- Discipline of Electrical, Electronic and Computer Engineering, University of KwaZulu Natal, Durban 4041, South Africa
- John Terhile Agee
- Discipline of Electrical, Electronic and Computer Engineering, University of KwaZulu Natal, Durban 4041, South Africa
7
Furriel BCRS, Oliveira BD, Prôa R, Paiva JQ, Loureiro RM, Calixto WP, Reis MRC, Giavina-Bianchi M. Artificial intelligence for skin cancer detection and classification for clinical environment: a systematic review. Front Med (Lausanne) 2024; 10:1305954. PMID: 38259845; PMCID: PMC10800812; DOI: 10.3389/fmed.2023.1305954.
Abstract
Background: Skin cancer is one of the most common forms of cancer worldwide, with a significant increase in incidence over the last few decades. Early and accurate detection of this type of cancer can result in better prognoses and less invasive treatments for patients. With advances in Artificial Intelligence (AI), tools have emerged that can facilitate diagnosis and classify dermatological images, complementing traditional clinical assessments and being applicable where there is a shortage of specialists. Their adoption requires analysis of efficacy, safety, and ethical considerations, as well as consideration of the genetic and ethnic diversity of patients. Objective: This systematic review examines research on the detection, classification, and assessment of skin cancer images in clinical settings. Methods: We conducted a systematic literature search on PubMed, Scopus, Embase, and Web of Science, encompassing studies published until April 4th, 2023. Study selection, data extraction, and critical appraisal were carried out by two independent reviewers. Results were subsequently presented through a narrative synthesis. Results: The search identified 760 studies across the four databases, from which only 18 studies were selected, focusing on developing, implementing, and validating systems to detect, diagnose, and classify skin cancer in clinical settings. This review covers descriptive analysis, data scenarios, data processing and techniques, study results and perspectives, and physician diversity, accessibility, and participation. Conclusion: The application of artificial intelligence in dermatology has the potential to revolutionize early detection of skin cancer. However, it is imperative to validate such tools and collaborate with healthcare professionals to ensure their clinical effectiveness and safety.
Affiliation(s)
- Brunna C. R. S. Furriel
- Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Electrical, Mechanical and Computer Engineering School, Federal University of Goiás, Goiânia, Brazil
- Studies and Researches in Science and Technology Group (GCITE), Federal Institute of Goiás, Goiânia, Brazil
- Bruno D. Oliveira
- Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Renata Prôa
- Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Joselisa Q. Paiva
- Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Rafael M. Loureiro
- Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Wesley P. Calixto
- Electrical, Mechanical and Computer Engineering School, Federal University of Goiás, Goiânia, Brazil
- Studies and Researches in Science and Technology Group (GCITE), Federal Institute of Goiás, Goiânia, Brazil
- Márcio R. C. Reis
- Imaging Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Studies and Researches in Science and Technology Group (GCITE), Federal Institute of Goiás, Goiânia, Brazil
8
Zhang Y, Feng W, Wu Z, Li W, Tao L, Liu X, Zhang F, Gao Y, Huang J, Guo X. Deep-Learning Model of ResNet Combined with CBAM for Malignant-Benign Pulmonary Nodules Classification on Computed Tomography Images. Medicina (Kaunas) 2023; 59:1088. PMID: 37374292; DOI: 10.3390/medicina59061088.
Abstract
Background and Objectives: Lung cancer remains a leading cause of cancer mortality worldwide. Accurately distinguishing benign pulmonary nodules from malignant ones is crucial for early diagnosis and improved patient outcomes. The purpose of this study is to explore a deep-learning model of ResNet combined with a convolutional block attention module (CBAM) for the differentiation between benign and malignant lung cancer, based on computed tomography (CT) images, morphological features, and clinical information. Methods and Materials: In this study, 8241 CT slices containing pulmonary nodules were retrospectively included. A random sample comprising 20% (n = 1647) of the images was used as the test set, and the remaining data were used as the training set. ResNet combined with CBAM (ResNet-CBAM) was used to establish classifiers on the basis of images, morphological features, and clinical information. Nonsubsampled dual-tree complex contourlet transform (NSDTCT) combined with an SVM classifier (NSDTCT-SVM) was used as a comparative model. Results: The AUC and accuracy of the ResNet-CBAM model were 0.940 and 0.867, respectively, in the test set when only images were used as inputs. By combining the morphological features and clinical information, ResNet-CBAM shows better performance (AUC: 0.957, accuracy: 0.898). In comparison, a radiomic analysis using NSDTCT-SVM achieved AUC and accuracy values of 0.807 and 0.779, respectively. Conclusions: Our findings demonstrate that deep-learning models, combined with additional information, can enhance the classification performance for pulmonary nodules. This model can assist clinicians in accurately diagnosing pulmonary nodules in clinical practice.
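The channel-attention half of CBAM squeezes the feature map with average and max pooling, passes both descriptors through a shared two-layer MLP, and rescales the channels with a sigmoid gate. A minimal NumPy sketch (weight shapes illustrative; a real CBAM also adds a spatial-attention stage):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """
    CBAM-style channel attention on a (C, H, W) feature map.
    Average- and max-pooled channel descriptors share a tiny MLP
    (w1: C -> C/r, w2: C/r -> C); the summed outputs pass through a
    sigmoid and rescale each channel of the input.
    """
    avg = fmap.mean(axis=(1, 2))                    # (C,) average-pooled descriptor
    mx = fmap.max(axis=(1, 2))                      # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU
    scale = sigmoid(mlp(avg) + mlp(mx))             # (C,) gates in (0, 1)
    return fmap * scale[:, None, None]
```

Because the gate is per-channel, the block learns which feature maps matter for a nodule without adding spatial parameters; the spatial stage then does the complementary job per location.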
Affiliation(s)
- Yanfei Zhang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Wei Feng
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Zhiyuan Wu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Weiming Li
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Lixin Tao
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Xiangtong Liu
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Feng Zhang
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
- Yan Gao
- Department of Nuclear Medicine, Xuanwu Hospital Capital Medical University, Beijing 100053, China
- Jian Huang
- School of Mathematical Sciences, University College Cork, T12 YN60 Cork, Ireland
- Xiuhua Guo
- Department of Epidemiology and Health Statistics, School of Public Health, Capital Medical University, Beijing 100069, China
- Beijing Municipal Key Laboratory of Clinical Epidemiology, Capital Medical University, Beijing 100069, China
9
Wang L, Zhang L, Shu X, Yi Z. Intra-class consistency and inter-class discrimination feature learning for automatic skin lesion classification. Med Image Anal 2023; 85:102746. PMID: 36638748; DOI: 10.1016/j.media.2023.102746.
Abstract
Automated skin lesion classification has been shown to improve the diagnostic performance for dermoscopic images. Although many successes have been achieved, accurate classification remains challenging due to significant intra-class variation and inter-class similarity. In this article, a deep learning method is proposed to increase the intra-class consistency as well as the inter-class discrimination of the features learned for automatic skin lesion classification. To enhance inter-class discriminative feature learning, a CAM-based (class activation mapping) global-lesion localization module is proposed that optimizes the distance between the CAMs generated for the same dermoscopic image by different skin lesion tasks. Then, a global-features-guided intra-class similarity learning module is proposed to generate the class center according to the deep features of all samples in one class and the history feature of one sample during the learning process. In this way, performance can be improved through the collaboration of CAM-based inter-class feature discrimination and global-features-guided intra-class feature concentration. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on the ISIC-2017 and ISIC-2018 datasets. Experimental results with different backbones demonstrate that the proposed method generalizes well and can adaptively focus on the more discriminative regions of the skin lesion.
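A CAM is simply the classifier-weighted sum of the last convolutional feature maps, and the localization module described above penalizes the distance between CAMs produced for the same image by different tasks. A minimal NumPy sketch of both pieces (names hypothetical):

```python
import numpy as np

def class_activation_map(fmaps, class_weights):
    """
    CAM for one class: the weighted sum of the last conv feature maps
    (C, H, W) using that class's final-layer weights (C,), normalized
    to [0, 1] so maps from different tasks are comparable.
    """
    cam = np.tensordot(class_weights, fmaps, axes=1)  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-8)

def cam_distance(cam_a, cam_b):
    """Mean squared distance between two normalized CAMs; minimizing it
    pulls the maps from different lesion tasks toward the same region."""
    return float(((cam_a - cam_b) ** 2).mean())
```

During training, `cam_distance` would be added to the classification loss so that all tasks localize the same lesion area.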
Affiliation(s)
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
- Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
10
Alphonse AS, Benifa JVB, Muaad AY, Chola C, Heyat MBB, Murshed BAH, Abdel Samee N, Alabdulhafith M, Al-antari MA. A Hybrid Stacked Restricted Boltzmann Machine with Sobel Directional Patterns for Melanoma Prediction in Colored Skin Images. Diagnostics (Basel) 2023; 13:1104. PMID: 36980412; PMCID: PMC10047753; DOI: 10.3390/diagnostics13061104.
Abstract
Melanoma, a highly dangerous kind of skin cancer, is distinguished by uncontrolled cell multiplication. Melanoma detection is of the utmost significance in clinical practice because of its atypical border structure and the numerous types of tissue it can involve. The identification of melanoma in color images is still a challenging process, despite the numerous approaches proposed in prior research. In this research, we present a comprehensive system for the efficient and precise classification of skin lesions. The framework includes preprocessing, segmentation, feature extraction, and classification modules. Preprocessing with DullRazor eliminates hair artifacts from skin images. Next, Fully Connected Neural Network (FCNN) semantic segmentation extracts precise and obvious Regions of Interest (ROIs). We then extract relevant skin image features from the ROIs using an enhanced Sobel Directional Pattern (SDP); for skin image analysis, the Sobel Directional Pattern outperforms ABCD. Finally, a stacked Restricted Boltzmann Machine (RBM) classifies the skin ROIs and accurately identifies skin melanoma. The experiments were conducted on five datasets: Pedro Hispano Hospital (PH2), International Skin Imaging Collaboration (ISIC 2016), ISIC 2017, Dermnet, and DermIS, achieving accuracies of 99.8%, 96.5%, 95.5%, 87.9%, and 97.6%, respectively. The results show that a stack of Restricted Boltzmann Machines is superior for categorizing skin cancer types when using the proposed innovative SDP.
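An illustrative take on a Sobel directional pattern, not the paper's exact SDP: quantize each interior pixel's Sobel gradient direction into a small set of codes and histogram the codes as a texture feature.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_direction_histogram(img, bins=8):
    """
    For each interior pixel, compute the Sobel gradient (gx, gy),
    quantize its direction into `bins` codes, and return the normalized
    histogram of codes as a directional texture descriptor.
    """
    img = img.astype(float)
    h, w = img.shape
    codes = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            win = img[i - 1:i + 2, j - 1:j + 2]
            gx = (win * SOBEL_X).sum()
            gy = (win * SOBEL_Y).sum()
            ang = np.arctan2(gy, gx) % (2 * np.pi)          # direction in [0, 2*pi)
            codes.append(int(ang / (2 * np.pi) * bins) % bins)
    hist = np.bincount(np.array(codes), minlength=bins)
    return hist / hist.sum()
```

Such a histogram is a fixed-length vector regardless of ROI size, which is what lets it feed a downstream classifier such as the stacked RBM.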
Affiliation(s)
- A. Sherly Alphonse
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- J. V. Bibal Benifa
- Department of Studies in Computer Science and Engineering, Indian Institute of Information Technology, Kottayam 686635, India
- Correspondence: (J.V.B.B.); (M.A.); (M.A.A.-a.)
- Abdullah Y. Muaad
- Department of Studies in Computer Science, University of Mysore, Manasagangothri, Mysore 570006, India
- Channabasava Chola
- Department of Studies in Computer Science and Engineering, Indian Institute of Information Technology, Kottayam 686635, India
- Md Belal Bin Heyat
- IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Correspondence: (J.V.B.B.); (M.A.); (M.A.A.-a.)
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (J.V.B.B.); (M.A.); (M.A.A.-a.)
|
11
|
ASI-DBNet: An Adaptive Sparse Interactive ResNet-Vision Transformer Dual-Branch Network for the Grading of Brain Cancer Histopathological Images. Interdiscip Sci 2023; 15:15-31. [PMID: 35810266 DOI: 10.1007/s12539-022-00532-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Revised: 05/26/2022] [Accepted: 05/31/2022] [Indexed: 10/17/2022]
Abstract
Brain cancer is the deadliest cancer of the brain and central nervous system, and rapid, precise grading is essential to reduce patient suffering and improve survival. Traditional convolutional neural network (CNN)-based computer-aided diagnosis algorithms cannot fully exploit the global information in pathology images, while the recently popular vision transformer (ViT) does not attend enough to their local details; both shortcomings blur the model's focus and reduce grading accuracy. To solve this problem, we propose an adaptive sparse interaction ResNet-ViT dual-branch network (ASI-DBNet). First, we design a ResNet-ViT parallel structure to simultaneously capture and retain the local and global information of pathology images. Second, we design the adaptive sparse interaction block (ASIB) to let the ResNet branch interact with the ViT branch. Furthermore, we introduce an attention mechanism in ASIB to adaptively filter redundant information from the dual branches during the interaction, so that the exchanged feature maps are more informative. Extensive experiments show that ASI-DBNet outperforms various baseline and SOTA models, achieving 95.24% accuracy across four grades. In particular, for brain tumors with a high degree of deterioration (Grade III and Grade IV), ASI-DBNet achieves diagnostic accuracies of 97.93% and 96.28%, respectively, which is of great clinical significance. Meanwhile, gradient-weighted class activation mapping (Grad-CAM) and attention-rollout visualizations are used to expose the working logic of the model, and the resulting feature maps highlight the distinguishing features relevant to diagnosis. This improves the interpretability of, and confidence in, the model, which is of great value for the clinical diagnosis of brain cancer.
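The abstract describes ASIB as an attention-based filter deciding how much each branch contributes during the interaction. A toy NumPy sketch of such channel-wise gating between a local (CNN) feature vector and a global (ViT) feature vector, with hypothetical weights `w` rather than the paper's ASIB, might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_interaction(local_feat, global_feat, w):
    """Toy attention-gated interaction between two branches: a learned
    gate decides, per channel, how much of each branch to keep.
    Hypothetical weights w; a sketch, not the published ASIB."""
    gate = sigmoid(np.concatenate([local_feat, global_feat]) @ w)  # (C,)
    return gate * local_feat + (1 - gate) * global_feat            # convex mix

rng = np.random.default_rng(0)
C = 4
local_feat = rng.normal(size=C)    # stand-in for ResNet-branch features
global_feat = rng.normal(size=C)   # stand-in for ViT-branch features
w = rng.normal(size=(2 * C, C))    # hypothetical gate parameters
fused = adaptive_interaction(local_feat, global_feat, w)
```

Because the gate is a sigmoid, each fused channel is a convex combination of the two branch values, so redundant channels can be down-weighted without being discarded outright.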
|
12
|
Kamath V, Renuka A. Deep Learning Based Object Detection for Resource Constrained Devices- Systematic Review, Future Trends and Challenges Ahead. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.02.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/12/2023]
|
13
|
La Salvia M, Torti E, Leon R, Fabelo H, Ortega S, Balea-Fernandez F, Martinez-Vega B, Castaño I, Almeida P, Carretero G, Hernandez JA, Callico GM, Leporati F. Neural Networks-Based On-Site Dermatologic Diagnosis through Hyperspectral Epidermal Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:7139. [PMID: 36236240 PMCID: PMC9571453 DOI: 10.3390/s22197139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 09/15/2022] [Accepted: 09/18/2022] [Indexed: 06/16/2023]
Abstract
Cancer originates from the uncontrolled growth of healthy cells into a mass. Chromophores such as hemoglobin and melanin characterize skin spectral properties, allowing lesions to be classified by etiology. Hyperspectral imaging systems gather skin-reflected and transmitted light across several wavelength ranges of the electromagnetic spectrum, enabling potential skin-lesion differentiation through machine learning algorithms. Challenged by limited data availability and small inter- and intra-tumoral variability, we introduce a pipeline based on deep neural networks to diagnose hyperspectral skin-cancer images, targeting a handheld device equipped with a low-power graphical processing unit for routine clinical testing. Enhanced by data augmentation, transfer learning, and hyperparameter tuning, the proposed architectures aim to meet or exceed well-known dermatologist-level detection performance on both benign-malignant and multiclass classification tasks while diagnosing hyperspectral data under real-time constraints. Experiments show 87% sensitivity and 88% specificity for benign-malignant classification, and specificity above 80% in the multiclass scenario. AUC measurements suggest classification performance can be raised above 90% with adequate thresholding. For binary segmentation, we measured skin DICE and IoU scores above 90%. Segmenting the epidermal lesions with the U-Net++ architecture took at most an estimated 1.21 s while consuming 5 W, meeting the imposed time limit. Hence, we can diagnose hyperspectral epidermal data under real-time constraints.
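The "adequate thresholding" step mentioned above is the choice of a decision threshold along the ROC curve. A standard way to pick one operating point is to maximize Youden's J = sensitivity + specificity - 1; the sketch below (illustrative toy data, not the paper's code) does this by brute force:

```python
import numpy as np

def best_threshold(scores, labels):
    """Return the threshold maximizing Youden's J = sens + spec - 1,
    the usual single operating point chosen from an ROC curve."""
    best_t, best_j = 0.0, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# toy classifier scores and ground-truth labels
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])
labels = np.array([0, 0, 1, 1, 1, 0])
t, j = best_threshold(scores, labels)
```

On the toy data above, thresholding at 0.35 catches all three positives while misclassifying only one negative, which is the Youden-optimal trade-off.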
Affiliation(s)
- Marco La Salvia
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Emanuele Torti
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Raquel Leon
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Himar Fabelo
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Samuel Ortega
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Norwegian Institute of Food, Fisheries and Aquaculture Research (Nofima), 6122 Tromsø, Norway
- Francisco Balea-Fernandez
- Department of Psychology, Sociology and Social Work, University of Las Palmas de Gran Canaria, 35001 Las Palmas de Gran Canaria, Spain
- Beatriz Martinez-Vega
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Irene Castaño
- Department of Dermatology, Hospital Universitario de Gran Canaria Doctor Negrín, Barranco de la Ballena, s/n, 35010 Las Palmas de Gran Canaria, Spain
- Pablo Almeida
- Department of Dermatology, Complejo Hospitalario Universitario Insular-Materno Infantil, Avenida Maritima del Sur, s/n, 35016 Las Palmas de Gran Canaria, Spain
- Gregorio Carretero
- Department of Dermatology, Hospital Universitario de Gran Canaria Doctor Negrín, Barranco de la Ballena, s/n, 35010 Las Palmas de Gran Canaria, Spain
- Javier A. Hernandez
- Department of Dermatology, Complejo Hospitalario Universitario Insular-Materno Infantil, Avenida Maritima del Sur, s/n, 35016 Las Palmas de Gran Canaria, Spain
- Gustavo M. Callico
- Research Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35001 Las Palmas de Gran Canaria, Spain
- Francesco Leporati
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
|
14
|
Teng Q, Liu Z, Song Y, Han K, Lu Y. A survey on the interpretability of deep learning in medical diagnosis. MULTIMEDIA SYSTEMS 2022; 28:2335-2355. [PMID: 35789785 PMCID: PMC9243744 DOI: 10.1007/s00530-022-00960-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 05/29/2022] [Indexed: 06/15/2023]
Abstract
Deep learning has demonstrated remarkable performance in the medical domain, with accuracy that rivals or even exceeds that of human experts. However, these models are "black boxes": opaque, non-intuitive, and difficult for people to understand. This lack of interpretability, trust, and transparency is a barrier to applying deep learning models in clinical practice. To overcome this problem, several studies on interpretability have been proposed. In this paper, we therefore comprehensively review the interpretability of deep learning in medical diagnosis based on the current literature, covering common interpretability methods used in the medical domain, various applications of interpretability for disease diagnosis, prevalent evaluation metrics, and several disease datasets. In addition, the challenges of interpretability and future research directions are discussed. To the best of our knowledge, this is the first summary of the various applications of interpretability methods for disease diagnosis.
Affiliation(s)
- Qiaoying Teng
- School of Computer Science, Jilin Normal University, Siping, 136000 China
- Zhe Liu
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013 China
- Yuqing Song
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013 China
- Kai Han
- School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, 212013 China
- Yang Lu
- School of Computer Science, Jilin Normal University, Siping, 136000 China
|
15
|
Deep Learning-Based Classification for Melanoma Detection Using XceptionNet. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:2196096. [PMID: 35360474 PMCID: PMC8964214 DOI: 10.1155/2022/2196096] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/04/2022] [Accepted: 02/19/2022] [Indexed: 12/31/2022]
Abstract
Skin cancer is one of the most common types of cancer in the world, accounting for at least 40% of all cancers. Melanoma is the 19th most commonly occurring cancer, with about 300,000 new cases reported in 2018. While cancer treatment relies on interventional methods such as surgery, radiotherapy, and chemotherapy, studies show that new computer technologies, such as image-processing mechanisms for early diagnosis, can help physicians treat this cancer. This paper proposes an automatic method for diagnosing skin cancer from dermoscopy images. The proposed model is based on an improved XceptionNet that utilizes the swish activation function and depthwise separable convolutions. This system improves classification accuracy compared to the original Xception and other deep architectures. Simulations of the proposed method are compared with other state-of-the-art skin-cancer diagnosis solutions, and the results show that the suggested method achieves higher accuracy than the comparative methods.
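The two ingredients named for the improved Xception, the swish activation and depthwise separable convolutions, can be sketched in plain NumPy (a minimal 'valid'-padding illustration with random weights, not the trained network):

```python
import numpy as np

def swish(x):
    """Swish activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depthwise separable convolution = per-channel 3x3 (depthwise)
    convolution followed by a 1x1 pointwise channel-mixing step.
    Shapes: x (H, W, C), dw_kernels (C, 3, 3), pw_weights (C, C_out)."""
    H, W, C = x.shape
    out = np.zeros((H - 2, W - 2, C))
    for c in range(C):                      # depthwise: one filter per channel
        for i in range(H - 2):
            for j in range(W - 2):
                out[i, j, c] = (x[i:i+3, j:j+3, c] * dw_kernels[c]).sum()
    return out @ pw_weights                 # pointwise: 1x1 channel mixing

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 6, 3))              # toy feature map
y = swish(depthwise_separable_conv(x,
                                   rng.normal(size=(3, 3, 3)),   # depthwise filters
                                   rng.normal(size=(3, 8))))     # pointwise weights
```

Splitting the spatial and channel-mixing steps this way is what makes the layer far cheaper than a full 3x3 convolution over all channel pairs.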
|
16
|
Hauser K, Kurz A, Haggenmüller S, Maron RC, von Kalle C, Utikal JS, Meier F, Hobelsberger S, Gellrich FF, Sergon M, Hauschild A, French LE, Heinzerling L, Schlager JG, Ghoreschi K, Schlaak M, Hilke FJ, Poch G, Kutzner H, Berking C, Heppt MV, Erdmann M, Haferkamp S, Schadendorf D, Sondermann W, Goebeler M, Schilling B, Kather JN, Fröhling S, Lipka DB, Hekler A, Krieghoff-Henning E, Brinker TJ. Explainable artificial intelligence in skin cancer recognition: A systematic review. Eur J Cancer 2022; 167:54-69. [PMID: 35390650 DOI: 10.1016/j.ejca.2022.02.025] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/22/2022] [Accepted: 02/24/2022] [Indexed: 01/18/2023]
Abstract
BACKGROUND Due to their ability to solve complex problems, deep neural networks (DNNs) are becoming increasingly popular in medical applications. However, decision-making by such algorithms is essentially a black-box process that makes it difficult for physicians to judge whether the decisions are reliable. The use of explainable artificial intelligence (XAI) is often suggested as a solution to this problem. We investigate how XAI is used for skin cancer detection: how is it used during the development of new DNNs? What kinds of visualisations are commonly used? Are there systematic evaluations of XAI with dermatologists or dermatopathologists? METHODS Google Scholar, PubMed, IEEE Xplore, Science Direct and Scopus were searched for peer-reviewed studies published between January 2017 and October 2021 applying XAI to dermatological images: the search terms histopathological image, whole-slide image, clinical image, dermoscopic image, skin, dermatology, explainable, interpretable and XAI were used in various combinations. Only studies concerned with skin cancer were included. RESULTS 37 publications fulfilled our inclusion criteria. Most studies (19/37) simply applied existing XAI methods to their classifier to interpret its decision-making. Some studies (4/37) proposed new XAI methods or improved upon existing techniques. 14/37 studies addressed specific questions such as bias detection and the impact of XAI on man-machine interactions. However, only three of them evaluated the performance and confidence of humans using CAD systems with XAI. CONCLUSION XAI is commonly applied during the development of DNNs for skin cancer detection. However, a systematic and rigorous evaluation of its usefulness in this scenario is lacking.
Affiliation(s)
- Katja Hauser
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Kurz
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sarah Haggenmüller
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Roman C Maron
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Christof von Kalle
- Department of Clinical-Translational Sciences, Charité University Medicine and Berlin Institute of Health (BIH), Berlin, Germany
- Jochen S Utikal
- Department of Dermatology, Heidelberg University, Mannheim, Germany; Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Friedegund Meier
- Skin Cancer Center at the University Cancer Centre and National Center for Tumor Diseases Dresden, Department of Dermatology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Germany
- Sarah Hobelsberger
- Skin Cancer Center at the University Cancer Centre and National Center for Tumor Diseases Dresden, Department of Dermatology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Germany
- Frank F Gellrich
- Skin Cancer Center at the University Cancer Centre and National Center for Tumor Diseases Dresden, Department of Dermatology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Germany
- Mildred Sergon
- Skin Cancer Center at the University Cancer Centre and National Center for Tumor Diseases Dresden, Department of Dermatology, University Hospital Carl Gustav Carus, Technische Universität Dresden, Germany
- Axel Hauschild
- Department of Dermatology, University Hospital (UKSH), Kiel, Germany
- Lars E French
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany; Dr. Phillip Frost Department of Dermatology and Cutaneous Surgery, University of Miami, Miller School of Medicine, Miami, FL, USA
- Lucie Heinzerling
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
- Justin G Schlager
- Department of Dermatology and Allergy, University Hospital, LMU Munich, Munich, Germany
- Kamran Ghoreschi
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Max Schlaak
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Franz J Hilke
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Gabriela Poch
- Department of Dermatology, Venereology and Allergology, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Heinz Kutzner
- Dermatopathology Laboratory, Friedrichshafen, Germany
- Carola Berking
- Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - EMN, Friedrich-Alexander University Erlangen, Nuremberg, Germany
- Markus V Heppt
- Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - EMN, Friedrich-Alexander University Erlangen, Nuremberg, Germany
- Michael Erdmann
- Department of Dermatology, University Hospital Erlangen, Comprehensive Cancer Center Erlangen - EMN, Friedrich-Alexander University Erlangen, Nuremberg, Germany
- Sebastian Haferkamp
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, Essen, Germany
- Wiebke Sondermann
- Department of Dermatology, University Hospital Essen, Essen, Germany
- Matthias Goebeler
- Department of Dermatology, University Hospital Würzburg, Würzburg, Germany
- Bastian Schilling
- Department of Dermatology, University Hospital Würzburg, Würzburg, Germany
- Jakob N Kather
- Division of Translational Medical Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Stefan Fröhling
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Daniel B Lipka
- National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Achim Hekler
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany.
|
17
|
Lucieri A, Bajwa MN, Braun SA, Malik MI, Dengel A, Ahmed S. ExAID: A multimodal explanation framework for computer-aided diagnosis of skin lesions. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 215:106620. [PMID: 35033756 DOI: 10.1016/j.cmpb.2022.106620] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 12/01/2021] [Accepted: 01/03/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVES One principal impediment to the successful deployment of Artificial Intelligence (AI) based Computer-Aided Diagnosis (CAD) systems in everyday clinical workflows is their lack of transparent decision-making. Although commonly used eXplainable AI (XAI) methods provide insights into these largely opaque algorithms, such explanations are usually convoluted and not readily comprehensible. Explaining decisions about the malignancy of skin lesions from dermoscopic images demands particular clarity, as the underlying medical problem definition is itself ambiguous. This work presents ExAID (Explainable AI for Dermatology), a novel XAI framework for biomedical image analysis that provides multi-modal concept-based explanations, consisting of easy-to-understand textual explanations and visual maps, to justify its predictions. METHODS Our framework relies on Concept Activation Vectors to map human-understandable concepts to those learned by an arbitrary Deep Learning (DL) based algorithm, and on Concept Localisation Maps to highlight those concepts in the input space. This identification of relevant concepts is then used to construct fine-grained textual explanations supplemented by concept-wise location information, yielding comprehensive and coherent multi-modal explanations. All decision-related information is presented in a diagnostic interface for use in clinical routine. Moreover, the framework includes an educational mode providing dataset-level explanation statistics as well as tools for data and model exploration to aid medical research and education. RESULTS Through rigorous quantitative and qualitative evaluation of our framework on a range of publicly available dermoscopic image datasets, we show the utility of multi-modal explanations in CAD-assisted scenarios even in cases of incorrect disease predictions. We demonstrate that concept detectors for explaining pre-trained networks reach accuracies of up to 81.46%, comparable to supervised networks trained end-to-end. CONCLUSIONS We present a new end-to-end framework for the multi-modal explanation of DL-based biomedical image analysis in melanoma classification and evaluate its utility on an array of datasets. Since perspicuous explanation is one of the cornerstones of any CAD system, we believe that ExAID will accelerate the transition from AI research to practice by providing dermatologists and researchers with an effective tool that they can both understand and trust. ExAID can also serve as the basis for similar applications in other biomedical fields.
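The Concept Activation Vectors at the heart of the framework can be sketched as a linear probe on network activations: fit a linear classifier separating concept-positive from concept-negative examples, and take its unit weight vector as the CAV. The sketch below uses synthetic 2-D "activations" and plain gradient descent; it illustrates the idea, not ExAID's implementation:

```python
import numpy as np

def concept_activation_vector(acts_pos, acts_neg, lr=0.1, epochs=200):
    """Fit a logistic-regression probe separating activations of
    concept-positive vs concept-negative examples; its normalized
    weight vector is the Concept Activation Vector (CAV)."""
    X = np.vstack([acts_pos, acts_neg])
    y = np.array([1.0] * len(acts_pos) + [0.0] * len(acts_neg))
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return w / np.linalg.norm(w)                # unit CAV direction

rng = np.random.default_rng(2)
pos = rng.normal(loc=[2.0, 0.0], size=(50, 2))   # concept present
neg = rng.normal(loc=[-2.0, 0.0], size=(50, 2))  # concept absent
cav = concept_activation_vector(pos, neg)
```

Projecting a new activation onto `cav` then scores how strongly the concept is expressed, which is what makes the textual explanations concept-wise.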
Affiliation(s)
- Adriano Lucieri
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany.
- Muhammad Naseer Bajwa
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany.
- Stephan Alexander Braun
- University Hospital Münster, Albert-Schweitzer-Campus 1, 48149 Münster, Germany; University Hospital of Düsseldorf, Moorenstraße 5, 40225 Düsseldorf, Germany.
- Muhammad Imran Malik
- School of Electrical Engineering and Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan; Deep Learning Laboratory, National Center of Artificial Intelligence, Islamabad, Pakistan.
- Andreas Dengel
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany; Technical University Kaiserslautern, Erwin-Schrödinger-Straße 52, 67663 Kaiserslautern, Germany.
- Sheraz Ahmed
- German Research Center for Artificial Intelligence (DFKI) GmbH, Trippstadter Straße 122, 67663 Kaiserslautern, Germany.
|
18
|
An Efficient Stacked Deep Transfer Learning Model for Automated Diagnosis of Lyme Disease. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2933015. [PMID: 35265109 PMCID: PMC8901315 DOI: 10.1155/2022/2933015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 01/26/2022] [Accepted: 02/07/2022] [Indexed: 12/05/2022]
Abstract
Lyme disease is one of the most common vector-borne infections. It typically causes cardiac, neurologic, musculoskeletal, and dermatologic conditions. However, it is often poorly diagnosed because of its many similarities with other conditions such as drug rash. Given the potentially serious consequences of unnecessary antimicrobial treatment, it is essential to understand the frequent and uncommon diagnoses that explain symptoms in this population. Recently, deep learning models have been used to diagnose various rash-related diseases, but these models suffer from overfitting and color-variation problems. To overcome these problems, an efficient stacked deep transfer learning model is proposed that can efficiently distinguish patients infected with Lyme (+) from those with other infections. Second-order edge-based color constancy is used as a preprocessing step to reduce the impact of multi-source illumination in images acquired under different setups. The pretrained AlexNet model is used to build the Lyme disease diagnosis model. To prevent overfitting, data augmentation techniques are used to enlarge the dataset, and 5-fold cross-validation is applied. Comparative analysis indicates that the proposed model outperforms existing models in terms of accuracy, F-measure, sensitivity, specificity, and area under the curve.
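Edge-based color constancy (the gray-edge family) estimates the illuminant color from a Minkowski norm of image derivatives and divides it out. The sketch below is a simplified illustration of that idea; the `order` and `p` parameters are illustrative defaults, not the paper's exact settings:

```python
import numpy as np

def gray_edge_illuminant(img, order=2, p=6):
    """Estimate the illuminant color from the Minkowski p-norm of the
    order-th spatial derivative of each channel (gray-edge hypothesis:
    average edge differences are achromatic). Simplified sketch."""
    est = np.zeros(3)
    for c in range(3):
        d = img[:, :, c]
        for _ in range(order):                 # repeated finite differences
            d = np.diff(d, axis=0)[:, :-1] + np.diff(d, axis=1)[:-1, :]
        est[c] = (np.abs(d) ** p).mean() ** (1.0 / p)
    return est / np.linalg.norm(est)           # unit illuminant estimate

def correct(img, est):
    """von Kries correction: divide out the estimated illuminant;
    a neutral light (est = 1/sqrt(3) per channel) leaves img unchanged."""
    return img / (est * np.sqrt(3) + 1e-8)

rng = np.random.default_rng(3)
scene = rng.random(size=(32, 32, 3))           # toy scene under white light
tinted = scene * np.array([1.5, 1.0, 0.7])     # simulated reddish cast
est = gray_edge_illuminant(tinted)
corrected = correct(tinted, est)
```

Because the cast scales each channel's derivatives linearly, the estimate recovers the channel ordering of the simulated illuminant, and dividing it out neutralizes the cast before the images reach the classifier.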
|
19
|
Pan T, Pedrycz W, Cui J, Yang J, Wu W. Interpretability of Neural Networks with Probability Density Functions. ADVANCED THEORY AND SIMULATIONS 2022. [DOI: 10.1002/adts.202100459] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Tingting Pan
- School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian 116024, China
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta T6G 2G7, Canada
- Witold Pedrycz
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta T6G 2G7, Canada
- Jiahui Cui
- School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian 116024, China
- Jie Yang
- School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian 116024, China
- Wei Wu
- School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
- Key Laboratory for Computational Mathematics and Data Intelligence of Liaoning Province, Dalian 116024, China
|