1
Khatri M, Yin Y, Deogun J. Enhancing Interpretability in Medical Image Classification by Integrating Formal Concept Analysis with Convolutional Neural Networks. Biomimetics (Basel) 2024; 9:421. PMID: 39056862. PMCID: PMC11274788. DOI: 10.3390/biomimetics9070421. Received 2024-05-15; revised 2024-06-19; accepted 2024-06-28.
Abstract
In this study, we present a novel approach to enhancing the interpretability of medical image classification by integrating formal concept analysis (FCA) with convolutional neural networks (CNNs). While CNNs are increasingly applied in medical diagnosis, understanding their decision-making remains a challenge. Although visualization techniques like saliency maps offer insights into a CNN's decision-making for individual images, they do not explicitly establish a relationship between the high-level features learned by the CNN and the class labels across the entire dataset. To bridge this gap, we leverage the FCA framework as an image classification model, presenting a novel method for understanding the relationship between abstract features and class labels in medical imaging. Building on our previous work, which applied this method to the MNIST handwritten digit dataset and demonstrated performance comparable to CNNs, we extend our approach and evaluation to histopathological image datasets, including Warwick-QU and BreakHis. Our results show that the FCA-based classifier offers accuracy comparable to deep neural classifiers while providing transparency into the classification process, an important factor in clinical decision-making.
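The FCA idea above can be illustrated with a minimal sketch, assuming (hypothetically) that high-level CNN activations have already been binarized into per-image attribute sets. Each class's "intent" (the attributes shared by all of its training examples) acts as a formal-concept description, and a new image is assigned to the class whose intent it best satisfies. The attribute names and the scoring rule here are illustrative, not the authors' exact algorithm:

```python
# FCA-flavored classifier sketch: objects are images, attributes are
# hypothetical binarized high-level features (f0..f3 below).

def class_intents(training_data):
    """For each class, compute the intent: attributes shared by ALL examples."""
    return {label: set.intersection(*attr_sets)
            for label, attr_sets in training_data.items()}

def classify(attributes, intents):
    """Assign the class whose intent is best covered by the image's attributes."""
    def score(label):
        intent = intents[label]
        return len(attributes & intent) / max(len(intent), 1)
    return max(intents, key=score)

# Toy formal context with two classes of two training images each.
train = {
    "benign":    [{"f0", "f1"}, {"f0", "f1", "f2"}],
    "malignant": [{"f2", "f3"}, {"f1", "f2", "f3"}],
}
intents = class_intents(train)   # benign -> {f0, f1}, malignant -> {f2, f3}
print(classify({"f0", "f1", "f2"}, intents))  # -> benign
```

Unlike a saliency map, the learned intents are global, human-readable descriptions of each class, which is the transparency property the abstract emphasizes.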
Affiliation(s)
- Minal Khatri
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Yanbin Yin
- Department of Food Science and Technology, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
- Jitender Deogun
- Department of Computer Science and Engineering, University of Nebraska-Lincoln, Lincoln, NE 68588, USA
2
Li S, Shi S, Fan Z, He X, Zhang N. Deep information-guided feature refinement network for colorectal gland segmentation. Int J Comput Assist Radiol Surg 2023; 18:2319-2328. PMID: 36934367. DOI: 10.1007/s11548-023-02857-7. Received 2022-11-01; accepted 2023-02-22.
Abstract
PURPOSE Reliable quantification of colorectal histopathological images depends on precise gland segmentation, which remains challenging: glandular morphology varies widely across histological grades, malignant glands and non-gland tissues can be too similar to distinguish, and tightly connected glands are often incorrectly segmented as a single gland. METHODS A deep information-guided feature refinement network is proposed to improve gland segmentation. Specifically, the backbone deepens the network structure to obtain effective features while maximizing the retained information, and a Multi-Scale Fusion module is proposed to enlarge the receptive field. In addition, to segment dense glands individually, a Multi-Scale Edge-Refined module is designed to strengthen gland boundaries. RESULTS Comparative experiments against eight recently proposed deep learning methods demonstrate that the proposed network has better overall performance and is more competitive on Test B. The F1 scores on Test A and Test B are 0.917 and 0.876, the object-level Dice scores are 0.921 and 0.884, and the object-level Hausdorff distances are 43.428 and 87.132, respectively. CONCLUSION The proposed colorectal gland segmentation network effectively extracts features with high representational ability and enhances edge features while retaining detail, markedly improving segmentation of malignant glands and yielding better results on multi-scale and closely packed glands.
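The object-level Dice reported above differs from plain pixel-wise Dice: following the GlaS evaluation convention, each ground-truth gland is matched to the predicted object it overlaps most, the per-pair Dice values are weighted by object size, and the measure is averaged in both directions. A small sketch using pixel-coordinate sets (simplified relative to the official evaluation code):

```python
def dice(a, b):
    """Plain Dice between two pixel-coordinate sets."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def one_sided_object_dice(objects, others):
    """Size-weighted Dice of each object against its best-overlapping counterpart."""
    total = sum(len(o) for o in objects)
    score = 0.0
    for obj in objects:
        best = max(others, key=lambda o: len(obj & o), default=set())
        score += (len(obj) / total) * dice(obj, best)
    return score

def object_dice(gt_objects, pred_objects):
    """Symmetrized object-level Dice (averaged over both directions)."""
    return 0.5 * (one_sided_object_dice(gt_objects, pred_objects)
                  + one_sided_object_dice(pred_objects, gt_objects))

gt   = [{(0, 0), (0, 1)}, {(2, 2)}]            # two ground-truth glands
pred = [{(0, 0), (0, 1)}, {(2, 2), (2, 3)}]    # two predicted glands
print(round(object_dice(gt, pred), 4))  # -> 0.8611
```

This is why merging two touching glands into one predicted object hurts the score badly: the single prediction can match only one ground-truth gland well, and the mismatch is weighted by object size in both directions.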
Affiliation(s)
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Shuling Shi
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Zhenbang Fan
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
- Ni Zhang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China
3
Wang J, Qin L, Chen D, Wang J, Han BW, Zhu Z, Qiao G. An improved Hover-net for nuclear segmentation and classification in histopathology images. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08394-3.
4
Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Comput Biol Med 2023; 155:106690. PMID: 36827788. DOI: 10.1016/j.compbiomed.2023.106690. Received 2022-07-13; revised 2023-02-06; accepted 2023-02-14.
Abstract
PURPOSE A clinically compatible computerized segmentation model is presented that aims to supply clinically informative gland details by capturing every small and intricate variation in medical images, integrating second opinions, and reducing human error. APPROACH The model's enhanced learning capability extracts denser multi-scale gland-specific features, recovers the semantic gap during concatenation, and effectively handles resolution degradation and vanishing gradients. It comprises three proposed modules: an Atrous Convolved Residual Learning Module in the encoder and decoder, a Residual Attention Module in the skip-connection paths, and an Atrous Convolved Transitional Module as the transitional and output layer. Pre-processing techniques such as patch sampling, stain normalization, and augmentation are employed to develop its generalization capability. To verify robustness and network invariance against digital variability, extensive experiments are carried out on three public datasets, i.e., GlaS (Gland Segmentation Challenge), CRAG (Colorectal Adenocarcinoma Gland), and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset. RESULTS The presented model accomplished competitive gland detection outcomes, with F1 scores of GlaS Test A: 0.957, Test B: 0.926; CRAG: 0.935; LC-25000: 0.922; HosC: 0.963. Gland segmentation results include Object-Dice Index values of GlaS Test A: 0.961, Test B: 0.933; CRAG: 0.961; LC-25000: 0.940; HosC: 0.929, and Object-Hausdorff Distances of GlaS Test A: 21.77, Test B: 69.74; CRAG: 87.63; LC-25000: 95.85; HosC: 83.29. In addition, validation scores supplied by proficient pathologists (GlaS Test A: 0.945, Test B: 0.937; CRAG: 0.934; LC-25000: 0.911; HosC: 0.928) are reported for the final segmentation results to corroborate their applicability and appropriateness for assistance in clinical-level applications.
CONCLUSION The proposed system will assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
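The "atrous" (dilated) convolutions named in this entry enlarge the receptive field without extra parameters by inserting gaps between kernel taps: a kernel of size k with dilation d covers k + (k-1)(d-1) input positions. A minimal 1-D illustration of the sampling pattern (the paper's modules are 2-D and learned; this toy sketch only shows the mechanism):

```python
def atrous_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1-D convolution (correlation) with dilated kernel taps."""
    span = (len(kernel) - 1) * dilation + 1   # effective receptive field
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(len(kernel)))
        for i in range(len(signal) - span + 1)
    ]

x = [1, 2, 3, 4, 5, 6, 7]
k = [1, 1, 1]
print(atrous_conv1d(x, k, dilation=1))  # -> [6, 9, 12, 15, 18]
print(atrous_conv1d(x, k, dilation=2))  # -> [9, 12, 15]  (taps 2 apart)
```

With dilation 2, the same 3-tap kernel sees a window of 5 input positions, which is how stacked atrous layers capture multi-scale gland context without pooling away resolution.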
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, India
- Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India
5
Ilhan A, Alpan K, Sekeroglu B, Abiyev R. COVID-19 Lung CT image segmentation using localization and enhancement methods with U-Net. Procedia Comput Sci 2023; 218:1660-1667. PMID: 36743788. PMCID: PMC9886330. DOI: 10.1016/j.procs.2023.01.144.
Abstract
Segmentation of pneumonia lesions from lung CT images became vital for diagnosing the disease and evaluating patient severity during the COVID-19 pandemic. Several AI-based systems have been proposed for this task; however, low-contrast abnormal zones in CT images make it challenging. Researchers have investigated image preprocessing techniques to address this problem and enable more accurate segmentation by AI-based systems. This study proposes a COVID-19 lung CT segmentation system that applies histogram-based non-parametric region localization and enhancement (LE) methods prior to a U-Net architecture. The COVID-19-infected lung CT images are first processed by the LE method, which detects and enhances the infected regions to provide more discriminative features to the deep learning segmentation model. The U-Net is then trained on the enhanced images to segment the regions affected by COVID-19. The proposed system achieved an accuracy of 97.75%, a Dice score of 0.85, and a Jaccard index of 0.74. The comparison results suggest that using LE methods as a preprocessing step for lung CT images significantly improved the feature extraction and segmentation abilities of the U-Net model, by 0.21 in Dice score. These results may motivate applying the LE method to the segmentation of other medical images.
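The LE preprocessing above is histogram-based; as a flavor of that family of methods, here is a plain histogram-equalization sketch on a tiny 8-bit patch. Note this is only a related illustration under that assumption: the paper's actual LE method is a non-parametric localization-plus-enhancement pipeline, not simple global equalization:

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization for a flat list of 8-bit pixel values."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:          # constant image: nothing to stretch
        return list(pixels)
    # remap each value so the output histogram is approximately uniform
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

dim = [100, 101, 101, 102, 103, 103, 103, 104]   # low-contrast patch
print(equalize(dim))  # -> [0, 73, 73, 109, 219, 219, 219, 255]
```

The narrow 100-104 intensity band is stretched across the full 0-255 range, which is the general mechanism by which histogram-based enhancement makes low-contrast abnormal zones more discriminative for a downstream segmenter.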
Affiliation(s)
- Ahmet Ilhan
- Department of Computer Engineering, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Applied Artificial Intelligence Research Center, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Kezban Alpan
- Department of Information Systems Engineering, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Applied Artificial Intelligence Research Center, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Boran Sekeroglu
- Department of Information Systems Engineering, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Applied Artificial Intelligence Research Center, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Rahib Abiyev
- Department of Computer Engineering, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
- Applied Artificial Intelligence Research Center, Near East University, Nicosia, 99138, Cyprus, Mersin 10 Turkey
6
Niranjan K, Shankar Kumar S, Vedanth S, Chitrakala S. An Explainable AI driven Decision Support System for COVID-19 Diagnosis using Fused Classification and Segmentation. Procedia Comput Sci 2023; 218:1915-1925. PMID: 36743792. PMCID: PMC9886321. DOI: 10.1016/j.procs.2023.01.168.
Abstract
The coronavirus has caused havoc for billions of people worldwide. The Reverse Transcription Polymerase Chain Reaction (RT-PCR) test is widely accepted as the standard diagnostic tool for detecting infection; however, the severity of infection cannot be measured accurately from RT-PCR results. Chest CT scans of infected patients can manifest the presence of lesions with high sensitivity, but during the pandemic there is a dearth of competent doctors to examine chest CT images. Therefore, a Guided Grad-CAM based Explainable Classification and Segmentation system (GGECS), a real-time explainable classification and lesion identification decision support system, is proposed in this work. The classification model used in the proposed GGECS system is inspired by Res2Net. Explainable AI techniques such as Grad-CAM and Guided Grad-CAM are used to demystify convolutional neural networks (CNNs). These explainable techniques can help localize the regions in the CT scan that contribute most to the system's prediction, and the segmentation model can further reliably localize infected regions. The segmentation model is a fusion of VGG-16 and the classification network. The proposed classification model in GGECS obtains an overall accuracy of 98.51%, and the segmentation model achieves an IoU score of 0.595.
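Grad-CAM, the explanation technique named above, weights each convolutional feature map by its average gradient with respect to the target class score and keeps only positive evidence. A framework-free numeric sketch of that final combination step (in real use, the activations and gradients come from backpropagation through a CNN):

```python
def grad_cam(activations, gradients):
    """Combine feature maps: ReLU( sum_c mean(grad_c) * activation_c )."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        # channel importance = global-average-pooled gradient
        weight = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * act[i][j]
    return [[max(0.0, v) for v in row] for row in cam]   # ReLU keeps positive evidence

acts  = [[[1, 0], [0, 1]],      # channel 0 activations (2x2)
         [[0, 2], [0, 0]]]      # channel 1 activations
grads = [[[1, 1], [1, 1]],      # class score rises with channel 0 ...
         [[-1, -1], [-1, -1]]]  # ... and falls with channel 1
print(grad_cam(acts, grads))  # -> [[1.0, 0.0], [0.0, 1.0]]
```

Channel 1's strong activation at (0, 1) is suppressed because its gradient is negative: only regions that push the class score up survive the ReLU, which is what makes the resulting heatmap a localization of supporting evidence.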
Affiliation(s)
- K Niranjan
- Computer Science and Engineering, College of Engineering Guindy, Anna University, Chennai, India
- S Shankar Kumar
- Computer Science and Engineering, College of Engineering Guindy, Anna University, Chennai, India
- S Vedanth
- Computer Science and Engineering, College of Engineering Guindy, Anna University, Chennai, India
- S Chitrakala
- Computer Science and Engineering, College of Engineering Guindy, Anna University, Chennai, India
7
Chauhan H, Modi K. AMSFMap Methodology to improve prediction accuracy of CNN model for Covid19 using X-ray images. Procedia Comput Sci 2023; 218:1394-1404. PMID: 36743789. PMCID: PMC9886331. DOI: 10.1016/j.procs.2023.01.118.
Abstract
The Covid-19 pandemic has been a serious medical issue at the center of media attention worldwide since December 2019. As declared by the World Health Organization, there were 579,893,790 confirmed cases of Covid-19, including 6,415,070 deaths, as of 29 July 2022, with 20,409 new cases reported in India in the preceding 24 hours. Timely diagnosis and treatment of Covid-19 are therefore essential to prevent complications, including death. The authors developed deep learning based Covid-19 diagnosis and severity prediction models using X-ray images, in the hope that this technology can increase access to radiology expertise in remote places where expert radiologists are scarce. They propose and implement an Attentive Multi Scale Feature map based deep Network (AMSF-Net) for X-ray image classification with improved accuracy. In binary classification, X-ray images are classified as normal or Covid-19; multiclass classification classifies X-ray images into mild, moderate, or severe Covid-19 infection. Features from lower layers are utilized in addition to features from the highest level at different scales to increase the CNN's ability to learn fine-grained features, and channel attention is incorporated to amplify the features of important channels. ROI-based cropping and AHE are employed to enhance the content of the training images, image augmentation is used to increase the dataset size, and focal loss is applied to address the class imbalance problem. Sensitivity, precision, accuracy, and F1 score are used for performance evaluation. The authors achieved 78% accuracy for binary classification; precision, recall, and F1 score for the positive class are 85, 67, and 75, respectively, and 73, 88, and 80 for the negative class. Classification accuracies for the mild, moderate, and severe classes are 90, 97, and 96, for an average accuracy of 95%, a superior performance compared to existing methods.
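Focal loss, applied above for class imbalance, down-weights well-classified examples by a factor (1 - p_t)^gamma so training gradients concentrate on hard cases. A one-function sketch of the binary form (the alpha and gamma values shown are the commonly used defaults, not necessarily this paper's settings):

```python
import math

def focal_loss(p_true, alpha=0.25, gamma=2.0):
    """Binary focal loss: -alpha * (1 - p_t)^gamma * log(p_t),
    where p_true is the probability assigned to the true class."""
    return -alpha * (1 - p_true) ** gamma * math.log(p_true)

# With gamma = 0 and alpha = 1 it reduces to plain cross-entropy.
print(focal_loss(0.5, alpha=1.0, gamma=0.0))   # equals log(2)

# An easy example (p = 0.9) contributes far less loss than a hard one (p = 0.1),
# so abundant easy negatives no longer drown out the rare positive class.
print(focal_loss(0.9) < focal_loss(0.1))  # -> True
```

The (1 - p_t)^gamma modulating factor is what distinguishes it from weighted cross-entropy: the down-weighting depends on how confidently an example is already classified, not just on its class label.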
Affiliation(s)
- Kirit Modi
- Sankalchand Patel University, Visnagar, 384315 India
8
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10372-5.