1
Huang Y, Liu W, Yin Z, Hu S, Wang M, Cai W. ECG classification based on guided attention mechanism. Comput Methods Programs Biomed 2024; 257:108454. PMID: 39369585. DOI: 10.1016/j.cmpb.2024.108454.
Abstract
BACKGROUND AND OBJECTIVE Integrating domain knowledge into deep learning models can improve their effectiveness and increase explainability. This study aims to enhance the classification performance of electrocardiograms (ECGs) by customizing specific guided mechanisms based on the characteristics of different cardiac abnormalities. METHODS Two novel guided attention mechanisms, Guided Spatial Attention (GSA) and the CAM-based spatial guided attention mechanism (CGAM), were introduced. Different attention guidance labels were created based on clinical knowledge for four ECG abnormality classification tasks: ST change detection, premature contraction identification, Wolff-Parkinson-White syndrome (WPW) classification, and atrial fibrillation (AF) detection. The models were trained and evaluated separately for each classification task. Model explainability was quantified using Shapley values. RESULTS GSA improved the model's F1 score by 5.74%, 5.00%, 8.96%, and 3.91% for ST change detection, premature contraction identification, WPW classification, and AF detection, respectively. Similarly, CGAM exhibited improvements of 3.89%, 5.40%, 8.21%, and 1.80% for the respective tasks. The combined use of GSA and CGAM resulted in even higher improvements of 6.26%, 5.58%, 8.85%, and 4.03%, respectively. Moreover, when all four tasks were conducted simultaneously, a notable overall performance boost was achieved, demonstrating the broad adaptability of the proposed model. The quantified Shapley values demonstrated the effectiveness of the guided attention mechanisms in enhancing the model's explainability. CONCLUSIONS The guided attention mechanisms, utilizing domain knowledge, effectively directed the model's attention, leading to improved classification performance and explainability. These findings have significant implications for facilitating accurate automated ECG classification.
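The central mechanism here, supervising an attention map with labels derived from clinical knowledge, can be illustrated compactly. Below is a minimal PyTorch sketch assuming a 1D convolutional backbone for ECG; the module name, the per-timestep guidance mask, and the loss weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedSpatialAttention(nn.Module):
    """Spatial attention over the time axis whose map can be supervised."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution collapses channels into one attention logit per timestep
        self.score = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                    # x: (batch, channels, time)
        attn = torch.sigmoid(self.score(x))  # attention map in [0, 1]
        return x * attn, attn                # reweighted features + map

def guided_loss(logits, labels, attn, guide, alpha: float = 0.5):
    """Classification loss plus guidance loss on the attention map.

    guide: (batch, 1, time) binary mask marking clinically relevant segments,
    e.g., the ST segment for ST-change detection.
    """
    cls = F.cross_entropy(logits, labels)
    att = F.binary_cross_entropy(attn, guide)
    return cls + alpha * att
```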
Affiliation(s)
- Yangcheng Huang: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Wenjing Liu: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Ziyi Yin: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Shuaicong Hu: School of Information Science and Technology, Fudan University, Shanghai, China
- Mingjie Wang: School of Basic Medical Sciences, Fudan University, Shanghai, China
- Wenjie Cai: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
2
Zhao R, Xi Z, Liu H, Jian X, Zhang J, Zhang Z, Li S. MIST: Multi-instance selective transformer for histopathological subtype prediction. Med Image Anal 2024; 97:103251. PMID: 38954942. DOI: 10.1016/j.media.2024.103251.
Abstract
Accurate histopathological subtype prediction is clinically significant for cancer diagnosis and tumor microenvironment analysis. However, it is a challenging task due to (1) the instance-level discrimination of histopathological images, (2) low inter-class and large intra-class variances among histopathological images in their shape and chromatin texture, and (3) heterogeneous feature distributions over different images. In this paper, we formulate subtype prediction as fine-grained representation learning and propose a novel multi-instance selective transformer (MIST) framework that effectively achieves accurate histopathological subtype prediction. The proposed MIST designs an effective selective self-attention mechanism with multi-instance learning (MIL) and a vision transformer (ViT) to adaptively identify informative instances for fine-grained representation. Innovatively, the MIST assigns each instance a different contribution to the bag representation based on its interactions with other instances and with the bag. Specifically, a SiT module with selective multi-head self-attention (S-MSA) is designed to identify representative instances by modeling instance-to-instance interactions. Complementarily, a MIFD module with an information bottleneck is proposed to learn a discriminative fine-grained representation for histopathological images by modeling instance-to-bag interactions with the selected instances. Extensive experiments on five clinical benchmarks demonstrate that the MIST achieves accurate histopathological subtype prediction and obtains state-of-the-art performance with an accuracy of 0.936. The MIST shows great potential for fine-grained medical image analysis, such as histopathological subtype prediction in clinical applications.
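The selective idea, scoring instances and letting only the most informative ones form the bag representation, can be sketched briefly. The following PyTorch snippet is an assumed simplification using a gated attention scorer with top-k selection; it is not the paper's S-MSA module.

```python
import torch
import torch.nn as nn

class SelectiveMILPooling(nn.Module):
    """Keep the k highest-scoring instances and pool them into a bag vector."""
    def __init__(self, dim: int, k: int):
        super().__init__()
        self.k = k
        self.scorer = nn.Sequential(
            nn.Linear(dim, dim // 2), nn.Tanh(), nn.Linear(dim // 2, 1))

    def forward(self, instances):                  # (num_instances, dim)
        scores = self.scorer(instances).squeeze(-1)
        k = min(self.k, instances.size(0))
        top_scores, idx = scores.topk(k)           # select informative instances
        weights = torch.softmax(top_scores, dim=0) # normalize over the selection
        bag = (weights.unsqueeze(-1) * instances[idx]).sum(dim=0)
        return bag, idx                            # bag embedding + chosen indices
```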
Affiliation(s)
- Rongchang Zhao: School of Computer Science and Engineering, Central South University, Changsha, China
- Zijun Xi: School of Computer Science and Engineering, Central South University, Changsha, China
- Huanchi Liu: School of Computer Science and Engineering, Central South University, Changsha, China
- Xiangkun Jian: School of Computer Science and Engineering, Central South University, Changsha, China
- Jian Zhang: School of Computer Science and Engineering, Central South University, Changsha, China
- Zijian Zhang: National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shuo Li: School of Computer Science and Engineering, Central South University, Changsha, China; Department of Computer and Data Science and Department of Biomedical Engineering, Case Western Reserve University, Cleveland, USA
3
Bhati D, Neha F, Amiruzzaman M. A Survey on Explainable Artificial Intelligence (XAI) Techniques for Visualizing Deep Learning Models in Medical Imaging. J Imaging 2024; 10:239. PMID: 39452402. PMCID: PMC11508748. DOI: 10.3390/jimaging10100239.
Abstract
The combination of medical imaging and deep learning has significantly improved diagnostic and prognostic capabilities in the healthcare domain. Nevertheless, the inherent complexity of deep learning models poses challenges in understanding their decision-making processes. Interpretability and visualization techniques have emerged as crucial tools to unravel the black-box nature of these models, providing insights into their inner workings and enhancing trust in their predictions. This survey paper comprehensively examines various interpretation and visualization techniques applied to deep learning models in medical imaging. The paper reviews methodologies, discusses their applications, and evaluates their effectiveness in enhancing the interpretability, reliability, and clinical relevance of deep learning models in medical image analysis.
Affiliation(s)
- Deepshikha Bhati: Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Fnu Neha: Department of Computer Science, Kent State University, Kent, OH 44242, USA
- Md Amiruzzaman: Department of Computer Science, West Chester University, West Chester, PA 19383, USA
4
Liu M, Liu Y, Xu P, Cui H, Ke J, Ma J. Exploiting Geometric Features via Hierarchical Graph Pyramid Transformer for Cancer Diagnosis Using Histopathological Images. IEEE Trans Med Imaging 2024; 43:2888-2900. PMID: 38530716. DOI: 10.1109/tmi.2024.3381994.
Abstract
Cancer is widely recognized as the primary cause of mortality worldwide, and pathology analysis plays a pivotal role in achieving accurate cancer diagnosis. The intricate representation of features in histopathological images encompasses abundant information crucial for disease diagnosis, regarding cell appearance, tumor microenvironment, and geometric characteristics. However, recent deep learning methods have not adequately exploited geometric features for pathological image classification due to the absence of effective descriptors that can capture both cell distribution and gathering patterns, which often serve as potent indicators. In this paper, inspired by clinical practice, a Hierarchical Graph Pyramid Transformer (HGPT) is proposed to guide pathological image classification by effectively exploiting a geometric representation of tissue distribution that was ignored by existing state-of-the-art methods. First, a graph representation is constructed according to the morphological features of the input pathological image, and a geometric representation is learned through the proposed multi-head graph aggregator. Then, the image and its graph representation are fed into the transformer encoder layer to model long-range dependency. Finally, a locality feature enhancement block is designed to enhance the 2D local representation of the feature embedding, which is not well explored in existing vision transformers. An extensive experimental study is conducted on Kather-5K, MHIST, NCT-CRC-HE, and GasHisSDB for binary or multi-category classification of multiple cancer types. Results demonstrate that our method consistently reaches superior classification outcomes for histopathological images, providing an effective diagnostic tool for malignant tumors in clinical practice.
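The graph-construction step can be pictured with a small sketch: centroids of detected nuclei (or patches) become nodes, edges connect k-nearest neighbors, and node features are averaged over neighbors as one primitive head of a graph aggregator. This is a hedged PyTorch illustration of the general recipe, not the HGPT aggregator itself.

```python
import torch

def knn_adjacency(coords: torch.Tensor, k: int) -> torch.Tensor:
    """coords: (N, 2) centroid positions -> (N, N) binary adjacency matrix."""
    dist = torch.cdist(coords, coords)                     # pairwise distances
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop the self-match
    adj = torch.zeros_like(dist)
    adj.scatter_(1, idx, 1.0)
    return adj

def aggregate(features: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Mean-aggregate neighbor features: one head of a graph aggregator."""
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
    return adj @ features / deg
```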
5
Bui DC, Song B, Kim K, Kwak JT. DAX-Net: A dual-branch dual-task adaptive cross-weight feature fusion network for robust multi-class cancer classification in pathology images. Comput Methods Programs Biomed 2024; 248:108112. PMID: 38479146. DOI: 10.1016/j.cmpb.2024.108112.
Abstract
BACKGROUND AND OBJECTIVE Multi-class cancer classification has been extensively studied in digital and computational pathology due to its importance in clinical decision-making. Numerous computational tools have been proposed for various types of cancer classification. Many of them are built based on convolutional neural networks. Recently, Transformer-style networks have been shown to be effective for cancer classification. Herein, we present a hybrid design that leverages both convolutional neural networks and the Transformer architecture to obtain superior performance in cancer classification. METHODS We propose a dual-branch dual-task adaptive cross-weight feature fusion network, called DAX-Net, which exploits heterogeneous feature representations from the convolutional neural network and Transformer network, adaptively combines them to boost their representation power, and conducts cancer classification as both categorical classification and ordinal classification. For an efficient and effective optimization of the proposed model, we introduce two loss functions that are tailored to the two classification tasks. RESULTS To evaluate the proposed method, we employed colorectal and prostate cancer datasets, each of which contains both in-domain and out-of-domain test sets. For colorectal cancer, the proposed method obtained an accuracy of 88.4%, a quadratic kappa score of 0.945, and an F1 score of 0.831 for the in-domain test set, and 84.4%, 0.910, and 0.768 for the out-of-domain test set. For prostate cancer, it achieved an accuracy of 71.6%, a kappa score of 0.635, and an F1 score of 0.655 for the in-domain test set; 79.2% accuracy, a 0.721 kappa score, and a 0.686 F1 score for the first out-of-domain test set; and 58.1% accuracy, a 0.564 kappa score, and a 0.493 F1 score for the second out-of-domain test set. Notably, the proposed method outperformed other competitors by significant margins, in particular with respect to the out-of-domain test sets. CONCLUSIONS The experimental results demonstrate that the proposed method is not only accurate but also robust to varying conditions of the test sets in comparison to several related methods. These results suggest that the proposed method can facilitate automated cancer classification in various clinical settings.
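The dual-task formulation, one backbone trained with both a categorical head and an ordinal head, can be sketched as follows. This PyTorch snippet uses the common extended-binary encoding for the ordinal branch (each output answers "grade > k?"); the tailored losses actually introduced in the paper may differ.

```python
import torch
import torch.nn.functional as F

def ordinal_targets(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # label 2 with 4 classes -> [1, 1, 0], i.e., "greater than grade 0/1/2?"
    thresholds = torch.arange(num_classes - 1, device=labels.device)
    return (labels.unsqueeze(1) > thresholds).float()

def dual_task_loss(cat_logits, ord_logits, labels, num_classes, beta=1.0):
    cat = F.cross_entropy(cat_logits, labels)                     # categorical head
    ord_t = ordinal_targets(labels, num_classes)
    ordl = F.binary_cross_entropy_with_logits(ord_logits, ord_t)  # ordinal head
    return cat + beta * ordl
```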
Affiliation(s)
- Doanh C Bui: School of Electrical Engineering, Korea University, Seoul, 02841, Republic of Korea
- Boram Song: Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, 03181, Republic of Korea
- Kyungeun Kim: Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, 03181, Republic of Korea
- Jin Tae Kwak: School of Electrical Engineering, Korea University, Seoul, 02841, Republic of Korea
6
Wang AQ, Karaman BK, Kim H, Rosenthal J, Saluja R, Young SI, Sabuncu MR. A Framework for Interpretability in Machine Learning for Medical Imaging. IEEE Access 2024; 12:53277-53292. PMID: 39421804. PMCID: PMC11486155. DOI: 10.1109/access.2024.3387702.
Abstract
Interpretability for machine learning models in medical imaging (MLMI) is an important direction of research. However, there is a general sense of murkiness in what interpretability means. Why does the need for interpretability in MLMI arise? What goals does one actually seek to address when interpretability is needed? To answer these questions, we identify a need to formalize the goals and elements of interpretability in MLMI. By reasoning about real-world tasks and goals common in both medical image analysis and its intersection with machine learning, we identify five core elements of interpretability: localization, visual recognizability, physical attribution, model transparency, and actionability. From this, we arrive at a framework for interpretability in MLMI, which serves as a step-by-step guide to approaching interpretability in this context. Overall, this paper formalizes interpretability needs in the context of medical imaging, and our applied perspective clarifies concrete MLMI-specific goals and considerations in order to guide method design and improve real-world usage. Our goal is to provide practical and didactic information for model designers and practitioners, inspire developers of models in the medical imaging field to reason more deeply about what interpretability is achieving, and suggest future directions of interpretability research.
Affiliation(s)
- Alan Q Wang: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Batuhan K Karaman: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Heejong Kim: Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Jacob Rosenthal: Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA; Weill Cornell/Rockefeller/Sloan Kettering Tri-Institutional M.D.-Ph.D. Program, New York City, NY 10065, USA
- Rachit Saluja: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
- Sean I Young: Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA 02139, USA
- Mert R Sabuncu: School of Electrical and Computer Engineering, Cornell University-Cornell Tech, New York City, NY 10044, USA; Department of Radiology, Weill Cornell Medical School, New York City, NY 10065, USA
7
Kang J, Le VNT, Lee DW, Kim S. Diagnosing oral and maxillofacial diseases using deep learning. Sci Rep 2024; 14:2497. PMID: 38291068. PMCID: PMC10827796. DOI: 10.1038/s41598-024-52929-0.
Abstract
The classification and localization of odontogenic lesions from panoramic radiographs is a challenging task due to the positional biases and class imbalances of the lesions. To address these challenges, a novel neural network, DOLNet, is proposed that uses mutually influencing hierarchical attention across different image scales to jointly learn the global representation of the entire jaw and the local discrepancy between normal tissue and lesions. The proposed approach uses local attention to learn representations within a patch. From the patch-level representations, we generate inter-patch, i.e., global, attention maps to represent the positional prior of lesions in the whole image. Global attention enables the reciprocal calibration of patch-level representations by considering non-local information from other patches, thereby improving the generation of whole-image-level representations. To address class imbalances, we propose an effective data augmentation technique that involves merging lesion crops with normal images, thereby synthesizing new abnormal cases for effective model training. Our approach outperforms recent studies, enhancing the classification performance by up to 42.4% and 44.2% in recall and F1 scores, respectively, and ensuring robust lesion localization with respect to lesion size variations and positional biases. Our approach further outperforms human expert clinicians in classification by 10.7% and 10.8% in recall and F1 score, respectively.
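The augmentation idea, synthesizing abnormal cases by merging lesion crops into normal radiographs, can be sketched in a few lines of NumPy. The blending weight and placement here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def paste_lesion(normal_img: np.ndarray, lesion_crop: np.ndarray,
                 top: int, left: int, alpha: float = 0.8) -> np.ndarray:
    """Blend `lesion_crop` into `normal_img` at (top, left) to make a new
    abnormal training sample (assumes matching channel layout)."""
    out = normal_img.astype(np.float32).copy()
    h, w = lesion_crop.shape[:2]
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = alpha * lesion_crop + (1 - alpha) * region
    return out.clip(0, 255).astype(normal_img.dtype)
```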
Affiliation(s)
- Van Nhat Thang Le: Faculty of Odonto-Stomatology, Hue University of Medicine and Pharmacy, Hue University, Hue, 49120, Vietnam
- Dae-Woo Lee: The Department of Pediatric Dentistry, Jeonbuk National University, Jeonju, 54896, Korea; Biomedical Research Institute of Jeonbuk National University Hospital, Jeonbuk National University, Jeonju, 54896, Korea; Research Institute of Clinical Medicine of Jeonbuk National University, Jeonju, 54896, Korea
- Sungchan Kim: The Department of Computer Science and Artificial Intelligence, Jeonbuk National University, Jeonju, 54896, Korea; Center for Advanced Image Information Technology, Jeonbuk National University, Jeonju, 54896, Korea
8
Lakshmi Priya CV, Biju VG, Vinod BR, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024; 40:1-25. PMID: 38517775. PMCID: PMC11191493. DOI: 10.3233/cbm-230251.
Abstract
BACKGROUND Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of the reviewed architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although accuracy levels vary depending on the specific dataset, image pre-processing techniques, and deep learning architecture used, the results highlight the potential of deep learning algorithms to improve the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can serve as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V: Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G: Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R: Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran: Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
9
Kaur A, Kaushal C, Sandhu JK, Damaševičius R, Thakur N. Histopathological Image Diagnosis for Breast Cancer Diagnosis Based on Deep Mutual Learning. Diagnostics (Basel) 2023; 14:95. PMID: 38201406. PMCID: PMC10795733. DOI: 10.3390/diagnostics14010095.
Abstract
Every year, millions of women across the globe are diagnosed with breast cancer (BC), an illness that is both common and potentially fatal. To provide effective therapy and enhance patient outcomes, it is essential to make an accurate diagnosis as soon as possible. In recent years, deep-learning (DL) approaches have shown great effectiveness in a variety of medical imaging applications, including the processing of histopathological images. The objective of this study is to improve the detection of BC by merging qualitative and quantitative data using DL techniques, with an emphasis on deep mutual learning (DML). In addition, a wide variety of breast cancer imaging modalities were investigated to assess the distinction between aggressive and benign BC. Based on this, deep convolutional neural networks (DCNNs) were established to assess histopathological images of BC. On the BreakHis-200×, BACH, and PUIH datasets, the DML model achieves accuracies of 98.97%, 96.78%, and 96.34%, respectively, outperforming the other compared methodologies. More specifically, it improves localization results without compromising classification performance, an indication of its increased utility. We intend to proceed with the development of the diagnostic model to make it more applicable to clinical settings.
Affiliation(s)
- Amandeep Kaur: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Chetna Kaushal: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Jasjeet Kaur Sandhu: Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India
- Robertas Damaševičius: Department of Applied Informatics, Vytautas Magnus University, 53361 Akademija, Lithuania
- Neetika Thakur: Junior Laboratory Technician, Postgraduate Institute of Medical Education and Research, Chandigarh 160012, India
10
Ling Y, Tan W, Yan B. Self-Supervised Digital Histopathology Image Disentanglement for Arbitrary Domain Stain Transfer. IEEE Trans Med Imaging 2023; 42:3625-3638. PMID: 37486828. DOI: 10.1109/tmi.2023.3298361.
Abstract
Diagnosis of cancerous diseases relies on digital histopathology images from stained slides. However, staining varies among medical centers, which leads to a staining domain gap. Existing generative adversarial network (GAN) based stain transfer methods rely heavily on distinct source and target domains and cannot handle unseen domains. To overcome these obstacles, we propose a self-supervised disentanglement network (SDN) for domain-independent optimization and arbitrary domain stain transfer. SDN decomposes an image into features of content and stain. By exchanging the stain features, the staining style of an image is transferred to the target domain. For optimization, we propose a novel self-supervised learning policy based on the consistency of stain and content among augmentations of one instance. Therefore, the training of SDN is independent of the domain of the training data, and SDN is thus able to tackle unseen domains. Exhaustive experiments demonstrate that SDN achieves the top performance in intra-dataset and cross-dataset stain transfer compared with state-of-the-art stain transfer models, while the number of parameters in SDN is three orders of magnitude smaller than that of the compared models. Through stain transfer, SDN improves the AUC of a downstream classification model on unseen data without fine-tuning. Therefore, the proposed disentanglement framework and self-supervised learning policy have significant advantages in eliminating the stain gap among multi-center histopathology images.
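The disentangle-and-swap mechanism reduces to a simple structure: encode content and stain separately, then decode the content of one image with the stain code of another. The sketch below is schematic PyTorch with placeholder encoder/decoder modules, not the SDN architecture itself.

```python
import torch.nn as nn

class StainSwap(nn.Module):
    def __init__(self, content_enc: nn.Module, stain_enc: nn.Module,
                 decoder: nn.Module):
        super().__init__()
        self.content_enc = content_enc  # tissue structure, ideally stain-invariant
        self.stain_enc = stain_enc      # staining style only
        self.decoder = decoder          # reassembles an image from both codes

    def forward(self, src, ref):
        # keep the tissue content of `src`, borrow the staining style of `ref`
        return self.decoder(self.content_enc(src), self.stain_enc(ref))
```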
11
Li Y, Shen Y, Zhang J, Song S, Li Z, Ke J, Shen D. A Hierarchical Graph V-Net With Semi-Supervised Pre-Training for Histological Image Based Breast Cancer Classification. IEEE Trans Med Imaging 2023; 42:3907-3918. PMID: 37725717. DOI: 10.1109/tmi.2023.3317132.
Abstract
Numerous patch-based methods have recently been proposed for histological image based breast cancer classification. However, their performance could be highly affected by ignoring spatial contextual information in the whole slide image (WSI). To address this issue, we propose a novel hierarchical Graph V-Net by integrating 1) patch-level pre-training and 2) context-based fine-tuning, with a hierarchical graph network. Specifically, a semi-supervised framework based on knowledge distillation is first developed to pre-train a patch encoder for extracting disease-relevant features. Then, a hierarchical Graph V-Net is designed to construct a hierarchical graph representation from neighboring/similar individual patches for coarse-to-fine classification, where each graph node (corresponding to one patch) is attached with extracted disease-relevant features and its target label during training is the average label of all pixels in the corresponding patch. To evaluate the performance of our proposed hierarchical Graph V-Net, we collect a large WSI dataset of 560 WSIs, with 30 labeled WSIs from the BACH dataset (through our further refinement), 30 labeled WSIs and 500 unlabeled WSIs from Yunnan Cancer Hospital. Those 500 unlabeled WSIs are employed for patch-level pre-training to improve feature representation, while 60 labeled WSIs are used to train and test our proposed hierarchical Graph V-Net. Both comparative assessment and ablation studies demonstrate the superiority of our proposed hierarchical Graph V-Net over state-of-the-art methods in classifying breast cancer from WSIs. The source code and our annotations for the BACH dataset have been released at https://github.com/lyhkevin/Graph-V-Net.
12
Labrada A, Barkana BD. A Comprehensive Review of Computer-Aided Models for Breast Cancer Diagnosis Using Histopathology Images. Bioengineering (Basel) 2023; 10:1289. PMID: 38002413. PMCID: PMC10669627. DOI: 10.3390/bioengineering10111289.
Abstract
Breast cancer is the second most common cancer in women, mainly affecting those who are middle-aged and older. The American Cancer Society reported that a woman's average lifetime risk of developing breast cancer is about 13%, and this incidence rate has increased by 0.5% per year in recent years. A biopsy is done when screening tests and imaging results show suspicious breast changes. Advancements in computer-aided system capabilities and performance have fueled research using histopathology images in cancer diagnosis. Advances in machine learning and deep neural networks have tremendously increased the number of studies developing computerized detection and classification models. The dataset-dependent performance of deep networks and the trial-and-error nature of their design have produced varying results in the literature. This work comprehensively reviews the studies published between 2010 and 2022 regarding commonly used public-domain datasets and methodologies used in preprocessing, segmentation, feature engineering, machine-learning approaches, classifiers, and performance metrics.
Affiliation(s)
- Alberto Labrada: Department of Electrical Engineering, The University of Bridgeport, Bridgeport, CT 06604, USA
- Buket D. Barkana: Department of Biomedical Engineering, The University of Akron, Akron, OH 44325, USA
13
Alirezazadeh P, Dornaika F. Boosted Additive Angular Margin Loss for breast cancer diagnosis from histopathological images. Comput Biol Med 2023; 166:107528. PMID: 37774559. DOI: 10.1016/j.compbiomed.2023.107528.
Abstract
Pathologists use biopsies and microscopic examination to accurately diagnose breast cancer. This process is time-consuming, labor-intensive, and costly. Convolutional neural networks (CNNs) offer an efficient and highly accurate approach to reduce analysis time and automate the diagnostic workflow in pathology. However, the softmax loss commonly used in existing CNNs leads to noticeable ambiguity in decision boundaries and lacks a clear constraint for minimizing within-class variance. In response to this problem, softmax losses based on an angular margin were developed. These losses were introduced in the context of face recognition, with the goal of integrating an angular margin into the softmax loss. This integration improves feature discrimination during CNN training by effectively increasing the distance between different classes while reducing the variance within each class. Despite significant progress, these losses apply margin penalties to target classes only, which may not lead to optimal effectiveness. In this paper, we introduce the Boosted Additive Angular Margin Loss (BAM) to obtain highly discriminative features for breast cancer diagnosis from histopathological images. BAM not only penalizes the angle between deep features and their target class weights, but also considers angles between deep features and non-target class weights. We performed extensive experiments on the publicly available BreaKHis dataset. BAM achieved remarkable accuracies of 99.79%, 99.86%, 99.96%, and 97.65% for magnification levels of 40X, 100X, 200X, and 400X, respectively. These results show an improvement in accuracy of 0.13%, 0.34%, and 0.21% for the 40X, 100X, and 200X magnifications, respectively, compared to the baseline methods. Additional experiments were performed on the BACH dataset for breast cancer classification and on the widely accepted LFW and YTF datasets for face recognition to evaluate the generalization ability of the proposed loss function. The results show that BAM outperforms state-of-the-art methods by increasing the decision space between classes and minimizing intra-class variance, resulting in improved discriminability.
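The family of losses being extended can be written compactly: features and class weights are L2-normalized so that logits become cosines of angles, and margins are added to those angles. The PyTorch sketch below penalizes the target angle and, in the spirit of BAM, also adjusts non-target angles; the margin values and their exact placement are illustrative assumptions, not the published loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularMarginLoss(nn.Module):
    def __init__(self, dim, num_classes, s=30.0, m_target=0.5, m_other=0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, dim))
        self.s, self.m_t, self.m_o = s, m_target, m_other

    def forward(self, features, labels):
        # cosines of angles between normalized features and class weights
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        one_hot = F.one_hot(labels, cosine.size(1)).float()
        # widen the angle to the target class, narrow it to non-target classes,
        # pushing both sides of the decision boundary apart
        margin = self.m_t * one_hot - self.m_o * (1 - one_hot)
        logits = self.s * torch.cos(theta + margin)
        return F.cross_entropy(logits, labels)
```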
Affiliation(s)
- Fadi Dornaika: Ho Chi Minh City Open University, Ho Chi Minh City, Viet Nam
14
Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. PMID: 37696178. DOI: 10.1016/j.compbiomed.2023.107388.
Abstract
Colorectal Cancer (CRC) is currently one of the most common and deadly cancers. CRC is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain sufficient phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and diagnostic efficiency of image analysis in intestinal histopathology, Computer-aided Diagnosis (CAD) methods based on machine learning (ML) are widely applied. In this investigation, we conduct a comprehensive study of recent ML-based methods for image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies, together with the medical background of intestinal histopathology. Second, we introduce traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. Then, we provide a comprehensive review of the recent developments in ML methods for segmentation, classification, detection, and recognition, among others, for histopathological images of the intestine. Finally, we summarize the existing methods and discuss their application prospects in this field.
Affiliation(s)
- Yujie Jing: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tianming Du: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang: School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun: Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek: Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li: Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
15
Hou Y, Zhang W, Cheng R, Zhang G, Guo Y, Hao Y, Xue H, Wang Z, Wang L, Bai Y. Meta-adaptive-weighting-based bilateral multi-dimensional refined space feature attention network for imbalanced breast cancer histopathological image classification. Comput Biol Med 2023; 164:107300. PMID: 37557055. DOI: 10.1016/j.compbiomed.2023.107300.
Abstract
Automatic classification of breast cancer histopathological images can reduce pathologists' workload and provide accurate diagnosis. However, one challenge is that empirical datasets are usually imbalanced, resulting in poorer classification quality compared with conventional methods based on balanced datasets. The recently proposed bilateral branch network (BBN) tackles this problem by considering both representation and classifier learning to improve classification performance. We first apply the bilateral sampling strategy to imbalanced breast cancer histopathological image classification and propose a meta-adaptive-weighting-based bilateral multi-dimensional refined space feature attention network (MAW-BMRSFAN). The model is composed of BMRSFAN and MAWN. Specifically, the refined space feature attention module (RSFAM) is based on convolutional long short-term memories (ConvLSTMs). It is designed to extract refined spatial features of different dimensions for image classification and is inserted into different layers of the classification model. Meanwhile, the MAWN is proposed to model the mapping from a balanced meta-dataset to the imbalanced dataset. It finds suitable weighting parameters for BMRSFAN more flexibly by learning adaptively from a small balanced meta-dataset. The experiments show that MAW-BMRSFAN performs better than previous methods. Its recognition accuracy under four different magnifications remains above 80% even when the imbalance factor is 16, indicating that MAW-BMRSFAN performs well under extremely imbalanced conditions.
Affiliation(s)
- Yuchao Hou: Department of Mathematics and Computer Science, Shanxi Normal University, Taiyuan 030031, China; State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Wendong Zhang: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Rong Cheng: School of Mathematics, North University of China, Taiyuan 030051, China
- Guojun Zhang: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Yanjie Guo: School of Mathematics and Statistics, Ningbo University, Ningbo 315211, China
- Yan Hao: School of Mathematics and Statistics, Taiyuan Normal University, Taiyuan 030002, China
- Hongxin Xue: Data Science and Technology, North University of China, Taiyuan 030051, China
- Zhihao Wang: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Long Wang: Healthcare Big Data Research Center, Shanxi Intelligence Institute of Big Data Technology and Innovation, Taiyuan 030000, China
- Yanping Bai: School of Mathematics, North University of China, Taiyuan 030051, China
16
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. PMID: 37314068. DOI: 10.1002/gcc.23177.
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; University Health Network, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
17
Krishna S, Suganthi S, Bhavsar A, Yesodharan J, Krishnamoorthy S. An interpretable decision-support model for breast cancer diagnosis using histopathology images. J Pathol Inform 2023; 14:100319. PMID: 37416058. PMCID: PMC10320615. DOI: 10.1016/j.jpi.2023.100319.
Abstract
Microscopic examination of biopsy tissue slides is perceived as the gold-standard methodology for confirming the presence of cancer cells. Manual analysis of an overwhelming inflow of tissue slides is highly susceptible to misreading by pathologists. A computerized framework for histopathology image analysis is conceived as a diagnostic tool that greatly benefits pathologists, augmenting the definitive diagnosis of cancer. Convolutional Neural Networks (CNNs) have turned out to be the most adaptable and effective technique in the detection of abnormal pathologic histology. Despite their high sensitivity and predictive power, clinical translation is constrained by a lack of intelligible insights into their predictions. A computer-aided system that can offer a definitive diagnosis and interpretability is therefore highly desirable. A conventional visual explanatory technique, Class Activation Mapping (CAM), combined with CNN models offers interpretable decision-making. The major challenge with CAM is that it cannot be optimized to create the best visualization map, and it also decreases the performance of CNN models. To address this challenge, we introduce a novel interpretable decision-support model using a CNN with a trainable attention mechanism based on response-based feed-forward visual explanation. We introduce a variant of the DarkNet19 CNN model for the classification of histopathology images. To achieve visual interpretation as well as boost the performance of the DarkNet19 model, an attention branch is integrated with the DarkNet19 network, forming an Attention Branch Network (ABN). The attention branch uses a convolution layer of DarkNet19 and Global Average Pooling (GAP) to model the context of the visual features and generate a heatmap that identifies the region of interest. Finally, the perception branch classifies images using a fully connected layer. We trained and validated our model using more than 7000 breast cancer biopsy slide images from an openly available dataset and achieved 98.7% accuracy in the binary classification of histopathology images. The observations substantiated the enhanced clinical interpretability of the DarkNet19 CNN model conferred by the attention branch, which also delivered a 3%-4% performance boost over the baseline model. The cancer regions highlighted by the proposed model correlate well with the findings of an expert pathologist. Unifying the attention branch with the CNN model equips pathologists with augmented diagnostic interpretability of histological images with no detriment to state-of-the-art performance. The model's proficiency in pinpointing the region of interest is an added benefit that can support the accurate clinical translation of deep learning models for clinical decision support.
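The attention-branch design lends itself to a compact sketch: a convolutional branch produces per-class response maps that yield an auxiliary prediction via global average pooling and a heatmap that reweights the backbone features for the perception branch. The PyTorch layer sizes below are placeholders, not the DarkNet19 variant used in the paper.

```python
import torch
import torch.nn as nn

class AttentionBranchHead(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.att_conv = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.map_conv = nn.Conv2d(num_classes, 1, kernel_size=1)
        self.classifier = nn.Linear(channels, num_classes)

    def forward(self, feat):                      # feat: (B, C, H, W)
        class_maps = self.att_conv(feat)          # per-class response maps
        att_logits = class_maps.mean(dim=(2, 3))  # attention-branch output (GAP)
        heatmap = torch.sigmoid(self.map_conv(class_maps))   # (B, 1, H, W)
        attended = feat * heatmap + feat          # residual attention
        logits = self.classifier(attended.mean(dim=(2, 3)))  # perception branch
        return logits, att_logits, heatmap        # both logits get a CE loss
```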
Affiliation(s)
- Sruthi Krishna: Center for Wireless Networks & Applications (WNA), Amrita Vishwa Vidyapeetham, Amritapuri, India
- Arnav Bhavsar: School of Computing and Electrical Engineering, IIT Mandi, Himachal Pradesh, India
- Jyotsna Yesodharan: Department of Pathology, Amrita Institute of Medical Science, Kochi, India
18
Ru J, Lu B, Chen B, Shi J, Chen G, Wang M, Pan Z, Lin Y, Gao Z, Zhou J, Liu X, Zhang C. Attention guided neural ODE network for breast tumor segmentation in medical images. Comput Biol Med 2023; 159:106884. PMID: 37071938. DOI: 10.1016/j.compbiomed.2023.106884.
Abstract
Breast cancer is the most common cancer in women. Ultrasound is a widely used screening tool owing to its portability and easy operation, while DCE-MRI can highlight lesions more clearly and reveal the characteristics of tumors; both are noninvasive and nonradiative assessments of breast cancer. Doctors make diagnoses and give further instructions based on the sizes, shapes, and textures of the breast masses shown in medical images, so automatic tumor segmentation via deep neural networks can to some extent assist doctors. To alleviate challenges that popular deep neural networks face, such as large parameter counts, lack of interpretability, and overfitting, we propose a segmentation network named Att-U-Node that uses attention modules to guide a neural ODE-based framework. Specifically, the network uses ODE blocks to form an encoder-decoder structure, with feature modeling by neural ODEs completed at each level. Besides, we propose an attention module that calculates the coefficients and generates a refined attention feature for the skip connections. Three publicly available breast ultrasound image datasets (i.e., BUSI, BUS and OASBUD) and a private breast DCE-MRI dataset are used to assess the efficiency of the proposed model; we also upgrade the model to 3D for tumor segmentation with data selected from the public QIN Breast DCE-MRI collection. The experiments show that the proposed model achieves competitive results compared with related methods while mitigating the common problems of deep neural networks.
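An ODE block replaces a stack of residual layers with the numerical integration of learned dynamics, which keeps the parameter count small. Below is a toy PyTorch sketch using fixed-step Euler integration; the step count and dynamics network are assumptions, and the paper's blocks (and its attention-guided skip connections) are more elaborate.

```python
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """Integrate dx/dt = f(x) from t=0 to t=1 with Euler steps."""
    def __init__(self, channels: int, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.dynamics = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, x):
        h = 1.0 / self.steps
        for _ in range(self.steps):
            x = x + h * self.dynamics(x)  # one Euler step of the learned ODE
        return x
```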
Affiliation(s)
- Jintao Ru: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Beichen Lu: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Buran Chen: Department of Thyroid and Breast Surgery, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jialin Shi: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Gaoxiang Chen: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Meihao Wang: Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Medical Imaging of Wenzhou, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Zhifang Pan: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Yezhi Lin: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China; Key Laboratory of Intelligent Treatment and Life Support for Critical Diseases of Zhejiang Province, Wenzhou, 325000, People's Republic of China
- Zhihong Gao: Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Jiejie Zhou: Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
- Xiaoming Liu: School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan, 430065, People's Republic of China
- Chen Zhang: The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325000, People's Republic of China
19
Lan J, Chen M, Wang J, Du M, Wu Z, Zhang H, Xue Y, Wang T, Chen L, Xu C, Han Z, Hu Z, Zhou Y, Zhou X, Tong T, Chen G. Using less annotation workload to establish a pathological auxiliary diagnosis system for gastric cancer. Cell Rep Med 2023; 4:101004. PMID: 37044091. PMCID: PMC10140598. DOI: 10.1016/j.xcrm.2023.101004.
Abstract
Pathological diagnosis of gastric cancer requires pathologists to have extensive clinical experience. To help pathologists improve diagnostic accuracy and efficiency, we collected 1,514 cases of stomach H&E-stained specimens with complete diagnostic information to establish a pathological auxiliary diagnosis system based on deep learning. At the slide level, our system achieves a specificity of 0.8878 while maintaining a high sensitivity close to 1.0 on 269 biopsy specimens (147 malignancies) and 163 surgical specimens (80 malignancies). The classification accuracy of our system is 0.9034 at the slide level for 352 biopsy specimens (201 malignancies) from 50 medical centers. With the help of our system, the pathologists' average false-negative rate and average false-positive rate on 100 biopsy specimens (50 malignancies) are reduced to 1/5 and 1/2 of the original rates, respectively. At the same time, the average uncertainty rate and the average diagnosis time are reduced by approximately 22% and 20%, respectively.
Collapse
Affiliation(s)
- Junlin Lan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Musheng Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
| | - Jianchao Wang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
| | - Min Du
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Zhida Wu
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
| | - Hejun Zhang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
| | - Yuyang Xue
- School of Engineering, University of Edinburgh, Edinburgh EH8 9JU, UK
| | - Tao Wang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Lifan Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
| | - Chaohui Xu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Zixin Han
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Ziwei Hu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Yuanbo Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Xiaogen Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
| | - Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China; Imperial Vision Technology, Fuzhou, Fujian 350100, China.
| | - Gang Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China.
| |
|
20
|
Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. [PMID: 36863192 DOI: 10.1016/j.compbiomed.2023.106668] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2022] [Revised: 01/12/2023] [Accepted: 02/10/2023] [Indexed: 02/21/2023]
Abstract
Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of these outstanding results, the adoption of such techniques in clinical practice is still proceeding at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. Establishing this link is of utmost importance in the regulated healthcare domain if practitioners, patients and other stakeholders are to trust the automated diagnosis system. The application of deep learning to medical imaging must be approached with caution given the health and safety concerns involved, much as blame attribution must be resolved for accidents involving autonomous cars. The consequences of both false positive and false negative cases are far-reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures and millions of parameters and have a 'black box' nature, offering little insight into their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help practitioners understand model predictions, which builds trust in the system, accelerates disease diagnosis, and supports adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of XAI techniques, discuss the open challenges, and suggest future directions for XAI that will be of interest to clinicians, regulators and model developers.
Affiliation(s)
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK.
| | - Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
| | - Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
| |
|
21
|
Garg S, Singh P. Transfer Learning Based Lightweight Ensemble Model for Imbalanced Breast Cancer Classification. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:1529-1539. [PMID: 35536810 DOI: 10.1109/tcbb.2022.3174091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Automated classification of breast cancer can often save lives, as manual detection is usually time-consuming and expensive. Over the last decade, deep learning techniques have been the most widely used for the automatic classification of breast cancer from histopathology images. This paper performs binary and multi-class classification of breast cancer using a transfer learning-based ensemble model. To analyze the correctness and reliability of the proposed model, we used the imbalanced IDC and BreakHis datasets in the binary-class scenario and the balanced BACH dataset for multi-class classification. A lightweight shallow CNN, with batch normalization to accelerate convergence, is aggregated with a lightweight MobileNetV2 to improve learning and adaptability. The aggregated output is fed into a multilayer perceptron to complete the final classification task. The experimental study on all three datasets was performed and compared with recent works. We fine-tuned three different pre-trained models (ResNet50, InceptionV4, and MobileNetV2) and compared them with the proposed lightweight ensemble model in terms of execution time, number of parameters, model size, and related metrics. In both evaluation phases, our model outperforms the alternatives on all three datasets.
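As a rough illustration of the two-branch design this abstract describes, the sketch below pairs a shallow batch-normalized CNN with a pretrained MobileNetV2 and fuses them through a multilayer perceptron. It assumes PyTorch and torchvision; the layer widths, concatenation-based fusion, and MLP head sizes are illustrative guesses, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class LightweightEnsemble(nn.Module):
    """Shallow CNN branch + pretrained MobileNetV2 branch, fused by an MLP head."""
    def __init__(self, num_classes=2):
        super().__init__()
        # Shallow branch: batch normalization after each conv to speed convergence.
        self.shallow = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Transfer-learning branch: MobileNetV2 features (1280-d after pooling).
        mobilenet = models.mobilenet_v2(weights="IMAGENET1K_V1")
        self.mobile = nn.Sequential(mobilenet.features,
                                    nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Multilayer perceptron over the concatenated branch features.
        self.mlp = nn.Sequential(
            nn.Linear(64 + 1280, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.mlp(torch.cat([self.shallow(x), self.mobile(x)], dim=1))

logits = LightweightEnsemble()(torch.randn(4, 3, 224, 224))  # -> shape (4, 2)
```

Freezing the pretrained branch for the first few epochs is a common way to stabilize hybrids of this kind on small histopathology datasets.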
|
22
|
Huang P, Zhou X, He P, Feng P, Tian S, Sun Y, Mercaldo F, Santone A, Qin J, Xiao H. Interpretable laryngeal tumor grading of histopathological images via depth domain adaptive network with integration gradient CAM and priori experience-guided attention. Comput Biol Med 2023; 154:106447. [PMID: 36706570 DOI: 10.1016/j.compbiomed.2022.106447] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Interpretable tumor grading of laryngeal cancer is a key yet challenging task in clinical diagnosis, mainly because the commonly used low-magnification pathological images lack fine cellular structure information and accurate localization, pathologists' diagnoses differ from those of attentional convolutional network-based methods, and the gradient-weighted class activation mapping (Grad-CAM) method cannot be optimized to create the best visualization map. To address these problems, we propose an end-to-end depth domain adaptive network (DDANet) with an integration gradient CAM and priori experience-guided attention that improves tumor grading performance and interpretability by introducing the pathologist's high-magnification a priori experience into the deep model. Specifically, a novel priori experience-guided attention (PE-GA) method is developed to solve the traditional unsupervised attention optimization problem. Besides, a novel integration gradient CAM is proposed to mitigate the overfitting, information redundancy and low sparsity of the Grad-CAM maps generated by the PE-GA method. Furthermore, we establish a set of quantitative evaluation metrics for visual model interpretation. Extensive experimental results show that, compared with state-of-the-art methods, the average grading accuracy is increased to 88.43% (↑4.04%) and the effective interpretable rate is increased to 52.73% (↑11.45%). Additionally, the method effectively reduces the gap between computer vision-based methods and pathologists in diagnosis results. Importantly, the visualized interpretive maps are closer to the regions of interest that pathologists focus on, and our model outperforms pathologists with different levels of experience.
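The paper's integration gradient CAM is not reproduced here, but the plain Grad-CAM it builds on is straightforward to sketch. The function below is a minimal stand-in, assuming PyTorch and a torchvision ResNet-18 as a hypothetical backbone; it weights a convolutional layer's feature maps by the spatially pooled gradients of the target class score.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx):
    """Plain Grad-CAM: weight feature maps by pooled gradients of the target class."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()          # gradients of the chosen class score
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * feats[0]).sum(dim=1))       # weighted sum of maps, then ReLU
    return cam / (cam.max() + 1e-8)                     # normalize to [0, 1]

model = models.resnet18(weights="IMAGENET1K_V1").eval()
cam = grad_cam(model, model.layer4, torch.randn(1, 3, 224, 224), class_idx=0)
```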
Affiliation(s)
- Pan Huang
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China
| | - Xiaoli Zhou
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China
| | - Peng He
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
| | - Peng Feng
- Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
| | - Sukun Tian
- Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China
| | - Yuchun Sun
- Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China.
| | - Francesco Mercaldo
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
| | - Antonella Santone
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
| | - Jing Qin
- School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
| | - Hualiang Xiao
- Department of Pathology, Daping Hospital, Army Medical University, Chongqing, China
| |
|
23
|
Shi H, Li J, Mao J, Hwang KS. Lateral Transfer Learning for Multiagent Reinforcement Learning. IEEE TRANSACTIONS ON CYBERNETICS 2023; 53:1699-1711. [PMID: 34506297 DOI: 10.1109/tcyb.2021.3108237] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Some researchers have introduced transfer learning mechanisms into multiagent reinforcement learning (MARL). However, existing work on cross-task transfer for multiagent systems has been designed only for homogeneous agents or similar domains. This work proposes an all-purpose cross-task transfer method, called multiagent lateral transfer (MALT), that assists MARL by alleviating the training burden. We discuss several challenges in developing an all-purpose multiagent cross-task transfer learning method and provide a feasible way of reusing knowledge for MARL. In the developed method, inspired by the progressive network, we take features rather than policies or experiences as the transfer object. To achieve more efficient transfer, we assign pretrained policy networks to agents based on clustering, and an attention module is introduced to enhance the transfer framework. The proposed method has no strict requirements for the source and target tasks. Compared with existing works, our method can transfer knowledge among heterogeneous agents and also avoids negative transfer in the case of fully different tasks. As far as we know, this article is the first work devoted to all-purpose cross-task transfer for MARL. Several experiments in various scenarios have been conducted to compare the performance of the proposed method with baselines. The results demonstrate that the method is sufficiently flexible for most settings, including cooperative, competitive, homogeneous, and heterogeneous configurations.
|
24
|
Yu W, Zhou H, Choi Y, Goldin JG, Teng P, Wong WK, McNitt-Gray MF, Brown MS, Kim GHJ. Multi-scale, domain knowledge-guided attention + random forest: a two-stage deep learning-based multi-scale guided attention models to diagnose idiopathic pulmonary fibrosis from computed tomography images. Med Phys 2023; 50:894-905. [PMID: 36254789 PMCID: PMC10082682 DOI: 10.1002/mp.16053] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Revised: 07/25/2022] [Accepted: 09/06/2022] [Indexed: 11/08/2022] Open
Abstract
BACKGROUND Idiopathic pulmonary fibrosis (IPF) is a progressive, irreversible, and usually fatal lung disease of unknown cause that generally affects the elderly population. Early diagnosis of IPF is crucial for triaging patients into anti-fibrotic treatment or treatments for other causes of pulmonary fibrosis. However, the current IPF diagnosis workflow is complicated and time-consuming, involves collaborative efforts from radiologists, pathologists, and clinicians, and is largely subject to inter-observer variability. PURPOSE The purpose of this work is to develop a deep learning-based automated system that can diagnose IPF among subjects with interstitial lung disease (ILD) using an axial chest computed tomography (CT) scan. This work can potentially enable timely diagnosis decisions and reduce inter-observer variability. METHODS Our dataset contains CT scans from 349 IPF patients and 529 non-IPF ILD patients. We used 80% of the dataset for training and validation purposes and 20% as the holdout test set. We proposed a two-stage model: at stage one, we built a multi-scale, domain knowledge-guided attention model (MSGA) that encouraged the model to focus on specific areas of interest to enhance model explainability, including both high- and medium-resolution attention; at stage two, we collected the output from MSGA and constructed a random forest (RF) classifier for patient-level diagnosis, to further boost model accuracy. An RF classifier is utilized as the final decision stage since it is interpretable, computationally fast, and can handle correlated variables. Model utility was examined by (1) accuracy, represented by the area under the receiver operating characteristic curve (AUC) with standard deviation (SD), and (2) explainability, illustrated by visual examination of the estimated attention maps, which show the important areas for model diagnostics. RESULTS During the training and validation stage, we observe that when we provide no guidance from domain knowledge, the IPF diagnosis model reaches acceptable performance (AUC±SD = 0.93±0.07) but lacks explainability; when including only guided high- or medium-resolution attention, the learned attention maps are not satisfactory; when including both high- and medium-resolution attention, under certain hyperparameter settings, the model reaches the highest AUC among all experiments (AUC±SD = 0.99±0.01) and the estimated attention maps concentrate on the regions of interest for this task. The three best-performing hyperparameter selections according to MSGA were applied to the holdout test set and reached model performance comparable to that of the validation set. CONCLUSIONS Our results suggest that, for a task with only scan-level labels available, MSGA+RF can utilize population-level domain knowledge to guide the training of the network, which increases both model accuracy and explainability.
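A toy version of the two-stage pattern described above (deep features first, a random forest for the patient-level call) can be sketched as follows. The CNN here is a generic pretrained ResNet-18 standing in for the paper's MSGA attention model, and the slice counts, mean pooling, and labels are placeholder assumptions; it only illustrates how stage-one embeddings feed the stage-two classifier.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

# Stage 1 (stand-in): a pretrained CNN as the feature model; the paper's
# MSGA would supply attention-guided features instead.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()           # expose 512-d embeddings
backbone.eval()

def scan_features(ct_slices):
    """Pool slice-level CNN embeddings into one patient-level feature vector."""
    with torch.no_grad():
        emb = backbone(ct_slices)           # (num_slices, 512)
    return emb.mean(dim=0).numpy()          # average pooling across slices

# Stage 2: random forest over patient-level features for the final diagnosis.
X = np.stack([scan_features(torch.randn(16, 3, 224, 224)) for _ in range(8)])
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])      # toy IPF / non-IPF labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba(X[:2]))
```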
Affiliation(s)
- Wenxi Yu
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | - Hua Zhou
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | - Youngwon Choi
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | - Jonathan G Goldin
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | - Pangyu Teng
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | - Weng Kee Wong
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | | | - Matthew S Brown
- Department of Biostatistics, University of California, Los Angeles, California, USA
| | - Grace Hyun J Kim
- Department of Biostatistics, University of California, Los Angeles, California, USA
| |
|
25
|
Zeng Y, Xu X. Label Diffusion Graph Learning network for semi-supervised breast histological image recognition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
26
|
Chan RC, To CKC, Cheng KCT, Yoshikazu T, Yan LLA, Tse GM. Artificial intelligence in breast cancer histopathology. Histopathology 2023; 82:198-210. [PMID: 36482271 DOI: 10.1111/his.14820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 09/22/2022] [Accepted: 09/28/2022] [Indexed: 12/13/2022]
Abstract
This is a review of the use of artificial intelligence for digital breast pathology. A systematic search on PubMed was conducted, identifying 17,324 research papers related to breast cancer pathology. Following semimanual screening, 664 papers were retrieved and reviewed. The papers are grouped into six major tasks performed by pathologists, namely molecular and hormonal analysis, grading, mitotic figure counting, Ki-67 indexing, tumour-infiltrating lymphocyte assessment, and lymph node metastasis identification. Under each task, open-source datasets available for building artificial intelligence (AI) tools are also listed. Many AI tools have shown promise and demonstrated feasibility in the automation of routine pathology investigations. We expect continued growth of AI in this field as new algorithms mature.
Affiliation(s)
- Ronald Ck Chan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
| | - Chun Kit Curtis To
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
| | - Ka Chuen Tom Cheng
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
| | - Tada Yoshikazu
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
| | - Lai Ling Amy Yan
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
| | - Gary M Tse
- Department of Anatomical and Cellular Pathology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong, Hong Kong
| |
|
27
|
Huang P, He P, Tian S, Ma M, Feng P, Xiao H, Mercaldo F, Santone A, Qin J. A ViT-AMC Network With Adaptive Model Fusion and Multiobjective Optimization for Interpretable Laryngeal Tumor Grading From Histopathological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:15-28. [PMID: 36018875 DOI: 10.1109/tmi.2022.3202248] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Tumor grading of laryngeal cancer pathological images needs to be accurate and interpretable. Deep learning models based on the attention mechanism-integrated convolution (AMC) block have good inductive bias capability but poor interpretability, whereas models based on the vision transformer (ViT) block have good interpretability but weak inductive bias. Therefore, we propose an end-to-end ViT-AMC network (ViT-AMCNet) with adaptive model fusion and multiobjective optimization that integrates and fuses the ViT and AMC blocks. However, existing model fusion methods often suffer from negative fusion: (1) there is no guarantee that the ViT and AMC blocks will simultaneously have good feature representation capability; (2) the difference between the feature representations learned by the ViT and AMC blocks is not obvious, so much redundant information exists in the two representations. Accordingly, we first prove the feasibility of fusing the ViT and AMC blocks based on Hoeffding's inequality. Then, we propose a multiobjective optimization method to solve the problem that the ViT and AMC blocks cannot simultaneously have good feature representations. Finally, an adaptive model fusion method integrating a metrics block and a fusion block is proposed to increase the differences between feature representations and improve the redundancy-removal capability. Our methods improve the fusion ability of ViT-AMCNet, and experimental results demonstrate that ViT-AMCNet significantly outperforms state-of-the-art methods. Importantly, the visualized interpretive maps are closer to the regions of interest that pathologists focus on, and the generalization ability is also excellent. Our code is publicly available at https://github.com/Baron-Huang/ViT-AMCNet.
|
28
|
Li G, Wu G, Xu G, Li C, Zhu Z, Ye Y, Zhang H. Pathological image classification via embedded fusion mutual learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
29
|
Dey S, Mitra S, Chakraborty S, Mondal D, Nasipuri M, Das N. GC-EnC: A Copula based ensemble of CNNs for malignancy identification in breast histopathology and cytology images. Comput Biol Med 2023; 152:106329. [PMID: 36473342 DOI: 10.1016/j.compbiomed.2022.106329] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 10/25/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022]
Abstract
In the present work, we have explored the potential of a Copula-based ensemble of CNNs (Convolutional Neural Networks) over individual classifiers for malignancy identification in histopathology and cytology images. A Copula-based model that integrates three of the best-performing CNN architectures, namely DenseNet-161/201, ResNet-101/34, and InceptionNet-V3, is proposed. Also, the limitation of small datasets is circumvented using a fuzzy template-based data augmentation technique that intelligently selects multiple regions of interest (ROIs) from an image. The proposed data augmentation framework, amalgamated with the ensemble technique, showed gratifying performance in malignancy prediction, surpassing the individual CNNs' performance on breast cytology and histopathology datasets. The proposed method achieved accuracies of 84.37%, 97.32%, and 91.67% on the JUCYT, BreakHis and BI datasets, respectively. This automated technique will serve as a useful guide to the pathologist in delivering the appropriate diagnostic decision with reduced time and effort. The relevant code for the proposed ensemble model is publicly available on GitHub.
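The copula combiner itself is beyond a short sketch, but the backbone ensemble it operates over is easy to set out. Below is a minimal stand-in, assuming PyTorch and torchvision, that averages softmax outputs from DenseNet-201, ResNet-101 and Inception-V3; the paper's Copula-based integration would replace the plain averaging step.

```python
import torch
from torchvision import models

# Three heterogeneous backbones, as in the ensemble described above
# (a Copula-based combiner in the paper; plain probability averaging here).
nets = [models.densenet201(weights="IMAGENET1K_V1"),
        models.resnet101(weights="IMAGENET1K_V1"),
        models.inception_v3(weights="IMAGENET1K_V1")]
for n in nets:
    n.eval()

@torch.no_grad()
def ensemble_predict(x):
    """Average softmax probabilities across member networks."""
    probs = [torch.softmax(n(x), dim=1) for n in nets]
    return torch.stack(probs).mean(dim=0)

# 299x299 inputs satisfy Inception-V3's minimum size; the others adapt via pooling.
pred = ensemble_predict(torch.randn(2, 3, 299, 299)).argmax(dim=1)
```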
Affiliation(s)
- Soumyajyoti Dey
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
| | - Shyamali Mitra
- Jadavpur University, Department of Instrumentation & Electronics Engineering, Kolkata, West Bengal, India.
| | | | - Debashri Mondal
- Theism Medical Diagnostics Centre, Kolkata, West Bengal, India.
| | - Mita Nasipuri
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
| | - Nibaran Das
- Jadavpur University, Department of Computer Science & Engineering, Kolkata, West Bengal, India.
| |
|
30
|
Improved Bald Eagle Search Optimization with Synergic Deep Learning-Based Classification on Breast Cancer Imaging. Cancers (Basel) 2022; 14:cancers14246159. [PMID: 36551644 PMCID: PMC9776477 DOI: 10.3390/cancers14246159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 11/24/2022] [Accepted: 11/26/2022] [Indexed: 12/15/2022] Open
Abstract
Medical imaging has attracted growing interest in the field of healthcare regarding breast cancer (BC). Globally, BC is a major cause of mortality amongst women. At present, the examination of histopathology images is the medical gold standard for cancer diagnosis. However, manual microscopic inspection is a laborious task, and the results might be misleading as a result of human error. Thus, a computer-aided diagnosis (CAD) system can be utilized for accurately detecting cancer within essential time constraints, as earlier diagnosis is the key to curing cancer. The classification and diagnosis of BC utilizing deep learning algorithms has gained considerable attention. This article presents an improved bald eagle search optimization with a synergic deep learning mechanism for breast cancer diagnosis using histopathological images (IBESSDL-BCHI). The proposed IBESSDL-BCHI model concentrates on the identification and classification of BC using HIs. To do so, the presented IBESSDL-BCHI model first applies image preprocessing using a median filtering (MF) technique. In addition, feature extraction using a synergic deep learning (SDL) model is carried out, and the hyperparameters related to the SDL mechanism are tuned by the use of the IBES model. Lastly, long short-term memory (LSTM) is utilized to precisely categorize the HIs into two major classes: benign and malignant. The performance validation of the IBESSDL-BCHI system was tested utilizing a benchmark dataset, and the results demonstrate that the IBESSDL-BCHI model shows better overall efficiency for BC classification.
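The first stage of the pipeline above, median filtering, is simple to reproduce. The snippet below is a minimal sketch using SciPy; the 3x3 kernel and per-channel filtering are assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_histology(image: np.ndarray, kernel: int = 3) -> np.ndarray:
    """Median-filter each colour channel to suppress impulse noise before
    feature extraction (kernel size is an illustrative assumption)."""
    return np.stack([median_filter(image[..., c], size=kernel)
                     for c in range(image.shape[-1])], axis=-1)

denoised = preprocess_histology(
    np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8))
```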
|
31
|
Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. [PMID: 36443935 DOI: 10.1111/neup.12880] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 10/17/2022] [Accepted: 10/23/2022] [Indexed: 12/03/2022]
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Affiliation(s)
- Islam Alzoubi
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
| | - Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
| |
|
32
|
Rashmi R, Prasad K, Udupa CBK. Region-based feature enhancement using channel-wise attention for classification of breast histopathological images. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07966-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Breast histopathological image analysis at 400x magnification is essential for the determination of malignant breast tumours. However, manual analysis of these images is tedious, subjective, error-prone and requires domain knowledge. To this end, computer-aided tools have gained much attention in the recent past, as they aid pathologists and save time. Furthermore, advances in computational power have leveraged the usage of such tools. Yet, using computer-aided tools to analyse these images is challenging for various reasons, such as the heterogeneity of malignant tumours, colour variations and the presence of artefacts. Moreover, these images are captured at high resolutions, which poses a major challenge to designing deep learning models, as it demands high computational requirements. In this context, the present work proposes a new approach to efficiently and effectively extract features from these high-resolution images. In addition, at 400x magnification, the characteristics and structure of nuclei play a prominent role in the decision of malignancy. In this regard, the study introduces a novel CNN architecture called CWA-Net that uses a colour channel attention module to enhance the features of potential regions of interest such as nuclei. The developed model was qualitatively and quantitatively evaluated on private and public datasets and achieved accuracies of 0.95 and 0.96, respectively. The experimental evaluation demonstrates that the proposed method outperforms state-of-the-art methods on both datasets.
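A channel attention module of the sort named above can be approximated by a squeeze-and-excitation gate. The sketch below, assuming PyTorch, is a generic stand-in rather than the CWA-Net module itself; the reduction ratio of 4 is an arbitrary choice.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: reweight channels so
    nuclei-related responses can be emphasised (a generic stand-in for the
    paper's colour channel attention module)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # squeeze: per-channel stats
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),  # excite: weights
        )

    def forward(self, x):
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)   # (N, C, 1, 1) channel weights
        return x * w                                   # channel-wise reweighting

out = ChannelAttention(64)(torch.randn(2, 64, 56, 56))
```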
|
33
|
Chattopadhyay S, Dey A, Singh PK, Oliva D, Cuevas E, Sarkar R. MTRRE-Net: A deep learning model for detection of breast cancer from histopathological images. Comput Biol Med 2022; 150:106155. [PMID: 36240595 DOI: 10.1016/j.compbiomed.2022.106155] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 08/31/2022] [Accepted: 09/24/2022] [Indexed: 11/03/2022]
Abstract
Histopathological image classification has become one of the most challenging tasks for researchers due to the fine-grained variability of the disease. However, the rapid development of deep learning-based models such as the Convolutional Neural Network (CNN) has attracted much attention to the classification of complex biomedical images. In this work, we propose a novel end-to-end deep learning model, named the Multi-scale Dual Residual Recurrent Network (MTRRE-Net), for breast cancer classification from histopathological images. This model introduces a contrasting approach of a dual residual block combined with a recurrent network to overcome the vanishing gradient problem even when the network is significantly deep. The proposed model has been evaluated on a publicly available standard dataset, namely BreaKHis, and achieved impressive accuracy, surpassing state-of-the-art models on all the images considered at various magnification levels.
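The abstract does not define the dual residual block precisely, so the sketch below, assuming PyTorch, shows one plausible reading: an inner residual unit wrapped in a second, outer skip connection, giving gradients two short paths back through the block.

```python
import torch
import torch.nn as nn

class DualResidualBlock(nn.Module):
    """Two nested skip connections: an inner residual unit wrapped in an
    outer identity path (one plausible reading of the dual residual idea)."""
    def __init__(self, channels: int):
        super().__init__()
        self.inner = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        inner_out = self.act(x + self.inner(x))   # inner residual connection
        return self.act(x + inner_out)            # outer residual over the inner result

y = DualResidualBlock(32)(torch.randn(1, 32, 64, 64))
```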
Affiliation(s)
- Soham Chattopadhyay
- Department of Electrical Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata 700032, West Bengal, India.
| | - Arijit Dey
- Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata 700064, West Bengal, India.
| | - Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata 700106, West Bengal, India.
| | - Diego Oliva
- División de Tecnologías para la Integración Ciber-Humana, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, 44430, Guadalajara, Jal, Mexico.
| | - Erik Cuevas
- División de Tecnologías para la Integración Ciber-Humana, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, 44430, Guadalajara, Jal, Mexico.
| | - Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata 700032, West Bengal, India.
| |
|
34
|
Zhang H, Chen H, Qin J, Wang B, Ma G, Wang P, Zhong D, Liu J. MC-ViT: Multi-path cross-scale vision transformer for thymoma histopathology whole slide image typing. Front Oncol 2022; 12:925903. [PMID: 36387248 PMCID: PMC9659861 DOI: 10.3389/fonc.2022.925903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2022] [Accepted: 10/11/2022] [Indexed: 08/14/2023] Open
Abstract
OBJECTIVES Accurate histological typing plays an important role in diagnosing thymoma or thymic carcinoma (TC) and predicting the corresponding prognosis. In this paper, we develop and validate a deep learning-based thymoma typing method for hematoxylin & eosin (H&E)-stained whole slide images (WSIs), which provides useful histopathology information from patients to assist doctors in better diagnosing thymoma or TC. METHODS We propose a multi-path cross-scale vision transformer (MC-ViT), which first uses a cross attentive scale-aware transformer (CAST) to classify the pathological information related to thymoma, and then uses such pathological information priors to assist the WSI transformer (WT) for thymoma typing. To make full use of the multi-scale (10×, 20×, and 40×) information inherent in a WSI, CAST not only employs parallel multi-path branches to capture features with different receptive fields from multi-scale WSI inputs, but also introduces a cross-correlation attention module (CAM) to aggregate multi-scale features and achieve cross-scale spatial information complementarity. After that, WT can effectively convert full-scale WSIs into 1D feature matrices with pathological information labels to improve the efficiency and accuracy of thymoma typing. RESULTS We construct a large-scale thymoma histopathology WSI (THW) dataset and annotate the corresponding pathological information and thymoma typing labels. The proposed MC-ViT achieves Top-1 accuracies of 0.939 and 0.951 in pathological information classification and thymoma typing, respectively. Moreover, quantitative and statistical experiments on the THW dataset demonstrate that our pipeline performs favorably against existing classical convolutional neural networks, vision transformers, and deep learning-based medical image classification methods. CONCLUSION This paper demonstrates that comprehensively utilizing the pathological information contained in multi-scale WSIs is feasible for thymoma typing and achieves clinically acceptable performance. Specifically, the proposed MC-ViT can predict pathological information classes as well as thymoma types, which shows its application potential for the diagnosis of thymoma and TC; it may assist doctors in improving diagnostic efficiency and accuracy.
Affiliation(s)
- Huaqi Zhang
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
| | - Huang Chen
- Department of Pathology, China-Japan Friendship Hospital, Beijing, China
| | - Jin Qin
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
| | - Bei Wang
- Department of Pathology, China-Japan Friendship Hospital, Beijing, China
| | - Guolin Ma
- Department of Radiology, China-Japan Friendship Hospital, Beijing, China
| | - Pengyu Wang
- School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
| | - Dingrong Zhong
- Department of Pathology, China-Japan Friendship Hospital, Beijing, China
| | - Jie Liu
- School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
| |
|
35
|
Chen H, Gomez C, Huang CM, Unberath M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med 2022; 5:156. [PMID: 36261476 PMCID: PMC9581990 DOI: 10.1038/s41746-022-00699-2] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 09/29/2022] [Indexed: 11/16/2022] Open
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in PubMed, EMBASE, and Compendex databases. We identified 2508 records and 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g. clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users, and thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Affiliation(s)
- Haomin Chen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Catalina Gomez
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Chien-Ming Huang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
| |
|
36
|
Mix-attention approximation for homogeneous large-scale multi-agent reinforcement learning. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07880-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/10/2022]
|
37
|
Deep Learning Assessment for Mining Important Medical Image Features of Various Modalities. Diagnostics (Basel) 2022; 12:diagnostics12102333. [DOI: 10.3390/diagnostics12102333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 09/13/2022] [Accepted: 09/22/2022] [Indexed: 11/16/2022] Open
Abstract
Deep learning (DL) is a well-established pipeline for feature extraction in medical and nonmedical imaging tasks, such as object detection, segmentation, and classification. However, DL faces the issue of explainability, which hinders its reliable utilisation in everyday clinical practice. This study evaluates DL methods for their efficiency in revealing and suggesting potential image biomarkers. Eleven biomedical image datasets of various modalities are utilised, including SPECT, CT, photographs, microscopy, and X-ray. Seven state-of-the-art CNNs are employed and tuned to perform image classification in these tasks. The main conclusion of the research is that DL reveals potential biomarkers in several cases, especially when the models are trained from scratch in domains where low-level features such as shapes and edges are not enough to make decisions. Furthermore, in some cases, device acquisition variations slightly affect the performance of DL models.
|
38
|
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
39
|
Mehta S, Lu X, Wu W, Weaver D, Hajishirzi H, Elmore JG, Shapiro LG. End-to-End diagnosis of breast biopsy images with transformers. Med Image Anal 2022; 79:102466. [PMID: 35525135 PMCID: PMC10162595 DOI: 10.1016/j.media.2022.102466] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2021] [Revised: 02/25/2022] [Accepted: 04/18/2022] [Indexed: 01/18/2023]
Abstract
Diagnostic disagreements among pathologists occur throughout the spectrum of benign to malignant lesions. A computer-aided diagnostic system capable of reducing these uncertainties would have important clinical impact. To develop a computer-aided diagnosis method for classifying breast biopsy images into a range of diagnostic categories (benign, atypia, ductal carcinoma in situ, and invasive breast cancer), we introduce a transformer-based holistic attention network called HATNet. Unlike state-of-the-art histopathological image classification systems that use a two-pronged approach, i.e., first learning local representations using a multi-instance learning framework and then combining these local representations to produce image-level decisions, HATNet streamlines the histopathological image classification pipeline and shows how to learn representations from gigapixel-size images end-to-end. HATNet extends the bag-of-words approach and uses self-attention to encode global information, allowing it to learn representations from clinically relevant tissue structures without any explicit supervision. It outperforms the previous best network, Y-Net, which uses supervision in the form of tissue-level segmentation masks, by 8%. Importantly, our analysis reveals that HATNet learns representations from clinically relevant structures, and it matches the classification accuracy of 87 U.S. pathologists on this challenging test set.
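As a schematic of the bag-of-patches idea above, the module below, assuming PyTorch, runs multi-head self-attention over a bag of patch embeddings and mean-pools the result into one slide-level prediction. The embedding width, head count, and four diagnostic classes mirror the abstract loosely; none of it is HATNet's actual configuration.

```python
import torch
import torch.nn as nn

class AttentionBagClassifier(nn.Module):
    """Bag-of-patches classifier: self-attention over patch embeddings,
    mean-pooled into an image-level decision (a schematic stand-in for
    HATNet's holistic attention)."""
    def __init__(self, dim: int = 256, num_classes: int = 4, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, patches):                 # patches: (batch, num_patches, dim)
        attended, _ = self.attn(patches, patches, patches)   # global context exchange
        pooled = self.norm(patches + attended).mean(dim=1)   # residual + mean pooling
        return self.head(pooled)

logits = AttentionBagClassifier()(torch.randn(2, 100, 256))  # -> shape (2, 4)
```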
Affiliation(s)
| | - Ximing Lu
- University of Washington, Seattle, USA
| | - Wenjun Wu
- University of Washington, Seattle, USA
| | - Donald Weaver
- Department of Pathology, The University of Vermont College of Medicine, USA
| | | | - Joann G Elmore
- David Geffen School of Medicine, University of California, Los Angeles, USA
| | | |
|
40
|
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/15/2022] [Accepted: 05/02/2022] [Indexed: 12/11/2022]
|
41
|
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap that occurred with new techniques of deep learning, convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow this popular trend and, notably, radiation oncology is among those at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data, and in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions, namely, the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which scales with their complexity. Any problems in the data used to train the model will later be reflected in the model's performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore involve two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows, as well as vendors' perspectives on the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Adrien Bibal
- PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
| | - Margerie Huet Dastarac
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Camille Draguet
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
| | - Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
| | - Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
| | - Siri Willems
- ESAT/PSI, KU Leuven Belgium & MIRC, UZ Leuven, Belgium
| | | | | | | | - Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| | - Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
| | - John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
| |
|
42
|
Ukwuoma CC, Hossain MA, Jackson JK, Nneji GU, Monday HN, Qin Z. Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:1152. [PMID: 35626307 PMCID: PMC9139754 DOI: 10.3390/diagnostics12051152] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/16/2022] Open
Abstract
INTRODUCTION AND BACKGROUND Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the input image feature extraction used to determine the severity of cancer at various magnifications is demanding, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook the essential image features in favour of unnecessary ones, resulting in incorrect diagnoses of breast histopathology imaging and leading to mortality. METHODS This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The proposed model's backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). Statistically, the evaluation of the proposed model was performed on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. RESULTS A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class in all magnifications of the BreakHis dataset and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset. CONCLUSIONS The acquired findings were significantly resilient and proved helpful for the suggested system to assist experts at large medical institutions, resulting in early breast cancer diagnosis and a reduction in the death rate.
Affiliation(s)
- Chiagoziem C. Ukwuoma
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| | - Md Altab Hossain
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China;
| | - Jehoiada K. Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| | - Grace U. Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| | - Happy N. Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China;
| | - Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
| |
|
43
|
Thiagarajan P, Khairnar P, Ghosh S. Explanation and Use of Uncertainty Quantified by Bayesian Neural Network Classifiers for Breast Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:815-825. [PMID: 34699354 DOI: 10.1109/tmi.2021.3123300] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Despite the promise of Convolutional Neural Network (CNN)-based classification models for histopathological images, it is infeasible to quantify their uncertainties. Moreover, CNNs may suffer from overfitting when the data are biased. We show that a Bayesian CNN can overcome these limitations by regularizing automatically and by quantifying the uncertainty. We have developed a novel technique to utilize the uncertainties provided by the Bayesian CNN that significantly improves performance on a large fraction of the test data (about 6% improvement in accuracy on 77% of the test data). Further, we provide a novel explanation for the uncertainty by projecting the data into a low-dimensional space through a nonlinear dimensionality reduction technique. This dimensionality reduction enables interpretation of the test data through visualization and reveals the structure of the data in a low-dimensional feature space. We show that the Bayesian CNN can perform much better than the state-of-the-art transfer learning CNN (TL-CNN), reducing false negatives and false positives by 11% and 7.7%, respectively, for the present dataset. It achieves this performance with only 1.86 million parameters, compared with 134.33 million for the TL-CNN. Besides, we modify the Bayesian CNN by introducing a stochastic adaptive activation function. The modified Bayesian CNN performs slightly better than the Bayesian CNN on all performance metrics and significantly reduces the numbers of false negatives and false positives (3% reduction for both). We also show that these results are statistically significant by performing McNemar's statistical significance test. This work shows the advantages of the Bayesian CNN over the state-of-the-art, and explains and utilizes the uncertainties for histopathological images. It should find applications in various medical image classification tasks.
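Full Bayesian CNNs as used above require variational training, but Monte Carlo dropout gives a quick feel for prediction-level uncertainty. The sketch below, assuming PyTorch, keeps dropout active at inference and reads the predictive mean and variance from repeated stochastic passes; the toy model and pass count are placeholders, not the paper's network.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, passes: int = 30):
    """Monte Carlo dropout as a cheap stand-in for a Bayesian CNN: keep dropout
    stochastic at test time and read mean/variance from repeated passes."""
    model.train()  # keeps Dropout active; freeze BatchNorm separately in real use
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
    return probs.mean(dim=0), probs.var(dim=0)   # per-class mean and uncertainty

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(128, 2))
mean, var = mc_dropout_predict(model, torch.randn(8, 1, 28, 28))
```

High predictive variance flags the samples on which such a model should defer to a pathologist, which is the kind of triage the abstract's uncertainty-utilization scheme exploits.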
|
44
|
Chen RJ, Lu MY, Wang J, Williamson DFK, Rodig SJ, Lindeman NI, Mahmood F. Pathomic Fusion: An Integrated Framework for Fusing Histopathology and Genomic Features for Cancer Diagnosis and Prognosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:757-770. [PMID: 32881682 DOI: 10.1109/tmi.2020.3021387] [Citation(s) in RCA: 137] [Impact Index Per Article: 68.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Cancer diagnosis, prognosis, and therapeutic response predictions are based on morphological information from histology slides and molecular profiles from genomic data. However, most deep learning-based objective outcome prediction and grading paradigms are based on histology or genomics alone and do not make use of the complementary information in an intuitive manner. In this work, we propose Pathomic Fusion, an interpretable strategy for end-to-end multimodal fusion of histology image and genomic (mutations, CNV, RNA-Seq) features for survival outcome prediction. Our approach models pairwise feature interactions across modalities by taking the Kronecker product of unimodal feature representations, and controls the expressiveness of each representation via a gating-based attention mechanism. Following supervised learning, we are able to interpret and saliently localize features across each modality, and understand how feature importance shifts when conditioning on multimodal input. We validate our approach using glioma and clear cell renal cell carcinoma datasets from the Cancer Genome Atlas (TCGA), which contains paired whole-slide image, genotype, and transcriptome data with ground truth survival and histologic grade labels. In a 15-fold cross-validation, our results demonstrate that the proposed multimodal fusion paradigm improves prognostic determinations from ground truth grading and molecular subtyping, as well as unimodal deep networks trained on histology and genomic data alone. The proposed method establishes insight and theory on how to train deep networks on multimodal biomedical data in an intuitive manner, which will be useful for other problems in medicine that seek to combine heterogeneous data streams for understanding diseases and predicting response and resistance to treatment. Code and trained models are made available at: https://github.com/mahmoodlab/PathomicFusion.
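The gated Kronecker-product fusion described above can be sketched compactly. The module below, assuming PyTorch, gates each unimodal embedding on the joint input, appends a constant 1 to each vector so unimodal terms survive the outer product, and flattens the batched outer product for a downstream head; the 32-dimensional embeddings and single-output head are illustrative assumptions, not the released model's sizes.

```python
import torch
import torch.nn as nn

class GatedKroneckerFusion(nn.Module):
    """Fuse one histology and one genomic embedding via the Kronecker (outer)
    product of gated unimodal features, following the scheme sketched above."""
    def __init__(self, dim_h: int = 32, dim_g: int = 32, num_out: int = 1):
        super().__init__()
        self.gate_h = nn.Sequential(nn.Linear(dim_h + dim_g, dim_h), nn.Sigmoid())
        self.gate_g = nn.Sequential(nn.Linear(dim_h + dim_g, dim_g), nn.Sigmoid())
        self.head = nn.Linear((dim_h + 1) * (dim_g + 1), num_out)

    def forward(self, h, g):
        joint = torch.cat([h, g], dim=1)
        h = h * self.gate_h(joint)                  # attention-gated histology features
        g = g * self.gate_g(joint)                  # attention-gated genomic features
        one = h.new_ones(h.size(0), 1)              # appended 1 keeps unimodal terms
        fused = torch.bmm(torch.cat([h, one], 1).unsqueeze(2),
                          torch.cat([g, one], 1).unsqueeze(1))  # batched outer product
        return self.head(fused.flatten(1))

risk = GatedKroneckerFusion()(torch.randn(4, 32), torch.randn(4, 32))  # -> (4, 1)
```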
|
45
|
Liu X, Kang X, Nie X, Guo J, Wang S, Yin Y. Learning Binary Semantic Embedding for Large-Scale Breast Histology Image Analysis. IEEE J Biomed Health Inform 2022; PP:3240-3250. [PMID: 35320109 DOI: 10.1109/jbhi.2022.3161341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
With the progress of clinical imaging technology and machine learning, computer-assisted diagnosis of breast histology images has attracted broad attention. Nonetheless, adoption of computer-assisted diagnosis has been hindered by the poor interpretability of conventional classification models. To address this problem, we propose a novel method for Learning Binary Semantic Embedding (LBSE). In this study, bit-balance and uncorrelation constraints, double supervision, discrete optimization, and asymmetric pairwise similarity are seamlessly integrated for learning a binary semantic-preserving embedding. Moreover, a fusion-based strategy is carefully designed to handle the intractable problem of parameter setting, saving substantial time on parameter tuning. Based on this efficient and effective embedding, classification and retrieval are performed simultaneously, providing interpretable image-based inference and model-assisted diagnosis for breast histology images. Extensive experiments on three benchmark datasets validate the superiority of LBSE in various settings.
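For intuition, here is a generic sketch of binary embedding and Hamming-distance retrieval; it uses a random projection where LBSE learns the embedding under its bit-balance, uncorrelation, and supervision constraints, so it only illustrates the retrieval side.

```python
# Illustrative sketch of binary codes + Hamming-distance retrieval; the
# projection W is random here, whereas LBSE learns it under its constraints.
import numpy as np

def binarize(features, W):
    """Project real-valued features and take signs to get {-1,+1} codes."""
    return np.sign(features @ W)

def hamming_rank(query_code, db_codes):
    """Rank database items by Hamming distance to the query code."""
    # for +/-1 codes: dot = bits - 2 * hamming, so hamming = (bits - dot) / 2
    dist = (db_codes.shape[1] - db_codes @ query_code) / 2
    return np.argsort(dist)

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 64))            # toy 512-d -> 64-bit projection
db = binarize(rng.standard_normal((1000, 512)), W)
q = binarize(rng.standard_normal((1, 512)), W)[0]
print(hamming_rank(q, db)[:5])                # indices of 5 nearest codes
```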
|
46
|
Chattopadhyay S, Dey A, Singh PK, Sarkar R. DRDA-Net: Dense residual dual-shuffle attention network for breast cancer classification using histopathological images. Comput Biol Med 2022; 145:105437. [PMID: 35339096 DOI: 10.1016/j.compbiomed.2022.105437] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Revised: 02/24/2022] [Accepted: 03/20/2022] [Indexed: 01/19/2023]
Abstract
Breast cancer is caused by the uncontrolled growth and division of cells in the breast, whereby a mass of tissue called a tumor is created. Early detection of breast cancer can save many lives. Hence, many researchers worldwide have invested considerable effort in developing robust computer-aided tools for the classification of breast cancer using histopathological images. For this purpose, in this study we designed a dual-shuffle attention-guided deep learning model, called the dense residual dual-shuffle attention network (DRDA-Net). Inspired by the bottleneck unit of the ShuffleNet architecture, our proposed model incorporates a channel attention mechanism, which enhances its ability to learn the complex patterns of images. Moreover, the model's densely connected blocks address both overfitting and the vanishing-gradient problem, even though the model is trained on a substantially small dataset. We evaluated the proposed model on the publicly available BreaKHis dataset and achieved classification accuracies of 95.72%, 94.41%, 97.43% and 98.1% at four different magnification levels, i.e., 40x, 100x, 200x and 400x respectively, demonstrating the superiority of the proposed model. The relevant code for the proposed DRDA-Net model can be found at: https://github.com/SohamChattopadhyayEE/DRDA-Net.
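A minimal sketch of the two building blocks the abstract names, ShuffleNet-style channel shuffle and a squeeze-and-excitation-style channel attention gate; the dense residual wiring of DRDA-Net itself is omitted, and the shapes below are illustrative.

```python
# Illustrative building blocks: channel shuffle (ShuffleNet) and a
# squeeze-and-excitation-style channel attention gate.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    b, c, h, w = x.shape
    return (x.view(b, groups, c // groups, h, w)
             .transpose(1, 2).reshape(b, c, h, w))

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)  # per-channel gate in [0, 1]
        return x * w

x = torch.randn(2, 32, 56, 56)
y = ChannelAttention(32)(channel_shuffle(x, groups=4))
print(y.shape)  # torch.Size([2, 32, 56, 56])
```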
Affiliation(s)
- Soham Chattopadhyay
- Department of Electrical Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata, 700032, West Bengal, India.
| | - Arijit Dey
- Department of Computer Science and Engineering, Maulana Abul Kalam Azad University of Technology, Kolkata, 700064, West Bengal, India.
| | - Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India.
| | - Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, 188, Raja S.C. Mallick Road, Kolkata, 700032, West Bengal, India.
| |
|
47
|
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505 PMCID: PMC8909166 DOI: 10.3390/cancers14051199] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/16/2022] [Accepted: 02/22/2022] [Indexed: 01/10/2023] Open
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion of the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, spurring considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems, including color normalization, image segmentation, and the diagnosis and prognosis of human cancers. In this paper, we provide a comprehensive, up-to-date review of deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning to various H&E-stained image analysis tasks, such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems on pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.
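As a point of reference for the color normalization direction the review surveys, here is a sketch of the classical Reinhard baseline, which matches a source image's colour statistics to a reference tile in Lab space; the deep learning normalizers covered in the review replace these fixed statistics with learned mappings.

```python
# Hedged sketch of Reinhard color normalization (per-channel moment matching
# in Lab space); assumes OpenCV is available and images are BGR uint8.
import numpy as np
import cv2

def reinhard_normalize(src_bgr, ref_bgr):
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    s_mu, s_sd = src.mean((0, 1)), src.std((0, 1)) + 1e-6
    r_mu, r_sd = ref.mean((0, 1)), ref.std((0, 1))
    out = (src - s_mu) / s_sd * r_sd + r_mu   # match mean/std per channel
    return cv2.cvtColor(out.clip(0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```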
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China;
| | - Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| | - Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA; (M.C.); (J.Z.); (K.H.)
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
| | - Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
| | - Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; (Y.W.); (S.H.); (Z.P.); (Y.Z.); (J.L.); (K.Y.); (Q.Z.); (D.Z.)
| |
|
48
|
Li Y, Wu X, Li C, Li X, Chen H, Sun C, Rahaman MM, Yao Y, Zhang Y, Jiang T. A hierarchical conditional random field-based attention mechanism approach for gastric histopathology image classification. APPL INTELL 2022. [DOI: 10.1007/s10489-021-02886-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
49
|
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding frameworks for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run and train those robust and complex algorithms, and the accessibility of datasets large enough for training them. The imaging modalities that researchers have exploited to automate breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities, presents their strengths and limitations, and lists the resources from which their datasets can be accessed for research purposes. It then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. We have focused primarily on reviewing frameworks that report results on mammograms, as mammography is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for detecting breast cancer. Another reason for this focus is the availability of labelled mammogram datasets. Dataset availability is one of the most important considerations in developing AI-based frameworks, as such algorithms are data hungry, and the quality of the dataset generally affects the performance of AI-based algorithms. In a nutshell, this article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan.
| | - Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| | - Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
| |
|
50
|
Salahuddin Z, Woodruff HC, Chatterjee A, Lambin P. Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Comput Biol Med 2022; 140:105111. [PMID: 34891095 DOI: 10.1016/j.compbiomed.2021.105111] [Citation(s) in RCA: 71] [Impact Index Per Article: 35.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Revised: 12/01/2021] [Accepted: 12/02/2021] [Indexed: 02/03/2023]
Abstract
Artificial Intelligence (AI) has emerged as a useful aid in numerous clinical applications for diagnosis and treatment decisions. Owing to the rapid increase in available data and computational power, deep neural networks have matched or exceeded clinician performance in many tasks. To conform to the principles of trustworthy AI, an AI system must be transparent, robust, fair, and accountable. Current deep neural solutions are referred to as black boxes because the specifics of their decision-making process are not understood. Therefore, the interpretability of deep neural networks must be ensured before they can be incorporated into the routine clinical workflow. In this narrative review, we used systematic keyword searches and domain expertise to identify nine types of interpretability methods that have been applied to deep learning models for medical image analysis, grouped by the type of explanation generated and by technical similarity. Furthermore, we report the progress made towards evaluating the explanations produced by various interpretability methods. Finally, we discuss limitations, provide guidelines for using interpretability methods, and outline future directions for the interpretability of deep neural networks in medical image analysis.
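For a sense of the simplest family among such methods, here is a hedged sketch of plain input-gradient saliency; Grad-CAM and the other families the review covers refine or replace this idea, and saliency_map below is an illustrative helper, not an API from the paper.

```python
# Illustrative input-gradient saliency: |d class-score / d pixel| highlights
# the inputs that most affect the prediction for a given class.
import torch

def saliency_map(model, image, target_class):
    model.eval()
    image = image.clone().requires_grad_(True)       # (C, H, W) input tensor
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values        # (H, W), max over channels

# usage (any torchvision-style classifier works):
# sal = saliency_map(model, img_tensor, target_class=1)
```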
Affiliation(s)
- Zohaib Salahuddin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands.
| | - Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
| | - Avishek Chatterjee
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands
| | - Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University, Maastricht, the Netherlands; Department of Radiology and Nuclear Medicine, GROW - School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, the Netherlands
| |
|