1
Mahapatra D, Tennakoon R, George Y, Roy S, Bozorgtabar B, Ge Z, Reyes M. ALFREDO: Active Learning with FeatuRe disEntanglement and DOmain adaptation for medical image classification. Med Image Anal 2024; 97:103261. PMID: 39018722. DOI: 10.1016/j.media.2024.103261.
Abstract
State-of-the-art deep learning models often fail to generalize in the presence of distribution shifts between training (source) data and test (target) data. Domain adaptation methods are designed to address this issue using labeled samples (supervised domain adaptation) or unlabeled samples (unsupervised domain adaptation). Active learning is a method to select informative samples to obtain maximum performance from minimum annotations. Selecting informative target domain samples can improve model performance and robustness, and reduce data demands. This paper proposes a novel pipeline called ALFREDO (Active Learning with FeatuRe disEntanglement and DOmain adaptation) that performs active learning under domain shift. We propose a novel feature disentanglement approach to decompose image features into domain specific and task specific components. Domain specific components refer to features that provide source specific information, e.g., scanners, vendors or hospitals. Task specific components are discriminative features for classification, segmentation or other tasks. Thereafter, we define multiple novel cost functions that identify informative samples under domain shift. We test the proposed method for medical image classification using one histopathology dataset and two chest X-ray datasets. Experiments show that our method achieves state-of-the-art results compared to both other domain adaptation methods and state-of-the-art active domain adaptation methods.
Affiliation(s)
- Dwarikanath Mahapatra
- Inception Institute of AI, Abu Dhabi, United Arab Emirates; Faculty of IT, Monash University, Melbourne, Australia.
- Ruwan Tennakoon
- School of Computing Technologies, RMIT University, Melbourne, Australia
- Zongyuan Ge
- Faculty of IT, Monash University, Melbourne, Australia
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Radiation Oncology, University Hospital Bern, University of Bern, Switzerland
2
Xu Z, Lim S, Lu Y, Jung SW. Reversed domain adaptation for nuclei segmentation-based pathological image classification. Comput Biol Med 2024; 168:107726. PMID: 37984206. DOI: 10.1016/j.compbiomed.2023.107726.
Abstract
Although digital pathology has provided a new paradigm for modern medicine, the insufficiency of annotations for training remains a significant challenge. Due to the weak generalization abilities of deep-learning models, their performance is notably constrained in domains without sufficient annotations. Our research aims to enhance the model's generalization ability through domain adaptation, increasing prediction performance on target domain data while using only source domain labels for training. To further enhance classification performance, we introduce nuclei segmentation to provide the classifier with more diagnostically valuable nuclei information. In contrast to general domain adaptation, which generates source-like results in the target domain, we propose a reversed domain adaptation strategy that generates target-like results in the source domain, making the classification model more robust to inaccurate segmentation results. The proposed reversed unsupervised domain adaptation can effectively reduce the disparities in nuclei segmentation between the source and target domains without any target domain labels, leading to improved image classification performance in the target domain. The whole framework is designed in a unified manner so that the segmentation and classification modules can be trained jointly. Extensive experiments demonstrate that the proposed method significantly improves classification performance in the target domain and outperforms existing general domain adaptation methods.
Affiliation(s)
- Zhixin Xu
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Seohoon Lim
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea
- Yucheng Lu
- Education and Research Center for Socialware IT, Korea University, Seoul, Republic of Korea
- Seung-Won Jung
- Department of Electrical Engineering, Korea University, Seoul, Republic of Korea.
3
Thomas L, Sheeja MK. Fourier ptychographic and deep learning using breast cancer histopathological image classification. J Biophotonics 2023; 16:e202300194. PMID: 37296518. DOI: 10.1002/jbio.202300194.
Abstract
Automated and accurate classification of breast cancer histological images is crucial for medical applications, since malignant tumors are detected via histopathological images. This work combines Fourier ptychography (FP) and deep learning for breast cancer histopathological image classification. The FP method begins with a random guess of a high-resolution complex hologram and then applies iterative retrieval under FP constraints to stitch together the low-resolution multi-view elemental images captured via integral imaging. Next, entropy, geometrical, and textural features are extracted, and entropy-based normalization is used to optimize the features. Finally, the proposed ENDNN classifies the breast cancer images as normal or abnormal. The experimental outcomes demonstrate that the presented technique outperforms traditional techniques.
Affiliation(s)
- Leena Thomas
- Department of Electronics & Communication Engineering, Sree Chitra Thirunal College of Engineering, Thiruvananthapuram, Kerala, India
- APJ Abdul Kalam Technological University, Kerala, India
- College of Engineering Kallooppara, Pathanamthitta, Kerala, India
- M K Sheeja
- Department of Electronics & Communication Engineering, Sree Chitra Thirunal College of Engineering, Thiruvananthapuram, Kerala, India
- APJ Abdul Kalam Technological University, Kerala, India
4
Baidar Bakht A, Javed S, Gilani SQ, Karki H, Muneeb M, Werghi N. DeepBLS: Deep Feature-Based Broad Learning System for Tissue Phenotyping in Colorectal Cancer WSIs. J Digit Imaging 2023; 36:1653-1662. PMID: 37059892. PMCID: PMC10406762. DOI: 10.1007/s10278-023-00797-x.
Abstract
Tissue phenotyping is a fundamental step in computational pathology for the analysis of the tumor micro-environment in whole slide images (WSIs). Automatic tissue phenotyping in WSIs of colorectal cancer (CRC) assists pathologists in better cancer grading and prognostication. In this paper, we propose a novel algorithm for identifying distinct tissue components in colon cancer histology images by blending a broad learning system with deep feature extraction. First, we extract features from a pre-trained VGG19 network, which are then transformed into a mapped feature space for enhancement node generation. Utilizing both mapped features and enhancement nodes, the proposed algorithm classifies seven distinct tissue components: stroma, tumor, complex stroma, necrotic, normal benign, lymphocytes, and smooth muscle. To validate the proposed model, experiments are performed on two publicly available colorectal cancer histology datasets. Our approach achieves a remarkable performance boost, surpassing existing state-of-the-art methods by (1.3% AvTP, 2% F1) and (7% AvTP, 6% F1) on CRCD-1 and CRCD-2, respectively.
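The mapped-feature/enhancement-node recipe this abstract describes is the standard broad learning system (BLS). A minimal NumPy sketch of such a classifier head on top of pre-extracted deep features is shown below; the function names, node counts, and ridge regularizer are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def broad_learning_system(X, Y, n_map=10, n_enh=100, reg=1e-3, seed=0):
    """Fit a minimal BLS head: random mapped-feature nodes, nonlinear
    enhancement nodes, and a closed-form ridge-regression readout.
    X: (n_samples, n_features) deep features; Y: one-hot labels."""
    rng = np.random.default_rng(seed)
    # Mapped feature nodes: random linear projections of the input features
    Wm = rng.standard_normal((X.shape[1], n_map))
    Z = X @ Wm
    # Enhancement nodes: nonlinear expansion of the mapped features
    We = rng.standard_normal((n_map, n_enh))
    H = np.tanh(Z @ We)
    A = np.hstack([Z, H])
    # Ridge-regression output weights (closed form)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wm, We, W

def bls_predict(X, Wm, We, W):
    """Score new samples with the fitted BLS head."""
    Z = X @ Wm
    A = np.hstack([Z, np.tanh(Z @ We)])
    return A @ W
```

Because the hidden weights are random and only the readout is learned, training reduces to one linear solve, which is the main appeal of BLS over iterative fine-tuning.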
Affiliation(s)
- Ahsan Baidar Bakht
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Sajid Javed
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, 33431 USA
- Hamad Karki
- Mechanical Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Muhammad Muneeb
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Naoufel Werghi
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
5
Li Y, Xu J, Wang P, Li P, Yang G, Chen R. Manifold reconstructed semi-supervised domain adaptation for histopathology images classification. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104495.
6
A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images. Med Image Anal 2023; 84:102703. PMID: 36481608. DOI: 10.1016/j.media.2022.102703.
Abstract
Mitosis counting of biopsies is an important biomarker for breast cancer patients, which supports disease prognostication and treatment planning. Developing a robust mitotic cell detection model is highly challenging due to the complex growth pattern of mitotic cells and their high similarity to non-mitotic cells. Most mitosis detection algorithms have poor generalizability across image domains and lack reproducibility and validation in multicenter settings. To overcome these issues, we propose a generalizable and robust mitosis detection algorithm (called FMDet), which is independently tested on multicenter breast histopathological images. To capture more refined morphological features of cells, we recast the object detection task as a semantic segmentation problem. The pixel-level annotations for mitotic nuclei are obtained by taking the intersection of the masks generated by a well-trained nuclear segmentation model and the bounding boxes provided by the MIDOG 2021 challenge. In our segmentation framework, a robust feature extractor is developed to capture the appearance variations of mitotic cells; it is constructed by integrating a channel-wise multi-scale attention mechanism into a fully convolutional network structure. Benefiting from the fact that changes in the low-level spectrum do not affect high-level semantic perception, we employ a Fourier-based data augmentation method that reduces domain discrepancies by exchanging the low-frequency spectrum between two domains. Our FMDet algorithm was tested in the MIDOG 2021 challenge and ranked first. It has also been externally validated on four independent mitosis detection datasets, exhibiting state-of-the-art performance in comparison with previously published results. These results demonstrate that our algorithm has the potential to be deployed as an assistant decision support tool in clinical practice.
Our code has been released at https://github.com/Xiyue-Wang/1st-in-MICCAI-MIDOG-2021-challenge.
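The low-frequency spectrum exchange this abstract describes can be sketched with NumPy's FFT, in the spirit of Fourier-based domain augmentation. The function name `fourier_mix` and the band-size parameter `beta` are illustrative assumptions, not the released FMDet API.

```python
import numpy as np

def fourier_mix(source_img, target_img, beta=0.1):
    """Swap the low-frequency amplitude spectrum of a source-domain image
    with that of a target-domain image, keeping the source phase.
    beta controls the half-width of the swapped central band."""
    # 2-D FFT; shift the zero-frequency component to the center
    fft_src = np.fft.fftshift(np.fft.fft2(source_img, axes=(0, 1)), axes=(0, 1))
    fft_tgt = np.fft.fftshift(np.fft.fft2(target_img, axes=(0, 1)), axes=(0, 1))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    # Replace the central (low-frequency) band of the source amplitude
    h, w = source_img.shape[:2]
    b = int(min(h, w) * beta)
    cy, cx = h // 2, w // 2
    amp_src[cy - b:cy + b, cx - b:cx + b] = amp_tgt[cy - b:cy + b, cx - b:cx + b]

    # Recombine the swapped amplitude with the original phase and invert
    mixed = amp_src * np.exp(1j * pha_src)
    out = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```

Because only the low-frequency amplitude (style) is exchanged while the phase (content) is kept, the augmented image retains the source semantics with target-like appearance.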
7
He Q, He L, Duan H, Sun Q, Zheng R, Guan J, He Y, Huang W, Guan T. Expression site agnostic histopathology image segmentation framework by self supervised domain adaption. Comput Biol Med 2023; 152:106412. PMID: 36516576. DOI: 10.1016/j.compbiomed.2022.106412.
Abstract
MOTIVATION Because the sites of antigen expression differ, the segmentation of immunohistochemical (IHC) histopathology images is challenging due to visual variance. Since H&E images highlight tissue structure and cell distribution more broadly, transferring salient features from H&E images can achieve considerable performance on expression site agnostic IHC image segmentation. METHODS To the best of our knowledge, this is the first work that focuses on domain adaptive segmentation across different expression sites. We propose an expression site agnostic domain adaptive histopathology image semantic segmentation framework (ESASeg). In ESASeg, multi-level feature alignment encodes expression site invariance by learning generic representations of global and multi-scale local features. Moreover, self-supervision enhances domain adaptation to perceive high-level semantics by predicting pseudo-labels. RESULTS We construct a dataset with three IHC stains (Her2 with membrane staining, Ki67 with nucleus staining, GPC3 with cytoplasm staining) covering different expression sites from two diseases (breast and liver cancer). Intensive experiments on tumor region segmentation show that ESASeg performs best across all metrics, and each module achieves notable improvements. CONCLUSION The performance of ESASeg on tumor region segmentation demonstrates the efficiency of the proposed framework, which provides a novel solution for expression site agnostic IHC-related tasks. Moreover, the proposed domain adaptation and self-supervision modules improve feature adaptation and extraction without labels. In addition, ESASeg lays the foundation for joint analysis and information interaction across IHCs with different expression sites.
Affiliation(s)
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Ling He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Hufei Duan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Qiehe Sun
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Runliang Zheng
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Jian Guan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
- Wenting Huang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, 518116, China.
- Tian Guan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China.
8
Wang Z, Zhu X, Li A, Wang Y, Meng G, Wang M. Global and local attentional feature alignment for domain adaptive nuclei detection in histopathology images. Artif Intell Med 2022; 132:102341. DOI: 10.1016/j.artmed.2022.102341.
9
Park Y, Kim M, Ashraf M, Ko YS, Yi MY. MixPatch: A New Method for Training Histopathology Image Classifiers. Diagnostics (Basel) 2022; 12:1493. PMID: 35741303. PMCID: PMC9221905. DOI: 10.3390/diagnostics12061493.
Abstract
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict labels with overconfidence, which becomes a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction uncertainty problem, and to examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, consisting of mixed patches and their predefined ground-truth labels, for every mini-batch. Mixed patches are generated from small clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results obtained using a large histopathological image dataset show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model achieved 97.06% accuracy, an increase of 1.6% to 12.18%, and an expected calibration error of 0.76%, a decrease of 0.6% to 6.3%, over the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments the extant mixed-image methods for medical image analysis, in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
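The mixed-patch construction with proportion-based soft labels can be sketched as below. The function name `mix_patch`, the square grid tiling, and the equal-size clean patches are simplifying assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def mix_patch(patches, labels, num_classes, grid=2):
    """Tile grid x grid clean patches into one mixed patch and build a
    proportion-based soft label: each constituent class contributes its
    area fraction of the mixed image."""
    assert len(patches) == grid * grid, "need exactly grid*grid patches"
    # Assemble the mixed image row by row
    rows = [np.concatenate(patches[r * grid:(r + 1) * grid], axis=1)
            for r in range(grid)]
    mixed = np.concatenate(rows, axis=0)

    # Soft label: class proportions over the constituent patches
    soft = np.zeros(num_classes, dtype=float)
    for y in labels:
        soft[y] += 1.0 / len(labels)
    return mixed, soft
```

Training on such (mixed patch, soft label) pairs alongside ordinary samples penalizes overconfident one-hot predictions on heterogeneous regions, which is the calibration effect the abstract reports.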
Affiliation(s)
- Youngjin Park
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Mujin Kim
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Murtaza Ashraf
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Young Sin Ko
- Pathology Center, Seegene Medical Foundation, Seoul 04805, Korea
- Mun Yong Yi
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
10
Artificial Intelligence-Based Tissue Phenotyping in Colorectal Cancer Histopathology Using Visual and Semantic Features Aggregation. Mathematics 2022. DOI: 10.3390/math10111909.
Abstract
Tissue phenotyping of the tumor microenvironment has a decisive role in digital profiling of intra-tumor heterogeneity, epigenetics, and progression of cancer. Most of the existing methods for tissue phenotyping often rely on time-consuming and error-prone manual procedures. Recently, with the advent of advanced technologies, these procedures have been automated using artificial intelligence techniques. In this paper, a novel deep histology heterogeneous feature aggregation network (HHFA-Net) is proposed based on visual and semantic information fusion for the detection of tissue phenotypes in colorectal cancer (CRC). We adopted and tested various data augmentation techniques to avoid computationally expensive stain normalization procedures and handle limited and imbalanced data problems. Three publicly available datasets are used in the experiments: CRC tissue phenotyping (CRC-TP), CRC histology (CRCH), and colon cancer histology (CCH). The proposed HHFA-Net achieves higher accuracies than the state-of-the-art methods for tissue phenotyping in CRC histopathology images.
11
Kiziloluk S, Sert E. COVID-CCD-Net: COVID-19 and colon cancer diagnosis system with optimized CNN hyperparameters using gradient-based optimizer. Med Biol Eng Comput 2022; 60:1595-1612. PMID: 35396625. PMCID: PMC8993211. DOI: 10.1007/s11517-022-02553-9.
Abstract
Coronavirus disease-2019 (COVID-19) is caused by a new type of coronavirus that turned into a pandemic within a short time. The reverse transcription-polymerase chain reaction (RT-PCR) test is used for the diagnosis of COVID-19 in national healthcare centers. Because the number of PCR test kits is often limited, it is sometimes difficult to diagnose the disease at an early stage. However, X-ray technology is accessible nearly all over the world and succeeds in detecting symptoms of COVID-19. Another disease that greatly affects people's lives is colorectal cancer. Tissue microarray (TMA) is a technological method widely used for its high performance in the analysis of colorectal cancer, and computer-assisted approaches that can classify colorectal cancer in TMA images are also needed. In this respect, the present study proposes a convolutional neural network (CNN) classification approach, called COVID-CCD-Net, with hyperparameters optimized using the gradient-based optimizer (GBO) algorithm. With the proposed approach, COVID-19, normal, and viral pneumonia cases in chest X-ray images can be classified accurately, as can epithelial and stromal regions in epidermal growth factor receptor (EGFR) colon TMAs. AlexNet, DarkNet-19, Inception-v3, MobileNet, ResNet-18, and ShuffleNet architectures were used in COVID-CCD-Net, and their hyperparameters were optimized. Two medical image classification datasets, COVID-19 and Epistroma, were used in the study. The experimental findings demonstrated that the proposed approach significantly increased the classification performance of the non-optimized CNN architectures and displayed very high classification performance even with a very low number of epochs.
Affiliation(s)
- Soner Kiziloluk
- Department of Computer Engineering, Malatya Turgut Özal University, Malatya, Turkey
- Eser Sert
- Department of Computer Engineering, Malatya Turgut Özal University, Malatya, Turkey
12
Jiang H, Li S, Li H. Parallel ‘same’ and ‘valid’ convolutional block and input-collaboration strategy for histopathological image classification. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.108417.
13
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation of introducing domain adaptation techniques to tackle domain heterogeneity issues for medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this energetic research field.
14
Zhao Y, Hu B, Wang Y, Yin X, Jiang Y, Zhu X. Identification of gastric cancer with convolutional neural networks: a systematic review. Multimed Tools Appl 2022; 81:11717-11736. PMID: 35221775. PMCID: PMC8856868. DOI: 10.1007/s11042-022-12258-8.
Abstract
The identification of diseases is inseparable from artificial intelligence. As an important branch of artificial intelligence, convolutional neural networks play an important role in the identification of gastric cancer. We conducted a systematic review to summarize the current applications of convolutional neural networks to gastric cancer identification. Original articles published in the Embase, Cochrane Library, PubMed and Web of Science databases were systematically retrieved according to relevant keywords, and data were extracted from the published papers. A total of 27 articles on the identification of gastric cancer using medical images were retrieved: 19 applied to endoscopic images and 8 to pathological images. Sixteen studies explored the performance of gastric cancer detection, 7 explored classification, 2 reported segmentation and 2 analyzed the delineation of margins. The convolutional neural network structures involved included AlexNet, ResNet, VGG, Inception, DenseNet and Deeplab, among others. The reported accuracy ranged from 77.3% to 98.7%. Systems based on convolutional neural networks have shown good performance in the identification of gastric cancer. Artificial intelligence is expected to provide more accurate information and efficient judgments for doctors diagnosing diseases in clinical work.
Affiliation(s)
- Yuxue Zhao
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Bo Hu
- Department of Thoracic Surgery, Qingdao Municipal Hospital, Qingdao, China
- Ying Wang
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
- Xiaomeng Yin
- Pediatrics Intensive Care Unit, Qingdao Municipal Hospital, Qingdao, China
- Yuanyuan Jiang
- International Medical Services, Qilu Hospital of Shandong University, Jinan, China
- Xiuli Zhu
- School of Nursing, Department of Medicine, Qingdao University, No. 15, Ningde Road, Shinan District, Qingdao, 266073 China
15
Tewary S, Mukhopadhyay S. AutoIHCNet: CNN architecture and decision fusion for automated HER2 scoring. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.108572.
16
Domain generalization on medical imaging classification using episodic training with task augmentation. Comput Biol Med 2021; 141:105144. PMID: 34971982. DOI: 10.1016/j.compbiomed.2021.105144.
Abstract
Medical imaging datasets usually exhibit domain shift due to variations in scanner vendors, imaging protocols, etc. This raises concern about the generalization capacity of machine learning models. Domain generalization (DG), which aims to learn a model from multiple source domains such that it can be directly generalized to unseen test domains, seems particularly promising to the medical imaging community. To address DG, the recent model-agnostic meta-learning (MAML) paradigm has been introduced, which transfers knowledge from previous training tasks to facilitate the learning of novel testing tasks. However, in clinical practice there are usually only a few annotated source domains available, which decreases the capacity of training-task generation and thus increases the risk of overfitting to training tasks in this paradigm. In this paper, we propose a novel DG scheme of episodic training with task augmentation for medical imaging classification. Based on meta-learning, we develop the paradigm of episodic training to construct the knowledge transfer from episodic training-task simulation to the real testing task of DG. Motivated by the limited number of source domains in real-world medical deployment, we consider the unique task-level overfitting problem and propose task augmentation, which enhances variety during training-task generation to alleviate it. With the established learning framework, we further exploit a novel meta-objective to regularize the deep embedding of training domains. To validate the effectiveness of the proposed method, we perform experiments on histopathological images and abdominal CT images.
17
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE Trans Med Imaging 2021; 40:2880-2896. PMID: 33284750. PMCID: PMC8543886. DOI: 10.1109/tmi.2020.3042789.
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
18
Olveres J, González G, Torres F, Moreno-Tagle JC, Carbajal-Degante E, Valencia-Rodríguez A, Méndez-Sánchez N, Escalante-Ramírez B. What is new in computer vision and artificial intelligence in medical image analysis applications. Quant Imaging Med Surg 2021; 11:3830-3853. [PMID: 34341753 DOI: 10.21037/qims-20-1151] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Accepted: 04/20/2021] [Indexed: 12/15/2022]
Abstract
Computer vision and artificial intelligence applications in medicine are becoming increasingly important, especially in the field of image technology. In this paper we cover different artificial intelligence advances that tackle some of the most important medical problems worldwide, in fields such as cardiology, cancer, dermatology, neurodegenerative disorders, respiratory problems, and gastroenterology. We show how both areas have resulted in a large variety of methods that range from enhancement, detection, segmentation and characterization of anatomical structures and lesions to complete systems that automatically identify and classify several diseases in order to aid clinical diagnosis and treatment. Different imaging modalities such as computed tomography, magnetic resonance, radiography, ultrasound, dermoscopy and microscopy offer multiple opportunities to build automatic systems that help medical diagnosis, each taking advantage of its own physical nature. However, these imaging modalities also impose important limitations on the design of automatic image analysis systems for diagnosis aid due to their inherent characteristics, such as signal-to-noise ratio, contrast, and resolution in time, space and wavelength. Finally, we discuss future trends and challenges that computer vision and artificial intelligence must face in the coming years in order to build systems that are able to solve more complex problems and assist medical diagnosis.
Affiliation(s)
- Jimena Olveres
- Centro de Estudios en Computación Avanzada, Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico; Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
- Germán González
- Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
- Fabian Torres
- Centro de Estudios en Computación Avanzada, Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico; Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
- Nahum Méndez-Sánchez
- Unidad de Investigación en Hígado, Fundación Clínica Médica Sur, Mexico City, Mexico; Facultad de Medicina, UNAM, Mexico City, Mexico
- Boris Escalante-Ramírez
- Centro de Estudios en Computación Avanzada, Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico; Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
19
Tewary S, Mukhopadhyay S. HER2 Molecular Marker Scoring Using Transfer Learning and Decision Level Fusion. J Digit Imaging 2021; 34:667-677. [PMID: 33742331 PMCID: PMC8329150 DOI: 10.1007/s10278-021-00442-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2020] [Revised: 01/13/2021] [Accepted: 03/01/2021] [Indexed: 01/28/2023] Open
Abstract
In breast cancer, the immunohistochemical (IHC) marker human epidermal growth factor receptor 2 (HER2) is used for prognostic evaluation. Accurate assessment of the HER2-stained tissue sample is essential in therapeutic decision making for patients. In regular clinical settings, expert pathologists assess the HER2-stained tissue slide under a microscope and score it manually based on prior experience. Manual scoring is time consuming, tedious, and often prone to inter-observer variation among groups of pathologists. With recent advancements in computer vision and deep learning, medical image analysis has received significant attention. A number of deep learning architectures have been proposed for classification of different image groups. These networks are also used for transfer learning to classify other image classes. In the presented study, a number of transfer learning architectures are used for HER2 scoring. Five pre-trained architectures, viz. VGG16, VGG19, ResNet50, MobileNetV2, and NASNetMobile, with the fully connected layers truncated for 3-class classification, have been used for comparative assessment of the networks as well as for scoring of stained tissue sample images based on statistical voting using the mode operator. The HER2 Challenge dataset from Warwick University is used in this study. A total of 2130 image patches were extracted to generate the training dataset from 300 training images corresponding to 30 training cases. The output model is then tested on 800 new test image patches from 100 test images acquired from 10 test cases (different from the training cases). The transfer learning models have shown significant accuracy, with VGG19 performing best on the test images. The accuracy is found to be 93%, which increases to 98% with image-based scoring using the statistical voting mechanism. The output shows a capable quantification pipeline for automated HER2 score generation.
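The image-level scoring by statistical voting over patch predictions that this abstract describes can be sketched as follows; the function name and the class labels are illustrative assumptions, not taken from the study:

```python
from collections import Counter

def image_level_score(patch_scores):
    """Aggregate per-patch HER2 class predictions into one image-level
    score using the statistical mode of the votes."""
    if not patch_scores:
        raise ValueError("no patch predictions to aggregate")
    # most_common(1) yields the modal class; ties fall back to first seen
    return Counter(patch_scores).most_common(1)[0][0]

# Hypothetical patch predictions from one stained tissue image
votes = ["2+", "3+", "2+", "2+", "0", "2+", "3+", "2+"]
print(image_level_score(votes))  # prints: 2+
```

Because every patch votes, the aggregated score can be correct even when a minority of patches are misclassified, which is consistent with the reported jump from 93% patch accuracy to 98% image-level accuracy.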
Affiliation(s)
- Suman Tewary
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, India
- Computational Instrumentation, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
- Sudipta Mukhopadhyay
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India
20
Valkonen M, Hognas G, Bova GS, Ruusuvuori P. Generalized Fixation Invariant Nuclei Detection Through Domain Adaptation Based Deep Learning. IEEE J Biomed Health Inform 2021; 25:1747-1757. [PMID: 33211668 DOI: 10.1109/jbhi.2020.3039414] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Nucleus detection is a fundamental task in histological image analysis and an important tool for many follow-up analyses. It is known that the sample preparation and scanning procedures of histological slides introduce a great amount of variability into the images and pose challenges for automated nucleus detection. Here, we studied the effect of histopathological sample fixation on the accuracy of a deep learning based nuclei detection model trained with hematoxylin and eosin stained images. We experimented with training data that includes three methods of fixation: PAXgene, formalin, and frozen, and studied the detection accuracy of various convolutional neural networks. Our results indicate that the variability introduced during sample preparation affects the generalization of a model and should be considered when building accurate and robust nuclei detection algorithms. Our dataset includes over 67,000 annotated nuclei locations from 16 patients and three different sample fixation types. The dataset provides an excellent basis for building an accurate and robust nuclei detection model, and combined with unsupervised domain adaptation, the workflow allows generalization to images from unseen domains, including different tissues and images from different labs.
21
Damkliang K, Wongsirichot T, Thongsuksai P. TISSUE CLASSIFICATION FOR COLORECTAL CANCER UTILIZING TECHNIQUES OF DEEP LEARNING AND MACHINE LEARNING. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2021. [DOI: 10.4015/s1016237221500228] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Since the introduction of image pattern recognition and computer vision processing, the classification of cancer tissues has been a challenge at pixel level, slide level, and patient level. Conventional machine learning techniques have given way to Deep Learning (DL), a contemporary, state-of-the-art approach to texture classification and localization of cancer tissues. Colorectal Cancer (CRC) is the third-ranked cause of death from cancer worldwide. This paper proposes image-level texture classification of a CRC dataset by deep convolutional neural networks (CNN). Simple DL techniques consisting of transfer learning and fine-tuning were exploited. VGG-16, a Keras pre-trained model with initial weights from ImageNet, was applied. The transfer learning architecture and methods based on VGG-16 are proposed. The training, validation, and testing sets included 5000 images of 150 × 150 pixels. The application set for detection and localization contained 10 large original images of 5000 × 5000 pixels. The model achieved an F1-score and accuracy of 0.96 and 0.99, respectively, and produced a false positive rate of 0.01. AUC-based evaluation was also measured. The model classified the ten large, previously unseen images from the application set, with results represented as false-color maps. The reported results show the satisfactory performance of the model. The simplicity of the architecture, configuration, and implementation also contributes to the outcome of this work.
Affiliation(s)
- Kasikrit Damkliang
- Division of Computational Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
- Thakerng Wongsirichot
- Division of Computational Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
- Paramee Thongsuksai
- Department of Pathology, Faculty of Medicine, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
22
Chen L, Zhao H, Jiang H, Balu N, Geleri DB, Chu B, Watase H, Zhao X, Li R, Xu J, Hatsukami TS, Xu D, Hwang JN, Yuan C. Domain adaptive and fully automated carotid artery atherosclerotic lesion detection using an artificial intelligence approach (LATTE) on 3D MRI. Magn Reson Med 2021; 86:1662-1673. [PMID: 33885165 DOI: 10.1002/mrm.28794] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2021] [Revised: 03/07/2021] [Accepted: 03/18/2021] [Indexed: 01/17/2023]
Abstract
PURPOSE To develop and evaluate a domain adaptive and fully automated review workflow (lesion assessment through tracklet evaluation, LATTE) for assessment of atherosclerotic disease in 3D carotid MR vessel wall imaging (MR VWI). METHODS VWI of 279 subjects with carotid atherosclerosis were used to develop LATTE, mainly convolutional neural network (CNN)-based domain adaptive lesion classification after image quality assessment and artery of interest localization. Heterogeneity in test sets from various sites usually causes inferior CNN performance. With our novel unsupervised domain adaptation (DA), LATTE was designed to accurately classify arteries into normal arteries and early and advanced lesions without additional annotations on new datasets. VWI of 271 subjects from four datasets (eight sites) with slightly different imaging parameters/signal patterns were collected to assess the effectiveness of the DA in LATTE using the area under the receiver operating characteristic curve (AUC) on all lesions and advanced lesions before and after DA. RESULTS LATTE performed well on advanced/all lesion classification, with AUCs of >0.88/0.83, a significant improvement over >0.82/0.80 without DA. CONCLUSIONS LATTE can locate target arteries and distinguish carotid atherosclerotic lesions, with consistently improved performance from DA on new datasets. It may be useful for carotid atherosclerosis detection and assessment at various clinical sites.
Affiliation(s)
- Li Chen
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Huilin Zhao
- Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Hongjian Jiang
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Niranjan Balu
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Baocheng Chu
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Hiroko Watase
- Department of Surgery, University of Washington, Seattle, Washington, USA
- Xihai Zhao
- Department of Biomedical Engineering, Tsinghua University School of Medicine, Beijing, China
- Rui Li
- Department of Biomedical Engineering, Tsinghua University School of Medicine, Beijing, China
- Jianrong Xu
- Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Thomas S Hatsukami
- Department of Surgery, University of Washington, Seattle, Washington, USA
- Dongxiang Xu
- Department of Radiology, University of Washington, Seattle, Washington, USA
- Jenq-Neng Hwang
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Chun Yuan
- Department of Radiology, University of Washington, Seattle, Washington, USA
23
Qi Q, Lin X, Chen C, Xie W, Huang Y, Ding X, Liu X, Yu Y. Curriculum Feature Alignment Domain Adaptation for Epithelium-Stroma Classification in Histopathological Images. IEEE J Biomed Health Inform 2021; 25:1163-1172. [PMID: 32881698 DOI: 10.1109/jbhi.2020.3021558] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In recent years, deep learning methods have received more attention in epithelial-stroma (ES) classification tasks. Traditional deep learning methods assume that the training and test data have the same distribution, an assumption that is seldom satisfied in complex imaging procedures. Unsupervised domain adaptation (UDA) transfers knowledge from a labelled source domain to a completely unlabeled target domain, and is more suitable for ES classification tasks to avoid tedious annotation. However, existing UDA methods for this task ignore the semantic alignment across domains. In this paper, we propose a Curriculum Feature Alignment Network (CFAN) to gradually align discriminative features across domains through selecting effective samples from the target domain and minimizing intra-class differences. Specifically, we developed the Curriculum Transfer Strategy (CTS) and Adaptive Centroid Alignment (ACA) steps to train our model iteratively. We validated the method using three independent public ES datasets, and experimental results demonstrate that our method achieves better performance in ES classification compared with commonly used deep learning methods and existing deep domain adaptation methods.
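The Adaptive Centroid Alignment idea above (minimizing intra-class differences across domains) can be sketched roughly as follows; the loss form and all names are illustrative assumptions, not the paper's implementation, and target labels would in practice be pseudo-labels produced by the curriculum transfer step:

```python
def _centroid(feats):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(len(feats[0]))]

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo):
    """For every class present in both domains, accumulate the squared
    distance between the source-domain centroid and the target-domain
    centroid built from pseudo-labelled target samples."""
    shared = set(src_labels) & set(tgt_pseudo)
    loss = 0.0
    for c in shared:
        cs = _centroid([f for f, y in zip(src_feats, src_labels) if y == c])
        ct = _centroid([f for f, y in zip(tgt_feats, tgt_pseudo) if y == c])
        loss += sum((a - b) ** 2 for a, b in zip(cs, ct))
    return loss / max(len(shared), 1)

# Perfectly aligned class centroids give zero loss
print(centroid_alignment_loss([[0.0, 0.0], [2.0, 2.0]], [0, 0],
                              [[1.0, 1.0]], [0]))  # prints: 0.0
```

Driving this loss towards zero pulls same-class features from the two domains together, which is the semantic-alignment ingredient the abstract says earlier UDA methods ignore.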
24
Cheng J, Liu Y, Huang W, Hong W, Wang L, Zhan X, Han Z, Ni D, Huang K, Zhang J. Computational Image Analysis Identifies Histopathological Image Features Associated With Somatic Mutations and Patient Survival in Gastric Adenocarcinoma. Front Oncol 2021; 11:623382. [PMID: 33869007 PMCID: PMC8045755 DOI: 10.3389/fonc.2021.623382] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 03/15/2021] [Indexed: 12/24/2022] Open
Abstract
Computational analysis of histopathological images can identify sub-visual objective image features that may not be visually distinguishable by human eyes, and hence provides better modeling of disease phenotypes. This study aims to investigate whether specific image features are associated with somatic mutations and patient survival in gastric adenocarcinoma (sample size = 310). An automated image analysis pipeline was developed to extract quantitative morphological features from H&E stained whole-slide images. We found that four frequently somatically mutated genes (TP53, ARID1A, OBSCN, and PIK3CA) were significantly associated with tumor morphological changes. A prognostic model built on the image features significantly stratified patients into low-risk and high-risk groups (log-rank test p-value = 2.6e-4). Multivariable Cox regression showed that the model-predicted risk index was an additional prognostic factor besides tumor grade and stage. Gene ontology enrichment analysis showed that the genes whose expressions most correlated with the contributing features in the prognostic model were enriched in biological processes such as cell cycle and muscle contraction. These results demonstrate that histopathological image features can reflect underlying somatic mutations and identify high-risk patients who may benefit from more precise treatment regimens. Both the image features and the pipeline are highly interpretable to enable translational applications.
Affiliation(s)
- Jun Cheng
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Yuting Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Wei Huang
- Department of Radiation Oncology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, School of Medicine, South China University of Technology, Guangzhou, China
- Wenhui Hong
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Lingling Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Xiaohui Zhan
- School of Basic Medicine, Chongqing Medical University, Chongqing, China
- Zhi Han
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN, United States
- Jie Zhang
- Department of Medical and Molecular Genetics, Indiana University School of Medicine, Indianapolis, IN, United States
25
Automatic classification of breast cancer histopathological images based on deep feature fusion and enhanced routing. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102341] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
26
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
27
Liu D, Zhang D, Song Y, Zhang F, O'Donnell L, Huang H, Chen M, Cai W. PDAM: A Panoptic-Level Feature Alignment Framework for Unsupervised Domain Adaptive Instance Segmentation in Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:154-165. [PMID: 32915732 DOI: 10.1109/tmi.2020.3023466] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
In this work, we present an unsupervised domain adaptation (UDA) method, named Panoptic Domain Adaptive Mask R-CNN (PDAM), for unsupervised instance segmentation in microscopy images. Since there is currently a lack of methods designed specifically for UDA instance segmentation, we first design a Domain Adaptive Mask R-CNN (DAM) as the baseline, with cross-domain feature alignment at the image and instance levels. In addition to the image- and instance-level domain discrepancy, there also exists domain bias at the semantic level in the contextual information. We therefore next design a semantic segmentation branch with a domain discriminator to bridge the domain gap at the contextual level. By integrating the semantic- and instance-level feature adaptation, our method aligns the cross-domain features at the panoptic level. Third, we propose a task re-weighting mechanism to assign trade-off weights for the detection and segmentation loss functions. The task re-weighting mechanism addresses the domain bias issue by alleviating the task learning for some iterations when the features contain source-specific factors. Furthermore, we design a feature similarity maximization mechanism to facilitate instance-level feature adaptation from the perspective of representational learning. Different from typical feature alignment methods, our feature similarity maximization mechanism separates the domain-invariant and domain-specific features by enlarging their feature distribution dependency. Experimental results on three UDA instance segmentation scenarios with five datasets demonstrate the effectiveness of our proposed PDAM method, which outperforms state-of-the-art UDA methods by a large margin.
28
Improving Computer-Aided Cervical Cells Classification Using Transfer Learning Based Snapshot Ensemble. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10207292] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Cervical cells classification is a crucial component of computer-aided cervical cancer detection. Fine-grained classification is of great clinical importance when guiding clinical decisions on diagnosis and treatment, and it remains very challenging. Recently, convolutional neural networks (CNN) have provided a novel way to classify cervical cells using automatically learned features. Although an ensemble of CNN models can increase model diversity and potentially boost classification accuracy, it is a multi-step process, as several CNN models need to be trained separately and then selected for ensembling. On the other hand, due to the small number of training samples, the advantages of powerful CNN models may not be effectively leveraged. To address this challenging issue, this paper proposes a transfer learning based snapshot ensemble (TLSE) method that integrates snapshot ensemble learning with transfer learning in a unified and coordinated way. Snapshot ensembling provides ensemble benefits within a single model training procedure, while transfer learning addresses the small-sample problem in cervical cells classification. Furthermore, a new training strategy is proposed to guarantee an effective combination. The TLSE method is evaluated on a pap-smear dataset called the Herlev dataset and is shown to outperform existing methods. It demonstrates that TLSE can improve accuracy in an ensemble manner, with only one single training process, for small-sample, fine-grained cervical cells classification.
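Snapshot ensembling, as described above, relies on a cyclic learning-rate schedule so that a single training run visits several local minima, saving a snapshot at the end of each cycle. A minimal sketch of the cosine-annealing cycle commonly used for this (parameter names are illustrative, not from the paper):

```python
import math

def snapshot_lr(step, total_steps, n_snapshots, lr_max):
    """Cyclic cosine-annealing schedule: the learning rate restarts at
    lr_max at the start of each cycle and decays towards zero, at which
    point a model snapshot is saved for the ensemble."""
    cycle_len = math.ceil(total_steps / n_snapshots)
    t = step % cycle_len  # position inside the current cycle
    return lr_max / 2 * (math.cos(math.pi * t / cycle_len) + 1)

# Full rate at steps 0, 100, 200 of a 300-step, 3-snapshot run
print([round(snapshot_lr(s, 300, 3, 0.1), 3) for s in (0, 50, 99, 100)])
```

Each sharp restart kicks the model out of the minimum it just converged to, so the saved snapshots are diverse enough to ensemble despite coming from one training run.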
29
Javed S, Mahmood A, Werghi N, Benes K, Rajpoot N. Multiplex Cellular Communities in Multi-Gigapixel Colorectal Cancer Histology Images for Tissue Phenotyping. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:9204-9219. [PMID: 32966218 DOI: 10.1109/tip.2020.3023795] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In computational pathology, automated tissue phenotyping in cancer histology images is a fundamental tool for profiling tumor microenvironments. Current tissue phenotyping methods use features derived from image patches which may not carry biological significance. In this work, we propose a novel multiplex cellular community-based algorithm for tissue phenotyping integrating cell-level features within a graph-based hierarchical framework. We demonstrate that such integration offers better performance compared to prior deep learning and texture-based methods as well as to cellular community based methods using uniplex networks. To this end, we construct cell-level graphs using texture, alpha diversity and multi-resolution deep features. Using these graphs, we compute cellular connectivity features which are then employed for the construction of a patch-level multiplex network. Over this network, we compute multiplex cellular communities using a novel objective function. The proposed objective function computes a low-dimensional subspace from each cellular network and subsequently seeks a common low-dimensional subspace using the Grassmann manifold. We evaluate our proposed algorithm on three publicly available datasets for tissue phenotyping, demonstrating a significant improvement over existing state-of-the-art methods.
30
Feng M, Deng Y, Yang L, Jing Q, Zhang Z, Xu L, Wei X, Zhou Y, Wu D, Xiang F, Wang Y, Bao J, Bu H. Automated quantitative analysis of Ki-67 staining and HE images recognition and registration based on whole tissue sections in breast carcinoma. Diagn Pathol 2020; 15:65. [PMID: 32471471 PMCID: PMC7257511 DOI: 10.1186/s13000-020-00957-5] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 04/08/2020] [Indexed: 02/08/2023] Open
Abstract
Background The scoring of Ki-67 is highly relevant for the diagnosis, classification, prognosis, and treatment of breast invasive ductal carcinoma (IDC). The traditional scoring method, Ki-67 staining followed by manual counting, is time consuming and subject to inter-/intra-observer variability, which may limit its clinical value. Although more and more algorithms and individual platforms have been developed for the assessment of Ki-67 stained images to improve accuracy, most of them lack accurate registration of immunohistochemical (IHC) images and their matched hematoxylin-eosin (HE) images, or do not accurately label each positive and negative cell with Ki-67 staining based on whole tissue sections (WTS). In view of this, we introduce an accurate image registration method and automatic identification and counting software for Ki-67 based on WTS by deep learning. Methods We marked 1017 breast IDC whole-slide images (WSI) and established a research workflow based on the (i) identification of the IDC area, (ii) registration of HE and IHC slides from the same anatomical region, and (iii) counting of positive Ki-67 staining. Results The accuracy, sensitivity, and specificity of identifying breast IDC regions were 89.44%, 85.05%, and 95.23%, respectively, and the contiguous HE and Ki-67 stained slides were perfectly registered. We counted and labelled each cell of 10 Ki-67 slides as the standard for testing on WTS; the accuracy of the automatically calculated Ki-67 positive rate in the detected IDC was 90.2%. In the human-machine competition of Ki-67 scoring, the average time per slide was 2.3 min with 1 GPU using this software, and the accuracy was 99.4%, higher than the results provided by over 90% of the participating doctors.
Conclusions Our study demonstrates the enormous potential of automated quantitative analysis of Ki-67 staining and of HE image recognition and registration based on WTS; automated Ki-67 scoring can thus successfully address issues of consistency, reproducibility, and accuracy. We will provide the labelled images as an open, free platform for researchers to assess the performance of computer algorithms for automated Ki-67 scoring on IHC stained slides.
Affiliation(s)
- Min Feng
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China; Department of Pathology, West China Second University Hospital, Sichuan University & Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, 610041, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Yang Deng
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Libo Yang
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Qiuyang Jing
- Department of Pathology, West China Second University Hospital, Sichuan University & Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, 610041, China
- Zhang Zhang
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Lian Xu
- Department of Pathology, West China Second University Hospital, Sichuan University & Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, 610041, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Xiaoxia Wei
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China; Department of Pathology, Chengfei Hospital, Chengdu, China
- Yanyan Zhou
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Diwei Wu
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Fei Xiang
- Chengdu Knowledge Vision Science and Technology Co., Ltd, Chengdu, China
- Yizhe Wang
- Chengdu Knowledge Vision Science and Technology Co., Ltd, Chengdu, China
- Ji Bao
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Hong Bu
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
31
Javed S, Mahmood A, Fraz MM, Koohbanani NA, Benes K, Tsang YW, Hewitt K, Epstein D, Snead D, Rajpoot N. Cellular community detection for tissue phenotyping in colorectal cancer histology images. Med Image Anal 2020; 63:101696. [PMID: 32330851 DOI: 10.1016/j.media.2020.101696] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 02/18/2020] [Accepted: 04/02/2020] [Indexed: 02/01/2023]
Abstract
Classification of the various tissue types in cancer histology images based on their cellular composition is an important step towards the development of computational pathology tools for systematic digital profiling of the spatial tumor microenvironment. Most existing methods for tissue phenotyping are limited to classifying tumor versus stroma and require a large amount of annotated histology images, which are often not available. In the current work, we pose the problem of identifying distinct tissue phenotypes as finding communities in cellular graphs or networks. First, we train a deep neural network for cell detection and classification into five distinct cellular components. Considering the detected nuclei as nodes, potential cell-cell connections are assigned using Delaunay triangulation, resulting in a cell-level graph. Based on this cell graph, a feature vector capturing potential connections between different types of cells is computed. These feature vectors are used to construct a patch-level graph based on chi-square distance. We map patch-level nodes to a geometric space by representing each node as a vector of geodesic distances from the other nodes in the network, and iteratively drift the patch nodes along positive density gradients towards maximum-density regions. The proposed algorithm is evaluated on a publicly available dataset and on another new large-scale dataset consisting of 280K patches of seven tissue phenotypes. The estimated communities have significant biological meaning, as verified by expert pathologists. A comparison with current state-of-the-art methods reveals significant performance improvement in tissue phenotyping.
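The cell-graph construction step described above (detected nuclei as nodes, Delaunay triangulation supplying the candidate cell-cell connections) can be sketched as follows. This is a minimal illustration, not the authors' code; `build_cell_graph` and the toy nucleus coordinates are hypothetical names chosen for the example.

```python
import numpy as np
from scipy.spatial import Delaunay

def build_cell_graph(nuclei_xy):
    """Connect detected nuclei (N x 2 coordinates) by Delaunay triangulation,
    returning an undirected edge set {(i, j), ...} with i < j."""
    tri = Delaunay(np.asarray(nuclei_xy, dtype=float))
    edges = set()
    for simplex in tri.simplices:          # each simplex is a triangle (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = sorted((int(simplex[a]), int(simplex[b])))
                edges.add((i, j))
    return edges

# Toy example: three nuclei forming a triangle plus one interior nucleus.
nuclei = [(0, 0), (4, 0), (2, 3), (2, 1)]
graph_edges = build_cell_graph(nuclei)     # 3 hull edges + 3 edges to node 3
```

In the paper this graph is then summarised per patch by counting which cell types the edges connect; the sketch stops at the graph itself.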
Affiliation(s)
- Sajid Javed
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Khalifa University Center for Autonomous Robotic Systems (KUCARS), Abu Dhabi, P.O. Box 127788, UAE
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan
- Muhammad Moazam Fraz
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; National University of Science and Technology (NUST), Islamabad, Pakistan
- Ksenija Benes
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Yee-Wah Tsang
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Katherine Hewitt
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- David Epstein
- Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
- David Snead
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, UK
32
Eminaga O, Eminaga N, Semjonow A, Breil B. Diagnostic Classification of Cystoscopic Images Using Deep Convolutional Neural Networks. JCO Clin Cancer Inform 2019; 2:1-8. [PMID: 30652604] [DOI: 10.1200/cci.17.00126]
Abstract
PURPOSE The recognition of cystoscopic findings remains challenging for young colleagues and depends on the examiner's skills. Computer-aided diagnosis tools using feature extraction and deep learning show promise as instruments for diagnostic classification. MATERIALS AND METHODS Our study considered 479 patient cases representing 44 urologic findings. Image color was linearly normalized and equalized by applying contrast-limited adaptive histogram equalization. Because these findings can be viewed via cystoscopy from every possible angle and side, we generated images rotated in 10-degree steps and flipped them vertically or horizontally, resulting in 18,681 images. After image preprocessing, we developed deep convolutional neural network (CNN) models (ResNet50, VGG-19, VGG-16, InceptionV3, and Xception) and evaluated them using F1 scores. Furthermore, we proposed two CNN concepts: 90%-previous-layer filter size and harmonic-series filter size. A training set (60%), a validation set (10%), and a test set (30%) were randomly generated from the study data set. All models were trained on the training set, validated on the validation set, and evaluated on the test set. RESULTS The Xception-based model achieved the highest F1 score (99.52%), followed by models based on ResNet50 (99.48%) and the harmonic-series concept (99.45%). All images with cancer lesions were correctly identified by these models. Among the images misclassified by the best-performing model, 7.86% of images showing bladder stones with an indwelling catheter and 1.43% of images showing bladder diverticulum were falsely classified. CONCLUSION The results of this study show the potential of deep learning for the diagnostic classification of cystoscopic images. Future work will focus on integrating artificial intelligence-aided cystoscopy into clinical routines and possibly expanding to other clinical endoscopy applications.
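The augmentation scheme above (rotations in 10-degree steps plus vertical and horizontal flips) can be sketched as below. This is an illustrative approximation only: the paper's full pipeline also includes color normalization and CLAHE, which are omitted here, and `augment` is a hypothetical name.

```python
import numpy as np
from scipy import ndimage

def augment(image):
    """Generate rotated variants in 10-degree steps (0..350) plus a
    horizontal and a vertical flip of the original image."""
    variants = [ndimage.rotate(image, angle, reshape=False, mode="nearest")
                for angle in range(0, 360, 10)]          # 36 rotations
    variants.append(np.fliplr(image))                    # horizontal flip
    variants.append(np.flipud(image))                    # vertical flip
    return variants

toy = np.arange(64, dtype=float).reshape(8, 8)
augmented = augment(toy)                                 # 38 variants
```

With `reshape=False` every rotated variant keeps the original image shape, which is what a fixed-input-size CNN requires.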
Affiliation(s)
- Okyaz Eminaga, Stanford Medical School, Stanford, CA; University Hospital of Cologne, Cologne, Germany; Nurettin Eminaga, St Mauritius Therapy Clinic, Meerbusch; Axel Semjonow, University Hospital Muenster; and Bernhard Breil, Niederrhein University of Applied Sciences, Krefeld, Germany
33
Lafarge MW, Pluim JPW, Eppenhof KAJ, Veta M. Learning Domain-Invariant Representations of Histological Images. Front Med (Lausanne) 2019; 6:162. [PMID: 31380377] [PMCID: PMC6646468] [DOI: 10.3389/fmed.2019.00162]
Abstract
Histological images present high appearance variability due to inconsistent latent parameters related to the preparation and scanning of histological slides, as well as the inherent biological variability of tissues. Machine-learning models are trained with images from a limited set of domains and are expected to generalize to images from unseen domains. Methodological design choices have to be made to yield domain invariance and proper generalization. In digital pathology, standard approaches focus either on ad-hoc normalization of the latent parameters based on prior knowledge, such as staining normalization, or aim at anticipating new variations of these parameters via data augmentation. Since every histological image originates from a unique data distribution, we propose to consider every histological slide of the training data as a domain and investigate the alternative approach of domain-adversarial training to learn features that are invariant to this available domain information. We carried out a comparative analysis with staining normalization and data augmentation on two different tasks: generalization to images acquired in unseen pathology labs for mitosis detection, and generalization to unseen organs for nuclei segmentation. We report that the utility of each method depends on the type of task and the type of data variability present at training and test time. The proposed framework for domain-adversarial training improves generalization performance on top of conventional methods.
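Domain-adversarial training of the kind described above is commonly implemented with a gradient reversal layer (Ganin & Lempitsky's DANN): the layer is the identity on the forward pass, but on the backward pass it negates (and scales) the gradient coming from the domain classifier, so the shared feature extractor is pushed to make features uninformative about the domain. A conceptual numpy sketch of just that layer, with manual "forward" and "backward" functions and an illustrative `LAMBDA` value (not taken from the paper):

```python
import numpy as np

LAMBDA = 1.0  # strength of the adversarial signal (hypothetical value)

def grl_forward(features):
    """Identity on the forward pass: features flow unchanged into the
    domain classifier."""
    return features

def grl_backward(grad_from_domain_head, lam=LAMBDA):
    """On the backward pass the gradient is negated and scaled, so the
    feature extractor is updated to *confuse* the domain classifier
    instead of helping it."""
    return -lam * np.asarray(grad_from_domain_head, dtype=float)

g = np.array([0.5, -2.0, 1.0])   # toy gradient from the domain head
reversed_g = grl_backward(g)     # sign-flipped gradient seen by the features
```

In a real framework this sign flip lives inside a custom autograd op; the sketch isolates the arithmetic.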
Affiliation(s)
- Maxime W. Lafarge
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
34
Cheplygina V, de Bruijne M, Pluim JPW. Not-so-supervised: A survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. Med Image Anal 2019; 54:280-296. [PMID: 30959445] [DOI: 10.1016/j.media.2019.03.009]
Abstract
Machine learning (ML) algorithms have made a tremendous impact in the field of medical imaging. While medical imaging datasets have been growing in size, a frequently mentioned challenge for supervised ML algorithms is the lack of annotated data. As a result, various methods that can learn with less or other types of supervision have been proposed. We give an overview of semi-supervised, multiple-instance, and transfer learning in medical imaging, in both diagnosis and segmentation tasks. We also discuss connections between these learning scenarios and opportunities for future research. A dataset with the details of the surveyed papers is available via https://figshare.com/articles/Database_of_surveyed_literature_in_Not-so-supervised_a_survey_of_semi-supervised_multi-instance_and_transfer_learning_in_medical_image_analysis_/7479416.
Affiliation(s)
- Veronika Cheplygina
- Medical Image Analysis, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Marleen de Bruijne
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, Rotterdam, the Netherlands; The Image Section, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Josien P W Pluim
- Medical Image Analysis, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands; Image Sciences Institute, University Medical Center Utrecht, Utrecht, the Netherlands
35
Integrating segmentation with deep learning for enhanced classification of epithelial and stromal tissues in H&E images. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2017.09.015]
36
Qi Q, Li Y, Wang J, Zheng H, Huang Y, Ding X, Rohde GK. Label-Efficient Breast Cancer Histopathological Image Classification. IEEE J Biomed Health Inform 2018; 23:2108-2116. [PMID: 30530374] [DOI: 10.1109/jbhi.2018.2885134]
Abstract
The automatic classification of breast cancer histopathological images has great significance in computer-aided diagnosis. Recently, deep learning via neural networks has enabled pattern detection and prediction using large labeled datasets, whereas collecting and annotating sufficient histological data with professional pathologists is time consuming, tedious, and extremely expensive. In this paper, a deep active learning framework is designed and implemented for the classification of breast cancer histopathological images, with the goal of maximizing learning accuracy from very limited labeling. The method involves manual annotation of the most valuable unlabeled samples, which are then integrated into the training set; the model is iteratively updated as the training set grows. Two selection strategies are discussed for the proposed deep active learning framework: an entropy-based strategy and a confidence-boosting strategy. The proposed method has been validated on a publicly available breast cancer histopathological image dataset, in which each image patch is classified as benign or malignant. The experimental results demonstrate that, compared with random selection, the proposed framework can reduce annotation costs by up to 66.67%, with higher accuracy and less expensive annotation than a standard query strategy.
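The entropy-based selection strategy mentioned above can be sketched in a few lines: rank the unlabeled pool by the entropy of the model's predicted class probabilities and send the most uncertain patches to the annotator. A minimal illustration (the function name and toy predictions are assumptions, not the authors' code):

```python
import numpy as np

def select_by_entropy(probs, k):
    """Rank unlabeled samples by predictive entropy (higher = more
    uncertain) and return the indices of the k most informative ones."""
    probs = np.asarray(probs, dtype=float)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[::-1][:k]   # descending entropy, top k

# Toy softmax outputs for 3 patches (benign vs malignant):
preds = [[0.99, 0.01],   # confident -> low entropy
         [0.50, 0.50],   # maximally uncertain -> highest entropy
         [0.80, 0.20]]
picked = select_by_entropy(preds, k=1)     # the 50/50 patch is selected
```

The selected patches would then be labeled and appended to the training set before the next fine-tuning round.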
37
Iakovidis DK, Georgakopoulos SV, Vasilakakis M, Koulaouzidis A, Plagianakos VP. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE Trans Med Imaging 2018; 37:2196-2210. [PMID: 29994763] [DOI: 10.1109/tmi.2018.2837002]
Abstract
This paper proposes a novel methodology for automatic detection and localization of gastrointestinal (GI) anomalies in endoscopic video frame sequences. Training is performed with weakly annotated images, using only image-level semantic labels instead of detailed pixel-level annotations, which makes it a cost-effective approach for the analysis of large video-endoscopy repositories. Other advantages of the proposed methodology include its capability to suggest possible locations of GI anomalies within the video frames, and its generality, in the sense that abnormal frame detection is based on automatically derived image features. It is implemented in three phases: 1) video frames are classified as abnormal or normal using a weakly supervised convolutional neural network (WCNN) architecture; 2) salient points are detected from deeper WCNN layers using a deep saliency detection algorithm; and 3) GI anomalies are localized using an iterative cluster unification (ICU) algorithm. ICU is based on a pointwise cross-feature-map (PCFM) descriptor extracted locally from the detected salient points using information derived from the WCNN. Results from extensive experimentation on publicly available collections of gastrointestinal endoscopy video frames, covering a variety of GI anomalies, are presented. Both the anomaly detection and the localization performance, in terms of the area under the receiver operating characteristic curve (AUC), exceeded 80%. The highest AUC for anomaly detection was obtained on conventional gastroscopy images, reaching 96%, and the highest AUC for anomaly localization was obtained on wireless capsule endoscopy images, reaching 88%.
38
Van Eycke YR, Balsat C, Verset L, Debeir O, Salmon I, Decaestecker C. Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: A deep learning approach. Med Image Anal 2018; 49:35-45. [PMID: 30081241] [DOI: 10.1016/j.media.2018.07.004]
Abstract
In this paper, we propose a method for automatically annotating slide images from colorectal tissue samples. Our objective is to segment glandular epithelium in histological images from tissue slides submitted to different staining techniques, including the usual haematoxylin-eosin (H&E) as well as immunohistochemistry (IHC). The proposed method makes use of deep learning and is based on a new convolutional network architecture. Our method achieves better performance than the state of the art on the H&E images of the GlaS challenge contest, while using only the haematoxylin colour channel, extracted by colour deconvolution from the RGB images, in order to extend its applicability to IHC. The network only needs to be fine-tuned on a small number of additional examples to be accurate on a new IHC dataset. Our approach also includes a new method of data augmentation to achieve good generalisation under different experimental conditions and with different IHC markers. We show that our methodology enables automation of the compartmentalisation of IHC biomarker analysis, with results concurring highly with manual annotations.
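Extracting the haematoxylin channel by colour deconvolution, as the abstract describes, is usually done with the Ruifrok & Johnston scheme: convert RGB to optical density and project onto the inverse of a stain matrix. The sketch below uses the standard published stain vectors, which may differ from the exact values in the paper; all function names are illustrative.

```python
import numpy as np

# Standard stain matrix of Ruifrok & Johnston: each row is the normalized
# optical-density direction of one stain (haematoxylin, eosin, DAB).
STAINS = np.array([[0.65, 0.70, 0.29],
                   [0.07, 0.99, 0.11],
                   [0.27, 0.57, 0.78]])
STAINS /= np.linalg.norm(STAINS, axis=1, keepdims=True)
UNMIX = np.linalg.inv(STAINS)   # maps optical density -> stain concentrations

def haematoxylin_channel(rgb):
    """Colour deconvolution: convert RGB pixels (N x 3, values 0-255) to
    optical density, unmix into stain space, and return the haematoxylin
    concentration per pixel."""
    od = -np.log10((np.asarray(rgb, dtype=float) + 1.0) / 256.0)
    return (od @ UNMIX)[:, 0]

# A synthetic pixel carrying exactly one unit of haematoxylin:
pure_h_rgb = 256.0 * 10.0 ** (-STAINS[0]) - 1.0
h = haematoxylin_channel([pure_h_rgb])   # recovers the unit concentration
```

A pure-white pixel has zero optical density and therefore a zero haematoxylin response, which is a convenient sanity check for the transform.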
Affiliation(s)
- Yves-Rémi Van Eycke
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland 8, 6041 Gosselies, Belgium; Laboratories of Image, Signal Processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium
- Cédric Balsat
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland 8, 6041 Gosselies, Belgium
- Laurine Verset
- Department of Pathology, Erasme Hospital, Université Libre de Bruxelles (ULB), Route de Lennik 808, Brussels 1070, Belgium
- Olivier Debeir
- Laboratories of Image, Signal Processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium; MIP, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland 8, 6041 Gosselies, Belgium
- Isabelle Salmon
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland 8, 6041 Gosselies, Belgium; Department of Pathology, Erasme Hospital, Université Libre de Bruxelles (ULB), Route de Lennik 808, Brussels 1070, Belgium
- Christine Decaestecker
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland 8, 6041 Gosselies, Belgium; Laboratories of Image, Signal Processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium