1
Guo Z, Tan Z, Feng J, Zhou J. 3D Vascular Segmentation Supervised by 2D Annotation of Maximum Intensity Projection. IEEE Trans Med Imaging 2024; 43:2241-2253. PMID: 38319757. DOI: 10.1109/tmi.2024.3362847.
Abstract
Vascular structure segmentation plays a crucial role in medical analysis and clinical applications. The practical adoption of fully supervised segmentation models is impeded by the intricacy and time-consuming nature of annotating vessels in 3D space, which has spurred the exploration of weakly-supervised approaches that reduce reliance on expensive segmentation annotations. However, existing weakly-supervised methods used for organ segmentation, which rely on points, bounding boxes, or scribbles, perform poorly on sparse vascular structures. To alleviate this issue, we employ maximum intensity projection (MIP) to reduce the 3D volume to a 2D image for efficient annotation, and the 2D labels are used to supervise the training of the 3D vessel segmentation model. We first generate pseudo-labels for 3D blood vessels from the annotations of the 2D projections. Then, taking into account how the 2D labels are acquired, we introduce a weakly-supervised network that fuses 2D-3D deep features via MIP to further improve segmentation performance. Furthermore, we integrate confidence learning and uncertainty estimation to refine the generated pseudo-labels, followed by fine-tuning of the segmentation network. Our method is validated on five datasets (covering cerebral vessels, the aorta, and coronary arteries), demonstrating highly competitive segmentation performance and the potential to significantly reduce the time and effort required for vessel annotation. Our code is available at: https://github.com/gzq17/Weakly-Supervised-by-MIP.
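The MIP step described in this entry's abstract is, computationally, just a per-ray maximum along one axis of the volume. A minimal NumPy sketch of that projection (illustrative only, not the authors' released code):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: collapse one axis of a 3D
    volume by keeping the brightest voxel along each projection ray."""
    return volume.max(axis=axis)

# Toy 3D volume with a single bright "vessel" voxel.
vol = np.zeros((4, 5, 5))
vol[2, 1, 3] = 7.0
proj = mip(vol, axis=0)   # 2D image of shape (5, 5)
```

Annotating `proj` in 2D is far cheaper than tracing the vessel through `vol` slice by slice; the bright voxel at (2, 1, 3) survives the projection at pixel (1, 3), which is what allows 2D labels to supervise 3D structure.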
2
Lin Y, Wang Z, Zhang D, Cheng KT, Chen H. BoNuS: Boundary Mining for Nuclei Segmentation With Partial Point Labels. IEEE Trans Med Imaging 2024; 43:2137-2147. PMID: 38231818. DOI: 10.1109/tmi.2024.3355068.
Abstract
Nuclei segmentation is a fundamental prerequisite in the digital pathology workflow. Automated nuclei segmentation enables quantitative analysis of the abundant and highly variable nuclei morphometry in histopathology images. However, manual annotation of tens of thousands of nuclei is tedious and time-consuming, requiring a significant amount of human effort and domain-specific expertise. To alleviate this problem, we propose a weakly-supervised nuclei segmentation method that requires only partial point labels of nuclei. Specifically, we propose a novel boundary mining framework, named BoNuS, which simultaneously learns nuclei interior and boundary information from the point labels. To this end, we propose a novel boundary mining loss, which guides the model to learn boundary information by exploring pairwise pixel affinity in a multiple-instance learning manner. We then consider a more challenging setting, i.e., partial point labels, for which we propose a nuclei detection module with curriculum learning that detects the missing nuclei using prior morphological knowledge. The proposed method is validated on three public datasets: MoNuSeg, CPM, and CoNIC. Experimental results demonstrate the superior performance of our method over state-of-the-art weakly-supervised nuclei segmentation methods. Code: https://github.com/hust-linyi/bonus.
3
Yao J, Han L, Guo G, Zheng Z, Cong R, Huang X, Ding J, Yang K, Zhang D, Han J. Position-based anchor optimization for point supervised dense nuclei detection. Neural Netw 2024; 171:159-170. PMID: 38091760. DOI: 10.1016/j.neunet.2023.12.006.
Abstract
Nuclei detection is one of the most fundamental and challenging problems in histopathological image analysis; localizing nuclei supports effective computer-aided cancer diagnosis, treatment decisions, and prognosis. Fully-supervised nuclei detectors require a large number of nuclei annotations on high-resolution digital images, which is time-consuming and requires annotators with professional knowledge. In recent years, weakly-supervised learning has attracted significant attention for reducing the labeling burden. However, detecting dense nuclei with complex crowded distributions and diverse appearances remains a challenge. To solve this problem, we propose a novel point-supervised dense nuclei detection framework that introduces position-based anchor optimization to complement morphology-based pseudo-label supervision. Specifically, we first generate cellular-level pseudo labels (CPL) for the detection head via a morphology-based mechanism, which helps to build a baseline point-supervised detection network. Then, considering the crowded distribution of dense nuclei, we propose Position-based Anchor-quality Estimation (PAE), which utilizes the positional deviation between an anchor and its corresponding point label to suppress low-quality detections far from each nucleus. Finally, to better handle the diverse appearances of nuclei, an Adaptive Anchor Selector (AAS) operation is proposed to automatically select positive and negative anchors according to the morphological and positional statistics of nuclei. We conduct comprehensive experiments on two widely used benchmarks, MO and Lizard, using ResNet50 and PVTv2 backbones. The results demonstrate that the proposed approach outperforms other state-of-the-art methods. In particular, in dense nuclei scenarios, our method achieves 95.1% of the performance of the fully-supervised approach. The code is available at https://github.com/NucleiDet/DenseNucleiDet.
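The intuition behind PAE, suppressing anchors by their distance to the matched point label, can be illustrated with a toy quality score. This is a hypothetical Gaussian-decay stand-in, not the estimator from the paper; `sigma` is an assumed scale parameter:

```python
import numpy as np

def position_quality(anchors, matched_points, sigma=8.0):
    """Toy position-based anchor quality: the farther an anchor
    center lies from its matched point annotation, the smaller its
    weight, suppressing low-quality detections in crowded regions.
    (Hypothetical Gaussian decay, not the paper's exact estimator.)"""
    d = np.linalg.norm(np.asarray(anchors, float)
                       - np.asarray(matched_points, float), axis=1)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

# One anchor sits on its nucleus point, the other is 30 px away.
q = position_quality([(10, 10), (40, 10)], [(10, 10), (10, 10)])
```

Here `q[0]` is 1.0 and `q[1]` is close to zero, so the distant detection is effectively suppressed when these weights scale the training loss or detection scores.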
Affiliation(s)
- Jieru Yao
- Brain and Artificial Intelligence Lab, School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China
- Longfei Han
- School of Computer Science, Beijing Technology and Business University, Beijing, 100048, China; Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, China
- Guangyu Guo
- Brain and Artificial Intelligence Lab, School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China
- Zhaohui Zheng
- Department of Clinical Immunology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shaanxi, 710032, China
- Runmin Cong
- School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250100, China
- Xiankai Huang
- Beijing Technology and Business University, Beijing, 100048, China
- Jin Ding
- Department of Clinical Immunology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shaanxi, 710032, China
- Kaihui Yang
- School of Software, Nanchang University, Nanchang, Jiangxi, 330031, China
- Dingwen Zhang
- Brain and Artificial Intelligence Lab, School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710072, China; Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, China; Department of Clinical Immunology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shaanxi, 710032, China
- Junwei Han
- Hefei Comprehensive National Science Center, Hefei, Anhui, 230088, China
4
Chelebian E, Avenel C, Ciompi F, Wählby C. DEPICTER: Deep representation clustering for histology annotation. Comput Biol Med 2024; 170:108026. PMID: 38308865. DOI: 10.1016/j.compbiomed.2024.108026.
Abstract
Automatic segmentation of histopathology whole-slide images (WSI) usually involves supervised training of deep learning models with pixel-level labels to classify each pixel of the WSI into tissue regions such as benign or cancerous. However, fully supervised segmentation requires large-scale data manually annotated by experts, which is expensive and time-consuming to obtain. Non-fully supervised methods, ranging from semi-supervised to unsupervised, have been proposed to address this issue and have been successful in WSI segmentation tasks. However, these methods have mainly focused on algorithmic advances rather than on practical tools that pathologists or researchers could use in real-world scenarios. In contrast, we present DEPICTER (Deep rEPresentatIon ClusTERing), an interactive segmentation tool for histopathology annotation that produces a patch-wise dense segmentation map at the WSI level. DEPICTER leverages self- and semi-supervised learning to let the user participate in the segmentation, producing reliable results while reducing the workload. It consists of three steps: first, a pretrained model computes embeddings from image patches. Next, the user selects a number of benign and cancerous patches from the multi-resolution image. Finally, guided by the deep representations, labels are propagated using our novel seeded iterative clustering method or by interacting directly with the embedding space via feature-space gating. We report real-time interaction results with three pathologists and evaluate performance on three public cancer classification benchmarks through simulations. The code and demos of DEPICTER are publicly available at https://github.com/eduardchelebian/depicter.
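As a rough illustration of the label-propagation step this abstract describes, the sketch below runs a seeded k-means-style loop in which user-annotated patches pin their clusters. It is a simplified stand-in for DEPICTER's seeded iterative clustering, with synthetic embeddings and seed choices:

```python
import numpy as np

def propagate_labels(emb, seed_idx, seed_lab, n_iter=10):
    """Seeded clustering sketch: centroids start from user-labelled
    patch embeddings; assignment and centroid updates alternate while
    the seeds stay fixed to their user-given labels.
    Assumes every class keeps at least one member per iteration."""
    seed_idx = np.asarray(seed_idx)
    seed_lab = np.asarray(seed_lab)
    classes = np.unique(seed_lab)
    cent = np.stack([emb[seed_idx][seed_lab == c].mean(0) for c in classes])
    labels = None
    for _ in range(n_iter):
        d = np.linalg.norm(emb[:, None] - cent[None], axis=2)
        labels = classes[d.argmin(1)]
        labels[seed_idx] = seed_lab              # seeds never flip
        cent = np.stack([emb[labels == c].mean(0) for c in classes])
    return labels

rng = np.random.default_rng(0)
emb = np.vstack([rng.normal(0, 0.1, (20, 2)),    # "benign" cluster
                 rng.normal(3, 0.1, (20, 2))])   # "cancerous" cluster
labels = propagate_labels(emb, seed_idx=[0, 20], seed_lab=[0, 1])
```

With one seed per class, the two well-separated clusters are fully labelled from just two user clicks, which is the workload reduction the tool aims at.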
Affiliation(s)
- Eduard Chelebian
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden.
- Christophe Avenel
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Carolina Wählby
- Department of Information Technology and SciLifeLab, Uppsala University, Uppsala, Sweden
5
Wang S, Zhao J, Cai Y, Li Y, Qi X, Qiu X, Yao X, Tian Y, Zhu Y, Cao W, Zhang X. A method for small-sized wheat seedlings detection: from annotation mode to model construction. Plant Methods 2024; 20:15. PMID: 38287423. PMCID: PMC10826033. DOI: 10.1186/s13007-024-01147-w.
Abstract
The number of seedlings is an important indicator of the size of the wheat population during the seedling stage. Researchers increasingly use deep learning to detect and count wheat seedlings from unmanned aerial vehicle (UAV) images. However, due to the small size and diverse postures of wheat seedlings, accurately estimating their numbers during the seedling stage is challenging. Most related work on wheat seedling detection labels the whole plant, which often results in a high proportion of soil background within the annotated bounding boxes; this imbalance between wheat seedlings and soil background degrades detection performance. This study proposes a wheat seedling detection method based on local rather than global annotation. The detection model is further improved by replacing convolutional and pooling layers with the space-to-depth Conv module and adding a micro-scale detection layer to the YOLOv5 head network to better extract small-scale features from these small annotation boxes. These optimizations reduce detection errors caused by leaf occlusion between wheat seedlings and by the small size of wheat seedlings. The results show that the proposed method achieves a detection accuracy of 90.1%, outperforming other state-of-the-art detection methods, and provides a reference for future wheat seedling detection and yield prediction.
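The space-to-depth rearrangement at the heart of the module mentioned in this abstract trades spatial resolution for channels without discarding any pixels, which is why it suits small objects better than strided convolution or pooling. A minimal channels-last NumPy sketch at scale 2 (illustrative, not the study's implementation):

```python
import numpy as np

def space_to_depth(x: np.ndarray, scale: int = 2) -> np.ndarray:
    """Rearrange (H, W, C) -> (H/scale, W/scale, C*scale**2): each
    scale x scale spatial block is stacked into channels, so the
    downsampling is lossless."""
    h, w, c = x.shape
    assert h % scale == 0 and w % scale == 0
    x = x.reshape(h // scale, scale, w // scale, scale, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // scale, w // scale, c * scale * scale)

img = np.arange(16).reshape(4, 4, 1)
out = space_to_depth(img)   # shape (2, 2, 4); no pixel is dropped
```

Every value of the 4x4 input survives in the 2x2x4 output, unlike max pooling, which would throw away three of every four pixels carrying the tiny-seedling signal.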
Affiliation(s)
- Suwan Wang
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Jianqing Zhao
- College of Geography, Jiangsu Second Normal University, Nanjing, 211200, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Yucheng Cai
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Yan Li
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xuerui Qi
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xiaolei Qiu
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, 210095, China
- Xia Yao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, 210095, China
- Yongchao Tian
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing, 210095, China
- Yan Zhu
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Weixing Cao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xiaohu Zhang
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing, 210095, China
6
Wagner SJ, Matek C, Shetab Boushehri S, Boxberg M, Lamm L, Sadafi A, Winter DJE, Marr C, Peng T. Built to Last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology. Mod Pathol 2024; 37:100350. PMID: 37827448. DOI: 10.1016/j.modpat.2023.100350.
Abstract
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging: the codebase should be well-documented and easy to integrate into existing workflows, and models should be robust to noise and generalizable to data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half (20 of 41) released their trained model weights, and approximately a third (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.
Affiliation(s)
- Sophia J Wagner
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Computation, Information and Technology, Technical University of Munich, Garching, Germany
- Christian Matek
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Sayedali Shetab Boushehri
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Data & Analytics (D&A), Roche Pharma Research and Early Development (pRED), Roche Innovation Center Munich, Germany
- Melanie Boxberg
- Institute of Pathology, Technical University Munich, Munich, Germany; Institute of Pathology Munich-North, Munich, Germany
- Lorenz Lamm
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Helmholtz Pioneer Campus, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Ario Sadafi
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Dominik J E Winter
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
- Carsten Marr
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Tingying Peng
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
7
Shi Y, Wang H, Ji H, Liu H, Li Y, He N, Wei D, Huang Y, Dai Q, Wu J, Chen X, Zheng Y, Yu H. A deep weakly semi-supervised framework for endoscopic lesion segmentation. Med Image Anal 2023; 90:102973. PMID: 37757643. DOI: 10.1016/j.media.2023.102973.
Abstract
In medical image analysis, accurate lesion segmentation benefits subsequent clinical diagnosis and treatment planning. Various deep learning-based methods have been proposed for this segmentation task. Albeit achieving promising performance, fully-supervised approaches require pixel-level annotations for model training, which are tedious and time-consuming for experienced radiologists to collect. In this paper, we propose a weakly semi-supervised segmentation framework, called Point Segmentation Transformer (Point SEGTR). The framework utilizes a small amount of fully-supervised data with pixel-level segmentation masks and a large amount of weakly-supervised data with point-level annotations (i.e., a single annotated point inside each object) for network training, which significantly reduces the demand for pixel-level annotations. To fully exploit both annotation types, we propose two regularization terms, multi-point consistency and symmetric consistency, to boost the quality of pseudo labels, which are then used to train a student model for inference. Extensive experiments on three endoscopy datasets with different lesion structures and body sites (e.g., colorectal and nasopharynx) substantiate the effectiveness and generality of the proposed method, as well as its potential to loosen the requirement for pixel-level annotations, which is valuable for clinical applications.
Affiliation(s)
- Yuxuan Shi
- ENT Institute and Department of Otolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, 200031, China
- Hong Wang
- Tencent Jarvis Lab, Shenzhen 518000, China
- Haoqin Ji
- Tencent Jarvis Lab, Shenzhen 518000, China
- Haozhe Liu
- Tencent Jarvis Lab, Shenzhen 518000, China
- Nanjun He
- Tencent Jarvis Lab, Shenzhen 518000, China
- Dong Wei
- Tencent Jarvis Lab, Shenzhen 518000, China
- Qi Dai
- ENT Institute and Department of Otolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, 200031, China
- Jianrong Wu
- Tencent Healthcare (Shenzhen) Co. LTD., Shenzhen 518063, China
- Xinrong Chen
- Academy for Engineering and Technology, Fudan University, 220 Handan Road, Shanghai 200033, China
- Hongmeng Yu
- ENT Institute and Department of Otolaryngology, Eye & ENT Hospital, Fudan University, Shanghai, 200031, China; Research Units of New Technologies of Endoscopic Surgery in Skull Base Tumor, Chinese Academy of Medical Sciences, 2018RU003, China
8
Pati P, Jaume G, Ayadi Z, Thandiackal K, Bozorgtabar B, Gabrani M, Goksel O. Weakly supervised joint whole-slide segmentation and classification in prostate cancer. Med Image Anal 2023; 89:102915. PMID: 37633177. DOI: 10.1016/j.media.2023.102915.
Abstract
The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty of obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSI). Though several methods have been developed to exploit image-level weak supervision for WSI classification, segmentation using WSI-level labels has received very little attention, and research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that simultaneously segments and classifies WSIs of arbitrary shapes and sizes. WholeSIGHT first constructs a tissue-graph representation of the WSI, where nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class predictions for an input WSI. We evaluate WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets, with better or comparable classification relative to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
Affiliation(s)
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber/Harvard Cancer Center, Boston, MA, USA
- Zeineb Ayadi
- IBM Research Europe, Zurich, Switzerland; EPFL, Lausanne, Switzerland
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
9
Lin Y, Qu Z, Chen H, Gao Z, Li Y, Xia L, Ma K, Zheng Y, Cheng KT. Nuclei segmentation with point annotations from pathology images via self-supervised learning and co-training. Med Image Anal 2023; 89:102933. PMID: 37611532. DOI: 10.1016/j.media.2023.102933.
Abstract
Nuclei segmentation is a crucial task in whole slide image analysis for digital pathology. Generally, the segmentation performance of fully-supervised learning heavily depends on the amount and quality of annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while it is much easier to obtain coarse labels such as point annotations. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that requires only point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and k-means clustering to avoid overfitting. Second, a co-training strategy with an exponential moving average is designed to refine the incomplete supervision provided by the coarse labels. Third, a self-supervised visual representation learning method is tailored to nuclei segmentation of pathology images, transforming hematoxylin component images into H&E stained images to gain a better understanding of the relationship between nuclei and cytoplasm. We comprehensively evaluate the proposed method on two public datasets. Both visual and quantitative results demonstrate the superiority of our method over state-of-the-art methods, and its competitive performance compared to fully-supervised methods. Codes are available at https://github.com/hust-linyi/SC-Net.
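The first step in this abstract, turning point annotations into coarse labels via the Voronoi diagram, amounts to assigning each pixel to its nearest annotated point; the borders between the resulting cells separate adjacent nuclei. A brute-force NumPy sketch of that partition (illustrative, not the released SC-Net code):

```python
import numpy as np

def voronoi_partition(shape, points):
    """Label every pixel of an image grid with the index of its
    nearest point annotation, i.e., its Voronoi cell. Cell borders
    give coarse separation cues between neighbouring nuclei."""
    ys, xs = np.indices(shape)
    grid = np.stack([ys.ravel(), xs.ravel()], axis=1)        # (H*W, 2)
    pts = np.asarray(points, float)
    d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
    return d.argmin(axis=1).reshape(shape)

# Two point annotations on a 6x6 grid split it into two cells.
cells = voronoi_partition((6, 6), [(1, 1), (4, 4)])
```

Pixels near (1, 1) receive label 0 and pixels near (4, 4) receive label 1; in practice the ridge between cells supplies background/edge pixels for the coarse supervision.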
Affiliation(s)
- Yi Lin
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhiyong Qu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhongke Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Lili Xia
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Kai Ma
- Tencent Jarvis Lab, Shenzhen, China
- Kwang-Ting Cheng
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
10
Chang Q, Yan Z, Zhou M, Qu H, He X, Zhang H, Baskaran L, Al'Aref S, Li H, Zhang S, Metaxas DN. Mining multi-center heterogeneous medical data with distributed synthetic learning. Nat Commun 2023; 14:5510. PMID: 37679325. PMCID: PMC10484909. DOI: 10.1038/s41467-023-40687-y.
Abstract
Overcoming barriers to the use of multi-center data for medical analytics is challenging due to privacy protection and data heterogeneity in the healthcare system. In this study, we propose the Distributed Synthetic Learning (DSL) architecture to learn across multiple medical centers while ensuring the protection of sensitive personal information. DSL enables the building of a homogeneous dataset of entirely synthetic medical images via a form of GAN-based synthetic learning. The proposed DSL architecture provides three key functionalities: multi-modality learning, missing-modality completion learning, and continual learning. We systematically evaluate DSL on different medical applications using cardiac computed tomography angiography (CTA), brain tumor MRI, and histopathology nuclei datasets. Extensive experiments demonstrate the superior performance of DSL as a high-quality synthetic medical image provider, assessed with a synthetic quality metric called Dist-FID. We show that DSL can be adapted to heterogeneous data, outperforming the real misaligned-modalities segmentation model by 55% and the temporal-datasets segmentation model by 8%.
Affiliation(s)
- Qi Chang
- Department of Computer Science, Rutgers University, Piscataway, NJ, USA
- Mu Zhou
- SenseBrain Research, Princeton, NJ, USA
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Hui Qu
- Department of Computer Science, Rutgers University, Piscataway, NJ, USA
- Xiaoxiao He
- Department of Computer Science, Rutgers University, Piscataway, NJ, USA
- Han Zhang
- Department of Computer Science, Rutgers University, Piscataway, NJ, USA
- Lohendran Baskaran
- Department of Cardiovascular Medicine, National Heart Centre Singapore, and Duke-National University of Singapore, Singapore, Singapore
- Subhi Al'Aref
- Department of Medicine, Division of Cardiology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Hongsheng Li
- Chinese University of Hong Kong, Hong Kong SAR, China
- Centre for Perceptual and Interactive Intelligence (CPII), Hong Kong SAR, China
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, China
- Centre for Perceptual and Interactive Intelligence (CPII), Hong Kong SAR, China
- SenseTime, Shanghai, China
- Dimitris N Metaxas
- Department of Computer Science, Rutgers University, Piscataway, NJ, USA
Collapse
|
11
|
Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023; 164:107201. [PMID: 37517325 DOI: 10.1016/j.compbiomed.2023.107201] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 06/10/2023] [Accepted: 06/19/2023] [Indexed: 08/01/2023]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and, with the advancement of digital imaging technologies, it has spurred the emergence of computational histopathology. The objective of computational histopathology is to assist in clinical tasks through image processing and analysis techniques. In the early stages, the technique involved analyzing histopathology images by extracting mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI) technologies, traditional machine learning methods were applied in this field. Although the performance of the models improved, there were issues such as poor model generalization and tedious manual feature extraction. Subsequently, the introduction of deep learning techniques effectively addressed these problems. However, models based on traditional convolutional architectures could not adequately capture the contextual information and deep biological features in histopathology images. Owing to their special structure, graphs are highly suitable for feature extraction in tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel and more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms. We summarize the common clinical applications of graph-based methods in computational histopathology. Furthermore, we discuss the core concepts in this field and highlight the current challenges and future research directions.
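Cell-graph methods of the kind this review covers typically start by connecting each detected nucleus to its spatial neighbours before any GNN is applied. A minimal, dependency-free sketch of such a k-nearest-neighbour graph construction over nucleus centroids (hypothetical function names, not any specific paper's pipeline):

```python
def knn_graph(centroids, k=2):
    """Build a k-nearest-neighbour adjacency list over nucleus centroids.

    centroids is a list of (x, y) tuples, e.g. from a nucleus detector.
    Returns {node_index: [indices of its k nearest neighbours]}.
    Illustrative sketch of common cell-graph construction, not a
    reference implementation from the review.
    """
    def dist2(a, b):
        # squared Euclidean distance is enough for ranking neighbours
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    edges = {}
    for i, c in enumerate(centroids):
        others = sorted(
            (j for j in range(len(centroids)) if j != i),
            key=lambda j: dist2(c, centroids[j]),
        )
        edges[i] = others[:k]
    return edges
```

The resulting adjacency list (plus per-nucleus feature vectors) is what a graph neural network would consume.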
Affiliation(s)
- Xiangyan Meng
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China
- Tonghui Zou
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China

12
Drioua WR, Benamrane N, Sais L. Breast Cancer Histopathological Images Segmentation Using Deep Learning. SENSORS (BASEL, SWITZERLAND) 2023; 23:7318. [PMID: 37687772 PMCID: PMC10490494 DOI: 10.3390/s23177318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 08/10/2023] [Accepted: 08/18/2023] [Indexed: 09/10/2023]
Abstract
Hospitals generate a significant amount of medical data every day, which constitutes a very rich database for research. Today, this database remains largely unexploitable because the images must first be annotated, which is a costly and difficult task. Thus, the use of an unsupervised segmentation method could facilitate the process. In this article, we propose two approaches for the semantic segmentation of breast cancer histopathology images: on the one hand, an autoencoder architecture for unsupervised segmentation, and on the other, an improved U-Net architecture for supervised segmentation. We evaluate these models on a public dataset of histological images of breast cancer. In addition, the performance of our segmentation methods is measured using several evaluation metrics such as accuracy, recall, precision and F1 score. The results are competitive with those of other modern methods.
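The evaluation metrics named in this abstract (accuracy, precision, recall, F1) all derive from the pixel-wise confusion matrix of a binary segmentation mask. A small self-contained sketch, with hypothetical names, of how they are computed:

```python
def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, precision, recall and F1 for binary masks.

    pred and truth are flat sequences of 0/1 pixel labels of equal
    length. Illustrative sketch of the standard definitions.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

In practice these are averaged over all images (or all pixels) of the test set.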
Affiliation(s)
- Wafaa Rajaa Drioua
- Laboratoire SIMPA, Département d’Informatique, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria
- Nacéra Benamrane
- Laboratoire SIMPA, Département d’Informatique, Université des Sciences et de la Technologie d’Oran Mohamed Boudiaf (USTO-MB), Oran 31000, Algeria
- Lakhdar Sais
- Centre de Recherche en Informatique de Lens, CRIL, CNRS, Université d’Artois, 62307 Lens, France

13
NST: A nuclei segmentation method based on transformer for gastrointestinal cancer pathological images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
14
Ding K, Zhou M, Wang H, Gevaert O, Metaxas D, Zhang S. A Large-scale Synthetic Pathological Dataset for Deep Learning-enabled Segmentation of Breast Cancer. Sci Data 2023; 10:231. [PMID: 37085533 PMCID: PMC10121551 DOI: 10.1038/s41597-023-02125-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Accepted: 03/31/2023] [Indexed: 04/23/2023] Open
Abstract
The success of training computer-vision models heavily relies on the support of large-scale, real-world images with annotations. Yet such an annotation-ready dataset is difficult to curate in pathology due to the privacy protection and excessive annotation burden. To aid in computational pathology, synthetic data generation, curation, and annotation present a cost-effective means to quickly enable data diversity that is required to boost model performance at different stages. In this study, we introduce a large-scale synthetic pathological image dataset paired with the annotation for nuclei semantic segmentation, termed as Synthetic Nuclei and annOtation Wizard (SNOW). The proposed SNOW is developed via a standardized workflow by applying the off-the-shelf image generator and nuclei annotator. The dataset contains overall 20k image tiles and 1,448,522 annotated nuclei with the CC-BY license. We show that SNOW can be used in both supervised and semi-supervised training scenarios. Extensive results suggest that synthetic-data-trained models are competitive under a variety of model training settings, expanding the scope of better using synthetic images for enhancing downstream data-driven clinical tasks.
Affiliation(s)
- Kexin Ding
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28262, USA
- Mu Zhou
- Sensebrain Research, San Jose, CA, 95131, USA
- He Wang
- Department of Pathology, Yale University, New Haven, CT, 06520, USA
- Olivier Gevaert
- Stanford Center for Biomedical Informatics Research, Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, 94305, USA
- Dimitris Metaxas
- Department of Computer Science, Rutgers University, New Brunswick, NJ, 08901, USA
- Shaoting Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, 200232, China

15
Lou W, Li H, Li G, Han X, Wan X. Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:947-958. [PMID: 36355729 DOI: 10.1109/tmi.2022.3221666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Recently deep neural networks, which require a large amount of annotated samples, have been widely applied in nuclei instance segmentation of H&E stained pathology images. However, it is inefficient and unnecessary to label all pixels for a dataset of nuclei images which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the workload of annotation. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to the training. Then we introduce a conditional single-image GAN with a component-wise discriminator, to synthesize more training samples. Lastly, our proposed framework trains an existing segmentation model with the above augmented samples. The experimental results show that our proposed method could obtain the same-level performance as a fully-supervised baseline by annotating less than 5% pixels on some benchmarks.
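The patch-selection step this abstract describes ranks unlabeled patches by how informative they would be to annotate. A common proxy (used here purely as a simplified stand-in for the paper's consistency-based method, with hypothetical names) is disagreement between two models' predictions on the same patch:

```python
def select_patches(preds_a, preds_b, n):
    """Pick the n patches on which two models disagree most.

    preds_a and preds_b map patch ids to flat per-pixel probability
    lists from two different models. Patches with inconsistent
    predictions are assumed to be the most informative to annotate.
    Simplified illustration, not the paper's selection criterion.
    """
    def disagreement(pid):
        a, b = preds_a[pid], preds_b[pid]
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    ranked = sorted(preds_a, key=disagreement, reverse=True)
    return ranked[:n]
```

The selected patches would then be annotated and, as in the paper, augmented (e.g. with a single-image GAN) before training.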
16
Basu A, Senapati P, Deb M, Rai R, Dhal KG. A survey on recent trends in deep learning for nucleus segmentation from histopathology images. EVOLVING SYSTEMS 2023; 15:1-46. [PMID: 38625364 PMCID: PMC9987406 DOI: 10.1007/s12530-023-09491-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Accepted: 02/13/2023] [Indexed: 03/08/2023]
Abstract
Nucleus segmentation is an imperative step in the qualitative study of imaging datasets, considered as an intricate task in histopathology image analysis. Segmenting a nucleus is an important part of diagnosing, staging, and grading cancer, but overlapping regions make it hard to separate and tell apart independent nuclei. Deep Learning is swiftly paving its way in the arena of nucleus segmentation, attracting quite a few researchers with its numerous published research articles indicating its efficacy in the field. This paper presents a systematic survey on nucleus segmentation using deep learning in the last five years (2017-2021), highlighting various segmentation models (U-Net, SCPP-Net, Sharp U-Net, and LiverNet) and exploring their similarities, strengths, datasets utilized, and unfolding research areas.
Affiliation(s)
- Anusua Basu
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Pradip Senapati
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India
- Mainak Deb
- Wipro Technologies, Pune, Maharashtra, India
- Rebika Rai
- Department of Computer Applications, Sikkim University, Sikkim, India
- Krishna Gopal Dhal
- Department of Computer Science and Application, Midnapore College (Autonomous), Paschim Medinipur, Midnapore, West Bengal, India

17
Guo R, Xie K, Pagnucco M, Song Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation. Med Image Anal 2023; 86:102790. [PMID: 36878159 DOI: 10.1016/j.media.2023.102790] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 11/24/2022] [Accepted: 02/23/2023] [Indexed: 03/06/2023]
Abstract
Deep convolutional neural networks have been highly effective in segmentation tasks. However, segmentation becomes more difficult when training images include many complex instances to segment, such as the task of nuclei segmentation in histopathology images. Weakly supervised learning can reduce the need for large-scale, high-quality ground truth annotations by involving non-expert annotators or algorithms to generate supervision information for segmentation. However, there is still a significant performance gap between weakly supervised learning and fully supervised learning approaches. In this work, we propose a weakly-supervised nuclei segmentation method in a two-stage training manner that only requires annotation of the nuclear centroids. First, we generate boundary and superpixel-based masks as pseudo ground truth labels to train our SAC-Net, which is a segmentation network enhanced by a constraint network and an attention network to effectively address the problems caused by noisy labels. Then, we refine the pseudo labels at the pixel level based on Confident Learning to train the network again. Our method shows highly competitive performance of cell nuclei segmentation in histopathology images on three public datasets. Code will be available at: https://github.com/RuoyuGuo/MaskGA_Net.
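The pixel-level refinement step above is based on Confident Learning, whose core idea is to keep a (pseudo) label only when the model's predicted probability for that label reaches the class's average self-confidence. A minimal sketch of that thresholding rule (hypothetical names; the actual SAC-Net pipeline is more involved):

```python
def refine_pseudo_labels(probs, labels):
    """Flag likely pseudo-label errors, in the spirit of Confident Learning.

    probs[i] is the predicted class-probability vector for pixel i and
    labels[i] its current pseudo label. A pixel is kept only if its
    probability under the given label reaches that class's average
    self-confidence; flagged pixels can be dropped or relabelled before
    retraining. Illustrative sketch only.
    """
    classes = set(labels)
    # per-class threshold: mean confidence over pixels carrying that label
    thresh = {
        c: (sum(p[c] for p, l in zip(probs, labels) if l == c)
            / sum(1 for l in labels if l == c))
        for c in classes
    }
    return [p[l] >= thresh[l] for p, l in zip(probs, labels)]
```

Pixels returning False are the suspected noisy labels; retraining on the kept pixels is the second stage the abstract describes.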
Affiliation(s)
- Ruoyu Guo
- School of Computer Science and Engineering, University of New South Wales, Australia
- Kunzi Xie
- School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco
- School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Australia

18
Li S, Cai H, Qi L, Yu Q, Shi Y, Gao Y. PLN: Parasitic-Like Network for Barely Supervised Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:582-593. [PMID: 36178993 DOI: 10.1109/tmi.2022.3211188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
It is known that annotations for 3D medical image segmentation tasks are laborious, time-consuming and expensive. Considering the inter-slice and inter-volume similarities, we believe that the annotation scheme and the model architecture should be tightly coupled. In this paper, by introducing an extremely sparse annotation scheme of labeling only one slice per 3D image, we investigate a novel barely-supervised segmentation setting with only a few sparsely-labeled images along with a large number of unlabeled images. To achieve this goal, we present a new parasitic-like network including a registration module (as host) and a semi-supervised segmentation module (as parasite) to deal with inter-slice label propagation and inter-volume segmentation prediction, respectively. Specifically, our parasitism mechanism effectively achieves the collaboration of these two modules through three stages of infection, development and eclosion, providing accurate pseudo-labels for training. Extensive results demonstrate that our framework is capable of achieving high performance on extremely sparse annotation tasks, e.g., we achieve a Dice score of 84.83% on the LA dataset with only 16 labeled slices. The code is available at https://github.com/ShumengLI/PLN.
19
Tan L, Li H, Yu J, Zhou H, Wang Z, Niu Z, Li J, Li Z. Colorectal cancer lymph node metastasis prediction with weakly supervised transformer-based multi-instance learning. Med Biol Eng Comput 2023; 61:1565-1580. [PMID: 36809427 PMCID: PMC10182132 DOI: 10.1007/s11517-023-02799-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 01/31/2023] [Indexed: 02/23/2023]
Abstract
Lymph node metastasis examined by the resected lymph nodes is considered one of the most important prognostic factors for colorectal cancer (CRC). However, it requires careful and comprehensive inspection by expert pathologists. To relieve the pathologists' burden and speed up the diagnostic process, in this paper, we develop a deep learning system with the binary positive/negative labels of the lymph nodes to solve the CRC lymph node classification task. The multi-instance learning (MIL) framework is adopted in our method to handle the whole slide images (WSIs) of gigapixels in size at once and get rid of the labor-intensive and time-consuming detailed annotations. First, a transformer-based MIL model, DT-DSMIL, is proposed in this paper based on the deformable transformer backbone and the dual-stream MIL (DSMIL) framework. The local-level image features are extracted and aggregated with the deformable transformer, and the global-level image features are obtained with the DSMIL aggregator. The final classification decision is made based on both the local and the global-level features. After the effectiveness of our proposed DT-DSMIL model is demonstrated by comparing its performance with its predecessors, a diagnostic system is developed to detect, crop, and finally identify the single lymph nodes within the slides based on the DT-DSMIL and the Faster R-CNN model. The developed diagnostic model is trained and tested on a clinically collected CRC lymph node metastasis dataset composed of 843 slides (864 metastasis lymph nodes and 1415 non-metastatic lymph nodes), achieving the accuracy of 95.3% and the area under the receiver operating characteristic curve (AUC) of 0.9762 (95% confidence interval [CI]: 0.9607-0.9891) for the single lymph node classification. As for the lymph nodes with micro-metastasis and macro-metastasis, our diagnostic system achieves the AUC of 0.9816 (95% CI: 0.9659-0.9935) and 0.9902 (95% CI: 0.9787-0.9983), respectively. 
Moreover, the system shows reliable diagnostic region localizing performance: the model can always identify the most likely metastases, no matter the model's predictions or manual labels, showing great potential in avoiding false negatives and discovering incorrectly labeled slides in actual clinical use.
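Under the multi-instance learning (MIL) framework used here, each whole slide is a "bag" of tiles, and only the bag-level positive/negative label is supervised. DT-DSMIL aggregates tile features with attention; as a bare-bones illustration of the MIL assumption itself (a bag is positive if any instance is positive), a max-pooling aggregator can be sketched as follows, with hypothetical names:

```python
def mil_bag_predict(instance_scores):
    """Max-pooling multi-instance prediction for one slide (bag).

    Under the standard MIL assumption, a slide is positive if any of
    its tiles is positive, so the bag score is the max over per-tile
    scores. The paper uses learned attention aggregation instead; max
    pooling is the simplest illustrative choice.
    """
    return max(instance_scores)

def classify_slides(bags, threshold=0.5):
    """Map each slide id to a positive/negative metastasis call."""
    return {sid: mil_bag_predict(scores) >= threshold
            for sid, scores in bags.items()}
```

Attention-based aggregators replace the hard max with a learned weighted sum, which is what lets the model also localize the suspicious regions mentioned above.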
Affiliation(s)
- Luxin Tan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Huan Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, 100142, China
- Jinze Yu
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China
- School of Computer Science and Engineering, Beihang University, Beijing, 100191, China
- Shenyuan Honors College, Beihang University, Beijing, 100191, China
- Haoyi Zhou
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China
- College of Software, Beihang University, Beijing, 100191, China
- Zhi Wang
- Blot Info & Tech (Beijing) Co. Ltd, Beijing, China
- Zhiyong Niu
- Blot Info & Tech (Beijing) Co. Ltd, Beijing, China
- Jianxin Li
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China
- School of Computer Science and Engineering, Beihang University, Beijing, 100191, China
- Zhongwu Li
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital & Institute, Beijing, 100142, China

20
Han Y, Cheng L, Huang G, Zhong G, Li J, Yuan X, Liu H, Li J, Zhou J, Cai M. Weakly supervised semantic segmentation of histological tissue via attention accumulation and pixel-level contrast learning. Phys Med Biol 2023; 68. [PMID: 36577142 DOI: 10.1088/1361-6560/acaeee] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Accepted: 12/28/2022] [Indexed: 12/29/2022]
Abstract
Objective. Histopathology image segmentation can assist medical professionals in identifying and diagnosing diseased tissue more efficiently. Although fully supervised segmentation models have excellent performance, their annotation cost is extremely high. Weakly supervised models are widely used in medical image segmentation due to their low annotation cost. Nevertheless, these weakly supervised models have difficulty in accurately locating the boundaries between different classes of regions in pathological images, resulting in a high rate of false alarms. Our objective is to design a weakly supervised segmentation model that resolves these problems. Approach. The segmentation model is divided into two main stages: the generation of pseudo labels based on a class residual attention accumulation network (CRAANet), and semantic segmentation based on a pixel feature space construction network (PFSCNet). CRAANet provides attention scores for each class through the class residual attention module, while the attention accumulation (AA) module overlays the attention feature maps generated in each training epoch. PFSCNet employs a network model containing a dilated convolutional residual neural network and a multi-scale feature-aware module as the segmentation backbone, and proposes a dense energy loss and a pixel clustering module based on contrastive learning to solve the pseudo-label-inaccuracy problem. Main results. We validate our method using the lung adenocarcinoma (LUAD-HistoSeg) dataset and the breast cancer (BCSS) dataset. The experimental results show that our proposed method outperforms other state-of-the-art methods on both datasets across several metrics, suggesting that it is capable of performing well in a wide variety of histopathological image segmentation tasks. Significance. We propose a weakly supervised semantic segmentation network that achieves approximately fully supervised segmentation performance even in the case of incomplete labels.
The proposed AA module and pixel-level contrastive learning also make the segmented edges more accurate and can assist pathologists in their research.
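The attention accumulation idea above addresses a known weakness of class activation maps: in any single epoch they highlight only the most discriminative pixels. Merging the maps produced across epochs covers more of the object. A minimal sketch of such an accumulator (hypothetical class name; the paper's AA module operates on network feature maps, and element-wise max is one plausible merge rule):

```python
class AttentionAccumulator:
    """Overlay attention maps produced across training epochs.

    Maps from every epoch are merged by element-wise max so that any
    pixel highlighted at some point of training stays highlighted.
    Illustrative sketch of the accumulation idea only.
    """
    def __init__(self, size):
        self.acc = [0.0] * size  # flat attention map, all zeros initially

    def update(self, attention_map):
        # keep, per pixel, the strongest activation seen so far
        self.acc = [max(a, m) for a, m in zip(self.acc, attention_map)]

    def result(self):
        return self.acc
```

Calling `update` once per epoch and thresholding `result()` would yield the pseudo-label mask the first stage feeds to the segmentation network.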
Affiliation(s)
- Yongqi Han
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Lianglun Cheng
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Guoheng Huang
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Guo Zhong
- School of Information Science and Technology, Guangdong University of Foreign Studies, Guangzhou 510420, People's Republic of China
- Jiahua Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, People's Republic of China
- Xiaochen Yuan
- Faculty of Applied Sciences, Macao Polytechnic University, Macao 999078, People's Republic of China
- Hongrui Liu
- Department of Industrial and Systems Engineering, San Jose State University, CA 95192, United States of America
- Jiao Li
- Department of Radiology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
- Jian Zhou
- Department of Medical Imaging, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China
- Muyan Cai
- Department of Pathology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine, Guangzhou 510060, People's Republic of China

21
Liang Y, Yin Z, Liu H, Zeng H, Wang J, Liu J, Che N. Weakly Supervised Deep Nuclei Segmentation With Sparsely Annotated Bounding Boxes for DNA Image Cytometry. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:785-795. [PMID: 34951851 DOI: 10.1109/tcbb.2021.3138189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Nuclei segmentation is an essential step in DNA ploidy analysis by image-based cytometry (DNA-ICM), which is widely used in cytopathology and allows an objective measurement of DNA content (ploidy). The routine fully supervised learning-based method requires pixel-wise labels, which are often tedious and expensive to obtain. In this paper, we propose a novel weakly supervised nuclei segmentation framework which exploits only sparsely annotated bounding boxes, without any segmentation labels. The key is to integrate traditional image segmentation and self-training into fully supervised instance segmentation. We first leverage traditional segmentation to generate coarse masks for each box-annotated nucleus to supervise the training of a teacher model, which is then responsible for both the refinement of these coarse masks and the generation of pseudo labels for unlabeled nuclei. These pseudo labels and refined masks, along with the original manually annotated bounding boxes, jointly supervise the training of a student model. Both teacher and student share the same architecture, and the student is initialized by the teacher. We have extensively evaluated our method on both our DNA-ICM dataset and a public cytopathological dataset. Without bells and whistles, our method outperforms all existing weakly supervised entries on both datasets. Code and our DNA-ICM dataset are publicly available at https://github.com/CVIU-CSU/Weakly-Supervised-Nuclei-Segmentation.
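The teacher-to-student hand-off above follows the usual self-training recipe: annotated items keep their labels, while unlabeled items adopt the teacher's prediction only when it is confident enough. A minimal, dependency-free sketch of that labeling rule (hypothetical names; the paper works on masks rather than per-nucleus class vectors):

```python
def self_training_labels(teacher_probs, labeled, conf=0.9):
    """Generate pseudo labels for unlabeled nuclei from a teacher model.

    teacher_probs maps nucleus ids to class-probability lists; labeled
    maps already box-annotated nuclei to their known labels. Unlabeled
    nuclei receive the teacher's prediction only when its confidence
    clears the threshold. Illustrative sketch of the hand-off only.
    """
    pseudo = {}
    for nid, p in teacher_probs.items():
        if nid in labeled:
            pseudo[nid] = labeled[nid]       # keep the manual annotation
        elif max(p) >= conf:
            pseudo[nid] = p.index(max(p))    # confident teacher prediction
        # otherwise: leave the nucleus unlabeled for this round
    return pseudo
```

The student trained on `pseudo` can then become the next round's teacher, which is how such frameworks iterate.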
22
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
23
Nan Y, Tang P, Zhang G, Zeng C, Liu Z, Gao Z, Zhang H, Yang G. Unsupervised Tissue Segmentation via Deep Constrained Gaussian Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3799-3811. [PMID: 35905069 DOI: 10.1109/tmi.2022.3195123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Tissue segmentation is the mainstay of pathological examination, whereas the manual delineation is unduly burdensome. To assist this time-consuming and subjective manual step, researchers have devised methods to automatically segment structures in pathological images. Recently, automated machine and deep learning based methods dominate tissue segmentation research studies. However, most machine and deep learning based approaches are supervised and developed using a large number of training samples, in which the pixel-wise annotations are expensive and sometimes can be impossible to obtain. This paper introduces a novel unsupervised learning paradigm by integrating an end-to-end deep mixture model with a constrained indicator to acquire accurate semantic tissue segmentation. This constraint aims to centralise the components of deep mixture models during the calculation of the optimisation function. In so doing, the redundant or empty class issues, which are common in current unsupervised learning methods, can be greatly reduced. By validation on both public and in-house datasets, the proposed deep constrained Gaussian network achieves significantly (Wilcoxon signed-rank test) better performance (with the average Dice scores of 0.737 and 0.735, respectively) on tissue segmentation with improved stability and robustness, compared to other existing unsupervised segmentation approaches. Furthermore, the proposed method presents a similar performance (p-value >0.05) compared to the fully supervised U-Net.
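The deep constrained Gaussian network above embeds a mixture model inside a network so that pixels are assigned to tissue classes without labels. The classical building block underneath is expectation-maximisation on a Gaussian mixture; a tiny 1-D, two-component EM over pixel intensities illustrates that unsupervised assignment (hypothetical names; a sketch of the classical algorithm, not the paper's deep model):

```python
import math

def em_gmm_1d(xs, iters=50):
    """Two-component 1-D Gaussian-mixture EM over pixel intensities.

    Returns a 0/1 component label per pixel, learned without any
    annotations. Illustrative classical EM, initialised at the data
    extremes; not the paper's constrained deep mixture.
    """
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        resp = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk,
                1e-6,  # floor the variance to avoid degenerate components
            )
    return [0 if r[0] > r[1] else 1 for r in resp]
```

The paper's constrained indicator essentially regularises this fitting so that components stay distinct, avoiding the redundant/empty-class failure mode it mentions.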
24
Meng X, Fan J, Yu H, Mu J, Li Z, Yang A, Liu B, Lv K, Ai D, Lin Y, Song H, Fu T, Xiao D, Ma G, Yang J, Gu Y. Volume-awareness and outlier-suppression co-training for weakly-supervised MRI breast mass segmentation with partial annotations. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
25
Liu Y, Lian L, Zhang E, Xu L, Xiao C, Zhong X, Li F, Jiang B, Dong Y, Ma L, Huang Q, Xu M, Zhang Y, Yu D, Yan C, Qin P. Mixed-UNet: Refined class activation mapping for weakly-supervised semantic segmentation with multi-scale inference. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.1036934] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Deep learning techniques have shown great potential in medical image processing, particularly through accurate and reliable image segmentation on magnetic resonance imaging (MRI) scans or computed tomography (CT) scans, which allow the localization and diagnosis of lesions. However, training these segmentation models requires a large number of manually annotated pixel-level labels, which are time-consuming and labor-intensive, in contrast to image-level labels that are easier to obtain. It is imperative to resolve this problem through weakly-supervised semantic segmentation models using image-level labels as supervision since it can significantly reduce human annotation efforts. Most of the advanced solutions exploit class activation mapping (CAM). However, the original CAMs rarely capture the precise boundaries of lesions. In this study, we propose the strategy of multi-scale inference to refine CAMs by reducing the detail loss in single-scale reasoning. For segmentation, we develop a novel model named Mixed-UNet, which has two parallel branches in the decoding phase. The results can be obtained after fusing the extracted features from two branches. We evaluate the designed Mixed-UNet against several prevalent deep learning-based segmentation approaches on our dataset collected from the local hospital and public datasets. The validation results demonstrate that our model surpasses available methods under the same supervision level in the segmentation of various lesions from brain imaging.
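The multi-scale inference strategy above refines a CAM by running the same activation-map computation at several input scales and fusing the results at the original resolution. A dependency-free sketch of that fusion, using 1-D signals as stand-ins for images and a caller-supplied `cam_fn` (all names hypothetical):

```python
def resize_nn(sig, new_len):
    """Nearest-neighbour resize of a 1-D signal."""
    return [sig[min(int(i * len(sig) / new_len), len(sig) - 1)]
            for i in range(new_len)]

def multiscale_cam(image, cam_fn, scales=(0.5, 1.0, 2.0)):
    """Average class activation maps computed at several input scales.

    Single-scale CAMs tend to miss lesion boundaries; computing cam_fn
    on rescaled inputs and averaging the maps back at the original
    resolution recovers detail. Sketch of the strategy only; the paper
    applies it to 2-D feature maps of a trained classifier.
    """
    n = len(image)
    fused = [0.0] * n
    for s in scales:
        scaled = resize_nn(image, max(1, int(n * s)))
        cam = cam_fn(scaled)               # activation map at this scale
        cam = resize_nn(cam, n)            # back to original resolution
        fused = [f + c / len(scales) for f, c in zip(fused, cam)]
    return fused
```

The fused map is then thresholded into the pseudo mask that supervises the Mixed-UNet segmentation stage.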
26
Nielsen PS, Georgsen JB, Vinding MS, Østergaard LR, Steiniche T. Computer-Assisted Annotation of Digital H&E/SOX10 Dual Stains Generates High-Performing Convolutional Neural Network for Calculating Tumor Burden in H&E-Stained Cutaneous Melanoma. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:14327. [PMID: 36361209 PMCID: PMC9654525 DOI: 10.3390/ijerph192114327] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 10/07/2022] [Accepted: 10/26/2022] [Indexed: 06/16/2023]
Abstract
Deep learning for the analysis of H&E stains requires a large annotated training set. This may form a labor-intensive task involving highly skilled pathologists. We aimed to optimize and evaluate computer-assisted annotation based on digital dual stains of the same tissue section. H&E stains of primary and metastatic melanoma (N = 77) were digitized, re-stained with SOX10, and re-scanned. Because images were aligned, annotations of SOX10 image analysis were directly transferred to H&E stains of the training set. Based on 1,221,367 annotated nuclei, a convolutional neural network for calculating tumor burden (CNNTB) was developed. For primary melanomas, precision of annotation was 100% (95%CI, 99% to 100%) for tumor cells and 99% (95%CI, 98% to 100%) for normal cells. Due to low or missing tumor-cell SOX10 positivity, precision for normal cells was markedly reduced in lymph-node and organ metastases compared with primary melanomas (p < 0.001). Compared with stereological counts within skin lesions, mean difference in tumor burden was 6% (95%CI, -1% to 13%, p = 0.10) for CNNTB and 16% (95%CI, 4% to 28%, p = 0.02) for pathologists. Conclusively, the technique produced a large annotated H&E training set with high quality within a reasonable timeframe for primary melanomas and subcutaneous metastases. For these lesion types, the training set generated a high-performing CNNTB, which was superior to the routine assessments of pathologists.
Affiliation(s)
- Patricia Switten Nielsen
  - Department of Pathology, Aarhus University Hospital, Palle Juul-Jensens Boulevard 35, DK-8200 Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Palle Juul-Jensens Boulevard 82, DK-8200 Aarhus, Denmark
- Jeanette Baehr Georgsen
  - Department of Pathology, Aarhus University Hospital, Palle Juul-Jensens Boulevard 35, DK-8200 Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Palle Juul-Jensens Boulevard 82, DK-8200 Aarhus, Denmark
- Mads Sloth Vinding
  - Department of Clinical Medicine, Aarhus University, Palle Juul-Jensens Boulevard 82, DK-8200 Aarhus, Denmark
  - Center of Functionally Integrative Neuroscience, Aarhus University Hospital, Palle Juul-Jensens Boulevard 99, DK-8200 Aarhus, Denmark
- Lasse Riis Østergaard
  - Department of Health Science and Technology, Aalborg University, Fredrik Bajers Vej 7E, DK-9220 Aalborg, Denmark
- Torben Steiniche
  - Department of Pathology, Aarhus University Hospital, Palle Juul-Jensens Boulevard 35, DK-8200 Aarhus, Denmark
  - Department of Clinical Medicine, Aarhus University, Palle Juul-Jensens Boulevard 82, DK-8200 Aarhus, Denmark
27. Wu H, Souedet N, Jan C, Clouchoux C, Delzescaux T. A general deep learning framework for neuron instance segmentation based on Efficient UNet and morphological post-processing. Comput Biol Med 2022; 150:106180. [PMID: 36244305 DOI: 10.1016/j.compbiomed.2022.106180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 09/21/2022] [Accepted: 10/01/2022] [Indexed: 11/03/2022]
Abstract
Recent studies have demonstrated the superiority of deep learning in medical image analysis, especially in cell instance segmentation, a fundamental step for many biological studies. However, the excellent performance of neural networks requires training on large, unbiased datasets with annotations, which are labor-intensive and expertise-demanding to produce. This paper presents an end-to-end framework to automatically detect and segment NeuN-stained neuronal cells in histological images using only point annotations. Unlike traditional nuclei segmentation with point annotation, we propose combining point annotations with binary segmentation to synthesize pixel-level annotations. The synthetic masks are used as the ground truth to train the neural network, a U-Net-like architecture with a state-of-the-art network, EfficientNet, as the encoder. Validation results show the superiority of our model compared to other recent methods. In addition, we investigated multiple post-processing schemes and propose an original strategy to convert the probability map into segmented instances using ultimate erosion and dynamic reconstruction. This approach is easy to configure and outperforms other classical post-processing techniques. This work aims to develop a robust and efficient framework for analyzing neurons in optical microscopy data, which can be used in preclinical biological studies and, more specifically, in the context of neurodegenerative diseases. Code is available at: https://github.com/MIRCen/NeuronInstanceSeg.
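A minimal NumPy/SciPy sketch of the post-processing idea named above: seeds from ultimate erosion (obtained equivalently here as regional maxima of the distance transform) are grown back inside the mask by conditional dilation. This is a simplified stand-in for intuition, not the authors' implementation (see their repository for the real code).

```python
import numpy as np
from scipy import ndimage as ndi

def ultimate_erosion_seeds(mask):
    """Seeds of each instance: the residues of ultimate erosion, obtained
    equivalently here as regional maxima of the distance transform."""
    dist = ndi.distance_transform_edt(mask)
    return (dist == ndi.maximum_filter(dist, size=3)) & mask

def dynamic_reconstruction(seeds, mask):
    """Grow labelled seeds back inside the mask by repeated conditional
    dilation (a simple stand-in for watershed-style reconstruction);
    ties between touching labels go to the larger label."""
    labels, _ = ndi.label(seeds)
    while True:
        grown = ndi.grey_dilation(labels, size=3)
        grown = np.where((labels == 0) & mask, grown, labels)
        if np.array_equal(grown, labels):
            return labels
        labels = grown

# two overlapping disks merge into one blob; the seeds split them again
yy, xx = np.mgrid[0:41, 0:81]
blob = ((yy - 20) ** 2 + (xx - 25) ** 2 <= 18 ** 2) | \
       ((yy - 20) ** 2 + (xx - 55) ** 2 <= 18 ** 2)
instances = dynamic_reconstruction(ultimate_erosion_seeds(blob), blob)
```

Ultimate erosion splits touching objects because each object survives erosion longest at its own centre, giving one seed region per object even when the binary mask is a single connected blob.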
Affiliation(s)
- Huaqian Wu
  - CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
- Caroline Jan
  - CEA-CNRS-UMR 9199, MIRCen, Fontenay-aux-Roses, France
28. Ben Hamida A, Devanne M, Weber J, Truntzer C, Derangère V, Ghiringhelli F, Forestier G, Wemmert C. Weakly Supervised Learning using Attention gates for colon cancer histopathological image segmentation. Artif Intell Med 2022; 133:102407. [PMID: 36328667 DOI: 10.1016/j.artmed.2022.102407] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 09/07/2022] [Accepted: 09/15/2022] [Indexed: 02/08/2023]
Abstract
Recently, Artificial Intelligence, namely Deep Learning, has revolutionized a wide range of domains and applications, while Digital Pathology plays a major role in the diagnosis and prognosis of tumors. However, the characteristics of Whole Slide Images (WSIs), namely their gigapixel size, high resolution, and the shortage of richly labeled samples, have hindered the efficiency of classical Machine Learning methods, which also generalize poorly to different tasks and data contents. Given the success of Deep Learning in large-scale applications, we resort to such models for histopathological image segmentation. First, we review and compare the classical UNet and Att-UNet models for colon cancer WSI segmentation in a sparsely annotated data scenario. Then, we introduce novel enhanced variants of the Att-UNet with different schemes for the skip connections and the positions of the spatial attention gates in the network. Spatial attention gates assist the training process and enable the model to avoid learning irrelevant features. Alternating the presence of such modules, as in our Alter-AttUNet model, adds robustness and ensures better image segmentation results. To cope with the lack of richly annotated data in our AiCOLO colon cancer dataset, we suggest a multi-step training strategy that also addresses the sparse WSI annotations and the class imbalance. All proposed methods outperform state-of-the-art approaches, but Alter-AttUNet offers the best compromise between accurate results and a light network, achieving 95.88% accuracy on our sparsely annotated AiCOLO colon cancer dataset. Finally, we evaluate and validate the proposed architectures on publicly available WSI data: NCT-CRC-HE-100K, CRC-5000, and the Warwick colon cancer histopathological dataset, reaching respective accuracies of 99.65%, 99.73%, and 79.03%. A comparison with state-of-the-art approaches surveys the key solutions for histopathological image segmentation.
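A toy NumPy sketch of an additive spatial attention gate of the kind this abstract refers to; the 1×1-convolution weights are plain matrices here, and all shapes and names are illustrative rather than the paper's code.

```python
import numpy as np

def attention_gate(x, g, w_x, w_g, psi):
    """Additive spatial attention gate: skip features x (C, H, W) are
    re-weighted by a [0, 1] attention map computed from x and a gating
    signal g (assumed already upsampled to (Cg, H, W)).
    w_x: (F, C), w_g: (F, Cg), psi: (F,) -- all act as 1x1 convolutions."""
    q = np.tensordot(w_x, x, axes=1) + np.tensordot(w_g, g, axes=1)  # (F, H, W)
    q = np.maximum(q, 0.0)                                           # ReLU
    att = 1.0 / (1.0 + np.exp(-np.tensordot(psi, q, axes=1)))        # sigmoid -> (H, W)
    return x * att                                                   # broadcast over channels

# tiny example: one channel, 2x2 spatial map, zero gating signal
x = np.array([[[1.0, -1.0], [2.0, 0.0]]])
g = np.zeros((1, 2, 2))
w_x = np.array([[1.0]])
w_g = np.array([[0.0]])
psi = np.array([10.0])
out = attention_gate(x, g, w_x, w_g, psi)
```

Positions where the combined activation is weak are pushed towards attention 0.5 or below, damping irrelevant features, which is the behaviour the abstract credits for avoiding irrelevant feature learning.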
Affiliation(s)
- M Devanne
  - IRIMAS, University of Haute-Alsace, France
- J Weber
  - IRIMAS, University of Haute-Alsace, France
- C Truntzer
  - Platform of Transform in Biological Oncology, Dijon, France
- V Derangère
  - Platform of Transform in Biological Oncology, Dijon, France
- F Ghiringhelli
  - Platform of Transform in Biological Oncology, Dijon, France
- C Wemmert
  - ICube, University of Strasbourg, France
29. Shi P, Zhong J, Lin L, Lin L, Li H, Wu C. Nuclei segmentation of HE stained histopathological images based on feature global delivery connection network. PLoS One 2022; 17:e0273682. [PMID: 36107930 PMCID: PMC9477331 DOI: 10.1371/journal.pone.0273682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 08/12/2022] [Indexed: 11/22/2022] Open
Abstract
The analysis of pathological images, such as cell counting and nuclear morphological measurement, is an essential part of clinical histopathology research. Because of the diversity of uncertain cell boundaries after staining, automated nuclei segmentation of Hematoxylin-Eosin (HE) stained pathological images remains challenging. Although machine learning based segmentation strategies outperform most classic image processing methods, the majority still require manual labeling, which restricts further improvements in efficiency and accuracy. Aiming at stable and efficient high-throughput pathological image analysis, an automated Feature Global Delivery Connection Network (FGDC-net) is proposed for nuclei segmentation of HE stained images. First, training sample patches and their corresponding asymmetric labels are automatically generated based on a Full Mixup strategy from RGB to HSV color space. Second, to add connections between adjacent layers and perform feature selection, the FGDC module is designed by removing the skip connections between encoder and decoder commonly used in UNet-based segmentation networks; it learns the relationships between the channels in each layer and passes information selectively. Finally, a dynamic training strategy based on a mixed loss increases the generalization capability of the model through flexible epochs. The proposed improvements were verified by ablation experiments on multiple open databases and our own clinical meningioma dataset. Experimental results on multiple datasets showed that FGDC-net can effectively improve the segmentation of HE stained pathological images without manual intervention, and provide valuable references for clinical pathological analysis.
Affiliation(s)
- Peng Shi
  - College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
  - Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Jing Zhong
  - Department of Radiology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Liyan Lin
  - Department of Pathology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Lin Lin
  - Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Huachang Li
  - College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
  - Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Chongshu Wu
  - College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
  - Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
30. Liu Y, He Q, Duan H, Shi H, Han A, He Y. Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images. SENSORS (BASEL, SWITZERLAND) 2022; 22:6053. [PMID: 36015814 PMCID: PMC9414209 DOI: 10.3390/s22166053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/05/2022] [Accepted: 08/10/2022] [Indexed: 06/15/2023]
Abstract
Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only a small portion of the patches in an image is labeled as 'tumor' or 'normal'. The framework consists of a patch-wise segmentation model called PSeger and an innovative semi-supervised algorithm. PSeger has two branches, for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduces the risk of overfitting when learning from sparsely annotated data. We incorporate the ideas of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches for each labeled image), our proposed method achieved competitive performance compared to fully supervised pixel-wise segmentation models. Experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.
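The self-training component mentioned above can be illustrated with a generic, model-agnostic loop. Everything here is a stand-in for the paper's PSeger network: the nearest-centroid classifier in the example and the 0.8 confidence threshold are assumptions for demonstration only.

```python
import numpy as np

def self_training(x_lab, y_lab, x_unlab, fit, predict_proba,
                  conf=0.9, rounds=3):
    """Generic self-training: repeatedly fit on the labelled pool, then
    move unlabeled samples whose top predicted probability exceeds
    `conf` into the pool with their predicted (pseudo) labels."""
    x_pool, y_pool, rest = x_lab.copy(), y_lab.copy(), x_unlab.copy()
    for _ in range(rounds):
        model = fit(x_pool, y_pool)
        if len(rest) == 0:
            break
        proba = predict_proba(model, rest)        # (n, num_classes)
        keep = proba.max(axis=1) >= conf
        if not keep.any():
            break                                 # nothing confident left
        x_pool = np.concatenate([x_pool, rest[keep]])
        y_pool = np.concatenate([y_pool, proba[keep].argmax(axis=1)])
        rest = rest[~keep]
    return fit(x_pool, y_pool)

# toy "model": a nearest-centroid classifier on 1-D features
def fit(x, y):
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict_proba(centroids, x):                  # softmax over -distance
    d = -np.abs(x[:, None] - centroids[None, :])
    e = np.exp(d - d.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x_lab = np.array([0.0, 10.0])
y_lab = np.array([0, 1])
x_unlab = np.array([0.5, 1.0, 9.0, 9.5])
centroids = self_training(x_lab, y_lab, x_unlab, fit, predict_proba, conf=0.8)
```

After one round every unlabeled point is confidently pseudo-labeled, and the refit centroids move from the two seed points to the means of the enlarged clusters, which is the effect self-training aims for.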
Affiliation(s)
- Yiqing Liu
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Qiming He
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Hufei Duan
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Huijuan Shi
  - Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Anjia Han
  - Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Yonghong He
  - Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
31. Da Q, Huang X, Li Z, Zuo Y, Zhang C, Liu J, Chen W, Li J, Xu D, Hu Z, Yi H, Guo Y, Wang Z, Chen L, Zhang L, He X, Zhang X, Mei K, Zhu C, Lu W, Shen L, Shi J, Li J, S S, Krishnamurthi G, Yang J, Lin T, Song Q, Liu X, Graham S, Bashir RMS, Yang C, Qin S, Tian X, Yin B, Zhao J, Metaxas DN, Li H, Wang C, Zhang S. DigestPath: A benchmark dataset with challenge review for the pathological detection and segmentation of digestive-system. Med Image Anal 2022; 80:102485. [DOI: 10.1016/j.media.2022.102485] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2021] [Revised: 04/08/2022] [Accepted: 05/20/2022] [Indexed: 12/19/2022]
32. Kaseva T, Omidali B, Hippeläinen E, Mäkelä T, Wilppu U, Sofiev A, Merivaara A, Yliperttula M, Savolainen S, Salli E. Marker-controlled watershed with deep edge emphasis and optimized H-minima transform for automatic segmentation of densely cultivated 3D cell nuclei. BMC Bioinformatics 2022; 23:289. [PMID: 35864453 PMCID: PMC9306214 DOI: 10.1186/s12859-022-04827-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 06/07/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The segmentation of 3D cell nuclei is essential in many tasks, such as targeted molecular radiotherapies (MRT) for metastatic tumours, toxicity screening, and the observation of proliferating cells. In recent years, one popular method for automatic segmentation of nuclei has been the deep learning enhanced marker-controlled watershed transform, in which convolutional neural networks (CNNs) create the nuclei masks and markers, and the watershed algorithm performs the instance segmentation. We studied whether this method could be improved for the segmentation of densely cultivated 3D nuclei by developing multiple system configurations in which we examined the effects of edge-emphasizing CNNs and an optimized H-minima transform for mask and marker generation, respectively. RESULTS The dataset used for training and evaluation consisted of twelve in vitro cultivated, densely packed 3D human carcinoma cell spheroids imaged using a confocal microscope. With this dataset, the evaluation was performed using a cross-validation scheme. In addition, four independent datasets were used for evaluation. The datasets were resampled to near-isotropic resolution for our experiments. The baseline deep learning enhanced marker-controlled watershed obtained an average of 0.69 Panoptic Quality (PQ) and 0.66 Aggregated Jaccard Index (AJI) over the twelve spheroids. Using a system configuration that was otherwise the same but used 3D edge-emphasizing CNNs and an optimized H-minima transform, the scores increased to 0.76 and 0.77, respectively. When using the independent datasets for evaluation, the best-performing system configuration outperformed or equaled the baseline and a set of well-known cell segmentation approaches. CONCLUSIONS The use of edge-emphasizing U-Nets and an optimized H-minima transform can improve the marker-controlled watershed transform for segmentation of densely cultivated 3D cell nuclei. A novel dataset of twelve spheroids was made publicly available.
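The H-minima transform at the heart of the marker-generation step can be written as a morphological reconstruction by erosion of (f + h) above f. The naive iterative NumPy/SciPy version below is a sketch for intuition; the paper's contribution of optimizing h per dataset is not shown.

```python
import numpy as np
from scipy import ndimage as ndi

def h_minima_transform(f, h):
    """Suppress all regional minima of f whose depth is smaller than h,
    via morphological reconstruction by erosion of (f + h) above f."""
    marker = f + h
    while True:
        eroded = ndi.grey_erosion(marker, size=3)
        nxt = np.maximum(eroded, f)   # reconstruction stays above f
        if np.array_equal(nxt, marker):
            return marker
        marker = nxt

# toy 1-D profile: one deep minimum (depth 4) and one shallow one (depth 2)
f = np.array([5.0, 4, 1, 4, 5, 4, 3, 4, 5])
out = h_minima_transform(f, 3.0)
# shallow minimum is filled flat; deep minimum survives, raised by h
```

In a marker-controlled watershed, the markers are then taken as the regional minima of the returned surface, so a larger h merges shallow basins and yields fewer, more reliable markers.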
Affiliation(s)
- Tuomas Kaseva
  - HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Bahareh Omidali
  - Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Eero Hippeläinen
  - Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
  - HUS Medical Imaging Centre, Clinical Physiology and Nuclear Medicine, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Teemu Mäkelä
  - HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
  - Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Ulla Wilppu
  - HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Alexey Sofiev
  - HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
  - Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Arto Merivaara
  - Division of Pharmaceutical Biosciences, Faculty of Pharmacy, Centre for Drug Research, University of Helsinki, Helsinki, Finland
- Marjo Yliperttula
  - Division of Pharmaceutical Biosciences, Faculty of Pharmacy, Centre for Drug Research, University of Helsinki, Helsinki, Finland
- Sauli Savolainen
  - HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
  - Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Eero Salli
  - HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
33. Kuang X, Cheung JPY, Wong KYK, Lam WY, Lam CH, Choy RW, Cheng CP, Wu H, Yang C, Wang K, Li Y, Zhang T. Spine-GFlow: A hybrid learning framework for robust multi-tissue segmentation in lumbar MRI without manual annotation. Comput Med Imaging Graph 2022; 99:102091. [PMID: 35803034 DOI: 10.1016/j.compmedimag.2022.102091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Revised: 05/30/2022] [Accepted: 06/13/2022] [Indexed: 10/18/2022]
Abstract
Most learning-based magnetic resonance image (MRI) segmentation methods rely on the manual annotation to provide supervision, which is extremely tedious, especially when multiple anatomical structures are required. In this work, we aim to develop a hybrid framework named Spine-GFlow that combines the image features learned by a CNN model and anatomical priors for multi-tissue segmentation in a sagittal lumbar MRI. Our framework does not require any manual annotation and is robust against image feature variation caused by different image settings and/or underlying pathology. Our contributions include: 1) a rule-based method that automatically generates the weak annotation (initial seed area), 2) a novel proposal generation method that integrates the multi-scale image features and anatomical prior, 3) a comprehensive loss for CNN training that optimizes the pixel classification and feature distribution simultaneously. Our Spine-GFlow has been validated on 2 independent datasets: HKDDC (containing images obtained from 3 different machines) and IVDM3Seg. The segmentation results of vertebral bodies (VB), intervertebral discs (IVD), and spinal canal (SC) are evaluated quantitatively using intersection over union (IoU) and the Dice coefficient. Results show that our method, without requiring manual annotation, has achieved a segmentation performance comparable to a model trained with full supervision (mean Dice 0.914 vs 0.916).
Affiliation(s)
- Xihe Kuang
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Jason Pui Yin Cheung
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Kwan-Yee K Wong
  - Department of Computer Science, Faculty of Engineering, University of Hong Kong, Hong Kong, China
- Wai Yi Lam
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Chak Hei Lam
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Richard W Choy
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Honghan Wu
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
- Cao Yang
  - Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Kun Wang
  - Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Yang Li
  - School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- Teng Zhang
  - Department of Orthopaedics and Traumatology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong, China
34. Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Med Image Anal 2022; 80:102487. [PMID: 35671591 DOI: 10.1016/j.media.2022.102487] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 05/07/2022] [Accepted: 05/20/2022] [Indexed: 01/15/2023]
Abstract
Tissue-level semantic segmentation is a vital step in computational pathology. Fully supervised models have already achieved outstanding performance with dense pixel-level annotations. However, drawing such labels on giga-pixel whole slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, thereby reducing annotation effort. We propose a two-step model comprising a classification phase and a segmentation phase. In the classification phase, a CAM-based model generates pseudo masks from patch-level labels. In the segmentation phase, we achieve tissue semantic segmentation with our proposed Multi-Layer Pseudo-Supervision. Several technical novelties are introduced to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we introduce a new weakly-supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conduct several experiments to evaluate our proposed model on two datasets. Our proposed model outperforms five state-of-the-art WSSS approaches and achieves quantitative and qualitative results comparable to the fully supervised model, with only around a 2% gap in MIoU and FwIoU. Compared with manual labeling on a randomly sampled dataset of 100 patches, patch-level labeling greatly reduces the annotation time from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
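The CAM-to-pseudo-mask step described above can be sketched in NumPy: class activation maps are built from the global-average-pooling classifier's weights, and only the classes present in the patch-level label compete for pixels. The threshold, the nearest-neighbour upsampling, and all names are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def class_activation_map(features, fc_weights, cls, out_shape):
    """CAM for class `cls`: weight the (C, h, w) feature maps by the
    classifier weights, ReLU, normalise to [0, 1], and upsample
    (nearest neighbour) to the patch resolution."""
    cam = np.tensordot(fc_weights[cls], features, axes=1)  # (h, w)
    cam = np.maximum(cam, 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()
    rows = np.arange(out_shape[0]) * cam.shape[0] // out_shape[0]
    cols = np.arange(out_shape[1]) * cam.shape[1] // out_shape[1]
    return cam[np.ix_(rows, cols)]

def pseudo_mask(features, fc_weights, patch_labels, threshold=0.5,
                out_shape=(32, 32)):
    """Pixel-level pseudo mask from patch-level labels: only CAMs of the
    classes present in the patch compete; 0 means unassigned."""
    mask = np.zeros(out_shape, dtype=int)
    best = np.zeros(out_shape)
    for c in patch_labels:
        cam = class_activation_map(features, fc_weights, c, out_shape)
        take = (cam > threshold) & (cam > best)
        mask[take] = c + 1
        best[take] = cam[take]
    return mask

# toy patch: channel 0 fires on the left half, channel 1 on the right
feats = np.zeros((2, 4, 4))
feats[0, :, :2] = 1.0
feats[1, :, 2:] = 1.0
w = np.eye(2)        # linear classifier: class c reads channel c
m = pseudo_mask(feats, w, patch_labels=[0, 1], out_shape=(8, 8))
```

Restricting the competition to the patch's known classes is what lets a weak, image-level label constrain a dense pixel-level target.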
35. Lyu F, Ma AJ, Yip TCF, Wong GLH, Yuen PC. Weakly Supervised Liver Tumor Segmentation Using Couinaud Segment Annotation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1138-1149. [PMID: 34871168 DOI: 10.1109/tmi.2021.3132905] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Automatic liver tumor segmentation is of great importance for assisting doctors in liver cancer diagnosis and treatment planning. Recently, deep learning approaches trained with pixel-level annotations have contributed many breakthroughs in image segmentation. However, acquiring such accurate dense annotations is time-consuming and labor-intensive, which limits the performance of deep neural networks for medical image segmentation. We note that the Couinaud segment is widely used by radiologists when recording liver cancer-related findings in reports, since it is well suited for describing the localization of tumors. In this paper, we propose a novel approach to train convolutional networks for liver tumor segmentation using Couinaud segment annotations. Couinaud segment annotations are image-level labels with values ranging from 1 to 8, indicating a specific region of the liver. Our proposed model, CouinaudNet, can estimate pseudo tumor masks from the Couinaud segment annotations as pixel-wise supervision for training a fully supervised tumor segmentation model. It is composed of two components: 1) an inpainting network with Couinaud segment masks, which can effectively remove tumors from pathological images by filling the tumor regions with plausible healthy-looking intensities; and 2) a difference-spotting network for segmenting the tumors, trained with healthy-pathological pairs generated by an effective tumor synthesis strategy. The proposed method is extensively evaluated on two liver tumor segmentation datasets. The experimental results demonstrate that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort.
36. Multi-task generative adversarial learning for nuclei segmentation with dual attention and recurrent convolution. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103558] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
37. Kiran I, Raza B, Ijaz A, Khan MA. DenseRes-Unet: Segmentation of overlapped/clustered nuclei from multi organ histopathology images. Comput Biol Med 2022; 143:105267. [PMID: 35114445 DOI: 10.1016/j.compbiomed.2022.105267] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Revised: 01/23/2022] [Accepted: 01/23/2022] [Indexed: 11/16/2022]
Abstract
Cancer is the second deadliest disease globally and can affect any organ of the human body; early detection can increase the chances of survival. The morphometric appearance of histopathology images makes it difficult to segment nuclei effectively. We propose a model to segment overlapped nuclei from H&E stained images. The U-Net model has achieved state-of-the-art performance in many medical image segmentation tasks; here, we modify the U-Net to learn a distinct set of consistent features. In this paper, we propose the DenseRes-Unet model, which integrates dense blocks in the last layers of the encoder block of U-Net to focus on relevant features from previous layers of the model. Moreover, we take advantage of residual connections with Atrous blocks instead of conventional skip connections, which helps to reduce the semantic gap between the encoder and decoder paths. The distance map and binary threshold techniques intensify the nuclei interior and contour information in the images, respectively. The distance map is used to detect the center point of nuclei and to differentiate the nuclei interior boundary from the core area. However, the distance map lacks contour information, which we resolve by using a binary threshold that enhances the pixels around nuclei. Afterward, we feed the images into the proposed DenseRes-Unet model, a deep, fully convolutional network, to segment the nuclei. We have evaluated our model on four publicly available nuclei segmentation datasets to validate its performance. Our proposed model achieves 89.77% accuracy, 90.36% F1-score, and 78.61% Aggregated Jaccard Index (AJI) on the Multi Organ Nucleus Segmentation (MoNuSeg) dataset.
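The two auxiliary representations this abstract describes, a distance map that is bright at nucleus interiors/centres and a binary contour map, can be derived from an instance mask as follows. This is an illustrative reconstruction of the preprocessing idea, not the authors' code; the per-nucleus normalisation is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi

def nuclei_targets(instance_mask):
    """From a labelled nuclei mask, build (1) a per-nucleus normalised
    distance map, bright at centres and dark near boundaries, and (2) a
    binary contour map of the one-pixel rim of each nucleus."""
    fg = instance_mask > 0
    dist = ndi.distance_transform_edt(fg)
    comps, n = ndi.label(fg)
    for k in range(1, n + 1):        # normalise each nucleus to peak at 1
        region = comps == k
        peak = dist[region].max()
        if peak > 0:
            dist[region] /= peak
    contour = fg & ~ndi.binary_erosion(fg)
    return dist, contour

# one square 5x5 "nucleus" in a 9x9 image
mask = np.zeros((9, 9), dtype=int)
mask[2:7, 2:7] = 1
dist, contour = nuclei_targets(mask)
```

The distance map's per-nucleus peaks give centre-point candidates, while the contour map supplies exactly the boundary cue the distance map lacks, mirroring the complementary roles the abstract assigns to the two techniques.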
Affiliation(s)
- Iqra Kiran
  - Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Basit Raza
  - Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Areesha Ijaz
  - Department of Computer Science, COMSATS University Islamabad (CUI), Islamabad, Pakistan
- Muazzam A Khan
  - Department of Computer Science, Quaid-i-Azam University, Islamabad, Pakistan
38. Mougeot G, Dubos T, Chausse F, Péry E, Graumann K, Tatout C, Evans DE, Desset S. Deep learning -- promises for 3D nuclear imaging: a guide for biologists. J Cell Sci 2022; 135:275041. [PMID: 35420128 PMCID: PMC9016621 DOI: 10.1242/jcs.258986] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
For the past century, the nucleus has been the focus of extensive investigations in cell biology. However, many questions remain about how its shape and size are regulated during development, in different tissues, or during disease and aging. To track these changes, microscopy has long been the tool of choice. Image analysis has revolutionized this field of research by providing computational tools that can be used to translate qualitative images into quantitative parameters. Many tools have been designed to delimit objects in 2D and, eventually, in 3D in order to define their shapes, their number or their position in nuclear space. Today, the field is driven by deep-learning methods, most of which take advantage of convolutional neural networks. These techniques are remarkably adapted to biomedical images when trained using large datasets and powerful computer graphics cards. To promote these innovative and promising methods to cell biologists, this Review summarizes the main concepts and terminologies of deep learning. Special emphasis is placed on the availability of these methods. We highlight why the quality and characteristics of training image datasets are important and where to find them, as well as how to create, store and share image datasets. Finally, we describe deep-learning methods well-suited for 3D analysis of nuclei and classify them according to their level of usability for biologists. Out of more than 150 published methods, we identify fewer than 12 that biologists can use, and we explain why this is the case. Based on this experience, we propose best practices to share deep-learning methods with biologists.
Affiliation(s)
- Guillaume Mougeot
  - Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
  - Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Tristan Dubos
  - Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- Frédéric Chausse
  - Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Emilie Péry
  - Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Katja Graumann
  - Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Christophe Tatout
  - Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- David E Evans
  - Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Sophie Desset
  - Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
|
39
|
Wahab N, Miligy IM, Dodd K, Sahota H, Toss M, Lu W, Jahanifar M, Bilal M, Graham S, Park Y, Hadjigeorghiou G, Bhalerao A, Lashen AG, Ibrahim AY, Katayama A, Ebili HO, Parkin M, Sorell T, Raza SEA, Hero E, Eldaly H, Tsang YW, Gopalakrishnan K, Snead D, Rakha E, Rajpoot N, Minhas F. Semantic annotation for computational pathology: multidisciplinary experience and best practice recommendations. JOURNAL OF PATHOLOGY CLINICAL RESEARCH 2022; 8:116-128. [PMID: 35014198 PMCID: PMC8822374 DOI: 10.1002/cjp2.256] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 11/25/2021] [Accepted: 12/10/2021] [Indexed: 02/06/2023]
Abstract
Recent advances in whole‐slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence‐based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilise information embedded in pathology WSIs beyond what can be obtained through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue, and cellular levels are required. The annotation of important visual constructs in pathology images is an important component of CPath projects. Improper annotations can result in algorithms that are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well‐defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large‐scale annotation exercise involving a multidisciplinary team of pathologists, ML experts, and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real‐world case study along with examples of different types of annotations, diagnostic algorithm, annotation data dictionary, and annotation constructs. The analyses reported in this work highlight best practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.
Affiliation(s)
- Noorul Wahab
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Islam M Miligy
- Pathology, University of Nottingham, Nottingham, UK; Department of Pathology, Faculty of Medicine, Menoufia University, Shebin El-Kom, Egypt
- Katherine Dodd
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Harvir Sahota
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Michael Toss
- Pathology, University of Nottingham, Nottingham, UK
- Wenqi Lu
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Mohsin Bilal
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Simon Graham
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Young Park
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Abhir Bhalerao
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Ayaka Katayama
- Graduate School of Medicine, Gunma University, Maebashi, Japan
- Tom Sorell
- Department of Politics and International Studies, University of Warwick, Coventry, UK
- Emily Hero
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK; Leicester Royal Infirmary, Histopathology, University Hospitals Leicester, Leicester, UK
- Hesham Eldaly
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- David Snead
- Histopathology, University Hospital Coventry and Warwickshire, Coventry, UK
- Emad Rakha
- Pathology, University of Nottingham, Nottingham, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Fayyaz Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK

40
Weakly-supervised learning for catheter segmentation in 3D frustum ultrasound. Comput Med Imaging Graph 2022; 96:102037. [DOI: 10.1016/j.compmedimag.2022.102037] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 11/15/2021] [Accepted: 01/13/2022] [Indexed: 11/21/2022]
41
Zhao L, Xu X, Hou R, Zhao W, Zhong H, Teng H, Han Y, Fu X, Sun J, Zhao J. Lung cancer subtype classification using histopathological images based on weakly supervised multi-instance learning. Phys Med Biol 2021; 66. [PMID: 34794136 DOI: 10.1088/1361-6560/ac3b32] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Accepted: 11/18/2021] [Indexed: 11/12/2022]
Abstract
Objective. Subtype classification plays a guiding role in the clinical diagnosis and treatment of non-small-cell lung cancer (NSCLC). However, due to the gigapixel scale of whole slide images (WSIs) and the absence of definitive morphological features, most automatic subtype classification methods for NSCLC require manually delineating the regions of interest (ROIs) on WSIs. Approach. In this paper, a weakly supervised framework is proposed for accurate subtype classification while freeing pathologists from pixel-level annotation. With respect to the characteristics of histopathological images, we design a two-stage structure with ROI localization and subtype classification. We first develop a method called multi-resolution expectation-maximization convolutional neural network (MR-EM-CNN) to locate ROIs for subsequent subtype classification. The EM algorithm is introduced to select the discriminative image patches for training a patch-wise network, with only WSI-wise labels available. A multi-resolution mechanism is designed for fine localization, similar to the coarse-to-fine process of manual pathological analysis. In the second stage, we build a novel hierarchical attention multi-scale network (HMS) for subtype classification. HMS can capture multi-scale features flexibly, driven by the attention module, and implement hierarchical feature interaction. Results. Experimental results on the 1002-patient Cancer Genome Atlas dataset achieved an AUC of 0.9602 in ROI localization and an AUC of 0.9671 for subtype classification. Significance. The proposed method shows superiority compared with other algorithms in the subtype classification of NSCLC. The proposed framework can also be extended to other classification tasks with WSIs.
Affiliation(s)
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Xiaowei Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Runping Hou
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Wangyuan Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Hai Zhong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Haohua Teng
- Department of Pathology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Yuchen Han
- Department of Pathology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China

42
Zhou X, Gu M, Cheng Z. Local Integral Regression Network for Cell Nuclei Detection. ENTROPY 2021; 23:e23101336. [PMID: 34682060 PMCID: PMC8535160 DOI: 10.3390/e23101336] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Accepted: 10/07/2021] [Indexed: 11/16/2022]
Abstract
Nuclei detection is a fundamental task in the field of histopathology image analysis and remains challenging due to cellular heterogeneity. Recent studies explore convolutional neural networks to either isolate them with sophisticated boundaries (segmentation-based methods) or locate the centroids of the nuclei (counting-based approaches). Although these two methods have demonstrated superior success, their fully supervised training demands considerable and laborious pixel-wise annotations manually labeled by pathology experts. To alleviate such tedious effort and reduce the annotation cost, we propose a novel local integral regression network (LIRNet) that allows both fully and weakly supervised learning (FSL/WSL) frameworks for nuclei detection. Furthermore, the LIRNet can output an exquisite density map of nuclei, in which the localization of each nucleus is barely affected by the post-processing algorithms. The quantitative experimental results demonstrate that the FSL version of the LIRNet achieves a state-of-the-art performance compared to other counterparts. In addition, the WSL version has exhibited a competitive detection performance and an effortless data annotation that requires only 17.5% of the annotation effort.
43
Zhu X, Chen J, Zeng X, Liang J, Li C, Liu S, Behpour S, Xu M. Weakly Supervised 3D Semantic Segmentation Using Cross-Image Consensus and Inter-Voxel Affinity Relations. PROCEEDINGS. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION 2021; 2021:2814-2824. [PMID: 35350748 DOI: 10.1109/iccv48922.2021.00283] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We propose a novel weakly supervised approach for 3D semantic segmentation on volumetric images. Unlike most existing methods that require voxel-wise densely labeled training data, our weakly-supervised CIVA-Net is the first model that only needs image-level class labels as guidance to learn accurate volumetric segmentation. Our model learns from cross-image co-occurrence for integral region generation, and explores inter-voxel affinity relations to predict segmentation with accurate boundaries. We empirically validate our model on both simulated and real cryo-ET datasets. Our experiments show that CIVA-Net achieves comparable performance to the state-of-the-art models trained with stronger supervision.
Affiliation(s)
- Min Xu
- Carnegie Mellon University

44
Zhao T, Yin Z. Weakly Supervised Cell Segmentation by Point Annotation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2736-2747. [PMID: 33347404 DOI: 10.1109/tmi.2020.3046292] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
We propose weakly supervised training schemes to train end-to-end cell segmentation networks that only require a single point annotation per cell as the training label and generate a high-quality segmentation mask close to those fully supervised methods using mask annotation on cells. Three training schemes are investigated to train cell segmentation networks, using the point annotation. First, self-training is performed to learn additional information near the annotated points. Next, co-training is applied to learn more cell regions using multiple networks that supervise each other. Finally, a hybrid-training scheme is proposed to leverage the advantages of both self-training and co-training. During the training process, we propose a divergence loss to avoid the overfitting and a consistency loss to enforce the consensus among multiple co-trained networks. Furthermore, we propose weakly supervised learning with human in the loop, aiming at achieving high segmentation accuracy and annotation efficiency simultaneously. Evaluated on two benchmark datasets, our proposal achieves high-quality cell segmentation results comparable to the fully supervised methods, but with much less amount of human annotation effort.
45
Zhang S, Yuan Z, Wang Y, Bai Y, Chen B, Wang H. REUR: A unified deep framework for signet ring cell detection in low-resolution pathological images. Comput Biol Med 2021; 136:104711. [PMID: 34388466 DOI: 10.1016/j.compbiomed.2021.104711] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Revised: 07/28/2021] [Accepted: 07/29/2021] [Indexed: 11/15/2022]
Abstract
Detecting signet ring cells (SRCs) in pathological images is essential for carcinoma diagnosis. However, it is time consuming for pathologists to detect SRCs manually from pathological images, and the accuracy of detecting them is also relatively low because of their small sizes. Recently, the exploration of deep learning methods in pathology analysis has been widely investigated by researchers. Nevertheless, the automatic detection of SRCs from real pathological images faces two problems. One is that labeled pathological images are insufficient and usually incomplete. The other is that the training data and the real clinical data have a large difference in resolution. Hence, adopting the transfer learning method affects the performance of deep learning methods. To address these two problems, we present a unified framework named REUR [RetinaNet combining USRNet (unfolding super-resolution network) with the RGHMC (revised gradient harmonizing mechanism classification) loss] that can accurately detect SRCs in low-resolution (LR) pathological images. First, the framework with the super-resolution (SR) module can address the difference in resolution between the training data and the real clinical data. Second, the framework with the label correction module can obtain the revised ground-truth labels from noisy examples, which are embedded into the gradient harmonizing mechanism to acquire the RGHMC loss. The results of the numerical experiments showed that the framework can perform better than other one-stage detectors based on the RetinaNet architecture in the high-resolution (HR) noisy dataset. It achieved a kappa value of 0.74 and an accuracy of 0.89 in the test with 27 randomly selected whole slide images (WSIs), and, thus, it can assist pathologists in better analyzing WSIs. The framework provides an essential method in computer-aided diagnosis for medical applications.
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yang Bai
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China

46
Rashmi R, Prasad K, Udupa CBK. Multi-channel Chan-Vese model for unsupervised segmentation of nuclei from breast histopathological images. Comput Biol Med 2021; 136:104651. [PMID: 34333226 DOI: 10.1016/j.compbiomed.2021.104651] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 07/13/2021] [Accepted: 07/13/2021] [Indexed: 11/28/2022]
Abstract
The pathologist determines the malignancy of a breast tumor by studying histopathological images. In particular, the characteristics and distribution of nuclei contribute greatly to the decision process. Hence, the segmentation of nuclei constitutes a crucial task in the classification of breast histopathological images. Manual analysis of these images is subjective, tedious and susceptible to human error. Consequently, the development of computer-aided diagnostic systems for analysing these images has become a vital factor in the domain of medical imaging. However, the usage of medical image processing techniques to segment nuclei is challenging due to the diverse structure of the cells, poor staining process, the occurrence of artifacts, etc. Although supervised computer-aided systems for nuclei segmentation are popular, they depend on the availability of standard annotated datasets. In this regard, this work presents an unsupervised method based on the Chan-Vese model to segment nuclei from breast histopathological images. The proposed model utilizes multi-channel color information to efficiently segment the nuclei. This study also proposes a pre-processing step to select an appropriate color channel that discriminates nuclei from the background region. An extensive evaluation of the proposed model on two challenging datasets demonstrates its validity and effectiveness.
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Chethana Babu K Udupa
- Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India

47
Cha JY, Yoon HI, Yeo IS, Huh KH, Han JS. Peri-Implant Bone Loss Measurement Using a Region-Based Convolutional Neural Network on Dental Periapical Radiographs. J Clin Med 2021; 10:1009. [PMID: 33801384 PMCID: PMC7958615 DOI: 10.3390/jcm10051009] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 02/23/2021] [Accepted: 02/24/2021] [Indexed: 01/06/2023] Open
Abstract
Determining the peri-implant marginal bone level on radiographs is challenging because the boundaries of the bones around implants are often unclear or the heights of the buccal and lingual bone levels are different. Therefore, a deep convolutional neural network (CNN) was evaluated for detecting the marginal bone level, top, and apex of implants on dental periapical radiographs. An automated assistant system was proposed for calculating the bone loss percentage and classifying the bone resorption severity. A modified region-based CNN (R-CNN) was trained using transfer learning based on Microsoft Common Objects in Context dataset. Overall, 708 periapical radiographic images were divided into training (n = 508), validation (n = 100), and test (n = 100) datasets. The training dataset was randomly enriched by data augmentation. For evaluation, average precision, average recall, and mean object keypoint similarity (OKS) were calculated, and the mean OKS values of the model and a dental clinician were compared. Using detected keypoints, radiographic bone loss was measured and classified. No statistically significant difference was found between the modified R-CNN model and dental clinician for detecting landmarks around dental implants. The modified R-CNN model can be utilized to measure the radiographic peri-implant bone loss ratio to assess the severity of peri-implantitis.
Affiliation(s)
- Jun-Young Cha
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Hyung-In Yoon
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- In-Sung Yeo
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea
- Jung-Suk Han
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Daehak-ro 101, Jongro-gu, Seoul 03080, Korea

48
Dou Y, Tsai YH, Liu CC, Hobson BA, Lein PJ. Co-localization of fluorescent signals using deep learning with Manders overlapping coefficient. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2021; 11596:115963C. [PMID: 34305257 PMCID: PMC8301216 DOI: 10.1117/12.2580650] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Object-based co-localization of fluorescent signals allows the assessment of interactions between two (or more) biological entities using spatial information. It relies on object identification with high accuracy to separate fluorescent signals from the background. Object detectors using convolutional neural networks (CNNs) with annotated training samples could facilitate the process by detecting and counting fluorescent-labeled cells from fluorescence photomicrographs. However, datasets containing segmented annotations of colocalized cells are generally not available, and creating a new dataset with delineated masks is label-intensive. Also, the co-localization coefficient is often not used as a component during training with the CNN model, yet it may aid with localizing and detecting objects during training and testing. In this work, we propose to address these issues by using a quantification coefficient for co-localization called the Manders overlapping coefficient (MOC) as a single-layer branch in a CNN. A fully convolutional one-stage detector (FCOS) with a ResNet101 backbone served as the network to evaluate the effectiveness of the novel branch in assisting with bounding box prediction. Training data were sourced from lab-curated fluorescence images of neurons from the rat hippocampus, piriform cortex, somatosensory cortex, and amygdala. Results suggest that the modified FCOS with MOC outperformed the original FCOS model in accuracy of detecting fluorescence signals by 1.1% in mean average precision (mAP). The model can be downloaded from https://github.com/Alphafrey946/Colocalization-MOC.
Affiliation(s)
- Yimeng Dou
- UW-Madison, Department of Biostatistics and Medical Informatics, Madison, Wisconsin, United States
- UC Davis School of Veterinary Medicine, Department of Molecular Biosciences, Davis, California, United States
- Yi-Hua Tsai
- UC Davis School of Veterinary Medicine, Department of Molecular Biosciences, Davis, California, United States
- Chih-Chieh Liu
- UC Davis, Department of Biomedical Engineering, Davis, California, United States
- Brad A. Hobson
- UC Davis, Center for Molecular and Genomic Imaging, Davis, California, United States
- Pamela J. Lein
- UC Davis School of Veterinary Medicine, Department of Molecular Biosciences, Davis, California, United States

49
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/26/2020] [Accepted: 08/01/2020] [Indexed: 02/07/2023] Open
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. Also in biology and medicine, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view at the past, present, and future developments of deep learning, starting from science at large, to biomedical imaging, and bioimage analysis in particular.
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia