1
Pan X, Song S, Liu Z, Wang H, Li L, Lu H, Lan R, Luo X. Weakly supervised nuclei segmentation based on pseudo label correction and uncertainty denoising. Artif Intell Med 2025;164:103113. doi: 10.1016/j.artmed.2025.103113. PMID: 40174353.
Abstract
Nuclei segmentation plays a vital role in computer-aided histopathology image analysis. Numerous fully supervised learning approaches achieve impressive performance, but they rely on pathological images with precise annotations, and accurate manual labeling of pathological images is difficult and time-consuming. Hence, this paper presents a two-stage weakly supervised model, comprising coarse and fine phases, that achieves nuclei segmentation on whole slide images using only point annotations. In the coarse segmentation step, Voronoi diagrams and K-means clustering results are generated from the point annotations to supervise the training network. To cope with differing imaging conditions, an image-adaptive clustering pseudo-label algorithm is proposed to adapt to the color distribution of different images. A Multi-scale Feature Fusion (MFF) module is designed in the decoder to better fuse the feature outputs. Additionally, to reduce the interference of erroneous cluster labels, an Exponential Moving Average for cluster label Correction (EMAC) strategy is proposed. After the first step, an uncertainty-estimation pseudo-label denoising strategy is introduced to denoise the Voronoi diagram and adaptive cluster labels. In the fine segmentation step, the optimized labels are used for training to obtain the final predicted probability map. Extensive experiments on the MoNuSeg and TNBC public benchmarks demonstrate that the proposed method is superior to other existing nuclei segmentation methods based on point labels. Code is available at: https://github.com/SSL-droid/WNS-PLCUD.
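The coarse supervision described above, a Voronoi partition plus color clustering derived from point annotations, can be sketched in a few lines. The function names and the plain numpy K-means here are our illustration, not the authors' released code:

```python
import numpy as np

# Illustrative sketch of the coarse pseudo-label step: a Voronoi partition from
# point annotations plus a plain K-means over pixel colors. Function names and
# the simplified K-means are our own, not the paper's implementation.
def voronoi_partition(shape, points):
    """Assign each pixel of an (h, w) grid to its nearest annotated point."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - np.asarray(points, float)[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1).reshape(h, w)

def kmeans_color_labels(image, k=3, iters=10, seed=0):
    """Cluster RGB values; the cluster map serves as a coarse fg/bg pseudo label."""
    rng = np.random.default_rng(seed)
    pix = image.reshape(-1, 3).astype(float)
    centers = pix[rng.choice(len(pix), size=k, replace=False)]
    for _ in range(iters):
        labels = ((pix[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pix[labels == j].mean(0)
    return labels.reshape(image.shape[:2])
```

An image-adaptive variant, as the abstract describes, would re-estimate the cluster centers per image rather than fixing them across the dataset.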
Affiliation(s)
- Xipeng Pan
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
- Shilong Song
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
- Zhenbing Liu
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
- Huadeng Wang
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
- Lingqiao Li
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
- Haoxiang Lu
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, Guangdong, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangzhou, 510080, Guangdong, China
- Rushi Lan
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China; International Joint Research Laboratory of Spatio-temporal Information and Intelligent Location Services, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
- Xiaonan Luo
- Guangxi Key Laboratory of Image and Graphic Intelligent Processing, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China; International Joint Research Laboratory of Spatio-temporal Information and Intelligent Location Services, Guilin University of Electronic Technology, Guilin, 541004, Guangxi, China
2
Xu J, Shi L, Li S, Zhang Y, Zhao G, Shi Y, Li J, Gao Y. PointFormer: Keypoint-Guided Transformer for Simultaneous Nuclei Segmentation and Classification in Multi-Tissue Histology Images. IEEE Trans Image Process 2025;34:2883-2895. doi: 10.1109/TIP.2025.3565184. PMID: 40323744.
Abstract
Automatic nuclei segmentation and classification (NSC) is a fundamental prerequisite in digital pathology analysis, as it enables the quantification of biomarkers and histopathological features for precision medicine. Nuclei are small; however, their global spatial distribution, brightness contrast, and the color correlation between nucleus and background have been recognized as key cues for accurate nuclei segmentation in clinical practice. Although Transformer-based methods have achieved great breakthroughs in medical image segmentation, their adaptability to segmenting and classifying nuclei in histopathological images is rarely investigated. Moreover, severe nuclei overlap and large intra-class variability are common in clinical data. Prevailing methods based on polygonal representations or distance maps are limited by empirically designed post-processing strategies, resulting in ineffective segmentation of large, irregular nucleus instances. To address these challenges, we propose a keypoint-guided tri-decoder Transformer (PointFormer) for simultaneous NSC. Specifically, the overall NSC task is decoupled into a multi-task learning problem, where a tri-decoder structure decodes nucleus instances, edges, and types, respectively. The nuclei detection and classification (NDC) subtask is reformulated as a semantic keypoint estimation problem. Meanwhile, a novel attention-guiding strategy is introduced to capture strong inter-branch correlations and mitigate inconsistencies between multi-decoder predictions. Finally, a multi-local perception module is designed as the base building block of PointFormer to achieve a local-global trade-off and reduce model complexity. Comprehensive quantitative and qualitative experiments on three datasets of different volumes demonstrate the superiority of the proposed method over prevalent methods, especially on the PanNuke dataset, where it achieves 70.6% bPQ.
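Reformulating detection as semantic keypoint estimation typically means regressing Gaussian heatmaps centred on annotated nuclei. A minimal sketch of such a target follows (our illustration; the paper's exact encoding may differ):

```python
import numpy as np

# Build a Gaussian keypoint heatmap target: one peak per annotated nucleus,
# merged with an element-wise max. Illustrative only, not PointFormer's code.
def keypoint_heatmap(shape, points, sigma=2.0):
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for r, c in points:
        g = np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2.0 * sigma ** 2))
        heat = np.maximum(heat, g)  # keep the strongest peak where nuclei overlap
    return heat
```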
3
Lin J, Wang H, Li D, Wang J, Zhao B, Shi Z, Liang C, Han G, Liang L, Liu Z, Han C. Rethinking mitosis detection: Towards diverse data and feature representation for better domain generalization. Artif Intell Med 2025;163:103097. doi: 10.1016/j.artmed.2025.103097. PMID: 40049058.
Abstract
Mitosis detection is one of the fundamental tasks in computational pathology and is extremely challenging due to the heterogeneity of mitotic cells. Most current studies address this heterogeneity from a technical perspective by increasing model complexity. However, neglecting biological knowledge and relying on complex model designs may lead to overfitting and limit the generalizability of the detection model. In this paper, we systematically study the morphological appearances of different mitotic phases as well as ambiguous non-mitotic cells, and find that balancing data and feature diversity achieves better generalizability. Based on this observation, we propose a novel generalizable framework (MitDet) for mitosis detection. Data diversity is handled by the proposed diversity-guided sample balancing (DGSB), and feature diversity is preserved by an inter- and intra-class feature diversity-preserving module (InCDP). A stain enhancement (SE) module is introduced to enhance the domain-relevant diversity of both data and features simultaneously. Extensive experiments demonstrate that our model outperforms all state-of-the-art (SOTA) approaches on several popular mitosis detection datasets, on both internal and unseen test sets, using point annotations only. Comprehensive ablation studies further confirm the effectiveness of this rethinking of data and feature diversity balancing. Analyzing the results quantitatively and qualitatively, we believe the proposed model not only achieves SOTA performance but may also inspire future studies from new perspectives. Code is available at https://github.com/linjiatai/MitDet.
Affiliation(s)
- Jiatai Lin
- Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Sciences, Guangzhou 510080, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Hao Wang
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Danyi Li
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou 510515, China
- Jing Wang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, China
- Bingchao Zhao
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Zhenwei Shi
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Changhong Liang
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China; School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Guoqiang Han
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Li Liang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou 510515, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Chu Han
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
4
Li B, Liu Z, Zhang S, Liu X, Sun C, Liu J, Qiu B, Tian J. NuHTC: A hybrid task cascade for nuclei instance segmentation and classification. Med Image Anal 2025;103:103595. doi: 10.1016/j.media.2025.103595. PMID: 40294567.
Abstract
Nuclei instance segmentation and classification of hematoxylin and eosin (H&E) stained digital pathology images are essential for downstream cancer diagnosis and prognosis tasks. Previous works mainly focused on bottom-up methods that use a single-level feature map to segment nuclei instances, whereas multilevel feature maps appear better suited to nuclei of various sizes and types. In this paper, we develop an effective top-down nuclei instance segmentation and classification framework (NuHTC) based on a hybrid task cascade (HTC). NuHTC has two new components: a watershed proposal network (WSPN) and a hybrid feature extractor (HFE). The WSPN provides additional proposals for the region proposal network, leading the model to predict bounding boxes more precisely. The HFE, at the region of interest (RoI) alignment stage, better utilizes both high-level global and low-level semantic features, guiding NuHTC to learn nuclei instance features with less intraclass variance. We conduct extensive experiments on four public multiclass nuclei instance segmentation datasets. The quantitative results demonstrate NuHTC's superiority in both instance segmentation and classification compared to other state-of-the-art methods.
Affiliation(s)
- Bao Li
- Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui 230026, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zhenyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Song Zhang
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- Xiangyu Liu
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Caixia Sun
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, 100191, China
- Jiangang Liu
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, 100191, China
- Bensheng Qiu
- Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui 230026, China
- Jie Tian
- Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui 230026, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology, Beijing, 100191, China
5
Guan B, Chu G, Wang Z, Li J, Yi B. Instance-level semantic segmentation of nuclei based on multimodal structure encoding. BMC Bioinformatics 2025;26:42. doi: 10.1186/s12859-025-06066-8. PMID: 39915737; PMCID: PMC11804060.
Abstract
BACKGROUND: Accurate segmentation and classification of cell nuclei are crucial for histopathological image analysis. However, existing deep neural network-based methods often struggle to capture complex morphological features and global spatial distributions of cell nuclei due to their reliance on local receptive fields.
METHODS: This study proposes a graph neural structure encoding framework based on a vision-language model. The framework incorporates: (1) a multi-scale feature fusion and knowledge distillation module utilizing the Contrastive Language-Image Pre-training (CLIP) model's image encoder; (2) a method to transform morphological features of cells into textual descriptions for semantic representation; and (3) a graph neural network approach to learn spatial relationships and contextual information between cell nuclei.
RESULTS: Experimental results demonstrate that the proposed method significantly improves the accuracy of cell nucleus segmentation and classification compared to existing approaches. The framework effectively captures complex nuclear structures and global distribution features, leading to enhanced performance in histopathological image analysis.
CONCLUSIONS: By deeply mining the morphological features of cell nuclei and their spatial topological relationships, our graph neural structure encoding framework achieves high-precision nuclear segmentation and classification. This approach shows significant potential for enhancing histopathological image analysis, potentially leading to more accurate diagnoses and improved understanding of cellular structures in pathological tissues.
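The spatial-relationship learning in the methods starts from a graph over nucleus centroids. A common construction, sketched here with assumed names rather than the paper's API, is a symmetric k-nearest-neighbour adjacency that a graph neural network then message-passes over:

```python
import numpy as np

# Illustrative sketch: build a symmetric k-NN adjacency over nucleus centroids.
# A GNN would treat rows as nodes and True entries as edges for message passing.
def knn_adjacency(centroids, k=4):
    """centroids: (N, 2) array of (row, col); returns a boolean (N, N) matrix."""
    pts = np.asarray(centroids, dtype=float)
    n = len(pts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    nn = np.argsort(d2, axis=1)[:, :k]    # k nearest neighbours per nucleus
    adj = np.zeros((n, n), dtype=bool)
    adj[np.repeat(np.arange(n), k), nn.ravel()] = True
    return adj | adj.T                    # symmetrize the directed k-NN edges
```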
Affiliation(s)
- Bo Guan
- Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China
- Guangdi Chu
- Department of Urology, The Affiliated Hospital of Qingdao University, Qingdao, 266000, China
- Ziying Wang
- Department of Medicine, Qingdao University, Qingdao, 266000, China
- Jianmin Li
- Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China
- Bo Yi
- Department of General Surgery, Third Xiangya Hospital, Central South University, Changsha, 410013, China
6
Ma X, Huang J, Long M, Li X, Ye Z, Hu W, Yalikun Y, Wang D, Hu T, Mei L, Lei C. CellSAM: Advancing Pathologic Image Cell Segmentation via Asymmetric Large-Scale Vision Model Feature Distillation Aggregation Network. Microsc Res Tech 2025;88:501-515. doi: 10.1002/jemt.24716. PMID: 39440549.
Abstract
Segment Anything Model (SAM) has attracted extensive interest as a potent large-scale image segmentation model, with prior efforts adapting it for use in medical imaging. However, the precise segmentation of cell nucleus instances remains a formidable challenge in computational pathology, given substantial morphological variations and the dense clustering of nuclei with unclear boundaries. This study presents an innovative cell segmentation algorithm named CellSAM, which has the potential to improve the effectiveness and precision of disease identification and therapy planning. As a variant of SAM, CellSAM integrates dual image encoders and employs techniques such as knowledge distillation and mask fusion. This model exhibits promising capabilities in capturing intricate cell structures while remaining adaptable to resource-constrained scenarios. The experimental results indicate that this structure effectively enhances the quality and precision of cell segmentation, and CellSAM demonstrates outstanding results even with minimal training data. On specific cell segmentation tasks, extensive comparative analyses show that CellSAM outperforms both general foundation models and state-of-the-art (SOTA) task-specific models, with scores of 0.884, 0.876, and 0.768 for mean accuracy, recall, and precision, respectively. Extensive experiments show that CellSAM excels at capturing subtle details and complex structures and can segment cells in images accurately. Additionally, CellSAM performs well on clinical data, indicating its potential for robust applications in treatment planning and disease diagnosis, thereby further improving the efficiency of computer-aided medicine.
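Knowledge distillation, one of the techniques CellSAM integrates, trains a compact student to match a larger teacher's softened outputs. The following is a generic Hinton-style KD loss as a sketch, not CellSAM's specific recipe:

```python
import numpy as np

# Generic knowledge-distillation loss: KL divergence between temperature-
# softened softmax outputs of teacher and student, scaled by T^2.
# Standard textbook form, offered only as an illustration of the technique.
def distillation_loss(student_logits, teacher_logits, T=2.0):
    def softened(x):
        z = x / T
        z = z - z.max(axis=-1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)
    p = softened(np.asarray(teacher_logits, float))  # teacher distribution
    q = softened(np.asarray(student_logits, float))  # student distribution
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1)
    return float(kl.mean() * T * T)
```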
Affiliation(s)
- Xiao Ma
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Jin Huang
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Mengping Long
- The Institute of Technological Sciences, Wuhan University, Wuhan, China; Department of Pathology, Peking University Cancer Hospital, Beijing, China
- Xiaoxiao Li
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Zhaoyi Ye
- School of Computer Science, Hubei University of Technology, Wuhan, China
- Wanting Hu
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Yaxiaer Yalikun
- Division of Materials Science, Nara Institute of Science and Technology, Nara, Japan
- Du Wang
- The Institute of Technological Sciences, Wuhan University, Wuhan, China
- Taobo Hu
- The Institute of Technological Sciences, Wuhan University, Wuhan, China; Department of Breast Surgery, Peking University People's Hospital, Beijing, China
- Liye Mei
- The Institute of Technological Sciences, Wuhan University, Wuhan, China; School of Computer Science, Hubei University of Technology, Wuhan, China
- Cheng Lei
- The Institute of Technological Sciences, Wuhan University, Wuhan, China; Suzhou Institute of Wuhan University, Suzhou, China; Shenzhen Institute of Wuhan University, Shenzhen, China
7
Xue Y, Hu Y, Yao Y, Huang J, Wang H, He J. MSRMMP: Multi-scale residual module and multi-layer pseudo-supervision for weakly supervised segmentation of histopathological images. Med Eng Phys 2025;136:104284. doi: 10.1016/j.medengphy.2025.104284. PMID: 39979013.
Abstract
Accurate semantic segmentation of histopathological images plays a crucial role in accurate cancer diagnosis. While fully supervised learning models have shown outstanding performance in this field, their annotation cost is extremely high. Weakly Supervised Semantic Segmentation (WSSS) reduces annotation costs by using image-level labels. However, WSSS models that rely on Class Activation Maps (CAM) focus only on the most salient parts of the image, which is problematic for segmentation tasks involving multiple targets. We propose a two-stage weakly supervised segmentation framework (MSRMMP) to resolve these problems: pseudo-mask generation based on a multi-scale residual network (MSR-Net) and semantic segmentation based on multi-layer pseudo-supervision. MSR-Net fully captures the local features of an image through a multi-scale residual module (MSRM) and generates pseudo masks using only image-level labels. Additionally, we employ TransUNet as the segmentation backbone and use multi-layer pseudo-supervision to mitigate pseudo-mask inaccuracy. Experiments on two publicly available histopathology image datasets show that our method outperforms other state-of-the-art weakly supervised semantic segmentation methods, surpasses a fully supervised model in mIoU, and achieves comparable fwIoU. Compared with manual labeling, our model reduces labeling time from hours to minutes.
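The CAM such WSSS pipelines start from is simply the classifier-weighted sum of the final convolutional feature maps. A minimal sketch with illustrative names:

```python
import numpy as np

# Minimal class activation map (CAM): weight the conv feature maps by the
# classifier's weights for one class, then min-max normalize to [0, 1].
# Names and shapes here are our illustration, not the paper's code.
def class_activation_map(features, fc_weights, class_idx):
    """features: (C, H, W) conv features; fc_weights: (num_classes, C)."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # sum over C -> (H, W)
    cam = cam - cam.min()
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```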
Affiliation(s)
- Yuanchao Xue
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
- Yangsheng Hu
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
- Yu Yao
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
- Jie Huang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
- Haitao Wang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China
- Jianfeng He
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650504, China; School of Physics and Electronic Engineering, Yuxi Normal University, Yuxi 653100, Yunnan, China
8
Krikid F, Rositi H, Vacavant A. State-of-the-Art Deep Learning Methods for Microscopic Image Segmentation: Applications to Cells, Nuclei, and Tissues. J Imaging 2024;10:311. doi: 10.3390/jimaging10120311. PMID: 39728208.
Abstract
Microscopic image segmentation (MIS) is a fundamental task in medical imaging and biological research, essential for precise analysis of cellular structures and tissues. Despite its importance, the segmentation process encounters significant challenges, including variability in imaging conditions, complex biological structures, and artefacts (e.g., noise), which can compromise the accuracy of traditional methods. The emergence of deep learning (DL) has catalyzed substantial advancements in addressing these issues. This systematic literature review (SLR) provides a comprehensive overview of state-of-the-art DL methods developed over the past six years for the segmentation of microscopic images. We critically analyze key contributions, emphasizing how these methods specifically tackle challenges in cell, nucleus, and tissue segmentation. Additionally, we evaluate the datasets and performance metrics employed in these studies. By synthesizing current advancements and identifying gaps in existing approaches, this review not only highlights the transformative potential of DL in enhancing diagnostic accuracy and research efficiency but also suggests directions for future research. The findings of this study have significant implications for improving methodologies in medical and biological applications, ultimately fostering better patient outcomes and advancing scientific understanding.
Affiliation(s)
- Fatma Krikid
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
- Hugo Rositi
- LORIA, CNRS, Université de Lorraine, F-54000 Nancy, France
- Antoine Vacavant
- Institut Pascal, CNRS, Clermont Auvergne INP, Université Clermont Auvergne, F-63000 Clermont-Ferrand, France
9
Mei M, Wei Z, Hu B, Wang M, Mei L, Ye Z. DAT-Net: Deep Aggregation Transformer Network for automatic nuclear segmentation. Biomed Signal Process Control 2024;98:106764. doi: 10.1016/j.bspc.2024.106764.
10
Mahbod A, Dorffner G, Ellinger I, Woitek R, Hatamikia S. Improving generalization capability of deep learning-based nuclei instance segmentation by non-deterministic train time and deterministic test time stain normalization. Comput Struct Biotechnol J 2024;23:669-678. doi: 10.1016/j.csbj.2023.12.042. PMID: 38292472; PMCID: PMC10825317.
Abstract
With the advent of digital pathology and microscopic systems that can scan and save whole slide histological images automatically, there is a growing trend to use computerized methods to analyze acquired images. Among different histopathological image analysis tasks, nuclei instance segmentation plays a fundamental role in a wide range of clinical and research applications. While many semi- and fully-automatic computerized methods have been proposed for nuclei instance segmentation, deep learning (DL)-based approaches have been shown to deliver the best performances. However, the performance of such approaches usually degrades when tested on unseen datasets. In this work, we propose a novel method to improve the generalization capability of a DL-based automatic segmentation approach. Besides utilizing one of the state-of-the-art DL-based models as a baseline, our method incorporates non-deterministic train time and deterministic test time stain normalization, and ensembling to boost the segmentation performance. We trained the model with one single training set and evaluated its segmentation performance on seven test datasets. Our results show that the proposed method provides up to 4.9%, 5.4%, and 5.9% better average performance in segmenting nuclei based on Dice score, aggregated Jaccard index, and panoptic quality score, respectively, compared to the baseline segmentation model.
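A deterministic test-time stain normalization can be as simple as matching per-channel statistics to a fixed reference. The sketch below is Reinhard-style but works directly in RGB for brevity; practical pipelines usually convert to LAB or use stain-deconvolution methods such as Macenko, and this is our illustration rather than the paper's pipeline:

```python
import numpy as np

# Simplified Reinhard-style stain normalization: shift each channel of the
# source image to a reference mean/std, then clip to the valid range.
# Real pipelines work in LAB or deconvolve H&E stains; this is only a sketch.
def normalize_stains(image, ref_mean, ref_std, eps=1e-8):
    img = np.asarray(image, dtype=float)
    mean = img.reshape(-1, 3).mean(axis=0)
    std = img.reshape(-1, 3).std(axis=0)
    out = (img - mean) / (std + eps) * np.asarray(ref_std) + np.asarray(ref_mean)
    return np.clip(out, 0.0, 1.0)
```

At train time the reference statistics could be resampled per batch (a non-deterministic variant), while at test time they stay fixed, matching the train/test asymmetry the abstract describes.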
Affiliation(s)
- Amirreza Mahbod
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Georg Dorffner
- Institute of Artificial Intelligence, Medical University of Vienna, Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, Austria
- Ramona Woitek
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria
- Sepideh Hatamikia
- Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
11
Wen J, Gao J, Liu Y, Li T, Pu Q, Ding X, Li Y, Fenech A. Toxicological mechanisms and molecular impacts of tire particles and antibiotics on zebrafish. Environ Pollut 2024;362:124912. doi: 10.1016/j.envpol.2024.124912. PMID: 39245201.
Abstract
Tire microplastics (TMPs) and antibiotics are emerging pollutants that are widespread in aquatic environments. The coexistence of these pollutants poses severe threats to aquatic organisms, yet the toxicity characteristics and key molecular factors of combined exposure to TMPs and antibiotics remain unknown. Therefore, the joint toxicity of styrene-butadiene rubber TMPs (SBR-TMPs) and 32 antibiotics (macrolides, fluoroquinolones, β-lactams, sulfonamides, tetracyclines, nitroimidazoles, highly toxic antibiotics, high-content antibiotics, and common antibiotics) in zebrafish was investigated using a full factorial design, molecular docking, and molecular dynamics simulation. Sixty-four combinations of antibiotics were designed to investigate the hepatotoxicity of coexisting SBR-TMP additives and antibiotics in zebrafish. Results indicated that low-order effects of antibiotics (e.g., enoxacin-lomefloxacin and ofloxacin-enoxacin-lomefloxacin) had relatively notable toxicity. The van der Waals interaction between additives and zebrafish cytochrome P450 enzymes primarily affected zebrafish hepatotoxicity. Zebrafish hepatotoxicity was also affected by the ability of SBR-TMPs to adsorb antibiotics, the relations among antibiotics, the affinity of antibiotics docking to zebrafish cytochrome P450 enzymes, and the electronegativity, atomic mass, and hydrophobicity of the antibiotic molecules. This study aims to help mitigate the joint toxicity of TMPs and antibiotics and to provide guidance for the more environmentally friendly use of such chemicals.
Affiliation(s)
- Jingya Wen, Jiaxuan Gao, Yajing Liu, Tong Li, Qikun Pu, Xiaowen Ding, Yu Li: College of Environmental Science and Engineering, North China Electric Power University, Beijing, 102206, China; MOE Key Laboratory of Resources and Environmental System Optimization, North China Electric Power University, Beijing, 102206, China.
- Adam Fenech: School of Climate Change and Adaptation, University of Prince Edward Island, Charlottetown, Canada.
12
Meng Z, Dong J, Zhang B, Li S, Wu R, Su F, Wang G, Guo L, Zhao Z. NuSEA: Nuclei Segmentation With Ellipse Annotations. IEEE J Biomed Health Inform 2024; 28:5996-6007. [PMID: 38913516] [DOI: 10.1109/jbhi.2024.3418106] [Citation(s) in RCA: 0] [Indexed: 06/26/2024]
Abstract
OBJECTIVE Nuclei segmentation is a crucial pre-task for pathological microenvironment quantification. However, acquiring manually precise nuclei annotations to improve the performance of deep learning models is time-consuming and expensive. METHODS In this paper, an efficient nuclei annotation tool called NuSEA is proposed to achieve accurate nucleus segmentation, using a simple but effective ellipse annotation. Specifically, the core network of NuSEA, U-Light, is lightweight with only 0.86 M parameters, making it suitable for real-time nuclei segmentation. In addition, an Elliptical Field Loss and a Texture Loss are proposed to enhance edge segmentation and constrain smoothness simultaneously. RESULTS Extensive experiments on three public datasets (MoNuSeg, CPM-17, and CoNSeP) demonstrate that NuSEA is superior to state-of-the-art (SOTA) methods and better than existing algorithms based on point, rectangle, and text annotations. CONCLUSIONS With the assistance of NuSEA, a new dataset called NuSEA-dataset v1.0, encompassing 118,857 annotated nuclei from the whole-slide images of 12 organs, is released. SIGNIFICANCE NuSEA provides a rapid and effective annotation tool for nuclei in histopathological images, benefiting future explorations in deep learning algorithms.
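For intuition on why ellipse annotations are cheap yet dense: five parameters (center, two semi-axes, rotation) rasterize directly into a pixel mask. The sketch below is a generic rasterizer written for illustration, not NuSEA's implementation; the function name and parameters are our own.

```python
import numpy as np

def ellipse_mask(h, w, cy, cx, a, b, theta=0.0):
    """Binary mask of a rotated ellipse with semi-axis a along the
    (rotated) y direction and b along x, rotation theta in radians.
    Illustrates how an ellipse annotation becomes a segmentation target."""
    yy, xx = np.mgrid[0:h, 0:w]
    y, x = yy - cy, xx - cx
    # rotate pixel coordinates into the ellipse's own frame
    yr = y * np.cos(theta) + x * np.sin(theta)
    xr = -y * np.sin(theta) + x * np.cos(theta)
    return (yr / a) ** 2 + (xr / b) ** 2 <= 1.0

mask = ellipse_mask(64, 64, cy=32, cx=32, a=10, b=5)
```

The discrete area of the mask approximates the analytic ellipse area pi*a*b, so a single annotated ellipse supervises on the order of a hundred pixels here.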
13
Lou W, Wan X, Li G, Lou X, Li C, Gao F, Li H. Structure Embedded Nucleus Classification for Histopathology Images. IEEE Trans Med Imaging 2024; 43:3149-3160. [PMID: 38607704] [DOI: 10.1109/tmi.2024.3388328] [Citation(s) in RCA: 0] [Indexed: 04/14/2024]
Abstract
Nuclei classification provides valuable information for histopathology image analysis. However, the large variations in the appearance of different nuclei types cause difficulties in identifying nuclei. Most neural network based methods are affected by the local receptive field of convolutions, and pay less attention to the spatial distribution of nuclei or the irregular contour shape of a nucleus. In this paper, we first propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order, and employ a recurrent neural network that aggregates the sequential change in distance between key points to obtain learnable shape features. Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations. To capture the correlations between the categories of nuclei and their surrounding tissue patterns, we further introduce edge features that are defined as the background textures between adjacent nuclei. Lastly, we integrate both polygon and graph structure learning mechanisms into a whole framework that can extract intra and inter-nucleus structural characteristics for nuclei classification. Experimental results show that the proposed framework achieves significant improvements compared to the previous methods. Code and data are made available via https://github.com/lhaof/SENC.
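The polygon-structure idea, turning a contour into an ordered point sequence whose inter-point distances feed a sequence model, can be sketched as arc-length resampling. This is a simplified illustration; the helper name and the fixed-step sampling are our assumptions, not the paper's exact scheme.

```python
import numpy as np

def contour_to_sequence(contour, n_points=16):
    """Resample a closed contour (k x 2 array of x, y vertices) to
    n_points ordered points by arc length, then return the sequential
    distances between consecutive points as simple shape features."""
    contour = np.asarray(contour, dtype=float)
    closed = np.vstack([contour, contour[:1]])           # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative arc length
    targets = np.linspace(0.0, cum[-1], n_points, endpoint=False)
    pts = np.stack([np.interp(targets, cum, closed[:, i]) for i in (0, 1)],
                   axis=1)
    feats = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return pts, feats

# A 4x4 square contour resampled to 8 equally spaced boundary points
square = [(0, 0), (4, 0), (4, 4), (0, 4)]
pts, feats = contour_to_sequence(square, n_points=8)
```

For the square, every consecutive pair of resampled points is 2 units apart along the perimeter, so the distance sequence is constant; irregular nucleus contours would instead yield a distinctive, learnable distance pattern.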
14
Liu W, Zhang Q, Li Q, Wang S. Contrastive and uncertainty-aware nuclei segmentation and classification. Comput Biol Med 2024; 178:108667. [PMID: 38850962] [DOI: 10.1016/j.compbiomed.2024.108667] [Citation(s) in RCA: 0] [Received: 01/11/2024] [Revised: 04/18/2024] [Accepted: 05/26/2024] [Indexed: 06/10/2024]
Abstract
Nuclei segmentation and classification play a crucial role in pathology diagnosis, enabling pathologists to analyze cellular characteristics accurately. Overlapping clustered nuclei, missed detection of small-scale nuclei, and misclassification induced by pleomorphic nuclei have long been major challenges in nuclei segmentation and classification tasks. To this end, we introduce an auxiliary task of nuclei boundary-guided contrastive learning to enhance the representativeness and discriminative power of visual features, particularly for addressing the challenge posed by the unclear contours of adherent nuclei and small nuclei. In addition, misclassifications resulting from pleomorphic nuclei often exhibit low classification confidence, indicating a high level of uncertainty. To mitigate misclassification, we capitalize on the characteristic clustering of similar cells to propose a locality-aware class embedding module, offering a regional perspective to capture category information. Moreover, we address uncertain classification in densely aggregated nuclei by designing a top-k uncertainty attention module that leverages deep features to enhance shallow features, thereby improving the learning of contextual semantic information. We demonstrate that the proposed network outperforms off-the-shelf methods in both nuclei segmentation and classification experiments, achieving state-of-the-art performance.
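The notion of selecting the k most uncertain positions can be illustrated with predictive entropy over softmax outputs. This is a generic sketch of top-k uncertainty selection, not the paper's attention module; the function and its interface are our own.

```python
import numpy as np

def topk_uncertain(probs, k):
    """Return the indices of the k most uncertain predictions, ranked
    by predictive entropy. probs: (N, C) rows of softmax outputs."""
    eps = 1e-12                              # avoid log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.argsort(entropy)[::-1][:k], entropy

probs = np.array([[0.98, 0.01, 0.01],   # confident -> low entropy
                  [0.34, 0.33, 0.33],   # near-uniform -> most uncertain
                  [0.70, 0.20, 0.10]])  # moderately uncertain
idx, ent = topk_uncertain(probs, k=2)
```

The near-uniform row ranks first, matching the intuition that pleomorphic, low-confidence nuclei are exactly the ones an uncertainty-aware module should attend to.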
Affiliation(s)
- Wenxi Liu, Qing Zhang, Qi Li: College of Computer and Data Science, Fuzhou University, Fuzhou, 350108, China.
- Shu Wang: College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, 350108, China.
15
Yang L, Shao D, Huang Z, Geng M, Zhang N, Chen L, Wang X, Liang D, Pang ZF, Hu Z. Few-shot segmentation framework for lung nodules via an optimized active contour model. Med Phys 2024; 51:2788-2805. [PMID: 38189528] [DOI: 10.1002/mp.16933] [Citation(s) in RCA: 0] [Received: 03/25/2023] [Revised: 11/07/2023] [Accepted: 12/15/2023] [Indexed: 01/09/2024]
Abstract
BACKGROUND Accurate segmentation of lung nodules is crucial for the early diagnosis and treatment of lung cancer in clinical practice. However, the similarity between lung nodules and surrounding tissues has made their segmentation a longstanding challenge. PURPOSE Existing deep learning and active contour models each have their limitations. This paper aims to integrate the strengths of both approaches while mitigating their respective shortcomings. METHODS In this paper, we propose a few-shot segmentation framework that combines a deep neural network with an active contour model. We introduce heat kernel convolutions and high-order total variation into the active contour model and solve the challenging nonsmooth optimization problem using the alternating direction method of multipliers. Additionally, we use the presegmentation results obtained from training a deep neural network on a small sample set as the initial contours for our optimized active contour model, addressing the difficulty of manually setting the initial contours. RESULTS We compared our proposed method with state-of-the-art methods in terms of segmentation effectiveness using clinical computed tomography (CT) images acquired from two different hospitals and the publicly available LIDC dataset. The results demonstrate that our proposed method achieved outstanding segmentation performance according to both visual and quantitative indicators. CONCLUSION Our approach utilizes the output of few-shot network training as prior information, avoiding the need to select the initial contour in the active contour model. Additionally, it provides mathematical interpretability to the deep learning model, reducing its dependency on the quantity of training samples.
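The regularizing role of a heat kernel convolution can be shown in miniature: convolving a level-set function with a Gaussian is the closed-form solution of the heat equation at time t = sigma**2 / 2, and it diffuses sharp features smoothly. This is a generic illustration of that one ingredient, not the authors' optimized ADMM scheme.

```python
import numpy as np

def heat_smooth(u, sigma=1.0, radius=3):
    """Heat-kernel smoothing of a 2-D level-set function u: separable
    convolution with a truncated, renormalized Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()                              # kernel sums to 1
    pad = np.pad(u, radius, mode="edge")      # edge padding, then 'valid' conv
    tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="valid"), 0, tmp)

u = np.zeros((11, 11))
u[5, 5] = 1.0                                 # impulse: sharpest possible feature
v = heat_smooth(u, sigma=1.0)
```

The impulse spreads into a smooth Gaussian bump while total mass is conserved, which is exactly the behavior that keeps an evolving contour from locking onto noisy pixels.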
Affiliation(s)
- Lin Yang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; College of Mathematics and Statistics, Henan University, Kaifeng, China
- Dan Shao: Department of Nuclear Medicine, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Zhenxing Huang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Mengxiao Geng: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; College of Mathematics and Statistics, Henan University, Kaifeng, China
- Na Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Long Chen, Xi Wang: Department of PET/CT Center and the Department of Thoracic Cancer I, Cancer Center of Yunnan Province, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhi-Feng Pang: College of Mathematics and Statistics, Henan University, Kaifeng, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
16
Mahbod A, Polak C, Feldmann K, Khan R, Gelles K, Dorffner G, Woitek R, Hatamikia S, Ellinger I. NuInsSeg: A fully annotated dataset for nuclei instance segmentation in H&E-stained histological images. Sci Data 2024; 11:295. [PMID: 38486039] [PMCID: PMC10940572] [DOI: 10.1038/s41597-024-03117-2] [Citation(s) in RCA: 2] [Received: 08/10/2023] [Accepted: 03/04/2024] [Indexed: 03/18/2024]
Abstract
In computational pathology, automatic nuclei instance segmentation plays an essential role in whole slide image analysis. While many computerized approaches have been proposed for this task, supervised deep learning (DL) methods have shown superior segmentation performance compared to classical machine learning and image processing techniques. However, these models need fully annotated datasets for training, which are challenging to acquire, especially in the medical domain. In this work, we release one of the biggest fully manually annotated datasets of nuclei in Hematoxylin and Eosin (H&E)-stained histological images, called NuInsSeg. This dataset contains 665 image patches with more than 30,000 manually segmented nuclei from 31 human and mouse organs. Moreover, for the first time, we provide additional ambiguous area masks for the entire dataset. These vague areas represent the parts of the images where precise and deterministic manual annotations are impossible, even for human experts. The dataset and detailed step-by-step instructions to generate related segmentation masks are publicly available on the respective repositories.
Affiliation(s)
- Amirreza Mahbod: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria; Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria.
- Christine Polak, Katharina Feldmann, Rumsha Khan, Katharina Gelles: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
- Georg Dorffner: Institute of Artificial Intelligence, Medical University of Vienna, Vienna, 1090, Austria
- Ramona Woitek: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria
- Sepideh Hatamikia: Research Center for Medical Image Analysis and Artificial Intelligence, Department of Medicine, Danube Private University, Krems an der Donau, 3500, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, 2700, Austria
- Isabella Ellinger: Institute for Pathophysiology and Allergy Research, Medical University of Vienna, Vienna, 1090, Austria
17
Wagner SJ, Matek C, Shetab Boushehri S, Boxberg M, Lamm L, Sadafi A, Winter DJE, Marr C, Peng T. Built to Last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology. Mod Pathol 2024; 37:100350. [PMID: 37827448] [DOI: 10.1016/j.modpat.2023.100350] [Citation(s) in RCA: 1] [Received: 02/10/2023] [Revised: 10/02/2023] [Accepted: 10/03/2023] [Indexed: 10/14/2023]
Abstract
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. For that, the codebase should be well-documented and easy to integrate into existing workflows and models should be robust toward noise and generalizable toward data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half of them (20 of 41) released their trained model weights, and approximately a third (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.
Affiliation(s)
- Sophia J Wagner: Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Computation, Information and Technology, Technical University of Munich, Garching, Germany
- Christian Matek: Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Sayedali Shetab Boushehri: School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Data & Analytics (D&A), Roche Pharma Research and Early Development (pRED), Roche Innovation Center Munich, Germany
- Melanie Boxberg: Institute of Pathology, Technical University Munich, Munich, Germany; Institute of Pathology Munich-North, Munich, Germany
- Lorenz Lamm: Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Helmholtz Pioneer Campus, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Ario Sadafi: School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Dominik J E Winter: Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
- Carsten Marr: Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany.
- Tingying Peng: Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany.
18
Zhao Y, Shao X, Chen C, Song J, Tian C, Li W. The Contrastive Network With Convolution and Self-Attention Mechanisms for Unsupervised Cell Segmentation. IEEE J Biomed Health Inform 2023; 27:5837-5847. [PMID: 37651477] [DOI: 10.1109/jbhi.2023.3310507] [Citation(s) in RCA: 0] [Indexed: 09/02/2023]
Abstract
Deep learning for cell instance segmentation is a significant research direction in biomedical image analysis. Traditional supervised learning methods rely on pixel-wise annotation of object images to train the models, which is often time-consuming and labor-intensive. Various modified segmentation methods, based on weakly supervised or semi-supervised learning, have been proposed to recognize cell regions using only rough annotations of cell positions. However, most approaches still fall short of being fully unsupervised, as the use of at least a few annotations for training remains unavoidable. In this article, we propose an end-to-end unsupervised model that can segment individual cell regions on hematoxylin and eosin (H&E) stained slides without any annotation. Compared with weakly or semi-supervised methods, the input of our model is raw data without any identifiers, and there is no need to generate pseudo-labels during training. We demonstrate that the performance of our model is satisfactory and that it generalizes well across various validation sets compared with supervised models. The ablation experiment shows that, under our unsupervised method, our backbone captures object edge and context information better than a pure CNN or transformer.
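The contrastive objective family such networks build on can be sketched with a minimal InfoNCE-style loss: pull an anchor embedding toward its positive and away from negatives. This is illustrative only; the abstract does not specify the paper's exact loss, and the function below is our own.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss for one anchor: softmax over cosine
    similarities (temperature tau), with the positive at index 0."""
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / tau
    logits -= logits.max()                     # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                       # cross-entropy on the positive

a = np.array([1.0, 0.0])
# well-aligned positive vs. orthogonal negative -> small loss
loss_close = info_nce(a, np.array([0.9, 0.1]), [np.array([0.0, 1.0])])
# roles swapped -> large loss
loss_far = info_nce(a, np.array([0.0, 1.0]), [np.array([0.9, 0.1])])
```

The loss is near zero when the anchor and positive embeddings already agree and grows as they diverge, which is the gradient signal that organizes the embedding space without labels.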
19
Rasheed A, Shirazi SH, Umar AI, Shahzad M, Yousaf W, Khan Z. Cervical cell's nucleus segmentation through an improved UNet architecture. PLoS One 2023; 18:e0283568. [PMID: 37788295] [PMCID: PMC10547184] [DOI: 10.1371/journal.pone.0283568] [Citation(s) in RCA: 0] [Received: 09/05/2022] [Accepted: 03/11/2023] [Indexed: 10/05/2023]
Abstract
Precise segmentation of the nucleus is vital for computer-aided diagnosis (CAD) in cervical cytology. Automated delineation of the cervical nucleus faces notorious challenges due to clumped cells, color variation, noise, and fuzzy boundaries. Owing to its standout performance in medical image analysis, deep learning has gained more attention than other techniques. We propose a deep learning model, namely C-UNet (Cervical-UNet), to segment cervical nuclei from overlapped, fuzzy, and blurred cervical cell smear images. Cross-scale feature integration based on a bi-directional feature pyramid network (BiFPN) and a wide context unit are used in the encoder of the classic UNet architecture to learn spatial and local features. The decoder of the improved network has two inter-connected decoders that mutually optimize and integrate these features to produce segmentation masks. Each component of the proposed C-UNet is extensively evaluated to judge its effectiveness on a complex cervical cell dataset. Different data augmentation techniques were employed to enhance the proposed model's training. Experimental results have shown that the proposed model outperformed existing models, i.e., CGAN (Conditional Generative Adversarial Network), DeepLabv3, Mask-RCNN (Region-Based Convolutional Neural Network), and FCN (Fully Connected Network), on the dataset employed in this study as well as on the ISBI-2014 (International Symposium on Biomedical Imaging 2014) and ISBI-2015 datasets. The C-UNet achieved an object-level accuracy of 93%, pixel-level accuracy of 92.56%, object-level recall of 95.32%, pixel-level recall of 92.27%, Dice coefficient of 93.12%, and F1-score of 94.96% on the complex cervical image dataset.
Affiliation(s)
- Assad Rasheed, Syed Hamad Shirazi, Arif Iqbal Umar, Muhammad Shahzad, Waqas Yousaf, Zakir Khan: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
20
Lin Y, Qu Z, Chen H, Gao Z, Li Y, Xia L, Ma K, Zheng Y, Cheng KT. Nuclei segmentation with point annotations from pathology images via self-supervised learning and co-training. Med Image Anal 2023; 89:102933. [PMID: 37611532] [DOI: 10.1016/j.media.2023.102933] [Citation(s) in RCA: 9] [Received: 04/27/2022] [Revised: 07/21/2023] [Accepted: 08/10/2023] [Indexed: 08/25/2023]
Abstract
Nuclei segmentation is a crucial task for whole slide image analysis in digital pathology. Generally, the segmentation performance of fully-supervised learning heavily depends on the amount and quality of the annotated data. However, it is time-consuming and expensive for professional pathologists to provide accurate pixel-level ground truth, while it is much easier to get coarse labels such as point annotations. In this paper, we propose a weakly-supervised learning method for nuclei segmentation that only requires point annotations for training. First, coarse pixel-level labels are derived from the point annotations based on the Voronoi diagram and the k-means clustering method to avoid overfitting. Second, a co-training strategy with an exponential moving average method is designed to refine the incomplete supervision of the coarse labels. Third, a self-supervised visual representation learning method is tailored for nuclei segmentation of pathology images that transforms the hematoxylin component images into the H&E stained images to gain a better understanding of the relationship between the nuclei and cytoplasm. We comprehensively evaluate the proposed method using two public datasets. Both visual and quantitative results demonstrate the superiority of our method to the state-of-the-art methods, and its competitive performance compared to the fully-supervised methods. Codes are available at https://github.com/hust-linyi/SC-Net.
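The first step, deriving coarse labels from point annotations via a Voronoi diagram, can be sketched directly: assign every pixel to its nearest annotated nucleus point, and take the ridge pixels where the assignment changes as an approximation of the Voronoi edges used for background supervision. This is a simplified illustration of the idea, not the released SC-Net code.

```python
import numpy as np

def voronoi_labels(shape, points):
    """Voronoi partition of an image grid from annotated (y, x) points:
    each pixel gets the id of its nearest point; ridge pixels where the
    id changes approximate the Voronoi edges between nuclei."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    pts = np.asarray(points, dtype=float)                  # (k, 2)
    d2 = (yy[..., None] - pts[:, 0]) ** 2 + (xx[..., None] - pts[:, 1]) ** 2
    cell = d2.argmin(axis=-1)                              # nearest-point id
    edge = np.zeros(shape, dtype=bool)
    edge[:, :-1] |= cell[:, :-1] != cell[:, 1:]            # horizontal ridge
    edge[:-1, :] |= cell[:-1, :] != cell[1:, :]            # vertical ridge
    return cell, edge

# three point annotations on an 8x8 patch
cell, edge = voronoi_labels((8, 8), [(2, 2), (2, 6), (6, 4)])
```

Each annotated point lands in its own cell, and the edge mask between cells is guaranteed to lie outside the nuclei, which is what makes it usable as coarse background supervision.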
Affiliation(s)
- Yi Lin: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
- Zhiyong Qu: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Hao Chen: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong.
- Zhongke Gao, Lili Xia: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Kai Ma: Tencent Jarvis Lab, Shenzhen, China
- Kwang-Ting Cheng: Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong
21
Pan X, Cheng J, Hou F, Lan R, Lu C, Li L, Feng Z, Wang H, Liang C, Liu Z, Chen X, Han C, Liu Z. SMILE: Cost-sensitive multi-task learning for nuclear segmentation and classification with imbalanced annotations. Med Image Anal 2023; 88:102867. [PMID: 37348167] [DOI: 10.1016/j.media.2023.102867] [Citation(s) in RCA: 5] [Received: 09/24/2022] [Revised: 03/25/2023] [Accepted: 06/07/2023] [Indexed: 06/24/2023]
Abstract
High-throughput nuclear segmentation and classification of whole slide images (WSIs) is crucial to biological analysis, clinical diagnosis, and precision medicine. With the advances of CNN algorithms and continuously growing datasets, considerable progress has been made in nuclear segmentation and classification. However, few works consider how to reasonably deal with nuclear heterogeneity in the following two aspects: imbalanced data distribution and diversified morphology characteristics. The minority classes might be dominated by the majority classes due to the imbalanced data distribution, and the diversified morphology characteristics may lead to fragile segmentation results. In this study, a cost-Sensitive MultI-task LEarning (SMILE) framework is proposed to tackle the data heterogeneity problem. Based on the most popular multi-task learning backbone in nuclei segmentation and classification, we propose a multi-task correlation attention (MTCA) module to perform feature interaction across multiple highly relevant tasks and learn better feature representations. A cost-sensitive learning strategy is proposed to address the imbalanced data distribution by increasing the penalty for misclassification of the minority classes. Furthermore, we propose a novel post-processing step based on a coarse-to-fine marker-controlled watershed scheme to alleviate fragile segmentation when nuclei are large and have unclear contours. Extensive experiments show that the proposed method achieves state-of-the-art performance on the CoNSeP and MoNuSAC 2020 datasets. The code is available at: https://github.com/panxipeng/nuclear_segandcls.
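The cost-sensitive idea of penalizing minority-class errors more heavily can be sketched with inverse-frequency class weights for a weighted cross-entropy. This is one common instantiation of the strategy, not necessarily the paper's exact weighting; the function and its `beta` exponent are our own.

```python
import numpy as np

def inverse_freq_weights(labels, n_classes, beta=1.0):
    """Per-class weights inversely proportional to class frequency,
    normalized to mean 1, so minority-class errors cost more in a
    weighted cross-entropy."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    counts = np.maximum(counts, 1.0)          # guard against empty classes
    w = (1.0 / counts) ** beta
    return w * n_classes / w.sum()            # rescale to mean 1

labels = np.array([0] * 90 + [1] * 9 + [2] * 1)   # heavily imbalanced nuclei types
w = inverse_freq_weights(labels, n_classes=3)
```

The rarest class receives by far the largest weight, so a classifier can no longer minimize the loss by simply predicting the majority type everywhere.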
Affiliation(s)
- Xipeng Pan: School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, Guangdong 510080, China.
- Jijun Cheng: Software Engineering Institute, East China Normal University, Shanghai 200062, China
- Feihu Hou, Rushi Lan: School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
- Cheng Lu: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, Guangdong 510080, China
- Lingqiao Li, Zhengyun Feng, Huadeng Wang: School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China
- Changhong Liang: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, Guangdong 510080, China
- Zhenbing Liu: School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China.
- Xin Chen: Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou, Guangdong 510180, China.
- Chu Han: Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, Guangdong 510080, China.
- Zaiyi Liu: School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin, Guangxi 541004, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, Guangdong 510080, China.
Collapse
|
22
|
Liu Y, Wang J, Wu C, Liu L, Zhang Z, Yu H. Fovea-UNet: detection and segmentation of lymph node metastases in colorectal cancer with deep learning. Biomed Eng Online 2023; 22:74. [PMID: 37479991 PMCID: PMC10362618 DOI: 10.1186/s12938-023-01137-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Accepted: 07/11/2023] [Indexed: 07/23/2023] Open
Abstract
BACKGROUND Colorectal cancer is one of the most serious malignant tumors, and lymph node metastasis (LNM) from colorectal cancer is a major factor in patient management and prognosis. Accurate image-based detection of LNM is an important task in helping clinicians diagnose cancer. Recently, the U-Net architecture based on convolutional neural networks (CNNs) has been widely used to segment images for more precise cancer diagnosis. However, accurate segmentation of important regions with high diagnostic value remains a great challenge, because CNNs and encoder-decoder structures are limited in their ability to aggregate detailed and non-local contextual information. In this work, we propose a high-performance, low-computation solution. METHODS Inspired by the working principle of the fovea in visual neuroscience, a novel U-Net-based framework for cancer segmentation named Fovea-UNet is proposed; it adaptively adjusts resolution according to the importance of the information and selectively focuses on the regions most relevant to colorectal LNM. Specifically, we design an effective, adaptively optimized pooling operation called Fovea Pooling (FP), which dynamically aggregates detailed and non-local contextual information according to pixel-level feature importance. In addition, an improved lightweight backbone network based on GhostNet is adopted to reduce the computational cost introduced by FP. RESULTS Experimental results show that our proposed framework achieves higher performance than other state-of-the-art segmentation networks, with 79.38% IoU, 88.51% DSC, 92.82% sensitivity and 84.57% precision on the LNM dataset, while the parameter size is reduced to 23.23 MB. CONCLUSIONS The proposed framework can provide a valid tool for cancer diagnosis, especially for LNM of colorectal cancer.
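The importance-aware pooling described in this abstract can be sketched in a few lines. The toy below is an assumed simplification of Fovea Pooling, not the authors' module: each 2x2 block is max-pooled (detail kept) when its mean importance is high and average-pooled otherwise; the threshold and block size are illustrative choices.

```python
# Hypothetical simplification of importance-aware (Fovea-style) pooling:
# high-importance 2x2 blocks keep their sharpest response via max pooling,
# low-importance blocks are smoothed via average pooling.

def fovea_pool(feat, importance, thresh=0.5):
    """feat, importance: 2D lists with equal, even dimensions."""
    h, w = len(feat), len(feat[0])
    out = []
    for i in range(0, h, 2):
        row = []
        for j in range(0, w, 2):
            block = [feat[i][j], feat[i][j + 1], feat[i + 1][j], feat[i + 1][j + 1]]
            imp = (importance[i][j] + importance[i][j + 1] +
                   importance[i + 1][j] + importance[i + 1][j + 1]) / 4.0
            # max-pool important regions, average-pool the rest
            row.append(max(block) if imp >= thresh else sum(block) / 4.0)
        out.append(row)
    return out
```

The real FP operation is learned and also aggregates non-local context; this sketch only shows the resolution/importance trade-off at the heart of the idea.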
Affiliation(s)
- Yajiao Liu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Jiang Wang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Chenpeng Wu
- Department of Pathology, Tangshan Gongren Hospital, Tangshan, China
- Liyun Liu
- Department of Pathology, Tangshan Gongren Hospital, Tangshan, China
- Zhiyong Zhang
- Department of Pathology, Tangshan Gongren Hospital, Tangshan, China
- Haitao Yu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China

23
Lou W, Li H, Li G, Han X, Wan X. Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:947-958. [PMID: 36355729 DOI: 10.1109/tmi.2022.3221666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Recently, deep neural networks, which require a large number of annotated samples, have been widely applied to nuclei instance segmentation of H&E-stained pathology images. However, it is inefficient and unnecessary to label all pixels in a dataset of nuclei images, which usually contain similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our proposed framework trains an existing segmentation model with the above augmented samples. The experimental results show that our proposed method can obtain the same level of performance as a fully supervised baseline by annotating less than 5% of the pixels on some benchmarks.
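A consistency-based selection step of the kind this abstract names can be illustrated as follows. This is an assumed reading, not the paper's algorithm: each candidate patch gets predictions under several augmentations, and the patches whose predictions vary most (least consistent, hence most informative to annotate) are selected.

```python
# Illustrative consistency-based patch selection (assumed scoring rule):
# rank patches by the variance of their predictions across augmented views
# and return the k least-consistent patch ids for annotation.

def select_patches(views, k):
    """views: {patch_id: [prediction under each augmentation]}."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # highest variance first = least consistent = most worth annotating
    scored = sorted(views, key=lambda pid: variance(views[pid]), reverse=True)
    return scored[:k]
```

In the paper the selection is over image patches of a nuclei dataset; here the "prediction" is a single scalar per view purely to keep the sketch self-contained.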
24
Guo R, Xie K, Pagnucco M, Song Y. SAC-Net: Learning with weak and noisy labels in histopathology image segmentation. Med Image Anal 2023; 86:102790. [PMID: 36878159 DOI: 10.1016/j.media.2023.102790] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2022] [Revised: 11/24/2022] [Accepted: 02/23/2023] [Indexed: 03/06/2023]
Abstract
Deep convolutional neural networks have been highly effective in segmentation tasks. However, segmentation becomes more difficult when training images include many complex instances to segment, such as the task of nuclei segmentation in histopathology images. Weakly supervised learning can reduce the need for large-scale, high-quality ground truth annotations by involving non-expert annotators or algorithms to generate supervision information for segmentation. However, there is still a significant performance gap between weakly supervised learning and fully supervised learning approaches. In this work, we propose a weakly-supervised nuclei segmentation method in a two-stage training manner that only requires annotation of the nuclear centroids. First, we generate boundary and superpixel-based masks as pseudo ground truth labels to train our SAC-Net, which is a segmentation network enhanced by a constraint network and an attention network to effectively address the problems caused by noisy labels. Then, we refine the pseudo labels at the pixel level based on Confident Learning to train the network again. Our method shows highly competitive performance of cell nuclei segmentation in histopathology images on three public datasets. Code will be available at: https://github.com/RuoyuGuo/MaskGA_Net.
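The pixel-level pseudo-label refinement based on Confident Learning that this abstract mentions can be sketched roughly as below. The rule shown (keep a pseudo-label only if the model's probability for it reaches that class's average self-confidence, otherwise flag the pixel as noisy with -1) is a loose paraphrase of the Confident Learning threshold, not SAC-Net's exact procedure.

```python
# Loose sketch of Confident-Learning-style pseudo-label refinement:
# per-class threshold = mean predicted probability over pixels carrying that
# pseudo-label; below-threshold pixels are flagged as noisy (-1) and would be
# excluded from the second training round.

def refine_labels(labels, probs):
    """labels: list of int class ids; probs: list of {class: prob} dicts."""
    thresh = {}
    for c in set(labels):
        ps = [p[c] for l, p in zip(labels, probs) if l == c]
        thresh[c] = sum(ps) / len(ps)  # class self-confidence
    return [l if p[l] >= thresh[l] else -1 for l, p in zip(labels, probs)]
```

The flag value -1 and the per-pixel dict representation are conveniences for the sketch; a real implementation works on dense probability maps.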
Affiliation(s)
- Ruoyu Guo
- School of Computer Science and Engineering, University of New South Wales, Australia
- Kunzi Xie
- School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco
- School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song
- School of Computer Science and Engineering, University of New South Wales, Australia

25
Karabağ C, Ortega-Ruíz MA, Reyes-Aldasoro CC. Impact of Training Data, Ground Truth and Shape Variability in the Deep Learning-Based Semantic Segmentation of HeLa Cells Observed with Electron Microscopy. J Imaging 2023; 9:59. [PMID: 36976110 PMCID: PMC10058680 DOI: 10.3390/jimaging9030059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2022] [Revised: 02/16/2023] [Accepted: 02/17/2023] [Indexed: 03/06/2023] Open
Abstract
This paper investigates the impact of the amount of training data and the shape variability on the segmentation provided by the deep learning architecture U-Net. Further, the correctness of ground truth (GT) was also evaluated. The input data consisted of a three-dimensional set of images of HeLa cells observed with an electron microscope with dimensions 8192×8192×517. From there, a smaller region of interest (ROI) of 2000×2000×300 was cropped and manually delineated to obtain the ground truth necessary for a quantitative evaluation. A qualitative evaluation was performed on the 8192×8192 slices due to the lack of ground truth. Pairs of patches of data and labels for the classes nucleus, nuclear envelope, cell and background were generated to train U-Net architectures from scratch. Several training strategies were followed, and the results were compared against a traditional image processing algorithm. The correctness of GT, that is, the inclusion of one or more nuclei within the region of interest was also evaluated. The impact of the extent of training data was evaluated by comparing results from 36,000 pairs of data and label patches extracted from the odd slices in the central region, to 135,000 patches obtained from every other slice in the set. Then, 135,000 patches from several cells from the 8192×8192 slices were generated automatically using the image processing algorithm. Finally, the two sets of 135,000 pairs were combined to train once more with 270,000 pairs. As would be expected, the accuracy and Jaccard similarity index improved as the number of pairs increased for the ROI. This was also observed qualitatively for the 8192×8192 slices. When the 8192×8192 slices were segmented with U-Nets trained with 135,000 pairs, the architecture trained with automatically generated pairs provided better results than the architecture trained with the pairs from the manually segmented ground truths. 
This suggests that the pairs that were extracted automatically from many cells provided a better representation of the four classes of the various cells in the 8192×8192 slice than those pairs that were manually segmented from a single cell. Finally, the two sets of 135,000 pairs were combined, and the U-Net trained with these provided the best results.
Affiliation(s)
- Cefa Karabağ
- giCentre, Department of Computer Science, School of Science and Technology, City, University of London, London EC1V 0HB, UK
- Mauricio Alberto Ortega-Ruíz
- giCentre, Department of Computer Science, School of Science and Technology, City, University of London, London EC1V 0HB, UK
- Departamento de Ingeniería, Campus Coyoacán, Universidad del Valle de México, Ciudad de México C.P. 04910, Mexico

26
Chen S, Ding C, Liu M, Cheng J, Tao D. CPP-Net: Context-Aware Polygon Proposal Network for Nucleus Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2023; 32:980-994. [PMID: 37022023 DOI: 10.1109/tip.2023.3237013] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Nucleus segmentation is a challenging task due to the crowded distribution and blurry boundaries of nuclei. Recent approaches represent nuclei by means of polygons to differentiate between touching and overlapping nuclei and have accordingly achieved promising performance. Each polygon is represented by a set of centroid-to-boundary distances, which are in turn predicted by features of the centroid pixel for a single nucleus. However, using the centroid pixel alone does not provide sufficient contextual information for robust prediction and thus degrades the segmentation accuracy. To handle this problem, we propose a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation. First, we sample a point set rather than one single pixel within each cell for distance prediction. This strategy substantially enhances contextual information and thereby improves the robustness of the prediction. Second, we propose a Confidence-based Weighting Module, which adaptively fuses the predictions from the sampled point set. Third, we introduce a novel Shape-Aware Perceptual (SAP) loss that constrains the shape of the predicted polygons. Here, the SAP loss is based on an additional network that is pre-trained by means of mapping the centroid probability map and the pixel-to-boundary distance maps to a different nucleus representation. Extensive experiments justify the effectiveness of each component in the proposed CPP-Net. Finally, CPP-Net is found to achieve state-of-the-art performance on three publicly available databases, namely DSB2018, BBBC06, and PanNuke. Code of this paper is available at https://github.com/csccsccsccsc/cpp-net.
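The polygon representation that CPP-Net builds on (a centroid plus a fixed number of centroid-to-boundary distances) decodes into vertices with elementary geometry. The sketch below assumes evenly spaced ray angles, which is the standard convention for this representation, though the exact ray layout is taken from context rather than from the paper's code.

```python
# Decode a nucleus polygon from its centroid and K centroid-to-boundary
# distances along K evenly spaced rays (assumed ray convention).

import math

def decode_polygon(cx, cy, dists):
    """Return K polygon vertices from centroid (cx, cy) and K ray distances."""
    k = len(dists)
    return [(cx + d * math.cos(2 * math.pi * i / k),
             cy + d * math.sin(2 * math.pi * i / k))
            for i, d in enumerate(dists)]
```

CPP-Net's contribution is in how the distances are predicted (from a sampled point set with confidence-based fusion), not in this decoding step, which is shared with other polygon-based methods.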
27
Liu K, Li B, Wu W, May C, Chang O, Knezevich S, Reisch L, Elmore J, Shapiro L. VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2023; 2023:1918-1927. [PMID: 36865487 PMCID: PMC9977454 DOI: 10.1109/wacv56688.2023.00196] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/09/2023]
Abstract
Detection of melanocytes serves as a critical prerequisite in assessing melanocytic growth patterns when diagnosing melanoma and its precursor lesions on skin biopsy specimens. However, this detection is challenging due to the visual similarity of melanocytes to other cells in routine Hematoxylin and Eosin (H&E) stained images, leading to the failure of current nuclei detection methods. Stains such as Sox10 can mark melanocytes, but they require an additional step and expense and thus are not regularly used in clinical practice. To address these limitations, we introduce VSGD-Net, a novel detection network that learns melanocyte identification through virtual staining from H&E to Sox10. The method takes only routine H&E images during inference, resulting in a promising approach to support pathologists in the diagnosis of melanoma. To the best of our knowledge, this is the first study that investigates the detection problem using image synthesis features between two distinct pathology stainings. Extensive experimental results show that our proposed model outperforms state-of-the-art nuclei detection methods for melanocyte detection. The source code and pre-trained model are available at: https://github.com/kechunl/VSGD-Net.
Affiliation(s)
- Beibin Li
- University of Washington
- Microsoft Research

28
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
29
Chen Y, Xu C, Ding W, Sun S, Yue X, Fujita H. Target-aware U-Net with fuzzy skip connections for refined pancreas segmentation. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109818] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
30
Shi P, Zhong J, Lin L, Lin L, Li H, Wu C. Nuclei segmentation of HE stained histopathological images based on feature global delivery connection network. PLoS One 2022; 17:e0273682. [PMID: 36107930 PMCID: PMC9477331 DOI: 10.1371/journal.pone.0273682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 08/12/2022] [Indexed: 11/22/2022] Open
Abstract
The analysis of pathological images, such as cell counting and nuclear morphological measurement, is an essential part of clinical histopathology research. Due to the diversity of uncertain cell boundaries after staining, automated nuclei segmentation of Hematoxylin-Eosin (HE) stained pathological images remains challenging. Although machine learning based segmentation strategies achieve better performance than most classic image processing methods, a majority of them still require manual labeling, which restricts further improvements in efficiency and accuracy. Aiming at the requirements of stable and efficient high-throughput pathological image analysis, an automated Feature Global Delivery Connection Network (FGDC-net) is proposed for nuclei segmentation of HE stained images. Firstly, training sample patches and their corresponding asymmetric labels are automatically generated based on a Full Mixup strategy from RGB to HSV color space. Secondly, in order to add connections between adjacent layers and achieve feature selection, the FGDC module is designed by removing the skip connections between codecs commonly used in UNet-based image segmentation networks; it learns the relationships between channels in each layer and passes information selectively. Finally, a dynamic training strategy based on a mixed loss is used to increase the generalization capability of the model through flexible epochs. The proposed improvements were verified by ablation experiments on multiple open databases and our own clinical meningioma dataset. Experimental results on multiple datasets showed that FGDC-net can effectively improve the segmentation performance on HE stained pathological images without manual intervention, and provide valuable references for clinical pathological analysis.
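One piece of the pipeline above, mixing samples after an RGB-to-HSV conversion, can be sketched per pixel with the standard library. This is an assumed reading of the "Full Mixup strategy from RGB to HSV color space": the exact mixing rule (per-channel blending, hue wrapping, label handling) is not specified here, so the linear blend below is purely illustrative.

```python
# Illustrative mixup of two RGB pixels performed in HSV space (assumed rule):
# convert both to HSV, blend channel-wise with coefficient lam, convert back.

import colorsys

def hsv_mixup(rgb_a, rgb_b, lam):
    """rgb_*: (r, g, b) tuples in [0, 1]; returns the blended RGB pixel."""
    ha = colorsys.rgb_to_hsv(*rgb_a)
    hb = colorsys.rgb_to_hsv(*rgb_b)
    mixed = tuple(lam * x + (1 - lam) * y for x, y in zip(ha, hb))
    return colorsys.hsv_to_rgb(*mixed)
```

A real implementation would operate on whole image arrays and also mix the corresponding labels; the pixel-level version only shows the color-space detour.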
Affiliation(s)
- Peng Shi
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Jing Zhong
- Department of Radiology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Liyan Lin
- Department of Pathology, Fujian Medical University Cancer Hospital, Fujian Cancer Hospital, Fuzhou, Fujian, China
- Lin Lin
- Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Huachang Li
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China
- Chongshu Wu
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fujian Normal University, Fuzhou, Fujian, China

31
Cao X, Chen H, Li Y, Peng Y, Zhou Y, Cheng L, Liu T, Shen D. Auto-DenseUNet: Searchable neural network architecture for mass segmentation in 3D automated breast ultrasound. Med Image Anal 2022; 82:102589. [DOI: 10.1016/j.media.2022.102589] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 07/18/2022] [Accepted: 08/17/2022] [Indexed: 11/15/2022]
32
Yang L, Gu Y, Huo B, Liu Y, Bian G. A shape-guided deep residual network for automated CT lung segmentation. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.108981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
33
Qin J, He Y, Zhou Y, Zhao J, Ding B. REU-Net: Region-enhanced nuclei segmentation network. Comput Biol Med 2022; 146:105546. [DOI: 10.1016/j.compbiomed.2022.105546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2022] [Revised: 03/24/2022] [Accepted: 04/17/2022] [Indexed: 11/03/2022]
34
Multi-layer pseudo-supervision for histopathology tissue semantic segmentation using patch-level classification labels. Med Image Anal 2022; 80:102487. [PMID: 35671591 DOI: 10.1016/j.media.2022.102487] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 05/07/2022] [Accepted: 05/20/2022] [Indexed: 01/15/2023]
Abstract
Tissue-level semantic segmentation is a vital step in computational pathology. Fully supervised models have already achieved outstanding performance with dense pixel-level annotations. However, drawing such labels on giga-pixel whole slide images is extremely expensive and time-consuming. In this paper, we use only patch-level classification labels to achieve tissue semantic segmentation on histopathology images, thereby reducing the annotation effort. We propose a two-step model comprising a classification phase and a segmentation phase. In the classification phase, we propose a CAM-based model to generate pseudo masks from patch-level labels. In the segmentation phase, we achieve tissue semantic segmentation with our proposed Multi-Layer Pseudo-Supervision. Several technical novelties are proposed to reduce the information gap between pixel-level and patch-level annotations. As part of this paper, we introduce a new weakly-supervised semantic segmentation (WSSS) dataset for lung adenocarcinoma (LUAD-HistoSeg). We conduct several experiments to evaluate our proposed model on two datasets. Our proposed model outperforms five state-of-the-art WSSS approaches. Note that we can achieve comparable quantitative and qualitative results with the fully-supervised model, with only around a 2% gap in MIoU and FwIoU. Compared with manual labeling on a randomly sampled dataset of 100 patches, patch-level labeling greatly reduces the annotation time from hours to minutes. The source code and the released datasets are available at: https://github.com/ChuHan89/WSSS-Tissue.
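The CAM-to-pseudo-mask step underlying the classification phase can be sketched with toy numbers. This is the generic class activation map recipe, not the paper's specific model: weight the last-layer feature maps by the classifier weights of the patch's class, normalize, and threshold into a binary mask.

```python
# Generic CAM-based pseudo-mask generation (toy values, assumed threshold):
# CAM = weighted sum of feature maps, normalized by its peak, then binarized.

def cam_pseudo_mask(feat_maps, class_weights, thresh=0.5):
    """feat_maps: list of HxW maps; class_weights: one weight per map."""
    h, w = len(feat_maps[0]), len(feat_maps[0][0])
    cam = [[sum(wt * fm[i][j] for wt, fm in zip(class_weights, feat_maps))
            for j in range(w)] for i in range(h)]
    peak = max(max(row) for row in cam)
    # normalize to [0, 1] and threshold into a binary pseudo mask
    return [[1 if v / peak >= thresh else 0 for v in row] for v_row_i, row in enumerate(cam)]
```

In practice the feature maps come from a trained CNN's final convolutional layer and the weights from its fully connected classifier; the paper's contribution layers several such pseudo-supervision signals on top of this base recipe.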
35
Bai T, Xu J, Zhang Z, Guo S, Luo X. Context-aware learning for cancer cell nucleus recognition in pathology images. Bioinformatics 2022; 38:2892-2898. [PMID: 35561198 DOI: 10.1093/bioinformatics/btac167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION Nucleus identification supports many quantitative analysis studies that rely on nuclei positions or categories. Contextual information in pathology images refers to information near the to-be-recognized cell, which can be very helpful for nucleus subtyping. Current CNN-based methods do not explicitly encode contextual information within the input images and point annotations. RESULTS In this article, we propose a novel framework with context to locate and classify nuclei in microscopy image data. Specifically, we first use state-of-the-art network architectures to extract multi-scale feature representations from multi-field-of-view, multi-resolution input images and then conduct feature aggregation on the fly with stacked convolutional operations. Two auxiliary tasks are then added to the model to effectively utilize the contextual information: one predicts the frequencies of nuclei, and the other extracts the regional distribution information of the same kind of nuclei. The entire framework is trained in an end-to-end, pixel-to-pixel fashion. We evaluate our method on two histopathological image datasets with different tissue and stain preparations, and experimental results demonstrate that our method outperforms other recent state-of-the-art models in nucleus identification. AVAILABILITY AND IMPLEMENTATION The source code of our method is freely available at https://github.com/qjxjy123/DonRabbit. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
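The multi-field-of-view, multi-resolution input described above amounts to taking concentric crops of increasing extent around a location and resizing them to a common grid, so the network sees the same point at several contextual scales. The sketch below is an assumed illustration of that preprocessing (nearest-neighbour resizing, border clamping), not the paper's data loader.

```python
# Illustrative multi-field-of-view crop extraction (assumed preprocessing):
# for each window size s, take an s x s crop centred at (cx, cy) and resize
# it to an out x out grid with nearest-neighbour sampling, clamping at borders.

def multi_fov_crops(image, cx, cy, sizes, out=8):
    """image: 2D list; returns one out x out crop per field-of-view size."""
    crops = []
    for s in sizes:
        half = s // 2
        crop = []
        for i in range(out):
            yi = cy - half + (i * s) // out       # nearest source row
            row = []
            for j in range(out):
                xi = cx - half + (j * s) // out   # nearest source column
                yi_c = min(max(yi, 0), len(image) - 1)
                xi_c = min(max(xi, 0), len(image[0]) - 1)
                row.append(image[yi_c][xi_c])
            crop.append(row)
        crops.append(crop)
    return crops
```

Each crop covers more tissue context than the last while keeping the same pixel budget, which is exactly what lets the downstream network aggregate multi-scale features cheaply.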
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Jiayu Xu
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Zhenting Zhang
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Shuyu Guo
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
- Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, 130033 Changchun, China

36
Han C, Yao H, Zhao B, Li Z, Shi Z, Wu L, Chen X, Qu J, Zhao K, Lan R, Liang C, Pan X, Liu Z. Meta Multi-task Nuclei Segmentation with Fewer Training Samples. Med Image Anal 2022; 80:102481. [DOI: 10.1016/j.media.2022.102481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 05/05/2022] [Accepted: 05/13/2022] [Indexed: 11/29/2022]
37
Multi-task generative adversarial learning for nuclei segmentation with dual attention and recurrent convolution. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103558] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
38
Liang S, Lu H, Zang M, Wang X, Jiao Y, Zhao T, Xu EY, Xu J. Deep SED-Net with interactive learning for multiple testicular cell types segmentation and cell composition analysis in mouse seminiferous tubules. Cytometry A 2022; 101:658-674. [PMID: 35388957 DOI: 10.1002/cyto.a.24556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 03/05/2022] [Accepted: 04/01/2022] [Indexed: 11/06/2022]
Abstract
The development of mouse spermatozoa is a continuous process from spermatogonia, through spermatocytes and spermatids, to mature sperm. These developing germ cells (spermatogonia, spermatocytes, spermatids), together with supporting Sertoli cells, are all enclosed inside the seminiferous tubules of the testis; their identification is key to testis histology and pathology analysis. Automated segmentation of all these cells is a challenging task because of their dynamic changes at different stages. The accurate segmentation of testicular cells is critical to developing computerized spermatogenesis staging. In this paper, we present a novel segmentation model, SED-Net, which incorporates a Squeeze-and-Excitation (SE) module and a Dense unit. The SE module optimizes and obtains features from different channels, whereas the Dense unit uses fewer parameters to enhance the use of features. A human-in-the-loop strategy, named deep interactive learning, is developed to achieve better segmentation performance while reducing the workload of manual annotation and time consumption. Across a cohort of 274 seminiferous tubules from Stages VI to VIII, SED-Net achieved a pixel accuracy of 0.930, a mean pixel accuracy of 0.866, a mean intersection over union of 0.710, and a frequency weighted intersection over union of 0.878 across four types of testicular cell segmentation. There is no significant difference between manually annotated tubules and SED-Net segmentation results in cell composition analysis for tubules from Stages VI to VIII. In addition, we performed cell composition analysis on 2346 segmented seminiferous tubule images from 12 segmented testicular section results. The results quantify cells of various testicular cell types across 12 stages and reflect the tendency of cell variation across the 12 stages of mouse spermatozoa development. The method enables us not only to analyze cell morphology and staging during the development of mouse spermatozoa but also, potentially, to study reproductive diseases such as infertility.
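The Squeeze-and-Excitation recalibration that SED-Net incorporates follows a standard two-step pattern: squeeze each channel to a scalar by global average pooling, then gate the channel by a learned excitation. The sketch below replaces the learned two-FC-layer excitation with a plain sigmoid, an intentional simplification to keep it self-contained; it is not SED-Net's module.

```python
# Simplified SE-style channel recalibration: squeeze (global average pool)
# then gate each channel. The sigmoid stands in for the learned two-layer
# excitation of the original SE block (assumed simplification).

import math

def se_recalibrate(channels):
    """channels: list of 2D feature maps; returns gated copies."""
    def squeeze(fm):  # global average pooling over one channel
        return sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))
    gates = [sigmoid(squeeze(fm)) for fm in channels]
    return [[[g * v for v in row] for row in fm]
            for g, fm in zip(gates, channels)]
```

The effect is channel-wise attention: strongly responding channels are amplified relative to weak ones, which is the "optimizes and obtains features from different channels" behavior the abstract describes.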
Affiliation(s)
- Shi Liang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Haoda Lu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Min Zang
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
- Xiangxue Wang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Yiping Jiao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Tingting Zhao
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
- Eugene Yujun Xu
- State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China; Department of Neurology, Center for Reproductive Sciences, Northwestern University Feinberg School of Medicine, IL, USA
- Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China

39
DDTNet: A dense dual-task network for tumor-infiltrating lymphocyte detection and segmentation in histopathological images of breast cancer. Med Image Anal 2022; 78:102415. [PMID: 35339950 DOI: 10.1016/j.media.2022.102415] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 02/14/2022] [Accepted: 03/01/2022] [Indexed: 11/23/2022]
Abstract
The morphological evaluation of tumor-infiltrating lymphocytes (TILs) in hematoxylin and eosin (H&E)-stained histopathological images is key to breast cancer (BCa) diagnosis, prognosis, and therapeutic response prediction. For now, the qualitative assessment of TILs is carried out by pathologists, and computer-aided automatic lymphocyte measurement remains a great challenge because of the small size and complex distribution of lymphocytes. In this paper, we propose a novel dense dual-task network (DDTNet) to simultaneously achieve automatic TIL detection and segmentation in histopathological images. DDTNet consists of a backbone network (i.e., a feature pyramid network) for extracting multi-scale morphological characteristics of TILs, a detection module for the localization of TIL centers, and a segmentation module for the delineation of TIL boundaries, where a boundary-aware branch is further used to provide a shape prior for segmentation. An effective feature fusion strategy is utilized to introduce multi-scale features with lymphocyte location information from highly correlated branches for precise segmentation. Experiments on three independent lymphocyte datasets of BCa demonstrate that DDTNet outperforms other advanced methods in detection and segmentation metrics. As part of this work, we also propose a semi-automatic method (TILAnno) to generate high-quality boundary annotations for TILs in H&E-stained histopathological images. TILAnno is used to produce a new lymphocyte dataset that contains 5029 annotated lymphocyte boundaries, which have been released to facilitate computational histopathology in the future.
40
Wang J, Wei J, Zhou Y, Chen G, Ren L. Leonurine hydrochloride-a new drug for the treatment of menopausal syndrome: Synthesis, estrogen-like effects and pharmacokinetics. Fitoterapia 2022; 157:105108. [PMID: 34954263 DOI: 10.1016/j.fitote.2021.105108] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 12/19/2021] [Accepted: 12/19/2021] [Indexed: 11/18/2022]
Abstract
This research aimed to investigate the estrogen-like effects of leonurine hydrochloride (Leo). First, we developed a total synthesis of Leo from 3,4,5-trimethoxybenzoic acid, and the structure was confirmed by 1H NMR and mass spectrometry (MS). Then the estrogenic activity of Leo in vitro and in vivo was studied. The proliferative and proliferation-inhibitory effects of Leo on MCF-7 and MDA-MB-231 cells indicate that Leo exerts estrogen-like effects through estrogen receptor α (ERα) and estrogen receptor β (ERβ) in vitro. A uterotrophic assay in juvenile mice showed that Leo has an estrogen-like effect in vivo: it promotes the development of the uterus, increases the uterine coefficient and the size of the uterine cavity, increases the number of uterine glands, and thickens the uterine wall. For further study, cyclophosphamide (CTX) was used to establish a mouse model of declining ovarian function. Through this model, we found that Leo can restore the estrous cycle of mice, increase the number of primordial and primary follicles in the ovaries, and regulate the disordered hypothalamic-pituitary-ovarian (HPOA) axis. Finally, the pharmacokinetics of Leo were studied, and its oral bioavailability was calculated to be 2.21%. In summary, Leo was synthesized, and its estrogen-like effects in vitro and in vivo, as well as its pharmacokinetics, were confirmed.
Affiliation(s)
- Jin Wang
- School of Pharmacy, Nanjing Tech University, 5th Mofan Road, Nanjing 21009, China
- Jie Wei
- School of Pharmacy, Nanjing Tech University, 5th Mofan Road, Nanjing 21009, China
- Yaxin Zhou
- School of Pharmacy, Nanjing Tech University, 5th Mofan Road, Nanjing 21009, China
- Guoguang Chen
- School of Pharmacy, Nanjing Tech University, 5th Mofan Road, Nanjing 21009, China
- Lili Ren
- School of Pharmacy, Nanjing Tech University, 5th Mofan Road, Nanjing 21009, China; Department of Microbiology and Immunology, Stanford University, Stanford, CA 94305, USA
41
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505 PMCID: PMC8909166 DOI: 10.3390/cancers14051199] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 02/16/2022] [Accepted: 02/22/2022] [Indexed: 01/10/2023] Open
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion in the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems, including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive, up-to-date review of deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, which is one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning methods to various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems in pathological image analysis are also provided in this review for the convenience of researchers who are interested in this exciting field.
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China
- Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA
- Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China
42
Doan TNN, Song B, Vuong TTL, Kim K, Kwak JT. SONNET: A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images. IEEE J Biomed Health Inform 2022; 26:3218-3228. [PMID: 35139032 DOI: 10.1109/jbhi.2022.3149936] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Automated nuclei segmentation and classification are key to analyzing and understanding cellular characteristics and functionality, supporting computer-aided digital pathology in disease diagnosis. However, the task remains challenging due to the intrinsic variations in size, intensity, and morphology of different types of nuclei. Herein, we propose a self-guided ordinal regression neural network for simultaneous nuclear segmentation and classification that can exploit the intrinsic characteristics of nuclei and focus on highly uncertain areas during training. The proposed network formulates nuclei segmentation as ordinal regression learning by introducing a distance-decreasing discretization strategy, which stratifies nuclei so that inner regions, forming the regular shape of a nucleus, are separated from outer regions, forming its irregular shape. It also adopts a self-guided training strategy that adaptively adjusts the weights associated with nuclear pixels according to the difficulty of the pixels as assessed by the network itself. To evaluate the performance of the proposed network, we employ large-scale multi-tissue datasets with 276,349 exhaustively annotated nuclei. We show that the proposed network achieves state-of-the-art performance in both nuclei segmentation and classification in comparison to several recently developed methods for segmentation and/or classification.
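The distance-decreasing discretization described in the abstract above can be illustrated with a small sketch: a binary nucleus mask is stratified into ordinal rings by normalized distance from its centroid, separating inner regions from outer ones. The bin edges and labeling scheme below are hypothetical stand-ins, not the paper's actual parameters.

```python
import numpy as np

def ordinal_rings(mask, edges=(0.4, 0.7, 0.9)):
    # Stratify a binary nucleus mask into ordinal ring labels by normalized
    # centroid distance: innermost region gets the highest label, background 0.
    # `edges` are illustrative bin boundaries, not the paper's values.
    inside = mask.astype(bool)
    ys, xs = np.nonzero(inside)
    cy, cx = ys.mean(), xs.mean()                 # nucleus centroid
    yy, xx = np.indices(mask.shape)
    d = np.hypot(yy - cy, xx - cx)
    d = d / d[inside].max()                       # normalize inside the nucleus
    labels = np.zeros(mask.shape, dtype=int)
    labels[inside] = len(edges) + 1 - np.digitize(d[inside], edges)
    return labels
```

For a disk-shaped mask this yields concentric rings, e.g. label 4 at the centroid down to label 1 at the boundary, which is the kind of ordinal target a regression head can be trained against.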
43
Sarti M, Parlani M, Diaz-Gomez L, Mikos AG, Cerveri P, Casarin S, Dondossola E. Deep Learning for Automated Analysis of Cellular and Extracellular Components of the Foreign Body Response in Multiphoton Microscopy Images. Front Bioeng Biotechnol 2022; 9:797555. [PMID: 35145962 PMCID: PMC8822221 DOI: 10.3389/fbioe.2021.797555] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 12/28/2021] [Indexed: 12/02/2022] Open
Abstract
The foreign body response (FBR) is a major unresolved challenge that compromises medical implant integration and function through inflammation and fibrotic encapsulation. Mice implanted with polymeric scaffolds, coupled with intravital non-linear multiphoton microscopy acquisition, enable multiparametric, longitudinal investigation of FBR evolution and interference strategies. However, follow-up analyses based on visual localization and manual segmentation are extremely time-consuming, subject to human error, and do not allow automated parameter extraction. We developed an integrated computational pipeline based on an innovative and versatile variant of the U-Net neural network to segment and quantify the cellular and extracellular structures of interest, which is maintained across different microscope objectives without impairing accuracy. This software for automatically detecting the elements of the FBR shows promise for unraveling the complexity of this pathophysiological process.
Affiliation(s)
- Mattia Sarti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Maria Parlani
- David H. Koch Center for Applied Research of Genitourinary Cancers and Genitourinary Medical Oncology Department, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Department of Cell Biology, Radboud University Medical Center, Nijmegen, Netherlands
- Luis Diaz-Gomez
- Department of Bioengineering, Rice University, Houston, TX, United States
- Antonios G. Mikos
- Department of Bioengineering, Rice University, Houston, TX, United States
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano University, Milan, Italy
- Stefano Casarin
- Center for Computational Surgery, Houston Methodist Research Institute, Houston, TX, United States
- Department of Surgery, Houston Methodist Hospital, Houston, TX, United States
- Houston Methodist Academic Institute, Houston, TX, United States
- Eleonora Dondossola
- David H. Koch Center for Applied Research of Genitourinary Cancers and Genitourinary Medical Oncology Department, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
44
45
Hollandi R, Moshkov N, Paavolainen L, Tasnadi E, Piccinini F, Horvath P. Nucleus segmentation: towards automated solutions. Trends Cell Biol 2022; 32:295-310. [DOI: 10.1016/j.tcb.2021.12.004] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 11/30/2021] [Accepted: 12/14/2021] [Indexed: 11/25/2022]
46
Zhao J, He YJ, Zhao SQ, Huang JJ, Zuo WM. AL-Net: Attention Learning Network based on Multi-Task Learning for Cervical Nucleus Segmentation. IEEE J Biomed Health Inform 2021; 26:2693-2702. [PMID: 34928808 DOI: 10.1109/jbhi.2021.3136568] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Cervical nucleus segmentation is a crucial and challenging task in automatic pathological diagnosis due to uneven staining, blurry boundaries, and adherent or overlapping nuclei in nucleus images. To overcome the limitations of current methods, we propose a multi-task network based on U-Net for cervical nucleus segmentation. This network consists of a primary task and an auxiliary task. The primary task predicts nuclei regions. The auxiliary task, which predicts the boundaries of nuclei, is designed to improve the feature extraction of the primary task. Furthermore, a context encoding layer is added behind each encoding layer of the U-Net. The output of each context encoding layer is processed by an attention learning module and then fused with the features of the decoding layer. In addition, a codec block is used in the attention learning module to obtain saliency-based attention and focused attention simultaneously. Experimental results show that the proposed network outperforms state-of-the-art methods on the 2014 ISBI dataset, BNS, MoNuSeg, and our nucleusSeg dataset.
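The primary/auxiliary task setup described above amounts to training against a joint objective over region and boundary predictions. A minimal sketch, assuming a simple weighted sum of binary cross-entropy terms (the plain BCE form and the weight `alpha` are illustrative assumptions, not AL-Net's exact loss):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Mean binary cross-entropy between predicted probabilities and targets.
    pred = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean()

def multitask_loss(region_pred, region_gt, boundary_pred, boundary_gt, alpha=0.5):
    # Primary (region) loss plus a weighted auxiliary (boundary) loss;
    # `alpha` is a hypothetical trade-off hyperparameter.
    return bce(region_pred, region_gt) + alpha * bce(boundary_pred, boundary_gt)
```

The auxiliary boundary term acts as a regularizer on the shared encoder: gradients from both heads shape the same features, which is the mechanism the abstract credits for improved feature extraction.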
47
Pal A, Xue Z, Desai K, Aina F Banjo A, Adepiti CA, Long LR, Schiffman M, Antani S. Deep multiple-instance learning for abnormal cell detection in cervical histopathology images. Comput Biol Med 2021; 138:104890. [PMID: 34601391 PMCID: PMC11977668 DOI: 10.1016/j.compbiomed.2021.104890] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Revised: 09/15/2021] [Accepted: 09/22/2021] [Indexed: 01/18/2023]
Abstract
Cervical cancer is a disease of significant concern affecting women's health worldwide. Early detection and treatment at the precancerous stage can help reduce mortality. High-grade cervical abnormalities and precancer are confirmed using microscopic analysis of cervical histopathology. However, manual analysis of cervical biopsy slides is time-consuming, needs expert pathologists, and suffers from reader-variability errors. Prior work in the literature has suggested using automated image analysis algorithms for analyzing cervical histopathology images captured with whole-slide digital scanners (e.g., Aperio, Hamamatsu, etc.). However, whole-slide digital tissue scanners with good optical magnification and acceptable imaging quality are cost-prohibitive and difficult to acquire in low- and middle-resource regions. Hence, the development of low-cost imaging systems and automated image analysis algorithms is of critical importance. Motivated by this, we conduct an experimental study to assess the feasibility of developing a low-cost diagnostic system with an H&E-stained cervical tissue image analysis algorithm. In our imaging system, image acquisition is performed by a smartphone affixed to the top of a commonly available light microscope, which magnifies the cervical tissue. The images are not captured at a constant optical magnification, and, unlike whole-slide scanners, our imaging system is unable to record the magnification. The images are megapixel images and are labeled based on the presence of abnormal cells. Our dataset contains a total of 1331 images (train: 846, validation: 116, test: 369). We formulate the classification task as a deep multiple-instance learning problem and quantitatively evaluate the classification performance of four different types of multiple-instance learning algorithms trained with five different architectures designed with varying instance sizes. Finally, we design a sparse attention-based multiple-instance learning framework that produces a maximum classification accuracy of 84.55% on the test set.
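The multiple-instance formulation above treats each image as a bag of patch-level instances with only a single image-level label. A minimal sketch of attention-based MIL pooling in the spirit of the attention framework mentioned in the abstract (the fixed weight vectors and plain softmax attention are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def attention_mil(instances, w_att, w_cls):
    # instances: (n, d) feature vectors for the patches of one image (bag).
    # w_att, w_cls: (d,) hypothetical learned attention and classifier weights.
    scores = instances @ w_att                    # per-instance attention logits
    a = np.exp(scores - scores.max())
    a /= a.sum()                                  # softmax attention weights
    bag = a @ instances                           # attention-weighted bag embedding
    logit = bag @ w_cls
    return 1.0 / (1.0 + np.exp(-logit)), a        # bag probability, weights
```

Because only the bag-level label supervises training, the attention weights learn to emphasize the instances (patches) most indicative of abnormality, which is what makes patch-level localization possible without patch labels.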
Affiliation(s)
- Anabik Pal
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Kanan Desai
- National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- L Rodney Long
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Mark Schiffman
- National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sameer Antani
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
48
Rashmi R, Prasad K, Udupa CBK. Multi-channel Chan-Vese model for unsupervised segmentation of nuclei from breast histopathological images. Comput Biol Med 2021; 136:104651. [PMID: 34333226 DOI: 10.1016/j.compbiomed.2021.104651] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 07/13/2021] [Accepted: 07/13/2021] [Indexed: 11/28/2022]
Abstract
The pathologist determines the malignancy of a breast tumor by studying histopathological images. In particular, the characteristics and distribution of nuclei contribute greatly to the decision process. Hence, the segmentation of nuclei constitutes a crucial task in the classification of breast histopathological images. Manual analysis of these images is subjective, tedious, and susceptible to human error. Consequently, the development of computer-aided diagnostic systems for analyzing these images has become a vital factor in the domain of medical imaging. However, using medical image processing techniques to segment nuclei is challenging due to the diverse structure of the cells, poor staining, the occurrence of artifacts, etc. Although supervised computer-aided systems for nuclei segmentation are popular, they depend on the availability of standard annotated datasets. In this regard, this work presents an unsupervised method based on the Chan-Vese model to segment nuclei from breast histopathological images. The proposed model utilizes multi-channel color information to efficiently segment the nuclei. This study also proposes a pre-processing step to select an appropriate color channel that discriminates nuclei from the background region. An extensive evaluation of the proposed model on two challenging datasets demonstrates its validity and effectiveness.
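The Chan-Vese model partitions an image into two regions whose intensities are each as close as possible to a constant. A minimal sketch of its piecewise-constant core on one pre-selected color channel (the full model also evolves a level set with a curvature/length penalty, omitted here; the multi-channel extension is the paper's contribution and is not reproduced):

```python
import numpy as np

def chan_vese_pc(channel, n_iter=20):
    # Piecewise-constant core of Chan-Vese on a single color channel:
    # alternately update the two region means c1, c2 and reassign each
    # pixel to the nearer mean. Regularization terms are omitted.
    seg = channel > channel.mean()                # initial partition
    if not seg.any() or seg.all():
        return seg                                # degenerate (flat) image
    for _ in range(n_iter):
        c1 = channel[seg].mean()                  # foreground mean
        c2 = channel[~seg].mean()                 # background mean
        new = (channel - c1) ** 2 < (channel - c2) ** 2
        if np.array_equal(new, seg):
            break                                 # converged
        seg = new
    return seg
```

On a channel where nuclei are well separated from background, this fixed-point iteration converges in a few steps, which is why choosing a discriminative color channel (the pre-processing step above) matters so much.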
Affiliation(s)
- R Rashmi
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, India
- Chethana Babu K Udupa
- Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, India
49
Sun Y, Huang X, Zhou H, Zhang Q. SRPN: similarity-based region proposal networks for nuclei and cells detection in histology images. Med Image Anal 2021; 72:102142. [PMID: 34198042 DOI: 10.1016/j.media.2021.102142] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 05/11/2021] [Accepted: 06/17/2021] [Indexed: 10/21/2022]
Abstract
The detection of nuclei and cells in histology images is of great value in both clinical practice and pathological studies. However, factors such as the morphological variations of nuclei and cells make it a challenging task on which conventional object detection methods often cannot obtain satisfactory performance. A detection task consists of two sub-tasks, classification and localization; under dense object detection, classification is key to boosting detection performance. Considering this, we propose similarity-based region proposal networks (SRPN) for nuclei and cell detection in histology images. In particular, a customized convolution layer termed the embedding layer is designed for network building. The embedding layer is added to the region proposal networks, enabling the networks to learn discriminative features through similarity learning. Features obtained by similarity learning can significantly boost classification performance compared to conventional methods. SRPN can be easily integrated into standard convolutional neural network architectures such as Faster R-CNN and RetinaNet. We test the proposed approach on multi-organ nuclei detection and signet ring cell detection in histological images. Experimental results show that networks applying similarity learning achieve superior performance on both tasks compared to their counterparts. In particular, the proposed SRPN achieves state-of-the-art performance on the MoNuSeg benchmark for nuclei segmentation and detection compared to previous methods, and on the signet ring cell detection benchmark compared with baselines. The source code is publicly available at: https://github.com/sigma10010/nuclei_cells_det.
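Similarity learning in an embedding layer boils down to scoring proposal features against class representatives in an embedding space. A minimal sketch using cosine similarity against fixed prototype vectors (in SRPN the embedding is learned end-to-end inside the region proposal network; the prototypes here are hypothetical):

```python
import numpy as np

def similarity_classify(features, prototypes):
    # features:   (n_proposals, d) embedded proposal features.
    # prototypes: (n_classes, d) hypothetical per-class reference vectors.
    # Classify each proposal by its highest cosine similarity to a prototype.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = f @ p.T                     # (n_proposals, n_classes) cosine scores
    return sims.argmax(axis=1), sims
```

Cosine scores in a learned embedding tend to separate visually similar dense objects (e.g., adjacent nuclei vs. background texture) better than raw logits, which is the intuition behind using similarity learning for the classification sub-task.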
Affiliation(s)
- Yibao Sun
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Huiyu Zhou
- School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
50
Abstract
State-of-the-art semantic segmentation methods rely heavily on complicated deep networks and thus cannot be trained efficiently. This paper introduces a novel Circle-U-Net architecture that exceeds the original U-Net on several benchmarks. The proposed model includes circle connect layers, which are the backbone of the ResUNet-a architecture. The model possesses a contracting path with residual bottleneck and circle connect layers that captures context, and an expanding path with upsampling and merging layers for pixel-wise localization. The experimental results show that the proposed Circle-U-Net improves accuracy by 5.6676% and IoU (Intersection over Union) by 2.1587%, and can detect 67% more classes than U-Net, which is better than current results.