1
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. PMID: 37802010. DOI: 10.1016/j.media.2023.102969.
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited, a setting that previous work on cell identification has seldom explored. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
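The augmentation idea in this abstract can be sketched in a few lines: the same stochastic transform is applied to both real and generated batches before they reach the discriminator, so the discriminator never memorizes un-augmented target images. The NumPy sketch below only illustrates the policy (brightness jitter plus translation); the function name and parameters are assumptions, and the paper's actual module is differentiable so gradients flow back to the generator.

```python
import numpy as np

def augment_batch(batch, rng):
    """Stochastic augmentation applied identically to real and translated
    (fake) image batches before the discriminator, to reduce discriminator
    overfitting. Policy here: per-image brightness jitter plus one random
    wrap-around translation for the whole batch. Illustrative only."""
    n = batch.shape[0]
    # per-image brightness jitter in [-0.2, 0.2]
    shift = rng.uniform(-0.2, 0.2, size=(n, 1, 1))
    out = np.clip(batch + shift, 0.0, 1.0)
    # one random spatial translation shared by the batch (wrap-around)
    dy, dx = rng.integers(-2, 3, size=2)
    return np.roll(out, shift=(int(dy), int(dx)), axis=(1, 2))
```

In a GAN training loop one would call such a function on both the real target images and the generator outputs before computing the discriminator loss, keeping losses and architectures otherwise unchanged.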
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
2
Karabulut YY, Dinç U, Köse EÇ, Türsen Ü. Deep learning as a new tool in the diagnosis of mycosis fungoides. Arch Dermatol Res 2022. PMID: 36571610. DOI: 10.1007/s00403-022-02521-1.
Abstract
Mycosis fungoides (MF) accounts for the majority of cutaneous lymphomas. As a malignant disease, its greatest diagnostic challenge is to differentiate MF from inflammatory diseases in a timely manner. Contemporary computational methods successfully identify cell nuclei in histological specimens, and deep learning methods are especially favored for such tasks. A deep learning model was used to detect nuclei in hematoxylin-eosin (H&E)-stained micrographs, and nuclear properties were extracted after detection. A multi-layer perceptron classifier was used to identify lymphocytes among the detected nuclei. For each property, comparisons between MF and non-MF specimens were carried out using statistical tests, and the results were compared with findings in the literature to provide a descriptive analysis. A random forest classifier was then used to build a model distinguishing MF from non-MF lymphocytes. Ten nuclear properties were statistically significantly different between MF and non-MF specimens: MF nuclei were smaller, darker, and more heterogeneous. The lymphocyte detection algorithm had an average prediction power of 90.5%, and the MF detection algorithm an average of 94.2%. This project aims to fill the gap between computational advancement and medical practice. The models could make MF diagnosis easier, more accurate, and earlier, and the results also challenge the manually examined and defined nuclear properties of MF with the help of data abundance and computer objectivity.
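The per-nucleus properties compared above (size, darkness, heterogeneity) can be sketched as a small feature-extraction step over a detected-nucleus label map. The function and feature names below are illustrative, not the study's exact definitions.

```python
import numpy as np

def nuclear_features(gray, labels):
    """Per-nucleus features of the kind compared between MF and non-MF:
    size (pixel area), darkness (mean intensity), and heterogeneity
    (intensity standard deviation). `gray` is a grayscale image in [0, 1];
    `labels` assigns a positive integer id to each detected nucleus and
    0 to background. Sketch for illustration only."""
    feats = {}
    for nucleus_id in np.unique(labels):
        if nucleus_id == 0:  # 0 marks background, not a nucleus
            continue
        pixels = gray[labels == nucleus_id]
        feats[int(nucleus_id)] = {
            "area": int(pixels.size),
            "mean_intensity": float(pixels.mean()),
            "heterogeneity": float(pixels.std()),
        }
    return feats
```

Features of this form would then feed a downstream classifier (a multi-layer perceptron for lymphocyte identification, a random forest for MF vs. non-MF), as the abstract describes.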
3
Abstract
Due to domain shifts, deep cell/nucleus detection models trained on one microscopy image dataset might not be applicable to other datasets acquired with different imaging modalities. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently been exploited to close domain gaps and has achieved excellent nucleus detection performance. However, current GAN-based UDA model training often requires a large amount of unannotated target data, which may be prohibitively expensive to obtain in real practice. Additionally, these methods degrade significantly when trained with limited target data. In this paper, we study a more realistic yet challenging UDA scenario, where (unannotated) target training data is very scarce, a low-resource case rarely explored for nucleus detection in previous work. Specifically, we augment a dual GAN network by leveraging a task-specific model to supplement the target-domain discriminator and facilitate generator learning with limited data. The task model is constrained by cross-domain prediction consistency to encourage semantic content preservation for image-to-image translation. Next, we incorporate a stochastic, differentiable data augmentation module into the task-augmented GAN network to further improve model training by alleviating discriminator overfitting. This data augmentation module is a plug-and-play component, requiring no modification of network architectures or loss functions. We evaluate the proposed low-resource UDA method for nucleus detection on multiple public cross-modality microscopy image datasets. With a single training image in the target domain, our method significantly outperforms recent state-of-the-art UDA approaches, and it is very competitive with or superior to fully supervised models trained with real labeled target data.
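The cross-domain prediction consistency constraint described above can be sketched as a simple agreement term: the task model's output on a source image and on its target-styled translation should match. Mean-squared error is assumed here as the agreement measure; the paper's exact consistency term may differ.

```python
import numpy as np

def prediction_consistency(pred_source, pred_translated):
    """Cross-domain prediction consistency loss: penalize disagreement
    between the task model's predictions on a source image and on its
    translated (target-styled) version, encouraging the generator to
    preserve semantic content. MSE is an illustrative choice."""
    pred_source = np.asarray(pred_source, dtype=float)
    pred_translated = np.asarray(pred_translated, dtype=float)
    return float(np.mean((pred_source - pred_translated) ** 2))
```

During training, a term of this form would be added to the GAN objective so that image translation cannot distort nucleus locations without incurring a penalty.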
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus
4
Javed S, Mahmood A, Dias J, Werghi N. Multi-level feature fusion for nucleus detection in histology images using correlation filters. Comput Biol Med 2022; 143:105281. PMID: 35139456. DOI: 10.1016/j.compbiomed.2022.105281.
Abstract
Nucleus detection is an important step for the analysis of histology images in the field of computational pathology. Pathologists use quantitative nuclear morphology for better cancer grading and prognostication. Nucleus detection is very challenging because of large morphological variations across different types of nuclei, nuclear clutter, and heterogeneity. To address these challenges, we aim to improve nucleus detection using multi-level feature fusion based on discriminative correlation filters. The proposed algorithm employs multiple feature pools based on varying feature combinations. Early fusion is employed to integrate multi-feature information within a pool, and inter-pool fusion is proposed to fuse information across multiple pools. Inter-pool consistency is proposed to find the pools that are consistent with and complement each other to improve performance; for this purpose, the relative standard deviation is used as an inter-pool consistency measure. Pool robustness to noise is also estimated using the relative standard deviation as a robustness measure. High-level pool fusion is then performed using the inter-pool consistency and pool-robustness scores. The proposed algorithm provides a robust and reliable appearance model for nucleus detection. It is evaluated on three publicly available datasets and compared with several existing state-of-the-art methods, and it consistently outperforms these methods across a wide range of experiments.
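The relative standard deviation measure named in this abstract, and one way it could drive pool fusion, can be sketched as follows. The inverse-weighting rule in `fuse_pools` is an assumption for illustration, not the paper's published formula.

```python
import numpy as np

def relative_std(response_map):
    """Relative standard deviation (std / mean) of a pool's correlation
    response map; the abstract uses this as the inter-pool consistency
    and robustness measure."""
    r = np.asarray(response_map, dtype=float)
    return float(r.std() / (r.mean() + 1e-8))

def fuse_pools(response_maps):
    """Late fusion across feature pools: weight each pool's response map
    inversely to its relative standard deviation, so more consistent
    pools contribute more. Weighting scheme is illustrative only."""
    weights = np.array([1.0 / (relative_std(r) + 1e-8) for r in response_maps])
    weights /= weights.sum()
    return sum(w * np.asarray(r, dtype=float)
               for w, r in zip(weights, response_maps))
```

Nucleus candidates would then be read off the fused response map, e.g. by thresholding or local-maximum search.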
Affiliation(s)
- Sajid Javed
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates; Khalifa University Centre for Autonomous Robotics Systems (KUCARS), Abu Dhabi, United Arab Emirates.
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan.
- Jorge Dias
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates; Khalifa University Centre for Autonomous Robotics Systems (KUCARS), Abu Dhabi, United Arab Emirates.
- Naoufel Werghi
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates; Khalifa University Centre for Autonomous Robotics Systems (KUCARS), Abu Dhabi, United Arab Emirates.
5
Javed S, Mahmood A, Dias J, Werghi N, Rajpoot N. Spatially Constrained Context-Aware Hierarchical Deep Correlation Filters for Nucleus Detection in Histology Images. Med Image Anal 2021; 72:102104. PMID: 34242872. DOI: 10.1016/j.media.2021.102104.
Abstract
Nucleus detection in histology images is a fundamental step for cellular-level analysis in computational pathology. In clinical practice, quantitative nuclear morphology can be used for diagnostic decision making, prognostic stratification, and treatment outcome prediction. Nucleus detection is a challenging task because of large variations in the shape of different types of nuclei, as well as nuclear clutter, heterogeneous chromatin distribution, and irregular, fuzzy boundaries. To address these challenges, we aim to accurately detect nuclei using spatially constrained context-aware correlation filters with hierarchical deep features extracted from multiple layers of a pre-trained network. During training, we extract contextual patches around each nucleus, which are used as negative examples, while the actual nucleus patch is used as a positive example. In order to spatially constrain the correlation filters, we propose to construct a spatial structural graph across different nucleus components encoding pairwise similarities. The correlation filters are constrained to act as eigenvectors of the Laplacian of the spatial graphs, forcing them to capture the nucleus structure. A novel objective function is proposed by embedding graph-based structural information as well as contextual information within the discriminative correlation filter framework. The learned filters are constrained to be orthogonal to both the contextual patches and the spatial graph-Laplacian basis to improve localization and discriminative performance. The proposed objective function trains a hierarchy of correlation filters on different deep feature layers to capture the heterogeneity in nuclear shape and texture. The proposed algorithm is evaluated on three publicly available datasets and compared with 15 current state-of-the-art methods, demonstrating competitive performance in terms of accuracy, speed, and generalization.
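The graph-Laplacian construction that anchors the spatial constraint above can be sketched in a few lines. The similarity definition between nucleus components is left abstract here; only the standard unnormalized Laplacian L = D - W is shown.

```python
import numpy as np

def spatial_graph_laplacian(similarity):
    """Unnormalized Laplacian L = D - W of the spatial structural graph,
    where W[i, j] holds the pairwise similarity between nucleus components
    i and j and D is the diagonal degree matrix. The abstract constrains
    the correlation filters toward eigenvectors of this Laplacian so they
    capture nucleus structure; sketch for illustration only."""
    W = np.asarray(similarity, dtype=float)
    D = np.diag(W.sum(axis=1))
    return D - W
```

Eigenvectors of the result can be obtained with `np.linalg.eigh`; by construction each row of L sums to zero, so the constant vector is always an eigenvector with eigenvalue 0, and the small-eigenvalue eigenvectors vary smoothly over the graph.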
Affiliation(s)
- Sajid Javed
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE; Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan
- Jorge Dias
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE; Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Naoufel Werghi
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE; Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, NW1 2DB, UK
6
Abstract
BACKGROUND: Nucleus detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored for specific datasets, and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but several critical, open questions remain to be addressed. RESULTS: We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although images can be visually similar due to the same staining technique and imaging protocol, deep models learned from images of different organs might not deliver desirable results and would require fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and other/non-target data does not always yield higher nucleus detection accuracy, and proper data manipulation during model training may be required to achieve good performance. CONCLUSIONS: We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report several significant findings, some of which have not been reported in previous studies. The model performance analysis and observations should be helpful for nucleus detection in microscopy images.
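The pixel-to-pixel regression formulation above relies on a continuous training target rather than discrete point labels. A common construction, sketched below, places a Gaussian bump at each annotated nucleus center; the network regresses this map, and detections are read off as local maxima. The function name, `sigma`, and the max-combination of overlapping bumps are assumptions for illustration, not necessarily this paper's exact target design.

```python
import numpy as np

def proximity_map(shape, centers, sigma=2.0):
    """Regression target of the kind used by fully convolutional nucleus
    detectors: a (H, W) map with a Gaussian bump at each annotated
    nucleus center (row, col). Overlapping bumps keep the pointwise max
    so every peak stays at 1. Sketch for illustration only."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    target = np.zeros(shape, dtype=float)
    for cy, cx in centers:
        bump = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))
        target = np.maximum(target, bump)
    return target
```

At inference time, thresholding the predicted map and taking local maxima recovers nucleus coordinates, which is what makes the evaluation across organs and institutions described above possible with a single model architecture.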
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, and the Data Science to Patient Value initiative, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, Colorado 80045, United States
- Yuanpu Xie
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Xiaoshuang Shi
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Pingjun Chen
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Zizhao Zhang
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida 32611, United States
- Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida 32611, United States