1
Imran MT, Shafi I, Ahmad J, Butt MFU, Villar SG, Villena EG, Khurshaid T, Ashraf I. Virtual histopathology methods in medical imaging - a systematic review. BMC Med Imaging 2024; 24:318. [PMID: 39593024] [PMCID: PMC11590286] [DOI: 10.1186/s12880-024-01498-9]
Abstract
Virtual histopathology is an emerging technology in medical imaging that utilizes advanced computational methods to analyze tissue images for more precise disease diagnosis. Traditionally, histopathology relies on manual techniques and expertise, often resulting in time-consuming processes and variability in diagnoses. Virtual histopathology offers a more consistent and automated approach, employing techniques such as machine learning, deep learning, and image processing to simulate staining and enhance tissue analysis. This review explores the strengths, limitations, and clinical applications of these methods, highlighting recent advancements in virtual histopathological approaches. In addition, important areas for future research are identified to improve diagnostic accuracy and efficiency in clinical settings.
Affiliation(s)
- Muhammad Talha Imran
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Imran Shafi
- College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Jamil Ahmad
- Department of Computing, Abasyn University Islamabad Campus, Islamabad, 44000, Pakistan
- Muhammad Fasih Uddin Butt
- Department of Electrical and Computer Engineering, COMSATS University Islamabad, Islamabad, 44000, Pakistan
- Santos Gracia Villar
- Universidad Europea del Atlantico, Santander, 39011, Spain
- Universidad Internacional Iberoamericana, Campeche, 24560, Mexico
- Universidade Internacional do Cuanza, Cuito, Angola
- Eduardo Garcia Villena
- Universidad Europea del Atlantico, Santander, 39011, Spain
- Universidad Internacional Iberoamericana Arecibo, Puerto Rico, 00613, USA
- Universidad de La Romana, La Romana, República Dominicana
- Tahir Khurshaid
- Department of Electrical Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea
- Imran Ashraf
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan, 38541, Republic of Korea
2
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643] [DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
3
Yang X, Chin BB, Silosky M, Wehrend J, Litwiller DV, Ghosh D, Xing F. Learning Without Real Data Annotations to Detect Hepatic Lesions in PET Images. IEEE Trans Biomed Eng 2024; 71:679-688. [PMID: 37708016] [DOI: 10.1109/tbme.2023.3315268]
Abstract
OBJECTIVE: Deep neural networks have recently been applied to lesion identification in fluorodeoxyglucose (FDG) positron emission tomography (PET) images, but they typically rely on a large amount of well-annotated data for model training. This is extremely difficult to achieve for neuroendocrine tumors (NETs) because of the low incidence of NETs and the expense of lesion annotation in PET images. The objective of this study is to design a novel, adaptable deep learning method, which uses no real lesion annotations but instead low-cost, list-mode simulated data, for hepatic lesion detection in real-world clinical NET PET images. METHODS: We first propose a region-guided generative adversarial network (RG-GAN) for lesion-preserved image-to-image translation. Then, we design a specific data augmentation module for our list-mode simulated data and incorporate this module into the RG-GAN to improve model training. Finally, we combine the RG-GAN, the data augmentation module, and a lesion detection neural network into a unified framework for joint-task learning to adaptively identify lesions in real-world PET data. RESULTS: The proposed method outperforms recent state-of-the-art lesion detection methods on real clinical 68Ga-DOTATATE PET images and produces very competitive performance relative to the target model trained with real lesion annotations. CONCLUSION: With RG-GAN modeling and specific data augmentation, we can obtain good lesion detection performance without using any real data annotations. SIGNIFICANCE: This study introduces an adaptable deep learning method for hepatic lesion identification in NETs, which can significantly reduce human effort for data annotation and improve model generalizability for lesion detection with PET imaging.
4
Juez-Castillo G, Valencia-Vidal B, Orrego LM, Cabello-Donayre M, Montosa-Hidalgo L, Pérez-Victoria JM. FiCRoN, a deep learning-based algorithm for the automatic determination of intracellular parasite burden from fluorescence microscopy images. Med Image Anal 2024; 91:103036. [PMID: 38016388] [DOI: 10.1016/j.media.2023.103036]
Abstract
Protozoan parasites are responsible for devastating, neglected diseases, and the automatic determination of intracellular parasite burden from fluorescence microscopy images remains a challenging problem. Recent advances in deep learning are transforming this process; however, high-performance algorithms have not yet been developed. The limitations in image acquisition, especially for intracellular parasites, make this process complex. For this reason, traditional image-processing methods are not easily transferred between different datasets, and segmentation-based strategies do not achieve high performance. Here, we propose a novel method, FiCRoN, based on fully convolutional regression networks (FCRNs), as a promising new tool for estimating intracellular parasite burden. This estimation requires three values: intracellular parasites, infected cells, and uninfected cells. FiCRoN solves this problem as multi-task learning: counting by regression at two scales, a smaller one for intracellular parasites and a larger one for host cells. It does not use segmentation or detection, resulting in higher generalization of counting tasks and, therefore, a decrease in error propagation. Linear regression reveals an excellent correlation coefficient between manual and automatic methods. FiCRoN is an innovative freedom-respecting image analysis software based on deep learning, designed to provide fast and accurate quantification of parasite burden, and is also potentially useful as a single-cell counter.
Affiliation(s)
- Graciela Juez-Castillo
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Científicas (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain; Research Group Osiris&Bioaxis, Faculty of Engineering, El Bosque University, 110121 Bogotá, Colombia
- Brayan Valencia-Vidal
- Research Group Osiris&Bioaxis, Faculty of Engineering, El Bosque University, 110121 Bogotá, Colombia; Department of Computer Engineering, Automation and Robotics, Research Centre for Information and Communication Technologies, University of Granada, 18014 Granada, Spain
- Lina M Orrego
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Científicas (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain
- María Cabello-Donayre
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Científicas (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain; Universidad Internacional de la Rioja, 26006 La Rioja, Spain
- Laura Montosa-Hidalgo
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Científicas (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain
- José M Pérez-Victoria
- Instituto de Parasitología y Biomedicina "López-Neyra", Consejo Superior de Investigaciones Científicas (IPBLN-CSIC), PTS Granada, 18016 Granada, Spain
5
Liu S, Yin S, Qu L, Wang M, Song Z. A Structure-Aware Framework of Unsupervised Cross-Modality Domain Adaptation via Frequency and Spatial Knowledge Distillation. IEEE Trans Med Imaging 2023; 42:3919-3931. [PMID: 37738201] [DOI: 10.1109/tmi.2023.3318006]
Abstract
Unsupervised domain adaptation (UDA) aims to train a model on a labeled source domain and adapt it to an unlabeled target domain. In the medical image segmentation field, most existing UDA methods rely on adversarial learning to address the domain gap between different image modalities, but this process is complicated and inefficient. In this paper, we propose a simple yet effective UDA method based on both frequency and spatial domain transfer under a multi-teacher distillation framework. In the frequency domain, we introduce the non-subsampled contourlet transform to identify domain-invariant and domain-variant frequency components (DIFs and DVFs), and we replace the DVFs of the source domain images with those of the target domain images while keeping the DIFs unchanged to narrow the domain gap. In the spatial domain, we propose a batch momentum update-based histogram matching strategy to minimize the domain-variant image style bias. Additionally, we propose a dual contrastive learning module at both the image and pixel levels to learn structure-related information. Our proposed method outperforms state-of-the-art methods on two cross-modality medical image segmentation datasets (cardiac and abdominal). Code is available at https://github.com/slliuEric/FSUDA.
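The spatial-domain step of this entry rests on classical histogram matching. As an illustration only (not the authors' code, and omitting their batch momentum update and frequency-domain transfer), a minimal grayscale histogram-matching routine can be sketched as:

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap source intensities so their empirical CDF follows the reference's."""
    # Unique source values, the inverse map back to pixels, and value counts
    s_vals, s_inv, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # For each source quantile, look up the reference intensity at that quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_inv].reshape(source.shape)
```

Matching source images to target-domain references in this way aligns image "style" (intensity statistics) before segmentation training, which is the bias the paper's batch momentum strategy stabilizes across mini-batches.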
6
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010] [DOI: 10.1016/j.media.2023.102969]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited, a setting that previous work on cell identification has seldom explored. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
7
Wang S, Rong R, Gu Z, Fujimoto J, Zhan X, Xie Y, Xiao G. Unsupervised domain adaptation for nuclei segmentation: Adapting from hematoxylin & eosin stained slides to immunohistochemistry stained slides using a curriculum approach. Comput Methods Programs Biomed 2023; 241:107768. [PMID: 37619429] [DOI: 10.1016/j.cmpb.2023.107768]
Abstract
BACKGROUND AND OBJECTIVE: Unsupervised domain adaptation (UDA) is a powerful approach to tackling domain discrepancies and reducing the burden of laborious, error-prone pixel-level annotations for instance segmentation. However, the domain adaptation strategies utilized in previous instance segmentation models pool all the labeled/detected instances together to train the instance-level GAN discriminator, which neglects the differences among multiple instance categories. Such pooling prevents UDA instance segmentation models from learning the categorical correspondence between source and target domains that is needed for accurate instance classification. METHODS: To tackle this challenge, we propose an Instance Segmentation CycleGAN (ISC-GAN) algorithm for UDA multiclass-instance segmentation. We conduct extensive experiments on the multiclass nuclei recognition task to transfer knowledge from hematoxylin and eosin to immunohistochemistry stained pathology images. Specifically, we fuse CycleGAN with Mask R-CNN to learn categorical correspondence with image-level domain adaptation and virtual supervision. Moreover, we utilize Curriculum Learning to separate the learning process into two steps: (1) learning segmentation only on labeled source data, and (2) learning target domain segmentation with paired virtual labels generated by ISC-GAN. Performance was further improved through experiments with other strategies, including Shared Weights, Knowledge Distillation, and Expanded Source Data. RESULTS: Compared to the baseline model and three other UDA instance detection and segmentation models, ISC-GAN achieves state-of-the-art performance, with 39.1% average precision and 48.7% average recall. The source code of ISC-GAN is available at https://github.com/sdw95927/InstanceSegmentation-CycleGAN.
CONCLUSION: ISC-GAN adapted knowledge from hematoxylin and eosin to immunohistochemistry stained pathology images, suggesting its potential for reducing the need for large annotated pathological image datasets in deep learning and computer vision tasks.
Affiliation(s)
- Shidan Wang
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Ruichen Rong
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Zifan Gu
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Junya Fujimoto
- Department of Pathology, Division of Pathology/Lab Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xiaowei Zhan
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Yang Xie
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX 75390, USA; Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX 75390, USA; Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX 75390, USA
8
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
9
Bai T, Zhang Z, Guo S, Zhao C, Luo X. Semi-Supervised Cell Detection with Reliable Pseudo-Labels. J Comput Biol 2022; 29:1061-1073. [PMID: 35704885] [DOI: 10.1089/cmb.2022.0108]
Abstract
Pathological images play an important role in the diagnosis, treatment, and prognosis of cancer. These images usually contain complex environments and cells of different shapes, and pathologists spend considerable time and labor analyzing and discriminating the cells in them. Therefore, fully annotated pathological image datasets are not easy to obtain. In view of the problem of insufficient labeled data, we input a large number of unlabeled images into the pretrained model to generate accurate pseudo-labels. In this article, we propose two methods to improve the quality of pseudo-labels, namely, pseudo-labeling based on an adaptive threshold and pseudo-labeling based on cell count. These two pseudo-labeling methods take into account the distribution of cells in different pathological images when removing background noise, and ensure that accurate pseudo-labels are generated for each unlabeled image. Meanwhile, when pseudo-labels are used for model retraining, we perform data distillation on the feature maps of unlabeled images through an attention mechanism, which further improves the quality of training data. In addition, we also propose a multi-task learning model, which learns the cell detection task and the cell count task simultaneously and improves the performance of cell detection through feature sharing. We verified the above methods on three different datasets, and the results show that the detection performance of the model retrained with a large number of unlabeled images is improved by 9%-13% compared with the model that only uses a small number of labeled images for pretraining. Moreover, our methods have good applicability on the three datasets.
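The adaptive-threshold idea in this entry can be illustrated with a small sketch (a hypothetical simplification, not the authors' implementation): rather than one global confidence cutoff, each unlabeled image receives a threshold derived from its own score distribution, so images with many faint cells are not wiped clean by a cutoff tuned on dense, high-contrast images.

```python
import numpy as np

def adaptive_pseudo_labels(scores: np.ndarray, quantile: float = 0.95) -> np.ndarray:
    """Binarize a per-pixel cell-confidence map with an image-specific threshold.

    scores: confidence map in [0, 1] from a pretrained detector.
    quantile: per-image quantile used as the cutoff, so the threshold
    adapts to each image's own score distribution.
    """
    threshold = np.quantile(scores, quantile)
    return scores >= threshold
```

Applied to two images with very different confidence ranges, roughly the same top fraction of pixels survives in each, which a single global threshold would not guarantee.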
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
- Zhenting Zhang
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
- Shuyu Guo
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
- Chen Zhao
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, Changchun, China
- Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, Changchun, China
10
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey the recent advances of domain adaptation methods in medical image analysis. We first present the motivation for introducing domain adaptation techniques to tackle domain heterogeneity issues in medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of them is further divided into supervised, semi-supervised and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges and future directions of this active research field.
11
Cornish TC. Artificial intelligence for automating the measurement of histologic image biomarkers. J Clin Invest 2021; 131:147966. [PMID: 33855974] [DOI: 10.1172/jci147966]
Abstract
Artificial intelligence has been applied to histopathology for decades, but the recent increase in interest is attributable to well-publicized successes in the application of deep-learning techniques, such as convolutional neural networks, for image analysis. Recently, generative adversarial networks (GANs) have provided a method for performing image-to-image translation tasks on histopathology images, including image segmentation. In this issue of the JCI, Koyuncu et al. applied GANs to whole-slide images of p16-positive oropharyngeal squamous cell carcinoma (OPSCC) to automate the calculation of a multinucleation index (MuNI) for prognostication in p16-positive OPSCC. Multivariable analysis showed that the MuNI was prognostic for disease-free survival, overall survival, and metastasis-free survival. These results are promising, as they present a prognostic method for p16-positive OPSCC and highlight methods for using deep learning to measure image biomarkers from histopathologic samples in an inherently explainable manner.