1
|
Zhang W, Yang S, Luo M, He C, Li Y, Zhang J, Wang X, Wang F. Keep it accurate and robust: An enhanced nuclei analysis framework. Comput Struct Biotechnol J 2024; 24:699-710. [PMID: 39650700 PMCID: PMC11621583 DOI: 10.1016/j.csbj.2024.10.046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2024] [Revised: 10/21/2024] [Accepted: 10/27/2024] [Indexed: 12/11/2024] Open
Abstract
Accurate segmentation and classification of nuclei in histology images is critical but challenging due to nuclei heterogeneity, staining variations, and tissue complexity. Existing methods often struggle with limited dataset variability, with patches extracted from similar whole slide images (WSI), making models prone to falling into local optima. Here we propose a new framework to address this limitation and enable robust nuclear analysis. Our method leverages dual-level ensemble modeling to overcome issues stemming from limited dataset variation. Intra-ensembling applies diverse transformations to individual samples, while inter-ensembling combines networks of different scales. We also introduce enhancements to the HoVer-Net architecture, including updated encoders, nested dense decoding and model regularization strategy. We achieve state-of-the-art results on public benchmarks, including 1st place for nuclear composition prediction and 3rd place for segmentation/classification in the 2022 Colon Nuclei Identification and Counting (CoNIC) Challenge. This success validates our approach for accurate histological nuclei analysis. Extensive experiments and ablation studies provide insights into optimal network design choices and training techniques. In conclusion, this work proposes an improved framework advancing the state-of-the-art in nuclei analysis. We will release our code and models to serve as a toolkit for the community.
Collapse
Affiliation(s)
- Wenhua Zhang
- Institute of Artificial Intelligence, Shanghai University, Shanghai 200444, China
| | - Sen Yang
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 USA
| | | | - Chuan He
- Shanghai Aitrox Technology Corporation Limited, Shanghai, 200444, China
| | - Yuchen Li
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 USA
| | - Jun Zhang
- Tencent AI Lab, Shenzhen 518057, China
| | - Xiyue Wang
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305 USA
| | - Fang Wang
- Department of Pathology, The Affiliated Yantai Yuhuangding Hospital of Qingdao University, Yantai, 264000, China
| |
Collapse
|
2
|
Liu R, Dai W, Wu C, Wu T, Wang M, Zhou J, Zhang X, Li WJ, Liu J. Deep Learning-Based Microscopic Cell Detection Using Inverse Distance Transform and Auxiliary Counting. IEEE J Biomed Health Inform 2024; 28:6092-6104. [PMID: 38900626 DOI: 10.1109/jbhi.2024.3417229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/22/2024]
Abstract
Microscopic cell detection is a challenging task due to significant inter-cell occlusions in dense clusters and diverse cell morphologies. This paper introduces a novel framework designed to enhance automated cell detection. The proposed approach integrates a deep learning model that produces an inverse distance transform-based detection map from the given image, accompanied by a secondary network designed to regress a cell density map from the same input. The inverse distance transform-based map effectively highlights each cell instance in the densely populated areas, while the density map accurately estimates the total cell count in the image. Then, a custom counting-aided cell center extraction strategy leverages the cell count obtained by integrating over the density map to refine the detection process, significantly reducing false responses and thereby boosting overall accuracy. The proposed framework demonstrated superior performance with F-scores of 96.93%, 91.21%, and 92.00% on the VGG, MBM, and ADI datasets, respectively, surpassing existing state-of-the-art methods. It also achieved the lowest distance error, further validating the effectiveness of the proposed approach. These results demonstrate significant potential for automated cell analysis in biomedical applications.
Collapse
|
3
|
Talebi S, Gai S, Sossin A, Zhu V, Tong E, Mofrad MRK. Deep Learning for Perfusion Cerebral Blood Flow (CBF) and Volume (CBV) Predictions and Diagnostics. Ann Biomed Eng 2024; 52:1568-1575. [PMID: 38402314 PMCID: PMC11082011 DOI: 10.1007/s10439-024-03471-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2023] [Accepted: 02/06/2024] [Indexed: 02/26/2024]
Abstract
Dynamic susceptibility contrast magnetic resonance perfusion (DSC-MRP) is a non-invasive imaging technique for hemodynamic measurements. Various perfusion parameters, such as cerebral blood volume (CBV) and cerebral blood flow (CBF), can be derived from DSC-MRP, hence this non-invasive imaging protocol is widely used clinically for the diagnosis and assessment of intracranial pathologies. Currently, most institutions use commercially available software to compute the perfusion parametric maps. However, these conventional methods often have limitations, such as being time-consuming and sensitive to user input, which can lead to inconsistent results; this highlights the need for a more robust and efficient approach like deep learning. Using the relative cerebral blood volume (rCBV) and relative cerebral blood flow (rCBF) perfusion maps generated by FDA-approved software, we trained a multistage deep learning model. The model, featuring a combination of a 1D convolutional neural network (CNN) and a 2D U-Net encoder-decoder network, processes each 4D MRP dataset by integrating temporal and spatial features of the brain for voxel-wise perfusion parameters prediction. An auxiliary model, with similar architecture, but trained with truncated datasets that had fewer time-points, was designed to explore the contribution of temporal features. Both qualitatively and quantitatively evaluated, deep learning-generated rCBV and rCBF maps showcased effective integration of temporal and spatial data, producing comprehensive predictions for the entire brain volume. Our deep learning model provides a robust and efficient approach for calculating perfusion parameters, demonstrating comparable performance to FDA-approved commercial software, and potentially mitigating the challenges inherent to traditional techniques.
Collapse
Affiliation(s)
- Salmonn Talebi
- Departments of Bioengineering and Mechanical Engineering, University of California, 208A Stanley Hall #1762, Berkeley, CA, 94720-1762, USA
| | - Siyu Gai
- Departments of Electrical Engineering and Computer Science, University of California, Berkeley, California, USA
| | - Aaron Sossin
- Department of Bioinformatics, Stanford School of Medicine, Stanford University, Stanford, California, USA
| | - Vivian Zhu
- Department of Bioinformatics, Stanford School of Medicine, Stanford University, Stanford, California, USA
| | - Elizabeth Tong
- Department of Radiology, Stanford School of Medicine, Stanford University, 725 Welch Rd Rm 1860, Palo Alto, Stanford, CA, 94304, USA.
| | - Mohammad R K Mofrad
- Departments of Bioengineering and Mechanical Engineering, University of California, 208A Stanley Hall #1762, Berkeley, CA, 94720-1762, USA.
| |
Collapse
|
4
|
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010 DOI: 10.1016/j.media.2023.102969] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 08/16/2023] [Accepted: 09/11/2023] [Indexed: 10/08/2023]
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation, where (unlabeled) target training data is limited and previous work seldom delves into cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
Collapse
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
| | - Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| | - Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| | - Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
| |
Collapse
|
5
|
Ding Y, Zheng Y, Han Z, Yang X. Using optimal transport theory to optimize a deep convolutional neural network microscopic cell counting method. Med Biol Eng Comput 2023; 61:2939-2950. [PMID: 37532907 DOI: 10.1007/s11517-023-02862-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 05/17/2023] [Indexed: 08/04/2023]
Abstract
Medical image processing has become increasingly important in recent years, particularly in the field of microscopic cell imaging. However, accurately counting the number of cells in an image can be a challenging task due to the significant variations in cell size and shape. To tackle this problem, many existing methods rely on deep learning techniques, such as convolutional neural networks (CNNs), to count cells in an image or use regression counting methods to learn the similarities between an input image and a predicted cell image density map. In this paper, we propose a novel approach to monitor the cell counting process by optimizing the loss function using the optimal transport method, a rigorous measure to calculate the difference between the predicted count map and the dot annotation map generated by the CNN. We evaluated our algorithm on three publicly available cell count benchmarks: the synthetic fluorescence microscopy (VGG) dataset, the modified bone marrow (MBM) dataset, and the human subcutaneous adipose tissue (ADI) dataset. Our method outperforms other state-of-the-art methods, achieving a mean absolute error (MAE) of 2.3, 4.8, and 13.1 on the VGG, MBM, and ADI datasets, respectively, with smaller standard deviations. By using the optimal transport method, our approach provides a more accurate and reliable cell counting method for medical image processing.
Collapse
Affiliation(s)
- Yuanyuan Ding
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
| | - Yuanjie Zheng
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China.
| | - Zeyu Han
- School of Mathematics and Statistics, Shandong University (Weihai), Weihai, 264209, Shandong, China
| | - Xinbo Yang
- School of Information Science and Engineering, Shandong Normal University, Jinan, 250358, Shandong, China
| |
Collapse
|
6
|
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 05/18/2023] [Accepted: 05/20/2023] [Indexed: 06/15/2023] Open
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images are an important part of oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Collapse
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
7
|
Yin H, Zhang F, Yang X, Meng X, Miao Y, Noor Hussain MS, Yang L, Li Z. Research trends of artificial intelligence in pancreatic cancer: a bibliometric analysis. Front Oncol 2022; 12:973999. [PMID: 35982967 PMCID: PMC9380440 DOI: 10.3389/fonc.2022.973999] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2022] [Accepted: 07/13/2022] [Indexed: 01/03/2023] Open
Abstract
Purpose We evaluated the related research on artificial intelligence (AI) in pancreatic cancer (PC) through bibliometrics analysis and explored the research hotspots and current status from 1997 to 2021. Methods Publications related to AI in PC were retrieved from the Web of Science Core Collection (WoSCC) during 1997-2021. Bibliometrix package of R software 4.0.3 and VOSviewer were used to bibliometrics analysis. Results A total of 587 publications in this field were retrieved from WoSCC database. After 2018, the number of publications grew rapidly. The United States and Johns Hopkins University were the most influential country and institution, respectively. A total of 2805 keywords were investigated, 81 of which appeared more than 10 times. Co-occurrence analysis categorized these keywords into five types of clusters: (1) AI in biology of PC, (2) AI in pathology and radiology of PC, (3) AI in the therapy of PC, (4) AI in risk assessment of PC and (5) AI in endoscopic ultrasonography (EUS) of PC. Trend topics and thematic maps show that keywords " diagnosis ", “survival”, “classification”, and “management” are the research hotspots in this field. Conclusion The research related to AI in pancreatic cancer is still in the initial stage. Currently, AI is widely studied in biology, diagnosis, treatment, risk assessment, and EUS of pancreatic cancer. This bibliometrics study provided an insight into AI in PC research and helped researchers identify new research orientations.
Collapse
Affiliation(s)
- Hua Yin
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
| | - Feixiong Zhang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
| | - Xiaoli Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
| | - Xiangkun Meng
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
| | - Yu Miao
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
| | | | - Li Yang
- Department of Gastroenterology, General Hospital of Ningxia Medical University, Yinchuan, China
- *Correspondence: Zhaoshen Li, ; Li Yang,
| | - Zhaoshen Li
- Postgraduate Training Base in Shanghai Gongli Hospital, Ningxia Medical University, Shanghai, China
- Clinical Medical College, Ningxia Medical University, Yinchuan, China
- *Correspondence: Zhaoshen Li, ; Li Yang,
| |
Collapse
|
8
|
Mathew T, Niyas S, Johnpaul C, Kini JR, Rajan J. A novel deep classifier framework for automated molecular subtyping of breast carcinoma using immunohistochemistry image analysis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103657] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
9
|
Bai T, Xu J, Zhang Z, Guo S, Luo X. Context-aware learning for cancer cell nucleus recognition in pathology images. Bioinformatics 2022; 38:2892-2898. [PMID: 35561198 DOI: 10.1093/bioinformatics/btac167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION Nucleus identification supports many quantitative analysis studies that rely on nuclei positions or categories. Contextual information in pathology images refers to information near the to-be-recognized cell, which can be very helpful for nucleus subtyping. Current CNN-based methods do not explicitly encode contextual information within the input images and point annotations. RESULTS In this article, we propose a novel framework with context to locate and classify nuclei in microscopy image data. Specifically, first we use state-of-the-art network architectures to extract multi-scale feature representations from multi-field-of-view, multi-resolution input images and then conduct feature aggregation on-the-fly with stacked convolutional operations. Then, two auxiliary tasks are added to the model to effectively utilize the contextual information. One for predicting the frequencies of nuclei, and the other for extracting the regional distribution information of the same kind of nuclei. The entire framework is trained in an end-to-end, pixel-to-pixel fashion. We evaluate our method on two histopathological image datasets with different tissue and stain preparations, and experimental results demonstrate that our method outperforms other recent state-of-the-art models in nucleus identification. AVAILABILITY AND IMPLEMENTATION The source code of our method is freely available at https://github.com/qjxjy123/DonRabbit. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Collapse
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Jiayu Xu
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Zhenting Zhang
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Shuyu Guo
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, 130033 Changchun, China
| |
Collapse
|
10
|
Pantelis AG, Panagopoulou PA, Lapatsanis DP. Artificial Intelligence and Machine Learning in the Diagnosis and Management of Gastroenteropancreatic Neuroendocrine Neoplasms-A Scoping Review. Diagnostics (Basel) 2022; 12:874. [PMID: 35453922 PMCID: PMC9027316 DOI: 10.3390/diagnostics12040874] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Revised: 03/27/2022] [Accepted: 03/29/2022] [Indexed: 12/21/2022] Open
Abstract
Neuroendocrine neoplasms (NENs) and tumors (NETs) are rare neoplasms that may affect any part of the gastrointestinal system. In this scoping review, we attempt to map existing evidence on the role of artificial intelligence, machine learning and deep learning in the diagnosis and management of NENs of the gastrointestinal system. After implementation of inclusion and exclusion criteria, we retrieved 44 studies with 53 outcome analyses. We then classified the papers according to the type of studied NET (26 Pan-NETs, 59.1%; 3 metastatic liver NETs (6.8%), 2 small intestinal NETs, 4.5%; colorectal, rectal, non-specified gastroenteropancreatic and non-specified gastrointestinal NETs had from 1 study each, 2.3%). The most frequently used AI algorithms were Supporting Vector Classification/Machine (14 analyses, 29.8%), Convolutional Neural Network and Random Forest (10 analyses each, 21.3%), Random Forest (9 analyses, 19.1%), Logistic Regression (8 analyses, 17.0%), and Decision Tree (6 analyses, 12.8%). There was high heterogeneity on the description of the prediction model, structure of datasets, and performance metrics, whereas the majority of studies did not report any external validation set. Future studies should aim at incorporating a uniform structure in accordance with existing guidelines for purposes of reproducibility and research quality, which are prerequisites for integration into clinical practice.
Collapse
Affiliation(s)
- Athanasios G. Pantelis
- 4th Department of Surgery, Evaggelismos General Hospital of Athens, 10676 Athens, Greece;
| | | | - Dimitris P. Lapatsanis
- 4th Department of Surgery, Evaggelismos General Hospital of Athens, 10676 Athens, Greece;
| |
Collapse
|
11
|
Kashyap R. Breast Cancer Histopathological Image Classification Using Stochastic Dilated Residual Ghost Model. INTERNATIONAL JOURNAL OF INFORMATION RETRIEVAL RESEARCH 2022. [DOI: 10.4018/ijirr.289655] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
A new deep learning-based classification model called the Stochastic Dilated Residual Ghost (SDRG) was proposed in this work for categorizing histopathology images of breast cancer. The SDRG model used the proposed Multiscale Stochastic Dilated Convolution (MSDC) model, a ghost unit, stochastic upsampling, and downsampling units to categorize breast cancer accurately. This study addresses four primary issues: first, strain normalization was used to manage color divergence, data augmentation with several factors was used to handle the overfitting. The second challenge is extracting and enhancing tiny and low-level information such as edge, contour, and color accuracy; it is done by the proposed multiscale stochastic and dilation unit. The third contribution is to remove redundant or similar information from the convolution neural network using a ghost unit. According to the assessment findings, the SDRG model scored overall 95.65 percent accuracy rates in categorizing images with a precision of 99.17 percent, superior to state-of-the-art approaches.
Collapse
Affiliation(s)
- Ramgopal Kashyap
- Amity School of Engineering and Technology, Amity University, Raipur, India
| |
Collapse
|
12
|
Li H, Kang Y, Yang W, Wu Z, Shi X, Liu F, Liu J, Hu L, Ma Q, Cui L, Feng J, Yang L. A Robust Training Method for Pathological Cellular Detector via Spatial Loss Calibration. Front Med (Lausanne) 2022; 8:767625. [PMID: 34970560 PMCID: PMC8712578 DOI: 10.3389/fmed.2021.767625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 11/15/2021] [Indexed: 11/13/2022] Open
Abstract
Computer-aided diagnosis of pathological images usually requires detecting and examining all positive cells for accurate diagnosis. However, cellular datasets tend to be sparsely annotated due to the challenge of annotating all the cells. However, training detectors on sparse annotations may be misled by miscalculated losses, limiting the detection performance. Thus, efficient and reliable methods for training cellular detectors on sparse annotations are in higher demand than ever. In this study, we propose a training method that utilizes regression boxes' spatial information to conduct loss calibration to reduce the miscalculated loss. Extensive experimental results show that our method can significantly boost detectors' performance trained on datasets with varying degrees of sparse annotations. Even if 90% of the annotations are missing, the performance of our method is barely affected. Furthermore, we find that the middle layers of the detector are closely related to the generalization performance. More generally, this study could elucidate the link between layers and generalization performance, provide enlightenment for future research, such as designing and applying constraint rules to specific layers according to gradient analysis to achieve “scalpel-level” model training.
Collapse
Affiliation(s)
- Hansheng Li
- School of Information Science and Technology, Northwest University, Xi'an, China
| | - Yuxin Kang
- School of Information Science and Technology, Northwest University, Xi'an, China
| | - Wentao Yang
- Fudan University Shanghai Cancer Center, Shanghai, China
| | - Zhuoyue Wu
- School of Information Science and Technology, Northwest University, Xi'an, China
| | - Xiaoshuang Shi
- Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
| | - Feihong Liu
- School of Information Science and Technology, Northwest University, Xi'an, China
| | - Jianye Liu
- School of Information Science and Technology, Northwest University, Xi'an, China
| | | | - Qian Ma
- AstraZeneca, Shanghai, China
| | - Lei Cui
- School of Information Science and Technology, Northwest University, Xi'an, China
| | - Jun Feng
- School of Information Science and Technology, Northwest University, Xi'an, China
| | - Lin Yang
- School of Information Science and Technology, Northwest University, Xi'an, China
| |
Collapse
|
13
|
Duanmu H, Wang F, Teodoro G, Kong J. Foveal blur-boosted segmentation of nuclei in histopathology images with shape prior knowledge and probability map constraints. Bioinformatics 2021; 37:3905-3913. [PMID: 34081103 PMCID: PMC11025700 DOI: 10.1093/bioinformatics/btab418] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 04/07/2021] [Accepted: 06/02/2021] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION In most tissue-based biomedical research, the lack of sufficient pathology training images with well-annotated ground truth inevitably limits the performance of deep learning systems. In this study, we propose a convolutional neural network with foveal blur enriching datasets with multiple local nuclei regions of interest derived from original pathology images. We further propose a human-knowledge boosted deep learning system by inclusion to the convolutional neural network new loss function terms capturing shape prior knowledge and imposing smoothness constraints on the predicted probability maps. RESULTS Our proposed system outperforms all state-of-the-art deep learning and non-deep learning methods by Jaccard coefficient, Dice coefficient, Accuracy and Panoptic Quality in three independent datasets. The high segmentation accuracy and execution speed suggest its promising potential for automating histopathology nuclei segmentation in biomedical research and clinical settings. AVAILABILITY AND IMPLEMENTATION The codes, the documentation and example data are available on an open source at: https://github.com/HongyiDuanmu26/FovealBoosted. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Collapse
Affiliation(s)
- Hongyi Duanmu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
| | - Fusheng Wang
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
| | - George Teodoro
- Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte 31270-901, Brazil
| | - Jun Kong
- Department of Mathematics and Statistics and Computer Science, Georgia State University, Atlanta, GA 30303, USA
- Department of Computer Science and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| |
Collapse
|
14
|
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
15
Song J, Xiao L, Molaei M, Lian Z. Sparse Coding Driven Deep Decision Tree Ensembles for Nucleus Segmentation in Digital Pathology Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:8088-8101. [PMID: 34534088 DOI: 10.1109/tip.2021.3112057] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Automating generalized nucleus segmentation has proven to be non-trivial and challenging in digital pathology. Most existing techniques in the field rely either on deep neural networks or on shallow learning-based cascading models. The former lacks theoretical understanding and tends to degrade in performance when only limited amounts of training data are available, while the latter often has limited generalization ability. To address these issues, we propose sparse coding driven deep decision tree ensembles (ScD2TE), an easily trained yet powerful representation learning approach whose performance is highly competitive with deep neural networks in the generalized nucleus segmentation task. We explore the possibility of stacking several layers based on fast convolutional sparse coding-decision tree ensemble pairwise modules and generate a layer-wise encoder-decoder architecture with intra-decoder and inter-encoder dense connectivity patterns. Under this architecture, all the encoders share the same assumption across the different layers to represent images and interact with their decoders to give fast convergence. Compared with deep neural networks, our proposed ScD2TE does not require back-propagation computation and depends on fewer hyper-parameters. ScD2TE achieves fast end-to-end pixel-wise training in a layer-wise manner. We demonstrate the superiority of our segmentation method by evaluating it on a multi-disease-state, multi-organ dataset, where it consistently outperformed other state-of-the-art deep learning techniques and cascading methods with various connectivity patterns.
16
Xu H, Liu L, Lei X, Mandal M, Lu C. An unsupervised method for histological image segmentation based on tissue cluster level graph cut. Comput Med Imaging Graph 2021; 93:101974. [PMID: 34481236 DOI: 10.1016/j.compmedimag.2021.101974] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 07/11/2021] [Accepted: 08/17/2021] [Indexed: 11/16/2022]
Abstract
While deep learning models have demonstrated outstanding performance in medical image segmentation tasks, histological annotations for training deep learning models are usually challenging to obtain, due to the effort and experience required to carefully delineate tissue structures. In this study, we propose an unsupervised method, termed tissue cluster level graph cut (TisCut), for segmenting histological images into meaningful compartments (e.g., tumor or non-tumor regions), which aims at assisting histological annotation for downstream supervised models. TisCut consists of three modules. First, histological tissue objects are clustered based on their spatial proximity and morphological features. The Voronoi diagram is then constructed based on the tissue object clustering. In the last module, morphological features computed from the Voronoi diagram are integrated into a region adjacency graph, and the image is partitioned into meaningful compartments using the graph cut algorithm. TisCut has been evaluated on three histological image sets for necrosis and melanoma detection. Experiments show that TisCut provides performance comparable to U-Net models, achieving about 70% and 85% Jaccard index coefficients in partitioning brain and skin histological images, respectively. In addition, it shows potential for generating histological annotations when training masks are difficult to collect for supervised segmentation models.
Affiliation(s)
- Hongming Xu
- School of Biomedical Engineering at Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, Liaoning 116024, China
- Lina Liu
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Xiujuan Lei
- College of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi 710119, China
- Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 1H9, Canada
- Cheng Lu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
17
Zhang X, Cornish TC, Yang L, Bennett TD, Ghosh D, Xing F. Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67. JCO Clin Cancer Inform 2021; 4:666-679. [PMID: 32730116 PMCID: PMC7397778 DOI: 10.1200/cci.19.00108] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022] Open
Abstract
PURPOSE We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)-stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning-based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS We considered 2 different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning-based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, and nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, such as fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression networks A and B (FCRN-A, FCRN-B), and the fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS Our method achieved F1 scores of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. Our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRN-A (63.4% and 55.8%), and FCRN-B (68.2% and 60.6%) in terms of F1 score and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost performance. CONCLUSION This study demonstrates that deep learning-based domain adaptation is helpful for nucleus recognition in Ki-67 IHC-stained images when target data annotations are not available. It would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.
Affiliation(s)
- Xuhong Zhang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, Aurora, CO
- Lin Yang
- Department of Electrical and Computer Engineering, Department of Computer and Information Science, Department of Biomedical Engineering, University of Florida, Gainesville, FL
- Tellen D Bennett
- Department of Pediatrics, University of Colorado Anschutz Medical Campus, Aurora, CO
- The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO
- The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO
- The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO
18
PathoNet introduced as a deep neural network backend for evaluation of Ki-67 and tumor-infiltrating lymphocytes in breast cancer. Sci Rep 2021; 11:8489. [PMID: 33875676 PMCID: PMC8055887 DOI: 10.1038/s41598-021-86912-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Accepted: 03/16/2021] [Indexed: 12/16/2022] Open
Abstract
The nuclear protein Ki-67 and tumor-infiltrating lymphocytes (TILs) have been introduced as prognostic factors in predicting both tumor progression and probable response to chemotherapy. The value of the Ki-67 index and TILs in the assessment of heterogeneous tumors such as breast cancer (BC), the most common cancer in women worldwide, has been highlighted in the literature. Considering that estimation of both factors depends on professional pathologists' observation, and that inter-individual variation may also exist, automated methods using machine learning, specifically approaches based on deep learning, have attracted attention. Yet, deep learning methods need considerable annotated data. In the absence of publicly available benchmarks for BC Ki-67 cell detection and further annotated classification of cells, in this study we propose SHIDC-BC-Ki-67 as a dataset for this purpose. We also introduce a novel pipeline and backend for estimation of Ki-67 expression and simultaneous determination of the intratumoral TILs score in breast cancer cells. Further, we show that despite the challenges our proposed model has encountered, our proposed backend, PathoNet, outperforms the state-of-the-art methods proposed to date with regard to the harmonic mean (F1) measure. The dataset is publicly available at http://shiraz-hidc.com and all experiment code is published at https://github.com/SHIDCenter/PathoNet.
19
He S, Minn KT, Solnica-Krezel L, Anastasio MA, Li H. Deeply-supervised density regression for automatic cell counting in microscopy images. Med Image Anal 2021; 68:101892. [PMID: 33285481 PMCID: PMC7856299 DOI: 10.1016/j.media.2020.101892] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Revised: 11/06/2020] [Accepted: 11/07/2020] [Indexed: 12/21/2022]
Abstract
Accurately counting the number of cells in microscopy images is required in many medical diagnoses and biological studies. This task is tedious, time-consuming, and prone to subjective errors. However, designing automatic counting methods remains challenging due to low image contrast, complex background, large variance in cell shapes and counts, and significant cell occlusions in two-dimensional microscopy images. In this study, we propose a new density regression-based method for automatically counting cells in microscopy images. The proposed method introduces two innovations compared with other state-of-the-art density regression-based methods. First, the density regression model (DRM) is designed as a concatenated fully convolutional regression network (C-FCRN) to employ multi-scale image features for the estimation of cell density maps from given images. Second, auxiliary convolutional neural networks (AuxCNNs) are employed to assist in the training of intermediate layers of the designed C-FCRN to improve the DRM performance on unseen datasets. Experimental studies evaluated on four datasets demonstrate the superior performance of the proposed method.
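Density-regression counting rests on the property that the predicted density map integrates to the cell count. A minimal sketch of how a ground-truth map is typically built from dot annotations by placing one unit-mass Gaussian per cell (illustrative only; `density_map` is a hypothetical helper, not code from the paper):

```python
import numpy as np

def density_map(shape, centers, sigma=3.0):
    """Build a ground-truth density map: one Gaussian kernel per
    annotated cell centre, each normalised to unit mass, so the whole
    map integrates to the number of cells."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape, dtype=float)
    for (cy, cx) in centers:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()  # normalise each kernel to mass 1
    return dmap

# Three annotated cells -> the map sums to (approximately) 3.
dm = density_map((64, 64), [(20, 20), (40, 45), (10, 50)])
count = dm.sum()
```

A regression network trained against such maps can then report a count for any image simply by summing its predicted map.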
Affiliation(s)
- Shenghua He
- Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63110 USA
- Kyaw Thu Minn
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110 USA; Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110 USA
- Lilianna Solnica-Krezel
- Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110 USA; Center of Regenerative Medicine, Washington University School of Medicine in St. Louis, St. Louis, MO 63110 USA
- Mark A Anastasio
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA
- Hua Li
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL 61801 USA; Carle Cancer Center, Carle Foundation Hospital, Urbana, IL 61801 USA
20
Geread RS, Sivanandarajah A, Brouwer ER, Wood GA, Androutsos D, Faragalla H, Khademi A. piNET-An Automated Proliferation Index Calculator Framework for Ki67 Breast Cancer Images. Cancers (Basel) 2020; 13:E11. [PMID: 33375043 PMCID: PMC7792768 DOI: 10.3390/cancers13010011] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 12/15/2020] [Accepted: 12/17/2020] [Indexed: 12/16/2022] Open
Abstract
In this work, a novel proliferation index (PI) calculator for Ki67 images called piNET is proposed. It is successfully tested on four datasets from three scanners, comprising patches, tissue microarrays (TMAs) and whole slide images (WSI), representing a diverse multi-centre dataset for evaluating Ki67 quantification. Compared to state-of-the-art methods, piNET consistently performs the best over all datasets, with an average PI difference of 5.603%, a PI accuracy rate of 86% and a correlation coefficient R = 0.927. The success of the system can be attributed to several innovations. Firstly, the tool is built on deep learning, which can adapt to the wide variability of medical images, and it was posed as a detection problem to mimic pathologists' workflow, improving accuracy and efficiency. Secondly, the system is trained purely on tumor cells, which reduces false positives from non-tumor cells without needing the usual prerequisite tumor segmentation step for Ki67 quantification. Thirdly, the concept of learning background regions through weak supervision is introduced, by providing the system with ideal and non-ideal (artifact) patches, further reducing false positives. Lastly, a novel hotspot analysis is proposed to allow automated methods to score patches from WSI that contain "significant" activity.
Affiliation(s)
- Rokshana Stephny Geread
- Electrical, Computer and Biomedical Engineering Department, Ryerson University, Toronto, ON M5B 2K3, Canada
- Abishika Sivanandarajah
- Electrical, Computer and Biomedical Engineering Department, Ryerson University, Toronto, ON M5B 2K3, Canada
- Emily Rita Brouwer
- Department of Pathobiology, Ontario Veterinarian College, University of Guelph, Guelph, ON NIG 2W1, Canada
- Geoffrey A. Wood
- Department of Pathobiology, Ontario Veterinarian College, University of Guelph, Guelph, ON NIG 2W1, Canada
- Dimitrios Androutsos
- Electrical, Computer and Biomedical Engineering Department, Ryerson University, Toronto, ON M5B 2K3, Canada
- Hala Faragalla
- Department of Laboratory Medicine & Pathobiology, St. Michael's Hospital, Unity Health Network, Toronto, ON M5B 1W8, Canada
- April Khademi
- Electrical, Computer and Biomedical Engineering Department, Ryerson University, Toronto, ON M5B 2K3, Canada
- Keenan Research Center for Biomedical Science, St. Michael's Hospital, Unity Health Network, Toronto, ON M5B 1W8, Canada
21
Detection of Ki67 Hot-Spots of Invasive Breast Cancer Based on Convolutional Neural Networks Applied to Mutual Information of H&E and Ki67 Whole Slide Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10217761] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Ki67 hot-spot detection and its evaluation in invasive breast cancer regions play a significant role in routine medical practice. The quantification of cellular proliferation assessed by Ki67 immunohistochemistry is an established prognostic and predictive biomarker that determines the choice of therapeutic protocols. In this paper, we present three deep learning-based approaches to automatically detect and quantify Ki67 hot-spot areas by means of the Ki67 labeling index. To this end, a dataset composed of 100 whole slide images (WSIs) belonging to 50 breast cancer cases (Ki67 and H&E WSI pairs) was used. Three methods based on CNN classification were proposed and compared to create the tumor proliferation map. The best results were obtained by applying the CNN to the mutual information acquired from the color deconvolution of both the Ki67 marker and the H&E WSIs. The overall accuracy of this approach was 95%. The agreement between the automatic Ki67 scoring and the manual analysis is promising, with a Spearman's ρ correlation of 0.92. The results illustrate the suitability of this CNN-based approach for detecting hot-spot areas of invasive breast cancer in WSIs.
22
Talebi S, Madani MH, Madani A, Chien A, Shen J, Mastrodicasa D, Fleischmann D, Chan FP, Mofrad MRK. Machine learning for endoleak detection after endovascular aortic repair. Sci Rep 2020; 10:18343. [PMID: 33110113 PMCID: PMC7591558 DOI: 10.1038/s41598-020-74936-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Accepted: 09/30/2020] [Indexed: 12/13/2022] Open
Abstract
Diagnosis of endoleak following endovascular aortic repair (EVAR) relies on manual review of multi-slice CT angiography (CTA) by physicians, which is a tedious, time-consuming process susceptible to error. We evaluate the use of a deep neural network for the detection of endoleak on CTA for post-EVAR patients using a novel data-efficient training approach. 50 CTAs with endoleak and 20 CTAs without endoleak were identified based on gold-standard interpretation by a cardiovascular subspecialty radiologist. The Endoleak Augmentor, a custom-designed augmentation method, provided robust training for the machine learning (ML) model. Predicted segmentation maps underwent post-processing to determine the presence of endoleak. The model was tested against 3 blinded general radiologists and 1 blinded subspecialist using a held-out subset (10 positive endoleak CTAs, 10 control CTAs). Model accuracy, precision and recall for endoleak diagnosis were 95%, 90% and 100% relative to reference subspecialist interpretation (AUC = 0.99). Accuracy, precision and recall were 70/70/70% for generalist1, 50/50/90% for generalist2, and 90/83/100% for generalist3. The blinded subspecialist had concordant interpretations for all test cases compared with the reference. In conclusion, our ML-based approach has similar performance for endoleak diagnosis relative to subspecialists and superior performance compared with generalists.
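The generalist figures are internally consistent with the 20-case held-out set; for example, generalist2's 50/50/90% corresponds to TP = 9, FP = 9, FN = 1, TN = 1. A quick sketch of the metric arithmetic (our own helper, not the study's code):

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, precision and recall from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# generalist2 on the 20-case held-out set (10 endoleak, 10 control):
# 90% recall -> 9 TP, 1 FN; 50% precision -> 9 FP; hence 1 TN.
acc, prec, rec = detection_metrics(tp=9, fp=9, fn=1, tn=1)
```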
Affiliation(s)
- Salmonn Talebi
- Molecular Cell Biomechanics Laboratory, Departments of Bioengineering and Mechanical Engineering, University of California, 208A Stanley Hall #1762, Berkeley, CA, 94720-1762, USA
- Mohammad H Madani
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
- Ali Madani
- Molecular Cell Biomechanics Laboratory, Departments of Bioengineering and Mechanical Engineering, University of California, 208A Stanley Hall #1762, Berkeley, CA, 94720-1762, USA
- Salesforce Research, Palo Alto, CA, USA
- Ashley Chien
- Molecular Cell Biomechanics Laboratory, Departments of Bioengineering and Mechanical Engineering, University of California, 208A Stanley Hall #1762, Berkeley, CA, 94720-1762, USA
- Jody Shen
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
- Dominik Fleischmann
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
- Frandics P Chan
- Department of Radiology, School of Medicine, Stanford University, Stanford, CA, USA
- Mohammad R K Mofrad
- Molecular Cell Biomechanics Laboratory, Departments of Bioengineering and Mechanical Engineering, University of California, 208A Stanley Hall #1762, Berkeley, CA, 94720-1762, USA
- Molecular Biophysics and Integrative Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA, 94720, USA
23
S L, Sai Ritwik KV, Vijayasenan D, S SD, Sreeram S, Suresh PK. Deep Learning Model based Ki-67 Index estimation with Automatically Labelled Data. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1412-1415. [PMID: 33018254 DOI: 10.1109/embc44109.2020.9175752] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The Ki-67 labelling index is a biomarker used worldwide to predict the aggressiveness of cancer. To compute the Ki-67 index, pathologists normally count the tumour nuclei on slide images manually; hence the process is time-consuming and subject to inter-pathologist variability. With the development of image processing and machine learning, many methods have been introduced for automatic Ki-67 estimation, but most of them require manual annotations and are restricted to one type of cancer. In this work, we propose a pooled Otsu's method to generate labels and train a semantic segmentation deep neural network (DNN). The output is post-processed to find the Ki-67 index. Evaluation on two different types of cancer (bladder and breast cancer) yields a mean absolute error of 3.52%. The performance of the DNN trained with automatic labels is better than that of the DNN trained with ground truth by an absolute margin of 1.25%.
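As a rough sketch of the labelling idea: Otsu's method picks the intensity threshold that maximizes between-class variance, and the Ki-67 index is the fraction of tumour nuclei counted as positive. This is illustrative only; the paper's pooled variant and post-processing are not reproduced here:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Classic Otsu's method: choose the histogram threshold that
    maximizes the between-class variance of the two resulting groups."""
    hist, edges = np.histogram(values, bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var = edges[0], -1.0
    w0 = sum0 = 0.0
    for i in range(bins - 1):
        w0 += hist[i]              # weight of the lower class
        if w0 == 0:
            continue
        w1 = total - w0            # weight of the upper class
        if w1 == 0:
            break
        sum0 += hist[i] * centers[i]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[i + 1]
    return best_t

def ki67_index(pos_count, neg_count):
    """Ki-67 labelling index: % of tumour nuclei that stain positive."""
    return 100.0 * pos_count / (pos_count + neg_count)

# A clearly bimodal stain-intensity sample separates cleanly.
vals = np.array([10.0] * 50 + [200.0] * 50)
t = otsu_threshold(vals)
```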
24
Improving the accuracy of gastrointestinal neuroendocrine tumor grading with deep learning. Sci Rep 2020; 10:11064. [PMID: 32632119 PMCID: PMC7338406 DOI: 10.1038/s41598-020-67880-z] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Accepted: 06/15/2020] [Indexed: 02/06/2023] Open
Abstract
The Ki-67 index is an established prognostic factor in gastrointestinal neuroendocrine tumors (GI-NETs) and defines tumor grade. It is currently estimated by microscopically examining tumor tissue single-immunostained (SS) for Ki-67 and counting the number of Ki-67-positive and Ki-67-negative tumor cells within a subjectively picked hot-spot. Intraobserver variability in this procedure as well as difficulty in distinguishing tumor from non-tumor cells can lead to inaccurate Ki-67 indices and possibly incorrect tumor grades. We introduce two computational tools that utilize Ki-67 and synaptophysin double-immunostained (DS) slides to improve the accuracy of Ki-67 index quantitation in GI-NETs: (1) Synaptophysin-KI-Estimator (SKIE), a pipeline automating Ki-67 index quantitation via whole-slide image (WSI) analysis and (2) deep-SKIE, a deep learner-based approach where a Ki-67 index heatmap is generated throughout the tumor. Ki-67 indices for 50 GI-NETs were quantitated using SKIE and compared with DS slide assessments by three pathologists using a microscope and a fourth pathologist via manually ticking off each cell, the latter of which was deemed the gold standard (GS). Compared to the GS, SKIE achieved a grading accuracy of 90% and substantial agreement (linear-weighted Cohen’s kappa 0.62). Using DS WSIs, deep-SKIE displayed a training, validation, and testing accuracy of 98.4%, 90.9%, and 91.0%, respectively, significantly higher than using SS WSIs. Since DS slides are not standard clinical practice, we also integrated a cycle generative adversarial network into our pipeline to transform SS into DS WSIs. The proposed methods can improve accuracy and potentially save a significant amount of time if implemented into clinical practice.
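For context, the grade assignment from a Ki-67 index can be sketched using the commonly cited WHO cutoffs for GI-NETs (G1 < 3%, G2 3-20%, G3 > 20%). This sketch ignores the mitotic count, which the full grading scheme also considers:

```python
def gi_net_grade(ki67_index_percent: float) -> str:
    """Map a Ki-67 index (%) to a GI-NET grade using the commonly cited
    WHO cutoffs (mitotic count, also part of the WHO scheme, is ignored
    in this simplified sketch)."""
    if ki67_index_percent < 3.0:
        return "G1"
    if ki67_index_percent <= 20.0:
        return "G2"
    return "G3"
```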
25
Feng M, Deng Y, Yang L, Jing Q, Zhang Z, Xu L, Wei X, Zhou Y, Wu D, Xiang F, Wang Y, Bao J, Bu H. Automated quantitative analysis of Ki-67 staining and HE images recognition and registration based on whole tissue sections in breast carcinoma. Diagn Pathol 2020; 15:65. [PMID: 32471471 PMCID: PMC7257511 DOI: 10.1186/s13000-020-00957-5] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Accepted: 04/08/2020] [Indexed: 02/08/2023] Open
Abstract
Background The scoring of Ki-67 is highly relevant for the diagnosis, classification, prognosis, and treatment of breast invasive ductal carcinoma (IDC). The traditional method of scoring Ki-67 staining by manual counting is time-consuming and subject to inter-/intra-observer variability, which may limit its clinical value. Although more and more algorithms and individual platforms have been developed to assess Ki-67-stained images more accurately, most of them lack accurate registration of immunohistochemical (IHC) images with their matched hematoxylin-eosin (HE) images, or do not accurately label each Ki-67-positive and Ki-67-negative cell on whole tissue sections (WTS). In view of this, we introduce an accurate image registration method and automatic identification and counting software for Ki-67 based on WTS by deep learning. Methods We annotated 1017 breast IDC whole slide images (WSI) and established a research workflow based on the (i) identification of the IDC area, (ii) registration of HE and IHC slides from the same anatomical region, and (iii) counting of positive Ki-67 staining. Results The accuracy, sensitivity, and specificity of identifying breast IDC regions were 89.44%, 85.05%, and 95.23%, respectively, and the contiguous HE and Ki-67-stained slides were perfectly registered. We counted and labelled each cell of 10 Ki-67 slides as the standard for testing on WTS; the accuracy of the automatically calculated Ki-67 positive rate in the identified IDC was 90.2%. In the human-machine competition of Ki-67 scoring, the average time per slide was 2.3 min with 1 GPU using this software, and the accuracy was 99.4%, exceeding the results of over 90% of the participating doctors.
Conclusions Our study demonstrates the enormous potential of automated quantitative analysis of Ki-67 staining and HE image recognition and registration based on WTS; automated scoring of Ki-67 can thus successfully address issues of consistency, reproducibility and accuracy. We will provide the labelled images as an open, free platform for researchers to assess the performance of computer algorithms for automated Ki-67 scoring on IHC-stained slides.
Affiliation(s)
- Min Feng
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Department of Pathology, West China Second University Hospital, Sichuan University & Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, 610041, China
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Yang Deng
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Libo Yang
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Qiuyang Jing
- Department of Pathology, West China Second University Hospital, Sichuan University & Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, 610041, China
- Zhang Zhang
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Lian Xu
- Department of Pathology, West China Second University Hospital, Sichuan University & Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, 610041, China
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Xiaoxia Wei
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Department of Pathology, Chengfei Hospital, Chengdu, China
- Yanyan Zhou
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Diwei Wu
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Fei Xiang
- Chengdu Knowledge Vision Science and Technology Co., Ltd, Chengdu, China
- Yizhe Wang
- Chengdu Knowledge Vision Science and Technology Co., Ltd, Chengdu, China
- Ji Bao
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Hong Bu
- Laboratory of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, 610041, China
26
Koyuncu CF, Gunesli GN, Cetin-Atalay R, Gunduz-Demir C. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. [PMID: 32438298 DOI: 10.1016/j.media.2020.101720] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 11/13/2018] [Revised: 02/28/2020] [Accepted: 05/04/2020] [Indexed: 11/25/2022]
Abstract
This paper presents a new deep regression model, which we call DeepDistance, for cell detection in images acquired with inverted microscopy. This model considers cell detection as a task of finding the most probable locations that suggest cell centers in an image. It represents this main task with a regression task of learning an inner distance metric. However, unlike previously reported regression-based methods, the DeepDistance model approaches its learning as a multi-task regression problem where multiple tasks are learned by using shared feature representations. To this end, it defines a secondary metric, normalized outer distance, to represent a different aspect of the problem and defines its learning as complementary to the main cell detection task. In order to learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end to end to concurrently learn the tasks in parallel. For further performance improvement on the main task, this paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task and learns it in parallel to the two regression tasks, also sharing feature representations with them. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, this paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, the DeepDistance model and its extended version, successfully identify the locations of cells as well as delineate their boundaries, even for the cell line that was not used in training, and improve the results of their counterparts.
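The distance-map regression target described above can be sketched in a few lines of plain Python. This is only a minimal illustration built from annotated cell centers; the function name is ours, and the paper's actual inner distance is defined per cell and normalized, so treat this as a simplified flavor of the target, not DeepDistance's exact definition.

```python
import math

def inner_distance_map(h, w, centers):
    """Sketch of a distance-regression target: for each pixel of an h-by-w
    grid, the Euclidean distance to the nearest annotated cell center
    (DeepDistance's inner distance is per-cell and normalized; this is a
    simplified stand-in)."""
    return [[min(math.hypot(y - cy, x - cx) for (cy, cx) in centers)
             for x in range(w)]
            for y in range(h)]

# The FCN would then be trained to regress maps of this kind; peaks of the
# (inverted) prediction suggest cell centers at inference time.
target = inner_distance_map(3, 3, [(1, 1)])
```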
Affiliation(s)
| | - Gozde Nur Gunesli
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey.
| | - Rengul Cetin-Atalay
- CanSyL, Graduate School of Informatics, Middle East Technical University, Ankara TR-06800, Turkey.
| | - Cigdem Gunduz-Demir
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara TR-06800, Turkey.
27
Cui L, Li H, Hui W, Chen S, Yang L, Kang Y, Bo Q, Feng J. A deep learning-based framework for lung cancer survival analysis with biomarker interpretation. BMC Bioinformatics 2020; 21:112. [PMID: 32183709 PMCID: PMC7079513 DOI: 10.1186/s12859-020-3431-z] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Received: 12/19/2018] [Accepted: 02/25/2020] [Indexed: 11/10/2022]
Abstract
BACKGROUND Lung cancer is the leading cause of cancer-related deaths in both men and women in the United States, and it has a much lower five-year survival rate than many other cancers. Accurate survival analysis is urgently needed for better disease diagnosis and treatment management. RESULTS In this work, we propose a survival analysis system that takes advantage of recently emerging deep learning techniques. The proposed system consists of three major components. 1) The first component is an end-to-end cellular feature learning module using a deep neural network with global average pooling. The learned cellular representations encode high-level biologically relevant information without requiring individual cell segmentation; they are aggregated into patient-level feature vectors by using a locality-constrained linear coding (LLC)-based bag-of-words (BoW) encoding algorithm. 2) The second component is a Cox proportional hazards model with an elastic net penalty for robust feature selection and survival analysis. 3) The third component is a biomarker interpretation module that can help localize the image regions that contribute to the survival model's decision. Extensive experiments show that the proposed survival model has excellent predictive power for a public (i.e., The Cancer Genome Atlas) lung cancer dataset in terms of two commonly used metrics: log-rank test (p-value) of the Kaplan-Meier estimate and concordance index (c-index). CONCLUSIONS In this work, we have proposed a segmentation-free survival analysis system that takes advantage of the recently emerging deep learning framework and well-studied survival analysis methods such as the Cox proportional hazards model. In addition, we provide an approach to visualize the discovered biomarkers, which can serve as concrete evidence supporting the survival model's decision.
Affiliation(s)
- Lei Cui
- Department of Information Science and Technology, Northwest University, Xi’an, China
| | - Hansheng Li
- Department of Information Science and Technology, Northwest University, Xi’an, China
| | - Wenli Hui
- The College of Life Sciences, Northwest University, Xi’an, China
| | - Sitong Chen
- The College of Life Sciences, Northwest University, Xi’an, China
| | - Lin Yang
- The College of Life Sciences, Northwest University, Xi’an, China
| | - Yuxin Kang
- Department of Information Science and Technology, Northwest University, Xi’an, China
| | - Qirong Bo
- Department of Information Science and Technology, Northwest University, Xi’an, China
| | - Jun Feng
- Department of Information Science and Technology, Northwest University, Xi’an, China
28
Barricelli BR, Casiraghi E, Gliozzo J, Huber V, Leone BE, Rizzi A, Vergani B. ki67 nuclei detection and ki67-index estimation: a novel automatic approach based on human vision modeling. BMC Bioinformatics 2019; 20:733. [PMID: 31881821 PMCID: PMC6935242 DOI: 10.1186/s12859-019-3285-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 04/11/2019] [Accepted: 11/19/2019] [Indexed: 12/29/2022]
Abstract
BACKGROUND The protein ki67 (pki67) is a marker of tumor aggressiveness, and its expression has been proven to be useful in the prognostic and predictive evaluation of several types of tumors. To numerically quantify the pki67 presence in cancerous tissue areas, pathologists generally analyze histochemical images to count the number of tumor nuclei marked for pki67. This allows estimating the ki67-index, that is, the percentage of tumor nuclei positive for pki67 over all the tumor nuclei. Given the high image resolution and dimensions, its estimation by expert clinicians is particularly laborious and time consuming. Though automatic cell counting techniques have been presented, the problem is still open. RESULTS In this paper we present a novel automatic approach for the estimation of the ki67-index. The method starts by exploiting the STRESS algorithm to produce a color-enhanced image where all pixels belonging to nuclei are easily identified by thresholding, and then separated into positive (i.e., pixels belonging to nuclei marked for pki67) and negative by a binary classification tree. Next, positive and negative nuclei pixels are processed separately by two multiscale procedures identifying isolated nuclei and separating adjoining nuclei. The multiscale procedures exploit two Bayesian classification trees to recognize positive and negative nuclei-shaped regions. CONCLUSIONS The evaluation of the computed results, both through experts' visual assessments and through the comparison of the computed indexes with those of the experts, proved the prototype promising, such that experts believe in its potential as a tool to be exploited in clinical practice as a valid aid for clinicians estimating the ki67-index. The MATLAB source code is open source for research purposes.
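Once nuclei have been detected and classified as positive or negative, the ki67-index itself reduces to the ratio defined in the abstract. A trivial sketch (the function name is ours, not from the paper):

```python
def ki67_index(n_positive, n_negative):
    """ki67-index as defined in the abstract: the percentage of tumor
    nuclei positive for pki67 over all tumor nuclei."""
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no tumor nuclei counted")
    return 100.0 * n_positive / total
```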
Affiliation(s)
- Barbara Rita Barricelli
- Department of Information Engineering, Università degli Studi di Brescia, Via Branze 38, 25123 Brescia, Italy
| | - Elena Casiraghi
- Department of Computer Science, Università degli Studi di Milano, Via Celoria 18, 20133 Milan, Italy
| | - Jessica Gliozzo
- Fondazione IRCCS Ca’ Granda - Ospedale Maggiore Policlinico, Department of Dermatology, Viale Regina Margherita, 20122 Milan, Italy
| | - Veronica Huber
- Unit of Immunotherapy of Human Tumors, Department of Research, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
| | - Biagio Eugenio Leone
- School of Medicine and Surgery, Università degli Studi di Milano-Bicocca, Via Cadore 48, 20900 Monza, Italy
| | - Alessandro Rizzi
- Department of Computer Science, Università degli Studi di Milano, Via Celoria 18, 20133 Milan, Italy
| | - Barbara Vergani
- School of Medicine and Surgery, Università degli Studi di Milano-Bicocca, Via Cadore 48, 20900 Monza, Italy
29
Serag A, Ion-Margineanu A, Qureshi H, McMillan R, Saint Martin MJ, Diamond J, O'Reilly P, Hamilton P. Translational AI and Deep Learning in Diagnostic Pathology. Front Med (Lausanne) 2019; 6:185. [PMID: 31632973 PMCID: PMC6779702 DOI: 10.3389/fmed.2019.00185] [Citation(s) in RCA: 132] [Impact Index Per Article: 22.0] [Received: 04/02/2019] [Accepted: 07/30/2019] [Indexed: 12/15/2022]
Abstract
There has been an exponential growth in the application of AI in health and in pathology. This is resulting in the innovation of deep learning technologies that are specifically aimed at cellular imaging and practical applications that could transform diagnostic pathology. This paper reviews the different approaches to deep learning in pathology, the public grand challenges that have driven this innovation and a range of emerging applications in pathology. The translation of AI into clinical practice will require applications to be embedded seamlessly within digital pathology workflows, driving an integrated approach to diagnostics and providing pathologists with new tools that accelerate workflow and improve diagnostic consistency and reduce errors. The clearance of digital pathology for primary diagnosis in the US by some manufacturers provides the platform on which to deliver practical AI. AI and computational pathology will continue to mature as researchers, clinicians, industry, regulatory organizations and patient advocacy groups work together to innovate and deliver new technologies to health care providers: technologies which are better, faster, cheaper, more precise, and safe.
30
Shamai G, Binenbaum Y, Slossberg R, Duek I, Gil Z, Kimmel R. Artificial Intelligence Algorithms to Assess Hormonal Status From Tissue Microarrays in Patients With Breast Cancer. JAMA Netw Open 2019; 2:e197700. [PMID: 31348505 PMCID: PMC6661721 DOI: 10.1001/jamanetworkopen.2019.7700] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Indexed: 12/17/2022]
Abstract
IMPORTANCE Immunohistochemistry (IHC) is the most widely used assay for identification of molecular biomarkers. However, IHC is time consuming and costly, depends on tissue-handling protocols, and relies on pathologists' subjective interpretation. Image analysis by machine learning is gaining ground for various applications in pathology but has not been proposed to replace chemical-based assays for molecular detection. OBJECTIVE To assess the prediction feasibility of molecular expression of biomarkers in cancer tissues, relying only on tissue architecture as seen in digitized hematoxylin-eosin (H&E)-stained specimens. DESIGN, SETTING, AND PARTICIPANTS This single-institution retrospective diagnostic study assessed the breast cancer tissue microarrays library of patients from Vancouver General Hospital, British Columbia, Canada. The study and analysis were conducted from July 1, 2015, through July 1, 2018. A machine learning method, termed morphological-based molecular profiling (MBMP), was developed. Logistic regression was used to explore correlations between histomorphology and biomarker expression, and a deep convolutional neural network was used to predict the biomarker expression in examined tissues. MAIN OUTCOMES AND MEASURES Positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristics curve measures of MBMP for assessment of molecular biomarkers. RESULTS The database consisted of 20 600 digitized, publicly available H&E-stained sections of 5356 patients with breast cancer from 2 cohorts. The median age at diagnosis was 61 years for cohort 1 (412 patients) and 62 years for cohort 2 (4944 patients), and the median follow-up was 12.0 years and 12.4 years, respectively. Tissue histomorphology was significantly correlated with the molecular expression of all 19 biomarkers assayed, including estrogen receptor (ER), progesterone receptor (PR), and ERBB2 (formerly HER2). 
Expression of ER was predicted for 105 of 207 validation patients in cohort 1 (50.7%) and 1059 of 2046 validation patients in cohort 2 (51.8%), with PPVs of 97% and 98%, respectively, NPVs of 68% and 76%, respectively, and accuracy of 91% and 92%, respectively, which were noninferior to traditional IHC (PPV, 91%-98%; NPV, 51%-78%; and accuracy, 81%-90%). Diagnostic accuracy improved given more data. Morphological analysis of patients with ER-negative/PR-positive status by IHC revealed resemblance to patients with ER-positive status (Bhattacharyya distance, 0.03) and not those with ER-negative/PR-negative status (Bhattacharyya distance, 0.25). This suggests a false-negative IHC finding and warrants antihormonal therapy for these patients. CONCLUSIONS AND RELEVANCE For at least half of the patients in this study, MBMP appeared to predict biomarker expression with noninferiority to IHC. Results suggest that prediction accuracy is likely to improve as data used for training expand. Morphological-based molecular profiling could be used as a general approach for mass-scale molecular profiling based on digitized H&E-stained images, allowing quick, accurate, and inexpensive methods for simultaneous profiling of multiple biomarkers in cancer tissues.
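The three headline metrics the MBMP study reports against IHC (PPV, NPV, accuracy) follow directly from a confusion matrix. A minimal sketch, with a function name of our choosing:

```python
def predictive_values(tp, fp, tn, fn):
    """PPV, NPV, and accuracy from confusion-matrix counts:
    PPV = TP/(TP+FP), NPV = TN/(TN+FN), accuracy = (TP+TN)/all."""
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return ppv, npv, acc
```

Note that, as the noninferiority comparison in the abstract illustrates, PPV and NPV depend on class prevalence, so the same classifier can report different values on the two cohorts.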
Affiliation(s)
- Gil Shamai
- Department of Electrical Engineering, Technion Israel Institute of Technology, Haifa, Israel
| | - Yoav Binenbaum
- Laboratory of Pediatric Oncology, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Laboratory for Applied Cancer Research, Rambam Healthcare Campus, Rappaport Institute of Medicine and Research, Haifa, Israel
| | - Ron Slossberg
- Department of Computer Science, Technion Israel Institute of Technology, Haifa, Israel
| | - Irit Duek
- Department of Otolaryngology-Head and Neck Surgery, Rambam Health Care Campus, Haifa, Israel
| | - Ziv Gil
- Laboratory for Applied Cancer Research, Rambam Healthcare Campus, Rappaport Institute of Medicine and Research, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Rambam Health Care Campus, Haifa, Israel
| | - Ron Kimmel
- Department of Computer Science, Technion Israel Institute of Technology, Haifa, Israel
31
Zhang P, Wang F, Teodoro G, Liang Y, Roy M, Brat D, Kong J. Effective nuclei segmentation with sparse shape prior and dynamic occlusion constraint for glioblastoma pathology images. J Med Imaging (Bellingham) 2019; 6:017502. [PMID: 30891467 DOI: 10.1117/1.jmi.6.1.017502] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 12/05/2018] [Accepted: 02/19/2019] [Indexed: 11/14/2022]
Abstract
We propose a segmentation method for nuclei in glioblastoma histopathologic images based on a sparse shape prior guided variational level set framework. By spectral clustering and sparse coding, a set of shape priors is exploited to accommodate complicated shape variations. We automate the object contour initialization by a seed detection algorithm and deform contours by minimizing an energy functional that incorporates a shape term in a sparse shape prior representation, an adaptive contour occlusion penalty term, and a boundary term encouraging contours to converge to strong edges. As a result, our approach is able to deal with mutual occlusions and detect contours of multiple intersected nuclei simultaneously. Our method is applied to several whole-slide histopathologic image datasets for nuclei segmentation. The proposed method is compared with other state-of-the-art methods and demonstrates good accuracy for nuclei detection and segmentation, suggesting its promise to support biomedical image-based investigations.
Affiliation(s)
- Pengyue Zhang
- Stony Brook University, Department of Computer Science, Stony Brook, New York, United States
| | - Fusheng Wang
- Stony Brook University, Department of Biomedical Informatics and Computer Science, Stony Brook, New York, United States
| | - George Teodoro
- University of Brasília, Department of Computer Science, Brasília, Brazil
| | - Yanhui Liang
- Google Inc., Mountain View, California, United States
| | - Mousumi Roy
- Stony Brook University, Department of Computer Science, Stony Brook, New York, United States
| | - Daniel Brat
- Northwestern University, Department of Pathology, Chicago, Illinois, United States
| | - Jun Kong
- Emory University, Department of Computer Science and Biomedical Informatics, Atlanta, Georgia, United States; Georgia State University, Department of Mathematics and Statistics, Atlanta, Georgia, United States
32
Xing F, Cornish TC, Bennett T, Ghosh D, Yang L. Pixel-to-Pixel Learning With Weak Supervision for Single-Stage Nucleus Recognition in Ki67 Images. IEEE Trans Biomed Eng 2019; 66:3088-3097. [PMID: 30802845 DOI: 10.1109/tbme.2019.2900378] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Indexed: 11/05/2022]
Abstract
OBJECTIVE Nucleus recognition is a critical yet challenging step in histopathology image analysis, for example, in Ki67 immunohistochemistry stained images. Although many automated methods have been proposed, most use a multi-stage processing pipeline to categorize nuclei, leading to cumbersome, low-throughput, and error-prone assessments. To address this issue, we propose a novel deep fully convolutional network for single-stage nucleus recognition. METHODS Instead of conducting direct pixel-wise classification, we formulate nucleus identification as a deep structured regression model. For each input image, it produces multiple proximity maps, each of which corresponds to one nucleus category and exhibits strong responses in central regions of the nuclei. In addition, by taking into consideration the nucleus distribution in histopathology images, we further introduce an auxiliary task, region of interest (ROI) extraction, to assist and boost the nucleus quantification with weak ROI annotation. The proposed network can be learned in an end-to-end, pixel-to-pixel manner for simultaneous nucleus detection and classification. RESULTS We have evaluated this network on a pancreatic neuroendocrine tumor Ki67 image dataset, and the experiments demonstrate that our method outperforms recent state-of-the-art approaches. CONCLUSION We present a new, pixel-to-pixel deep neural network with two sibling branches for effective nucleus recognition and observe that learning with another relevant task, ROI extraction, can further boost individual nucleus localization and classification. SIGNIFICANCE Our method provides a clean, single-stage nucleus recognition pipeline for histopathology image analysis, especially a new perspective for Ki67 image quantification, which would potentially benefit individual object quantification in whole-slide images.
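The per-category proximity maps described above, with strong responses in the central regions of nuclei, might be constructed roughly as follows. The Gaussian form, the `sigma` value, and the function name are illustrative assumptions on our part, not the paper's exact parameterization.

```python
import math

def proximity_map(h, w, centers, sigma=2.0):
    """Sketch of one proximity-map target for a single nucleus category:
    a Gaussian bump of height 1.0 at each nucleus center (cy, cx), taking
    the maximum where bumps overlap, near zero elsewhere."""
    m = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m[y][x] = max(
                (math.exp(-((y - cy) ** 2 + (x - cx) ** 2)
                          / (2.0 * sigma ** 2))
                 for (cy, cx) in centers),
                default=0.0,
            )
    return m
```

In the single-stage formulation, the network regresses one such map per nucleus category, so local maxima of a predicted map give simultaneous detection and classification.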
33
Xu J, Gong L, Wang G, Lu C, Gilmore H, Zhang S, Madabhushi A. Convolutional neural network initialized active contour model with adaptive ellipse fitting for nuclear segmentation on breast histopathological images. J Med Imaging (Bellingham) 2019; 6:017501. [PMID: 30840729 DOI: 10.1117/1.jmi.6.1.017501] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Received: 10/24/2018] [Accepted: 01/07/2019] [Indexed: 11/14/2022]
Abstract
Automated detection and segmentation of nuclei from high-resolution histopathological images is a challenging problem owing to the size and complexity of digitized histopathologic images. In the context of breast cancer, morphological and topological nuclear features are highly correlated with the modified Bloom-Richardson grading. Therefore, to develop a computer-aided prognosis system, automated detection and segmentation of nuclei are critical prerequisite steps. We present a method for automated detection and segmentation of breast cancer nuclei named a convolutional neural network initialized active contour model with adaptive ellipse fitting (CoNNACaeF). The CoNNACaeF model is able to detect and segment nuclei simultaneously and consists of three different modules: (1) a convolutional neural network (CNN) for accurate nuclei detection, (2) a region-based active contour (RAC) model for subsequent nuclear segmentation based on the initial CNN-based detection of nuclear patches, and (3) adaptive ellipse fitting for resolving overlap in clumped nuclear regions. The performance of the CoNNACaeF model is evaluated on three different breast histological data sets, comprising a total of 257 H&E-stained images. The model is shown to have improved detection accuracy, with F-measures of 80.18%, 85.71%, and 80.36% and average areas under the precision-recall curve (AveP) of 77%, 82%, and 74%, on a total of 3 million nuclei from 204 whole slide images from three different datasets. Additionally, CoNNACaeF yielded F-measures of 74.01% and 85.36%, respectively, for two different breast cancer datasets.
The CoNNACaeF model also outperformed the three other state-of-the-art nuclear detection and segmentation approaches, which are blue ratio initialized local region active contour, iterative radial voting initialized local region active contour, and maximally stable extremal region initialized local region active contour models.
Affiliation(s)
- Jun Xu
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
| | - Lei Gong
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
| | - Guanhao Wang
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
| | - Cheng Lu
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
| | - Hannah Gilmore
- University Hospitals Case Medical Center, Case Western Reserve University, Institute for Pathology, Cleveland, Ohio, United States
| | - Shaoting Zhang
- University of North Carolina at Charlotte, Department of Computer Science, Charlotte, North Carolina, United States
| | - Anant Madabhushi
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, United States
34
Koyuncu CF, Cetin-Atalay R, Gunduz-Demir C. Object-Oriented Segmentation of Cell Nuclei in Fluorescence Microscopy Images. Cytometry A 2018; 93:1019-1028. [PMID: 30211975 DOI: 10.1002/cyto.a.23594] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Received: 12/25/2017] [Revised: 06/14/2018] [Accepted: 07/30/2018] [Indexed: 12/17/2022]
Abstract
Cell nucleus segmentation remains an open and challenging problem, especially for segmenting nuclei in cell clumps. Splitting a cell clump would be straightforward if the gradients of boundary pixels in-between the nuclei were always higher than the others. However, imperfections may exist: inhomogeneities of pixel intensities within a nucleus may introduce spurious boundaries, whereas insufficient pixel intensity differences at the border of overlapping nuclei may cause some true boundary pixels to be missed. On the other hand, these imperfections are typically observed at the pixel level, causing local changes in pixel values without changing the semantics on a larger scale. In response to these issues, this article introduces a new nucleus segmentation method that relies on using gradient information not at the pixel level but at the object level. To this end, it proposes to decompose an image into smaller homogeneous subregions, define edge-objects at four different orientations to encode the gradient information at the object level, and devise a merging algorithm in which the edge-objects vote for subregion pairs along their orientations and the pairs are iteratively merged if they get sufficient votes from multiple orientations. Our experiments on fluorescence microscopy images reveal that this high-level representation and the design of a merging algorithm using edge-objects (gradients at the object level) improve the segmentation results.
Affiliation(s)
| | - Rengul Cetin-Atalay
- Graduate School of Informatics, Middle East Technical University, 06800, Ankara, Turkey
| | - Cigdem Gunduz-Demir
- Computer Engineering Department, Bilkent University, 06800, Ankara, Turkey; Neuroscience Graduate Program, Bilkent University, 06800, Ankara, Turkey
35
Shi X, Xing F, Xu K, Xie Y, Su H, Yang L. Supervised graph hashing for histopathology image retrieval and classification. Med Image Anal 2017; 42:117-128. [DOI: 10.1016/j.media.2017.07.009] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Received: 11/04/2016] [Revised: 07/25/2017] [Accepted: 07/31/2017] [Indexed: 10/19/2022]
36
Xu H, Lu C, Berendt R, Jha N, Mandal M. Automatic Nuclear Segmentation Using Multiscale Radial Line Scanning With Dynamic Programming. IEEE Trans Biomed Eng 2017; 64:2475-2485. [DOI: 10.1109/tbme.2017.2649485] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Indexed: 11/09/2022]
37
An Advanced Deep Learning Approach for Ki-67 Stained Hotspot Detection and Proliferation Rate Scoring for Prognostic Evaluation of Breast Cancer. Sci Rep 2017; 7:3213. [PMID: 28607456 PMCID: PMC5468356 DOI: 10.1038/s41598-017-03405-5] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Received: 02/08/2017] [Accepted: 04/26/2017] [Indexed: 02/08/2023]
Abstract
Being a non-histone protein, Ki-67 is one of the essential biomarkers for the immunohistochemical assessment of the proliferation rate in breast cancer screening and grading. The Ki-67 signature is always sensitive to radiotherapy and chemotherapy. Due to random morphological, color and intensity variations of cell nuclei (immunopositive and immunonegative), manual/subjective assessment of the Ki-67 score is error-prone and time-consuming. Hence, several machine learning approaches have been reported; nevertheless, none of them has worked on deep learning based hotspot detection and proliferation scoring. In this article, we suggest an advanced deep learning model for computerized recognition of candidate hotspots and subsequent proliferation rate scoring by quantifying Ki-67 appearance in breast cancer immunohistochemical images. Unlike existing Ki-67 scoring techniques, our methodology uses a Gamma mixture model (GMM) with Expectation-Maximization for seed point detection and patch selection, and deep learning, comprising a decision layer, for hotspot detection and proliferation scoring. Experimental results provide 93% precision, 88% recall and 91% F-score. The model performance has also been compared with the pathologists' manual annotations and recently published articles. In the future, the proposed deep learning framework will be highly reliable and beneficial to junior and senior pathologists for fast and efficient Ki-67 scoring.
38
Sapkota M, Liu F, Xie Y, Su H, Xing F, Yang L. AIIMDs: An Integrated Framework of Automatic Idiopathic Inflammatory Myopathy Diagnosis for Muscle. IEEE J Biomed Health Inform 2017; 22:942-954. [PMID: 28422672 DOI: 10.1109/jbhi.2017.2694344] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 11/07/2022]
Abstract
Idiopathic inflammatory myopathy (IIM) is a common skeletal muscle disease that relates to weakness and inflammation of muscle. Early diagnosis and prognosis of different types of IIMs will guide the effective treatment. Interpretation of digitized images of the cross-section muscle biopsy, which is currently done manually, provides the most reliable diagnostic information. With the increasing volume of images, the management and manual interpretation of the digitized muscle images suffer from low efficiency and high interobserver variabilities. In order to address these problems, we propose the first complete framework of automatic IIM diagnosis system for the management and interpretation of digitized skeletal muscle histopathology images. The proposed framework consists of several key components: (1) Automatic cell segmentation, perimysium annotation, and nuclei detection; (2) histogram-based feature extraction and quantification; (3) content-based image retrieval to search and retrieve similar cases in the database for comparative study; and (4) majority voting-based classification to provide decision support for computer-aided clinical diagnosis. Experiments show that the proposed diagnosis system provides efficient and robust interpretation of the digitized muscle image and computer-aided diagnosis of IIM.
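Component (4) above, majority voting over retrieved similar cases, reduces to a few lines. A sketch with an illustrative label set (the diagnosis labels and function name are ours, not from the paper):

```python
from collections import Counter

def majority_vote(labels):
    """Decision-support step: each retrieved similar case votes with its
    diagnosis label; the most common label wins (ties broken by first
    encountered, an arbitrary choice in this sketch)."""
    if not labels:
        raise ValueError("no retrieved cases to vote")
    return Counter(labels).most_common(1)[0][0]
```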
39
Mungle T, Tewary S, Das DK, Arun I, Basak B, Agarwal S, Ahmed R, Chatterjee S, Chakraborty C. MRF-ANN: a machine learning approach for automated ER scoring of breast cancer immunohistochemical images. J Microsc 2017; 267:117-129. [PMID: 28319275 DOI: 10.1111/jmi.12552] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Received: 06/04/2016] [Revised: 02/03/2017] [Accepted: 02/14/2017] [Indexed: 11/27/2022]
Abstract
Molecular pathology, especially immunohistochemistry, plays an important role in evaluating hormone receptor status along with the diagnosis of breast cancer. Time consumption and inter-/intraobserver variability are major hindrances to evaluating the receptor score. In view of this, the paper proposes an automated Allred scoring methodology for the estrogen receptor (ER). White balancing is used to normalize the colour image, taking into consideration colour variation during staining in different labs. A Markov random field model with expectation-maximization optimization is employed to segment the ER cells. The proposed segmentation methodology is found to have an F-measure of 0.95. An artificial neural network is subsequently used to obtain an intensity-based score for ER cells from pixel colour intensity features. Simultaneously, the proportion score, the percentage of ER-positive cells, is computed via cell counting. The final ER score is computed by adding the intensity and proportion scores, following the standard Allred scoring system used by pathologists. The cell classification F-measure of the classifier is 0.9626. The problem of interobserver variability is addressed by comparing the ER scores quantified by two expert pathologists and by the proposed methodology; the intraclass correlation achieved is greater than 0.90. The study has the potential advantage of assisting pathologists in decision making over the manual procedure and could evolve as part of an automated decision support system together with other receptor scoring/analysis procedures.
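The final step described above, adding an intensity score to a proportion score, follows the standard Allred convention (proportion score 0 to 5, intensity score 0 to 3, total 0 to 8); a minimal sketch with the conventional proportion bins, function names being illustrative:

```python
def allred_proportion_score(percent_positive):
    """Map the percentage of ER-positive cells to the 0-5 Allred proportion score."""
    if percent_positive == 0:
        return 0
    if percent_positive < 1:
        return 1
    if percent_positive <= 10:
        return 2
    if percent_positive <= 33:
        return 3
    if percent_positive <= 66:
        return 4
    return 5

def allred_score(percent_positive, intensity_score):
    """Total Allred score = proportion score (0-5) + intensity score (0-3)."""
    assert 0 <= intensity_score <= 3
    return allred_proportion_score(percent_positive) + intensity_score

# e.g. 45% positive cells with moderate (2) staining intensity
print(allred_score(45, 2))  # 6
```

In the paper's pipeline, the intensity score comes from the neural network and the percentage from the MRF-based cell counts; the addition itself is this simple.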
Affiliation(s)
- T Mungle
- School of Medical Science & Technology, IIT Kharagpur, West Bengal, India
- S Tewary
- School of Medical Science & Technology, IIT Kharagpur, West Bengal, India
- D K Das
- School of Medical Science & Technology, IIT Kharagpur, West Bengal, India
- I Arun
- Tata Medical Center, Kolkata, West Bengal, India
- B Basak
- Tata Medical Center, Kolkata, West Bengal, India
- S Agarwal
- Tata Medical Center, Kolkata, West Bengal, India
- R Ahmed
- Tata Medical Center, Kolkata, West Bengal, India
- S Chatterjee
- Tata Medical Center, Kolkata, West Bengal, India
- C Chakraborty
- School of Medical Science & Technology, IIT Kharagpur, West Bengal, India
40
Mungle T, Tewary S, Arun I, Basak B, Agarwal S, Ahmed R, Chatterjee S, Maity AK, Chakraborty C. Automated characterization and counting of Ki-67 protein for breast cancer prognosis: A quantitative immunohistochemistry approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 139:149-161. [PMID: 28187885 DOI: 10.1016/j.cmpb.2016.11.002] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Revised: 10/21/2016] [Accepted: 11/03/2016] [Indexed: 06/06/2023]
Abstract
Ki-67 protein expression plays an important role in predicting the proliferative status of tumour cells and deciding the future course of therapy in breast cancer. Immunohistochemical (IHC) determination of the Ki-67 score, or labelling index, by estimating the fraction of Ki-67 positively stained tumour cells, is the most widely practiced method to assess tumour proliferation (Dowsett et al. 2011). Accurate counting of these cells (specifically nuclei) is therefore critical, and the complex, dense distribution of cells makes manual counting a major challenge for pathologists. In this paper, we propose a hybrid clustering algorithm to quantify the proliferative index of breast cancer cells based on automated counting of Ki-67 nuclei. The proposed methodology first pre-processes the IHC images of Ki-67-stained breast cancer slides. The RGB images are converted to the grey, L*a*b*, HSI, YCbCr, YIQ and XYZ colour spaces. All the stained cells are then characterized by a two-stage segmentation process: fuzzy C-means groups all the stained cells into one cluster, and the blue channel of the first-stage output is given as input to the k-means algorithm, which provides separate clusters for Ki-67-positive and -negative cells. The counts of positive and negative nuclei are compared with the ground truth for validation, and the F-measure is calculated for each colour space. The F-measure for the L*a*b* colour space (0.8847) is the best among the grey, HSI, YCbCr, YIQ and XYZ colour spaces. Further, nuclei counted manually and by the proposed algorithm are compared, giving an average error rate of 6.84%.
The study provides an automated count of positive and negative nuclei using the L*a*b* colour space and a hybrid segmentation technique. Computerized evaluation of the proliferation index can aid pathologists in assessing breast cancer severity. The proposed methodology also has the potential advantage of saving time and assisting in decision making over the present manual procedure, and could evolve into an assistive pathological decision support system.
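The two-stage idea above can be illustrated on synthetic 1-D stain intensities. Note the simplifications: the paper's first stage uses fuzzy C-means over several colour spaces, whereas this sketch substitutes plain k-means on a single synthetic channel, and the intensity values are invented for the example:

```python
import numpy as np

def kmeans_1d(x, k=2, iters=50, seed=0):
    """Minimal 1-D k-means (substituted here for the paper's fuzzy C-means)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

rng = np.random.default_rng(1)
background = rng.normal(240, 4, 500)   # unstained tissue (bright)
negative = rng.normal(130, 6, 300)     # Ki-67-negative nuclei (hematoxylin blue)
positive = rng.normal(50, 6, 200)      # Ki-67-positive nuclei (DAB brown, dark)
pixels = np.concatenate([background, negative, positive])

# Stage 1: separate stained cells (the darker cluster) from background
labels1, centers1 = kmeans_1d(pixels)
stained = pixels[labels1 == np.argmin(centers1)]

# Stage 2: split the stained cells into Ki-67-positive and -negative
labels2, centers2 = kmeans_1d(stained)
n_pos = int(np.sum(labels2 == np.argmin(centers2)))
labelling_index = 100.0 * n_pos / len(stained)
print(round(labelling_index, 1))  # close to 40.0 for this synthetic mix
```

The labelling index is simply positives over all stained cells; on this clean synthetic data the two-stage split recovers the 200/500 ratio almost exactly.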
Affiliation(s)
- Tushar Mungle
- School of Medical Science & Technology, IIT Kharagpur, Kharagpur, West Bengal, India
- Suman Tewary
- School of Medical Science & Technology, IIT Kharagpur, Kharagpur, West Bengal, India
- Indu Arun
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Bijan Basak
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Sanjit Agarwal
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Rosina Ahmed
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Sanjoy Chatterjee
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Asok Kumar Maity
- Midnapur Medical College and Hospital, Midnapur, West Bengal, India
- Chandan Chakraborty
- School of Medical Science & Technology, IIT Kharagpur, Kharagpur, West Bengal, India
41
Lu C, Xu H, Xu J, Gilmore H, Mandal M, Madabhushi A. Multi-Pass Adaptive Voting for Nuclei Detection in Histopathological Images. Sci Rep 2016; 6:33985. [PMID: 27694950 PMCID: PMC5046183 DOI: 10.1038/srep33985] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 09/02/2016] [Indexed: 12/15/2022] Open
Abstract
Nuclei detection is often a critical initial step in the development of computer-aided diagnosis and prognosis schemes for digital pathology images. While a number of nuclei detection methods have been proposed over the last few years, most of these approaches make idealistic assumptions about the staining quality of the tissue. In this paper, we present a new Multi-Pass Adaptive Voting (MPAV) scheme for nuclei detection that is specifically geared towards images with poor-quality staining and noise arising from tissue preparation artifacts. MPAV utilizes the symmetry of the nuclear boundary and adaptively selects gradients from edge fragments to vote for potential nucleus locations. MPAV was evaluated on three cohorts with different staining methods: Hematoxylin & Eosin, CD31 & Hematoxylin, and Ki-67, where most of the nuclei were unevenly and imprecisely stained. Across a total of 47 images and nearly 17,700 manually labeled nuclei serving as the ground truth, MPAV achieved superior performance, with an area under the precision-recall curve (AUC) of 0.73. MPAV also outperformed three state-of-the-art nuclei detection methods: a single-pass voting method, a multi-pass voting method, and a deep-learning-based method.
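The core voting step can be illustrated with a single-pass sketch on a synthetic dark disc: each strong-gradient edge pixel votes a fixed radius inward (against the gradient, since a dark nucleus on a bright background has outward-pointing gradients), and votes accumulate at the nucleus centre. MPAV itself adaptively refines the gradient selection over multiple passes, which this toy version omits:

```python
import numpy as np

def vote_centers(img, radius=8, grad_thresh=0.1):
    """Single-pass gradient voting: edge pixels vote `radius` pixels
    inward along the (negated) gradient direction."""
    gy, gx = np.gradient(img.astype(float))  # row, column gradients
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(img, dtype=float)
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        # Dark nucleus on bright background: gradient points outward,
        # so vote against it, toward the interior.
        vy = int(round(y - radius * gy[y, x] / mag[y, x]))
        vx = int(round(x - radius * gx[y, x] / mag[y, x]))
        if 0 <= vy < acc.shape[0] and 0 <= vx < acc.shape[1]:
            acc[vy, vx] += mag[y, x]
    return acc

# Synthetic image: one dark disc ("nucleus") of radius 8 centred at (20, 30)
yy, xx = np.mgrid[0:40, 0:60]
img = np.where((yy - 20) ** 2 + (xx - 30) ** 2 <= 8 ** 2, 0.0, 1.0)

acc = vote_centers(img, radius=8)
cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
print(cy, cx)  # accumulator peak lands near the true centre (20, 30)
```

With noisy, partially stained boundaries only some edge fragments vote usefully, which is exactly the problem the adaptive, multi-pass selection in MPAV is designed to handle.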
Affiliation(s)
- Cheng Lu
- College of Computer Science, Shaanxi Normal University, Xi’an, Shaanxi Province, 710119, China
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
- Hongming Xu
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 2V4, Canada
- Jun Xu
- Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Hannah Gilmore
- Department of Pathology-Anatomic, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
- Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, T6G 2V4, Canada
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, 44106-7207, USA
42
Shi P, Zhong J, Hong J, Huang R, Wang K, Chen Y. Automated Ki-67 Quantification of Immunohistochemical Staining Image of Human Nasopharyngeal Carcinoma Xenografts. Sci Rep 2016; 6:32127. [PMID: 27562647 PMCID: PMC4999801 DOI: 10.1038/srep32127] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2016] [Accepted: 08/02/2016] [Indexed: 01/15/2023] Open
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant neoplasm with high incidence in China and south-east Asia. Ki-67 protein expression is closely associated with cell proliferation and the degree of malignancy. Cells with higher Ki-67 expression are more sensitive to chemotherapy and radiotherapy, so assessing it is beneficial to NPC treatment. Automatically analyzing immunohistochemical Ki-67-stained NPC images remains challenging because of the uneven colour distributions across different cell types. To address this problem, an automated image processing pipeline based on clustering of local correlation features is proposed in this paper. Unlike traditional morphology-based methods, our algorithm segments cells by classifying image pixels on the basis of local pixel correlations in particularly selected colour spaces, then characterizes the cells with a set of grading criteria for the reference of pathological analysis. Experimental results showed high accuracy and robustness in nucleus segmentation despite image data variance. The quantitative indicators obtained in this study provide reliable evidence for the analysis of Ki-67-stained NPC microscopic images, which should be helpful in related histopathological research.
Affiliation(s)
- Peng Shi
- School of Mathematics and Computer Science, Fujian Normal University, Fuzhou, Fujian 350117, China
- Jing Zhong
- The Graduate School, Fujian Medical University, Fuzhou, Fujian 350004, China
- Jinsheng Hong
- Department of Radiation Oncology, Laboratory of Radiation Biology, First Affiliated Hospital, Fujian Medical University, Fuzhou, Fujian 350005, China
- Rongfang Huang
- Department of Pathology, Fujian Provincial Cancer Hospital, Fuzhou, Fujian 350014, China
- Kaijun Wang
- School of Mathematics and Computer Science, Fujian Normal University, Fuzhou, Fujian 350117, China
- Yunbin Chen
- The Graduate School, Fujian Medical University, Fuzhou, Fujian 350004, China; Department of Radiology, Fujian Provincial Cancer Hospital, Fuzhou, Fujian 350014, China
43
Pezzilli R, Partelli S, Cannizzaro R, Pagano N, Crippa S, Pagnanelli M, Falconi M. Ki-67 prognostic and therapeutic decision driven marker for pancreatic neuroendocrine neoplasms (PNENs): A systematic review. Adv Med Sci 2016; 61:147-53. [PMID: 26774266 DOI: 10.1016/j.advms.2015.10.001] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2015] [Revised: 09/19/2015] [Accepted: 10/08/2015] [Indexed: 12/14/2022]
Abstract
BACKGROUND We systematically evaluated the current evidence regarding Ki-67 as a prognostic factor in pancreatic neuroendocrine neoplasms, to assess the differences of this marker between primary tumors and distant metastases, as well as between Ki-67 values obtained by fine needle aspiration and by histology. METHODS The literature search was carried out using the MEDLINE/PubMed database, and only papers published in the last 10 years were selected. RESULTS The pancreatic tissue suitable for Ki-67 evaluation was obtained from surgical specimens in the majority of the studies. There was 83% concordance between preoperative and postoperative Ki-67 evaluation. Pooling the data of the studies that compared Ki-67 values obtained from both cytological and surgical specimens, we found that they were not related. The assessment of Ki-67 was manual in the majority of the papers considered for this review. To eliminate manual counting, several imaging methods have been developed, but none of them is routinely used at present. Twenty-two studies also explored the role of Ki-67 as a prognostic marker for pancreatic neuroendocrine neoplasms, and the majority of them showed that Ki-67 is a good prognostic marker of disease progression. Three studies explored the Ki-67 value in metastatic sites, and one study demonstrated that, in metachronous and synchronous liver metastases, there was no significant variation in the index of proliferation. CONCLUSIONS Ki-67 is a reliable prognostic marker for pancreatic neuroendocrine neoplasms.
44
Xing F, Yang L. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review. IEEE Rev Biomed Eng 2016; 9:234-63. [PMID: 26742143 PMCID: PMC5233461 DOI: 10.1109/rbme.2016.2515127] [Citation(s) in RCA: 219] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology and tissue structure. Manual assessment is labor-intensive and prone to interobserver variation. Computer-aided methods, which can significantly improve objectivity and reproducibility, have attracted a great deal of interest in the recent literature. In the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role in describing morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of recent state-of-the-art nucleus/cell segmentation approaches for different types of microscopy images, including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopy. In addition, we discuss the challenges for current methods and potential future work in nucleus/cell detection and segmentation.
45
Xu J, Xiang L, Liu Q, Gilmore H, Wu J, Tang J, Madabhushi A. Stacked Sparse Autoencoder (SSAE) for Nuclei Detection on Breast Cancer Histopathology Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:119-130. [PMID: 26208307 PMCID: PMC4729702 DOI: 10.1109/tmi.2015.2458702] [Citation(s) in RCA: 332] [Impact Index Per Article: 36.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Automated nuclear detection is a critical step for a number of computer-assisted pathology image analysis algorithms, such as automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by (1) the large number of nuclei and the size of high-resolution digitized pathology images, and (2) the variability in size, shape, appearance, and texture of individual nuclei. Recently there has been interest in applying "deep learning" strategies to the classification and analysis of big image data, and histopathology, given its size and complexity, represents an excellent use case. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from pixel intensities alone in order to identify distinguishing features of nuclei. A sliding-window operation is applied to each image to represent image patches via the high-level features obtained from the autoencoder, which are then fed to a classifier that categorizes each patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the ground truth, SSAE achieved an F-measure of 84.49% and an average area under the precision-recall curve (AveP) of 78.83%, outperforming nine other state-of-the-art nuclear detection strategies.
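The sliding-window pipeline above reduces to: extract patches on a stride, featurize each, classify nuclear vs. non-nuclear. A minimal sketch follows, with an important caveat: the real pipeline encodes each patch with the trained SSAE and feeds the code to a softmax classifier, whereas the stand-in below just thresholds mean intensity to mark dark patches on a synthetic bright field:

```python
import numpy as np

def sliding_patches(img, size=8, stride=4):
    """Yield (row, col, patch) for a sliding window over the image."""
    out = []
    for r in range(0, img.shape[0] - size + 1, stride):
        for c in range(0, img.shape[1] - size + 1, stride):
            out.append((r, c, img[r:r + size, c:c + size]))
    return out

def classify_patch(patch, thresh=0.5):
    """Stand-in for the SSAE-feature classifier: mean-intensity test
    marking mostly-dark patches as candidate nuclei."""
    return patch.mean() < thresh

# Synthetic field: bright background with two dark square "nuclei"
img = np.ones((32, 32))
img[4:12, 4:12] = 0.1
img[20:28, 18:26] = 0.1

hits = [(r, c) for r, c, p in sliding_patches(img) if classify_patch(p)]
print(hits)  # [(4, 4), (20, 16), (20, 20)]
```

The stride/size trade-off shown here is the same one the paper faces at scale: a denser stride finds nuclei more reliably but multiplies the number of patches run through the encoder.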
Affiliation(s)
- Jun Xu
- The corresponding authors
- Lei Xiang
- Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Qingshan Liu
- Jiangsu Key Laboratory of Big Data Analysis Technique and CICAEET, Nanjing University of Information Science and Technology, Nanjing 210044, China
- Hannah Gilmore
- Department of Pathology-Anatomic, University Hospitals Case Medical Center, Case Western Reserve University, OH 44106-7207, USA
- Jianzhong Wu
- Jiangsu Cancer Hospital, Nanjing 210000, China
- Jinghai Tang
- Jiangsu Cancer Hospital, Nanjing 210000, China
46
Zhang X, Xing F, Su H, Yang L, Zhang S. High-throughput histopathological image analysis via robust cell segmentation and hashing. Med Image Anal 2015; 26:306-15. [PMID: 26599156 PMCID: PMC4679540 DOI: 10.1016/j.media.2015.10.005] [Citation(s) in RCA: 66] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2014] [Revised: 05/13/2015] [Accepted: 10/16/2015] [Indexed: 11/27/2022]
Abstract
Computer-aided diagnosis of histopathological images usually requires examining all cells for an accurate diagnosis, and traditional computational methods can have efficiency issues when performing such cell-level analysis. In this paper, we propose a robust and scalable solution that enables this analysis in real time. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate the proposed framework on a challenging and clinically important use case, differentiation of two types of lung cancer (adenocarcinoma and squamous cell carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method achieves promising accuracy and running time while searching among half a million cells.
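Hashing is what makes searching half a million cells tractable: each cell's feature vector is reduced to a short binary code, and retrieval ranks database cells by Hamming distance, which is cheap bitwise work. The sketch below uses random-hyperplane hashing (LSH) as a generic stand-in; the paper's actual hashing scheme, feature dimensions, and database size differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_codes(features, planes):
    """Random-hyperplane binary codes: one bit per hyperplane,
    set by which side of the plane the feature vector falls on."""
    return (features @ planes.T > 0).astype(np.uint8)

planes = rng.normal(size=(16, 8))        # 16-bit codes over 8-D features
database = rng.normal(size=(1000, 8))    # "half a million cells" in miniature
db_codes = lsh_codes(database, planes)

# Query: a near-duplicate of database cell 123
query = database[123] + rng.normal(scale=1e-6, size=8)
q_code = lsh_codes(query[None, :], planes)[0]

# Rank database cells by Hamming distance to the query code
hamming = np.count_nonzero(db_codes != q_code[None, :], axis=1)
print(int(hamming[123]))  # 0: the near-duplicate maps to the same code
```

Nearby feature vectors collide into the same (or nearly the same) code, so candidate retrieval becomes a Hamming-ball lookup instead of a full nearest-neighbour scan.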
Affiliation(s)
- Xiaofan Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Fuyong Xing
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
- Hai Su
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Lin Yang
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
47
Abstract
Background Virtual microscopy and advances in machine learning have paved the way for the ever-expanding field of digital pathology. Multiple image-based computing environments capable of performing automated quantitative and morphological analyses are the foundation on which digital pathology is built. Methods The numerous applications of digital pathology in the clinical setting are explored, along with the digital software environments themselves and the analytical modalities specific to digital pathology. Prospective studies, case-control analyses, meta-analyses, and detailed descriptions of software environments pertaining to digital pathology and its clinical use were reviewed. Results Many software environments offer advanced platforms capable of improving digital pathology and potentially influencing clinical decisions. Conclusions The potential of digital pathology is vast, particularly given the numerous software environments available for use. With the tools already available and those in development, the field will continue to advance, particularly in the era of personalized medicine, providing health care professionals with more precise prognostic information and helping guide treatment decisions.
Affiliation(s)
- Daryoush Saeed-Vafa
- Departments of Imaging Research, H. Lee Moffitt Cancer Center & Research Institute, Tampa, Florida
- Anthony M. Magliocco
- Anatomic Pathology, H. Lee Moffitt Cancer Center & Research Institute, Tampa, Florida
48
Su H, Shen Y, Xing F, Qi X, Hirshfield KM, Yang L, Foran DJ. Robust automatic breast cancer staging using a combination of functional genomics and image-omics. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2015:7226-9. [PMID: 26737959 PMCID: PMC4918467 DOI: 10.1109/embc.2015.7320059] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/12/2023]
Abstract
Breast cancer is one of the leading cancers worldwide. Precision medicine is a new trend that systematically examines the molecular and functional genomic information within each patient's cancer to identify patterns that may affect treatment decisions and potential outcomes. As part of precision medicine, computer-aided diagnosis enables joint analysis of functional genomic information and pathological images. In this paper we propose an integrated framework for breast cancer staging using image-omics and functional genomic information. The biomedical imaging informatics framework consists of image-omics extraction, feature combination, and classification. First, robust automatic nuclei detection and segmentation are performed to identify tumor regions, delineate nuclei boundaries, and calculate a set of image-based morphological features; next, a low-dimensional image-omics representation is obtained through principal component analysis and concatenated with the functional genomic features identified by a linear model. A support vector machine for differentiating stage I breast cancer from other stages is then learned. We experimentally demonstrate that, compared with a single type of representation (image-omics), the combination of image-omics and functional genomic features improves classification accuracy by 3%.
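The feature-combination step described above can be sketched with a PCA projection followed by concatenation; the patient counts, feature dimensions, and number of retained components here are arbitrary stand-ins, and the SVM stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 patients, 200 image-omics (morphology) features
# and 10 functional-genomic features per patient
image_omics = rng.normal(size=(50, 200))
genomic = rng.normal(size=(50, 10))

# PCA via SVD on the centred image-omics matrix
X = image_omics - image_omics.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ Vt[:5].T          # project onto the top-5 principal components

# Concatenate low-dimensional image-omics with genomic features
combined = np.hstack([pcs, genomic])
print(combined.shape)  # (50, 15)
```

The combined matrix is what a downstream classifier (an SVM in the paper) would be trained on; the point of the PCA step is to keep the high-dimensional morphology features from swamping the handful of genomic ones.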
Affiliation(s)
- Hai Su
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Yong Shen
- Genetics Institute, University of Florida, Gainesville, FL 32611, USA
- Fuyong Xing
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Xin Qi
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ 08901, USA
- Kim M. Hirshfield
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ 08901, USA
- Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- David J. Foran
- Rutgers Cancer Institute of New Jersey, New Brunswick, NJ 08901, USA
49
Li R, Zhang W, Ji S. Automated identification of cell-type-specific genes in the mouse brain by image computing of expression patterns. BMC Bioinformatics 2014; 15:209. [PMID: 24947138 PMCID: PMC4078975 DOI: 10.1186/1471-2105-15-209] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Accepted: 05/29/2014] [Indexed: 02/07/2023] Open
Abstract
Background Differential gene expression patterns in cells of the mammalian brain result in the morphological, connectional, and functional diversity of cells. A wide variety of studies have shown that certain genes are expressed only in specific cell-types. Analysis of cell-type-specific gene expression patterns can provide insights into the relationship between genes, connectivity, brain regions, and cell-types. However, automated methods for identifying cell-type-specific genes are lacking to date. Results Here, we describe a set of computational methods for identifying cell-type-specific genes in the mouse brain by automated image computing of in situ hybridization (ISH) expression patterns. We applied invariant image feature descriptors to capture local gene expression information from cellular-resolution ISH images. We then built image-level representations by applying vector quantization on the image descriptors. We employed regularized learning methods for classifying genes specifically expressed in different brain cell-types. These methods can also rank image features based on their discriminative power. We used a data set of 2,872 genes from the Allen Brain Atlas in the experiments. Results showed that our methods are predictive of cell-type-specificity of genes. Our classifiers achieved AUC values of approximately 87% when the enrichment level is set to 20. In addition, we showed that the highly-ranked image features captured the relationship between cell-types. Conclusions Overall, our results showed that automated image computing methods could potentially be used to identify cell-type-specific genes in the mouse brain.
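The vector-quantization step above builds a bag-of-words image representation: each local descriptor is assigned to its nearest codeword and the image is summarized as a histogram of codeword counts. A minimal sketch, with a tiny synthetic codebook and descriptors standing in for the learned codebook and the invariant local descriptors:

```python
import numpy as np

# Three 2-D "visual words" standing in for a learned codebook
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword and count."""
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook))

# Synthetic local descriptors extracted from one ISH image
descriptors = np.array([[0.1, 0.1], [0.9, 1.0], [0.1, 0.9], [0.0, 0.2]])
print(bow_histogram(descriptors, codebook).tolist())  # [2, 1, 1]
```

The resulting fixed-length histogram is the image-level representation fed to the regularized classifiers, and ranking codewords by classifier weight is what lets the method surface discriminative image features.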
Affiliation(s)
- Shuiwang Ji
- Department of Computer Science, Old Dominion University, 23529 Norfolk, VA, USA.