1
Shanmugam K, Rajaguru H. Exploration and Enhancement of Classifiers in the Detection of Lung Cancer from Histopathological Images. Diagnostics (Basel) 2023; 13:3289. [PMID: 37892110] [PMCID: PMC10606104] [DOI: 10.3390/diagnostics13203289]
Abstract
Lung cancer is a prevalent malignancy that affects individuals of all genders and is often diagnosed late because symptoms appear late. To catch it early, researchers are developing algorithms that analyze lung cancer images. The primary objective of this work is to propose a novel approach for detecting lung cancer from histopathological images. The histopathological images underwent preprocessing, followed by segmentation with a modified KFCM-based approach; the segmented image intensity values were then dimensionally reduced using Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO). KL Divergence and Invasive Weed Optimization (IWO) were used for feature selection. Seven classifiers (SVM, KNN, Random Forest, Decision Tree, Softmax Discriminant, Multilayer Perceptron, and BLDC) were used to classify the images as benign or malignant. Results were compared using standard metrics, and kappa analysis assessed classifier agreement. The Decision Tree classifier with GWO feature extraction achieved an accuracy of 85.01% without feature selection or hyperparameter tuning. Furthermore, we present a methodology for enhancing classifier accuracy with hyperparameter tuning algorithms based on Adam and RAdam. By combining features from GWO and IWO and using the RAdam algorithm, the Decision Tree classifier achieves a commendable accuracy of 91.57%.
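The kappa analysis mentioned in this abstract measures chance-corrected agreement between predicted and true labels. As a minimal, self-contained illustration (not the authors' code; the toy benign/malignant labels below are invented), Cohen's kappa can be computed directly from two label sequences:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between two label sequences beyond chance."""
    n = len(y_true)
    # Observed agreement: fraction of positions where the labels match.
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # Expected chance agreement from the marginal label frequencies.
    freq_t = Counter(y_true)
    freq_p = Counter(y_pred)
    expected = sum(freq_t[c] * freq_p.get(c, 0) for c in freq_t) / n**2
    return (observed - expected) / (1 - expected)

# Toy labels, purely illustrative (not study data).
truth = ["benign", "malignant", "malignant", "benign", "malignant", "benign"]
pred  = ["benign", "malignant", "benign",    "benign", "malignant", "benign"]
print(round(cohens_kappa(truth, pred), 3))  # → 0.667
```

A kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance, which is why it complements raw accuracy when classes are imbalanced.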
Affiliation(s)
- Harikumar Rajaguru
- Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam 638401, India;
2
Sun J, Zhang X, Li X, Liu R, Wang T. DARMF-UNet: A dual-branch attention-guided refinement network with multi-scale features fusion U-Net for gland segmentation. Comput Biol Med 2023; 163:107218. [PMID: 37393784] [DOI: 10.1016/j.compbiomed.2023.107218]
Abstract
Accurate gland segmentation is critical in diagnosing adenocarcinoma. Automatic gland segmentation methods currently face challenges such as inaccurate edge segmentation, frequent mis-segmentation, and incomplete segmentation. To address these problems, this paper proposes a novel gland segmentation network, the Dual-branch Attention-guided Refinement and Multi-scale Features Fusion U-Net (DARMF-UNet), which fuses multi-scale features using deep supervision. At the first three feature-concatenation layers, a Coordinate Parallel Attention (CPA) module is proposed to guide the network toward key regions. A Dense Atrous Convolution (DAC) block is used at the fourth feature-concatenation layer to extract multi-scale features and capture global information. A hybrid loss function computes the loss of each of the network's segmentation results to achieve deep supervision and improve segmentation accuracy. Finally, the segmentation results at different scales in each part of the network are fused to obtain the final gland segmentation. Experimental results on the Warwick-QU and CRAG gland datasets show that the network improves on the F1 Score, Object Dice, and Object Hausdorff evaluation metrics and segments better than state-of-the-art network models.
Affiliation(s)
- Junmei Sun
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Xin Zhang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Xiumei Li
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Ruyu Liu
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
- Tianyang Wang
- School of Information Science and Technology, Hangzhou Normal University, Hangzhou, China
3
Li Y, Du P, Zeng H, Wei Y, Fu H, Zhong X, Ma X. Integrative models of histopathological images and multi-omics data predict prognosis in endometrial carcinoma. PeerJ 2023; 11:e15674. [PMID: 37583914] [PMCID: PMC10424667] [DOI: 10.7717/peerj.15674]
Abstract
Objective This study aimed to predict the molecular features of endometrial carcinoma (EC) and the overall survival (OS) of EC patients using histopathological imaging. Methods Patients from The Cancer Genome Atlas (TCGA) were separated into a training set (n = 215) and a test set (n = 214) in a 1:1 proportion. By analyzing quantitative histological image features and building a random forest model verified by cross-validation, we constructed prognostic models for OS. Model performance was evaluated with the area under the time-dependent receiver operating characteristic curve (AUC) on the test set. Results Prognostic models based on histopathological imaging features (HIF) predicted OS in the test set (5-year AUC = 0.803). The performance of combining histopathology and omics exceeded that of genomics, transcriptomics, or proteomics alone. Additionally, multi-dimensional omics data, including HIF, genomics, transcriptomics, and proteomics, attained the largest AUCs of 0.866, 0.869, and 0.856 at years 1, 3, and 5, respectively, showing the greatest discrepancy in survival (HR = 18.347, 95% CI [11.09-25.65], p < 0.001). Conclusions These results indicate that the complementary features of HIF can improve prognostic performance for EC patients. Moreover, integrating HIF with multi-dimensional omics data may improve survival prediction and risk stratification in clinical practice.
Affiliation(s)
- Yueyi Li
- Department of Targeting Therapy & Immunology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Peixin Du
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Hao Zeng
- Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Yuhao Wei
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Haoxuan Fu
- Department of Statistics and Data Science, Wharton School, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Xi Zhong
- Department of Critical Care Medicine, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Xuelei Ma
- Department of Targeting Therapy & Immunology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China
4
Srikantamurthy MM, Rallabandi VPS, Dudekula DB, Natarajan S, Park J. Classification of benign and malignant subtypes of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Med Imaging 2023; 23:19. [PMID: 36717788] [PMCID: PMC9885590] [DOI: 10.1186/s12880-023-00964-0]
Abstract
BACKGROUND Grading cancer histopathology slides requires expert pathologists and clinicians, and manually examining whole-slide images is time consuming. Hence, automated classification of histopathological breast cancer subtypes is useful for clinical diagnosis and assessing therapeutic response. Recent deep learning methods for medical image analysis demonstrate the utility of automated imaging classification for relating disease characteristics to diagnosis and patient stratification. METHODS We developed a hybrid model combining a convolutional neural network (CNN) and a long short-term memory recurrent neural network (LSTM RNN) to classify four benign and four malignant breast cancer subtypes. The proposed CNN-LSTM uses a transfer learning approach, leveraging ImageNet pre-training, to classify the four subtypes of each class. The model was evaluated on the BreakHis dataset, which comprises 2480 benign and 5429 malignant cancer images acquired at magnifications of 40×, 100×, 200×, and 400×. RESULTS The proposed hybrid CNN-LSTM model was compared with existing CNN models used for breast histopathological image classification, namely VGG-16, ResNet50, and Inception. All models were built with three different optimizers: adaptive moment estimation (Adam), root mean square propagation (RMSProp), and stochastic gradient descent (SGD), over varying numbers of epochs. Adam proved the best optimizer, with maximum accuracy and minimum model loss on both the training and validation sets. The proposed hybrid CNN-LSTM model achieved the highest overall accuracy: 99% for binary classification of benign versus malignant cancer and 92.5% for multi-class classification of the benign and malignant subtypes.
CONCLUSION The proposed transfer learning approach outperformed state-of-the-art machine and deep learning models in classifying benign and malignant cancer subtypes. The method is also applicable to the classification of other cancers and diseases.
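The optimizer comparison in this abstract centers on Adam's bias-corrected moment estimates. A minimal numpy sketch of the Adam update rule on a toy quadratic (illustrative only; the learning rate and problem are invented, not the paper's training setup):

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates."""
    m = b1 * m + (1 - b1) * grad           # first moment (mean of gradients)
    v = b2 * v + (1 - b2) * grad ** 2      # second moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy problem: minimize f(w) = ||w - 3||^2, whose gradient is 2*(w - 3).
w = np.zeros(2)
m = v = np.zeros(2)
for t in range(1, 201):
    w, m, v = adam_step(w, 2 * (w - 3.0), m, v, t)
print(w)  # converges toward [3., 3.]
```

The per-parameter scaling by the second-moment estimate is what lets Adam make steady progress without per-layer learning-rate tuning, which is consistent with it yielding the lowest loss in the comparison above.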
Affiliation(s)
- Dawood Babu Dudekula
- 3BIGS Omicscore Pvt. Ltd., 909 Lavelle Building, Richmond Circle, Bangalore, 560025 India
- Sathishkumar Natarajan
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
- Junhyung Park
- 3BIGS Co. Ltd, 156, B-831, Geumgang Penterium IX Tower, Hwaseong, 18469 Republic of Korea
5
Da Q, Huang X, Li Z, Zuo Y, Zhang C, Liu J, Chen W, Li J, Xu D, Hu Z, Yi H, Guo Y, Wang Z, Chen L, Zhang L, He X, Zhang X, Mei K, Zhu C, Lu W, Shen L, Shi J, Li J, S S, Krishnamurthi G, Yang J, Lin T, Song Q, Liu X, Graham S, Bashir RMS, Yang C, Qin S, Tian X, Yin B, Zhao J, Metaxas DN, Li H, Wang C, Zhang S. DigestPath: A benchmark dataset with challenge review for the pathological detection and segmentation of digestive-system. Med Image Anal 2022; 80:102485. [DOI: 10.1016/j.media.2022.102485]
6
Prabhu S, Prasad K, Robels-Kelly A, Lu X. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput Biol Med 2022; 142:105209. [DOI: 10.1016/j.compbiomed.2022.105209]
7
Wu C, Zhong J, Lin L, Chen Y, Xue Y, Shi P. Segmentation of HE-stained meningioma pathological images based on pseudo-labels. PLoS One 2022; 17:e0263006. [PMID: 35120175] [PMCID: PMC8815980] [DOI: 10.1371/journal.pone.0263006]
Abstract
Biomedical research is inseparable from the analysis of various histopathological images, and hematoxylin-eosin (HE)-stained images are among the most basic and widely used types. However, current machine learning approaches to analyzing such images rely heavily on manual labeling for training. Fully automated processing of HE-stained images remains challenging because of the high uncertainty in the color intensity, size, and shape of stained cells. For this problem, we propose a fully automatic pixel-wise semantic segmentation method based on pseudo-labels, which significantly reduces the manual cell sketching and labeling work required before machine learning while preserving segmentation accuracy. First, we collect reliable training samples in an unsupervised manner based on K-means clustering results; second, we use a full mixup strategy to augment the training images and obtain a U-Net model that segments nuclei from the background. Experimental results on a meningioma pathology image dataset show that the proposed method performs well and that pathological features derived statistically from the segmentation results can assist in the clinical grading of meningiomas. Compared with other machine learning strategies, it provides a more effective and reliable reference for clinical research.
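The unsupervised pseudo-label collection described in this abstract can be imitated with a plain two-cluster K-means on pixel intensities, taking the darker cluster as the nuclei pseudo-label. The sketch below is a synthetic illustration only: the image, intensity values, and min/max initialization are assumptions, not the authors' pipeline:

```python
import numpy as np

def kmeans_1d(x, iters=20):
    """Plain two-cluster 1-D K-means, initialized at the intensity extremes."""
    centers = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the means.
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

# Synthetic "HE image": ~20% dark nuclei pixels on a bright background.
rng = np.random.default_rng(1)
img = np.where(rng.random((64, 64)) < 0.2,
               rng.normal(40, 5, (64, 64)),
               rng.normal(200, 10, (64, 64)))
labels, centers = kmeans_1d(img.ravel())
# Pseudo-label: the cluster with the darker center is treated as nuclei.
nuclei_mask = (labels == np.argmin(centers)).reshape(img.shape)
print(nuclei_mask.mean())  # roughly the 0.2 nuclei fraction
```

In the paper these cluster-derived labels seed the training of a U-Net, so only the cluster assignment, not a human annotator, supplies the initial supervision.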
Affiliation(s)
- Chongshu Wu
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fuzhou, Fujian, China
- Jing Zhong
- Radiology and Pathology Department, Fujian Provincial Cancer Hospital, Fuzhou, Fujian, China
- Lin Lin
- Radiology Department, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Yanping Chen
- Radiology and Pathology Department, Fujian Provincial Cancer Hospital, Fuzhou, Fujian, China
- Yunjing Xue
- Radiology Department, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Peng Shi
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fuzhou, Fujian, China
8
Abdou MA. Literature review: efficient deep neural networks techniques for medical image analysis. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06960-9]
9
Khan S, Banday SA, Alam M. Big Data for Treatment Planning: Pathways and Possibilities for Smart Healthcare Systems. Curr Med Imaging 2022; 19:19-26. [PMID: 34533449] [DOI: 10.2174/1573405617666210917125642]
Abstract
BACKGROUND Treatment planning is one of the crucial stages of healthcare assessment and delivery, with a significant impact on patient outcomes and system efficiency. With the evolution of transformative healthcare technologies, most areas of healthcare have started collecting data at different levels, producing a surge in the size and complexity of the health data generated every minute. INTRODUCTION This paper explores the characteristics of health data with respect to big data. It also classifies research efforts in treatment planning by the informatics domain used: medical informatics, imaging informatics, and translational bioinformatics. METHODS This survey reviews existing literature on the use of big data technologies for treatment planning in the healthcare ecosystem; a qualitative research methodology was therefore adopted. RESULTS The reviewed literature was analyzed to identify potential gaps in research and to provide insights into high-prospect areas for future work. CONCLUSION The use of big data for treatment planning is rapidly evolving, and the findings of this research can kick-start and streamline specific research pathways in the field.
Affiliation(s)
- Samiya Khan
- School of Mathematics and Computer Science, University of Wolverhampton, Wolverhampton, United Kingdom
- Shoaib Amin Banday
- Department of Electronics & Communication, Islamic University of Science & Technology, Awantipora, India
- Mansaf Alam
- Department of Computer Science, Jamia Millia Islamia, New Delhi, India
10
Chen L, Zeng H, Xiang Y, Huang Y, Luo Y, Ma X. Histopathological Images and Multi-Omics Integration Predict Molecular Characteristics and Survival in Lung Adenocarcinoma. Front Cell Dev Biol 2021; 9:720110. [PMID: 34708036] [PMCID: PMC8542778] [DOI: 10.3389/fcell.2021.720110]
Abstract
Histopathological images and omics profiles play important roles in the prognosis of cancer patients. Here, we extracted quantitative features from histopathological images to predict molecular characteristics and prognosis, and integrated image features with mutation, transcriptomics, and proteomics data for prognosis prediction in lung adenocarcinoma (LUAD). Patients obtained from The Cancer Genome Atlas (TCGA) were divided into a training set (n = 235) and a test set (n = 235). We developed machine learning models on the training set and estimated their predictive performance on the test set. In the test set, the machine learning models could predict genetic aberrations: ALK (AUC = 0.879), BRAF (AUC = 0.847), EGFR (AUC = 0.855), and ROS1 (AUC = 0.848); and transcriptional subtypes: proximal-inflammatory (AUC = 0.897), proximal-proliferative (AUC = 0.861), and terminal respiratory unit (AUC = 0.894) from histopathological images. Moreover, we obtained tissue microarrays from 316 LUAD patients, forming four external validation sets. The prognostic model using image features was predictive of overall survival in the test and all four validation sets, with 5-year AUCs from 0.717 to 0.825. High-risk and low-risk groups stratified by the model showed different survival in the test set (HR = 4.94, p < 0.0001) and three validation sets (HR = 1.64–2.20, p < 0.05). Combining image features with a single omics layer had greater prognostic power in the test set, such as the histopathology + transcriptomics model (5-year AUC = 0.840; HR = 7.34, p < 0.0001). Finally, the model integrating image features with multi-omics achieved the best performance (5-year AUC = 0.908; HR = 19.98, p < 0.0001). Our results indicate that machine learning models based on histopathological image features can predict genetic aberrations, transcriptional subtypes, and survival outcomes of LUAD patients. The integration of histopathological images and multi-omics may provide better survival prediction for LUAD.
Affiliation(s)
- Linyan Chen
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Hao Zeng
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Yu Xiang
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Yeqian Huang
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
- Yuling Luo
- Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
- Xuelei Ma
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
11
Zhang S, Yuan Z, Wang Y, Bai Y, Chen B, Wang H. REUR: A unified deep framework for signet ring cell detection in low-resolution pathological images. Comput Biol Med 2021; 136:104711. [PMID: 34388466] [DOI: 10.1016/j.compbiomed.2021.104711]
Abstract
Detecting signet ring cells (SRCs) in pathological images is essential for carcinoma diagnosis. However, it is time consuming for pathologists to detect SRCs manually from pathological images, and the accuracy of detecting them is also relatively low because of their small sizes. Recently, the exploration of deep learning methods in pathology analysis has been widely investigated by researchers. Nevertheless, the automatic detection of SRCs from real pathological images faces two problems. One is that labeled pathological images are insufficient and usually incomplete. The other is that the training data and the real clinical data have a large difference in resolution. Hence, adopting the transfer learning method affects the performance of deep learning methods. To address these two problems, we present a unified framework named REUR [RetinaNet combining USRNet (unfolding super-resolution network) with the RGHMC (revised gradient harmonizing mechanism classification) loss] that can accurately detect SRCs in low-resolution (LR) pathological images. First, the framework with the super-resolution (SR) module can address the difference in resolution between the training data and the real clinical data. Second, the framework with the label correction module can obtain the revised ground-truth labels from noisy examples, which are embedded into the gradient harmonizing mechanism to acquire the RGHMC loss. The results of the numerical experiments showed that the framework can perform better than other one-stage detectors based on the RetinaNet architecture in the high-resolution (HR) noisy dataset. It achieved a kappa value of 0.74 and an accuracy of 0.89 in the test with 27 randomly selected whole slide images (WSIs), and, thus, it can assist pathologists in better analyzing WSIs. The framework provides an essential method in computer-aided diagnosis for medical applications.
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Ziyang Yuan
- Department of Mathematics, National University of Defense Technology, Changsha, China
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Yang Bai
- Department of Gastroenterology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China
12
Yu H, Zhang X, Song L, Jiang L, Huang X, Chen W, Zhang C, Li J, Yang J, Hu Z, Duan Q, Chen W, He X, Fan J, Jiang W, Zhang L, Qiu C, Gu M, Sun W, Zhang Y, Peng G, Shen W, Fu G. Large-scale gastric cancer screening and localization using multi-task deep neural network. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.006]
14
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9]
15
Qu H, Wu P, Huang Q, Yi J, Yan Z, Li K, Riedlinger GM, De S, Zhang S, Metaxas DN. Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images. IEEE Trans Med Imaging 2020; 39:3655-3666. [PMID: 32746112] [DOI: 10.1109/tmi.2020.3002244]
Abstract
Nuclei segmentation is a fundamental task in histopathology image analysis. Typically, such segmentation tasks require significant effort to manually generate accurate pixel-wise annotations for fully supervised training. To alleviate such tedious and manual effort, in this paper we propose a novel weakly supervised segmentation framework based on partial points annotation, i.e., only a small portion of nuclei locations in each image are labeled. The framework consists of two learning stages. In the first stage, we design a semi-supervised strategy to learn a detection model from partially labeled nuclei locations. Specifically, an extended Gaussian mask is designed to train an initial model with partially labeled data. Then, self-training with background propagation is proposed to make use of the unlabeled regions to boost nuclei detection and suppress false positives. In the second stage, a segmentation model is trained from the detected nuclei locations in a weakly-supervised fashion. Two types of coarse labels with complementary information are derived from the detected points and are then utilized to train a deep neural network. The fully-connected conditional random field loss is utilized in training to further refine the model without introducing extra computational complexity during inference. The proposed method is extensively evaluated on two nuclei segmentation datasets. The experimental results demonstrate that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort.
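The extended Gaussian mask in the first stage turns sparse point annotations into a dense training target. A minimal sketch, assuming one isotropic Gaussian per annotated point with overlapping peaks merged by maximum (the sigma value and merging rule are illustrative, not necessarily the paper's exact design):

```python
import numpy as np

def gaussian_point_mask(shape, points, sigma=3.0):
    """Heatmap from sparse nuclei point annotations: one Gaussian per point."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for (py, px) in points:
        g = np.exp(-((ys - py) ** 2 + (xs - px) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, g)  # overlapping nuclei keep the stronger peak
    return mask

# Two hypothetical partially-labeled nuclei centers in a 32x32 patch.
mask = gaussian_point_mask((32, 32), [(8, 8), (20, 24)])
print(mask.shape, round(mask.max(), 2))  # (32, 32) 1.0
```

A detector trained against such a heatmap only needs point clicks rather than pixel-wise outlines, which is the annotation saving the two-stage framework above exploits.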
16
Zeng H, Chen L, Huang Y, Luo Y, Ma X. Integrative Models of Histopathological Image Features and Omics Data Predict Survival in Head and Neck Squamous Cell Carcinoma. Front Cell Dev Biol 2020; 8:553099. [PMID: 33195188] [PMCID: PMC7658095] [DOI: 10.3389/fcell.2020.553099]
Abstract
Background Both histopathological image features and genomics data are associated with the survival outcomes of cancer patients. However, integrating features of histopathological images, genomics, and other omics to improve prognosis prediction has not been reported in head and neck squamous cell carcinoma (HNSCC). Methods A dataset of 216 HNSCC patients was derived from The Cancer Genome Atlas (TCGA) with information on clinical characteristics, genetic mutation, RNA sequencing, protein expression, and histopathological images. Patients were randomly assigned to training (n = 108) or validation (n = 108) sets. We extracted 593 quantitative image features and used a random forest algorithm with 10-fold cross-validation to build prognostic models for overall survival (OS) on the training set, then compared the area under the time-dependent receiver operating characteristic curve (AUC) on the validation set. Results In the validation set, histopathological image features had significant predictive value for OS (5-year AUC = 0.784). The histopathology + omics models showed better predictive performance than genomics, transcriptomics, or proteomics alone. Moreover, the multi-omics model incorporating image features, genomics, transcriptomics, and proteomics reached maximal 1-, 3-, and 5-year AUCs of 0.871, 0.908, and 0.929, with the most significant survival difference (HR = 10.66, 95% CI: 5.06–26.8, p < 0.001). Decision curve analysis also revealed a better net benefit for the multi-omics model. Conclusion Histopathological images can provide complementary features that improve prognostic performance for HNSCC patients. The integrative model of histopathological image features and omics data might serve as an effective tool for survival prediction and risk stratification in clinical practice.
Affiliation(s)
- Hao Zeng
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University Collaborative Innovation Center, Chengdu, China
- Linyan Chen
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University Collaborative Innovation Center, Chengdu, China
- Yeqian Huang
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Yuling Luo
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
- Xuelei Ma
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University Collaborative Innovation Center, Chengdu, China
17
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. [PMID: 32609615] [PMCID: PMC8580417] [DOI: 10.1109/jbhi.2020.2991043]
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity of efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics as known today across the healthcare continuum, for both radiology and digital pathology applications. The latter is projected to enable informed, more accurate diagnosis, timely prognosis, and effective treatment planning, underpinning precision medicine.
18
You Z, Balbastre Y, Bouvier C, Hérard AS, Gipchtein P, Hantraye P, Jan C, Souedet N, Delzescaux T. Automated Individualization of Size-Varying and Touching Neurons in Macaque Cerebral Microscopic Images. Front Neuroanat 2019; 13:98. [PMID: 31920567] [PMCID: PMC6929681] [DOI: 10.3389/fnana.2019.00098]
Abstract
In biomedical research, cell analysis is important to assess physiological and pathophysiological information. Virtual microscopy offers the unique possibility to study the composition of tissues at a cellular scale. However, images acquired at such high spatial resolution are massive, contain complex information, and are therefore difficult to analyze automatically. In this article, we address the problem of individualization of size-varying and touching neurons in optical microscopy two-dimensional (2-D) images. Our approach is based on a series of processing steps that incorporate progressively more information. (1) After the neuron class is segmented using a Random Forest classifier, a novel min-max filter enhances neuron centroids and boundaries, enabling a region-growing process guided by a contour-based model to reach neuron boundaries and individualize touching neurons. (2) To account for size-varying neurons, an adaptive multiscale procedure for individualizing touching neurons is proposed. This protocol was evaluated in 17 major anatomical regions from three NeuN-stained macaque brain sections presenting diverse and comprehensive neuron densities. Qualitative and quantitative analyses demonstrate that the proposed method provides satisfactory results in most regions (e.g., caudate, cortex, subiculum, and putamen) and outperforms a baseline Watershed algorithm. Neuron counts obtained with our method show high correlation with an adapted stereology technique performed by two experts (0.983 and 0.975, respectively, for the two experts). Neuron diameters obtained with our method ranged between 2 and 28.6 μm, matching values reported in the literature. Further work will aim to evaluate the impact of staining and interindividual variability on our protocol.
Affiliation(s)
- Zhenzhen You
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, China
- Yaël Balbastre
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Clément Bouvier
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Anne-Sophie Hérard
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Pauline Gipchtein
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Philippe Hantraye
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Caroline Jan
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Nicolas Souedet
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
- Thierry Delzescaux
- CEA-CNRS-UMR 9199, Laboratoire des Maladies Neurodégénératives, MIRCen, Université Paris-Saclay, Fontenay-aux-Roses, France
19
Gu Y, Yang J. Multi-level magnification correlation hashing for scalable histopathological image retrieval. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.050] [Citation(s) in RCA: 9]
20
Karobari FM, Suresh HN. Histopathological Image Segmentation Using Modified Kernel-Based Fuzzy C-Means and Edge Bridge and Fill Technique. Journal of Intelligent Systems 2019. [DOI: 10.1515/jisys-2018-0316] [Citation(s) in RCA: 1]
Abstract
Histopathological lung cancer segmentation using regions of interest is one of the emerging research areas in the field of health monitoring systems. In this paper, the histopathological images were collected from the Stanford Tissue Microarray Database (TMAD). After image collection, pre-processing was performed using a normalization technique, which enhances the quality of the histopathological image by eliminating unwanted noise. After pre-processing, segmentation was carried out using the modified kernel-based fuzzy c-means clustering (KFCM) approach along with the edge bridge and fill technique (EBFT), a flexible, high-level machine-learning technique for localizing an object in a complex template. The experimental results show that the proposed approach segments normal and abnormal cancer regions, evaluated in terms of precision, recall, specificity, accuracy, and the Jaccard coefficient. The proposed methodology improved the classification accuracy in lung cancer segmentation by 2.5-5% compared to the existing methods, a deep convolutional neural network (DCNN) and a diffusion-weighted approach.
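The KFCM step described above can be sketched in a few lines. This is a minimal, illustrative implementation of kernel-based fuzzy c-means on pixel intensities with a Gaussian kernel; the function name, parameter defaults, initialization scheme, and the toy data are assumptions for illustration, not the authors' code:

```python
import numpy as np

def kfcm(pixels, c=2, m=2.0, sigma=150.0, n_iter=50):
    """Kernel-based fuzzy c-means on a 1-D array of pixel intensities.

    Returns (centers, memberships); memberships has shape (c, n_pixels).
    """
    x = np.asarray(pixels, dtype=float)
    # Deterministic init: spread initial centres over intensity quantiles.
    v = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(n_iter):
        # Gaussian kernel between every pixel and every centre.
        k = np.exp(-((x[None, :] - v[:, None]) ** 2) / sigma**2)  # (c, n)
        d = np.clip(1.0 - k, 1e-12, None)        # kernel-induced distance
        u = d ** (-1.0 / (m - 1))
        u /= u.sum(axis=0, keepdims=True)        # fuzzy memberships
        w = (u ** m) * k
        v = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)  # centre update
    return v, u

# Toy demo: two intensity populations ("background" vs "nuclei").
demo = np.concatenate([np.full(50, 40.0), np.full(50, 200.0)])
centers, u = kfcm(demo, c=2)
labels = u.argmax(axis=0)  # hard segmentation by maximum membership
```

In practice the segmented image would be the `labels` array reshaped to the image grid; the paper's modification and the EBFT post-processing are not reproduced here.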
Affiliation(s)
- Faiz Mohammad Karobari
- Department of Electronics and Communication Engineering, KNS Institute of Technology, Kogilu Main Road, Yelahanka Hobli, Tirumenahalli, RK Hegde Nagar, Bengaluru, Karnataka 560064, India
- Hosahally Narayangowda Suresh
- Department of Electronics and Instrumentation Engineering, Bangalore Institute of Technology, Bangalore, India
- Research Guide, Visvesvaraya Technological University, Belagavi, Karnataka, India
21
Li Z, Butler E, Li K, Lu A, Ji S, Zhang S. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality. Neuroinformatics 2019; 16:339-349. [PMID: 29435954 DOI: 10.1007/s12021-018-9361-5] [Citation(s) in RCA: 3]
Abstract
Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques of deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data, where the 3D neurons are first projected into binary images and features are then learned using an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Because exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public data set including 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which helps users explore neuron morphologies in an interactive and immersive manner.
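The binary-coding idea above, compressing feature vectors into short codes that are searched by Hamming distance, can be illustrated with a random-projection sketch. The paper learns its codes from SCAE features; the random Gaussian projections, code length, and data below are placeholder assumptions:

```python
import numpy as np

def hash_codes(features, projections):
    """Compress real-valued feature vectors into binary codes by taking the
    sign of projections (LSH-style; the cited work learns its projections
    rather than drawing them at random)."""
    return (features @ projections > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, top_k=3):
    """Return indices of the top_k database codes closest in Hamming distance."""
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(dists, kind="stable")[:top_k]

rng = np.random.default_rng(0)
db_features = rng.normal(size=(100, 64))   # stand-in for deep neuron features
proj = rng.normal(size=(64, 32))           # 32-bit binary codes
db_codes = hash_codes(db_features, proj)
query_code = hash_codes(db_features[7:8], proj)[0]  # query = item 7 itself
hits = hamming_search(query_code, db_codes, top_k=3)
```

The payoff is that comparing 32-bit codes is far cheaper than comparing 64-dimensional float vectors, which is what makes retrieval over tens of thousands of neurons tractable.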
Affiliation(s)
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
- Erik Butler
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
- Kang Li
- Department of Industrial and Systems Engineering, The State University of New Jersey, Piscataway, NJ, 08854, USA
- Aidong Lu
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
- Shuiwang Ji
- School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA, 99164, USA
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
22
Cheng J, Mo X, Wang X, Parwani A, Feng Q, Huang K. Identification of topological features in renal tumor microenvironment associated with patient survival. Bioinformatics 2019; 34:1024-1030. [PMID: 29136101 PMCID: PMC7263397 DOI: 10.1093/bioinformatics/btx723] [Citation(s) in RCA: 48]
Abstract
Motivation: As a highly heterogeneous disease, tumor progression is not only achieved by unlimited growth of the tumor cells but is also supported, stimulated, and nurtured by the microenvironment around it. However, traditional qualitative and/or semi-quantitative parameters obtained by a pathologist's visual examination have very limited capability to capture this interaction between a tumor and its microenvironment. With the advent of digital pathology, computerized image analysis may provide better tumor characterization and give new insights into this problem. Results: We propose a novel bioimage informatics pipeline for automatically characterizing the topological organization of different cell patterns in the tumor microenvironment. We apply this pipeline to the only publicly available large histopathology image dataset for a cohort of 190 patients with papillary renal cell carcinoma obtained from The Cancer Genome Atlas project. Experimental results show that the proposed topological features can successfully stratify early- and middle-stage patients with distinct survival, and show superior performance to traditional clinical features and cellular morphological and intensity features. The proposed features not only provide new insights into the topological organization of cancers, but can also be integrated with genomic data in future studies to develop new integrative biomarkers. Availability and implementation: https://github.com/chengjun583/KIRP-topological-features. Supplementary information: Supplementary data are available at Bioinformatics online.
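As a toy illustration of topological features of this kind, one can build a proximity graph over nuclei centroids and count contacts between cell types. This is not the paper's pipeline, which is far richer; the radius, type labels, and coordinates below are made up for the sketch:

```python
import numpy as np

def interaction_counts(centroids, types, radius=30.0):
    """Count pairwise 'contacts' between cell types: two cells interact when
    their centroids lie within `radius` of each other. Returns a dict keyed
    by unordered type pairs. (A simplified stand-in for graph-based
    topological features; all names are illustrative.)"""
    pts = np.asarray(centroids, dtype=float)
    # Full pairwise distance matrix; fine for small examples.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = {}
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] <= radius:
                key = tuple(sorted((types[i], types[j])))
                counts[key] = counts.get(key, 0) + 1
    return counts

# Four toy cells: a tumor-stroma pair close together, and a tumor-tumor pair.
cells = [(0, 0), (10, 0), (100, 100), (105, 100)]
kinds = ["tumor", "stroma", "tumor", "tumor"]
feats = interaction_counts(cells, kinds, radius=30.0)
```

Counts of this kind (tumor-tumor vs tumor-stroma contacts per image) are one simple fixed-length feature vector that a survival model could consume.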
Affiliation(s)
- Jun Cheng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiaokui Mo
- Center for Biostatistics, The Ohio State University Wexner Medical Center
- Xusheng Wang
- Department of Electrical and Computer Engineering
- Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Kun Huang
- Department of Electrical and Computer Engineering; Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA; Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA
23
Li J, Yang S, Huang X, Da Q, Yang X, Hu Z, Duan Q, Wang C, Li H. Signet Ring Cell Detection with a Semi-supervised Learning Framework. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-20351-1_66] [Citation(s) in RCA: 32]
24
Gu Y, Yang J. Densely-Connected Multi-Magnification Hashing for Histopathological Image Retrieval. IEEE J Biomed Health Inform 2018; 23:1683-1691. [PMID: 30475737 DOI: 10.1109/jbhi.2018.2882647] [Citation(s) in RCA: 14]
Abstract
Content-based medical image retrieval is an important computer-aided diagnosis technique that provides clinicians with interpretative references based on visual similarity. In this paper, we focus on histopathological image retrieval for breast cancer diagnosis. The densely-connected multi-magnification hashing (DCMMH) framework is proposed to generate discriminative binary codes by exploiting histopathological images with multiple magnification factors. The low-magnification images are boosted by the accumulated similarity based on local patches, which also regularizes the feature learning of high-magnification images. In order to fully utilize the information across different magnification levels, a densely-connected architecture is deployed for high-low magnification pairs. Experiments on the BreakHis dataset demonstrate that DCMMH outperforms previous hashing methods on histopathological image retrieval.
25
Hu B, Tang Y, Chang EIC, Fan Y, Lai M, Xu Y. Unsupervised Learning for Cell-Level Visual Representation in Histopathology Images With Generative Adversarial Networks. IEEE J Biomed Health Inform 2018; 23:1316-1328. [PMID: 29994411 DOI: 10.1109/jbhi.2018.2852639] [Citation(s) in RCA: 33]
Abstract
The visual attributes of cells, such as nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representations, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new loss formulation to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but is also capable of cell-level unsupervised classification with interpretable visualization, achieving promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the varieties of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.
26
Zheng Y, Jiang Z, Zhang H, Xie F, Ma Y, Shi H, Zhao Y. Histopathological Whole Slide Image Analysis Using Context-Based CBIR. IEEE Trans Med Imaging 2018; 37:1641-1652. [PMID: 29969415 DOI: 10.1109/tmi.2018.2796130] [Citation(s) in RCA: 33]
Abstract
Histopathological image classification (HIC) and content-based histopathological image retrieval (CBHIR) are two promising applications for the histopathological whole slide image (WSI) analysis. HIC can efficiently predict the type of lesion involved in a histopathological image. In general, HIC can aid pathologists in locating high-risk cancer regions from a WSI by providing a cancerous probability map for the WSI. In contrast, CBHIR was developed to allow searches for regions with similar content for a region of interest (ROI) from a database consisting of historical cases. Sets of cases with similar content are accessible to pathologists, which can provide more valuable references for diagnosis. A drawback of the recent CBHIR framework is that a query ROI needs to be manually selected from a WSI. An automatic CBHIR approach for a WSI-wise analysis needs to be developed. In this paper, we propose a novel aided-diagnosis framework of breast cancer using whole slide images, which shares the advantages of both HIC and CBHIR. In our framework, CBHIR is automatically processed throughout the WSI, based on which a probability map regarding the malignancy of breast tumors is calculated. Through the probability map, the malignant regions in WSIs can be easily recognized. Furthermore, the retrieval results corresponding to each sub-region of the WSIs are recorded during the automatic analysis and are available to pathologists during their diagnosis. Our method was validated on fully annotated WSI data sets of breast tumors. The experimental results certify the effectiveness of the proposed method.
27
Ma Y, Jiang Z, Zhang H, Xie F, Zheng Y, Shi H, Zhao Y, Shi J. Generating region proposals for histopathological whole slide image retrieval. Comput Methods Programs Biomed 2018; 159:1-10. [PMID: 29650303 DOI: 10.1016/j.cmpb.2018.02.020] [Citation(s) in RCA: 5]
Abstract
BACKGROUND AND OBJECTIVE: Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, resulting in a lack of labeling information. Therefore, generating ROIs from WSIs and retrieving images with few labels is an important and challenging task. METHODS: This paper presents a novel unsupervised region proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. RESULTS: The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that, for one WSI, our region proposing method can generate 7.3 thousand contoured regions which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. CONCLUSIONS: The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSIs. The region proposals can also serve as training samples for machine-learning models for image retrieval. The proposed hashing method achieves fast and precise image retrieval with a small amount of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided diagnosis systems.
Affiliation(s)
- Yibing Ma
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China
- Zhiguo Jiang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China
- Haopeng Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China
- Fengying Xie
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China
- Yushan Zheng
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China
- Huaqiang Shi
- Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China; People's Liberation Army Air Force General Hospital, Beijing 100142, China
- Yu Zhao
- Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China
- Jun Shi
- School of Software, Hefei University of Technology, Hefei 230601, China
28
Cortesi M, Llamosas E, Henry CE, Kumaran RYA, Ng B, Youkhana J, Ford CE. I-AbACUS: a Reliable Software Tool for the Semi-Automatic Analysis of Invasion and Migration Transwell Assays. Sci Rep 2018; 8:3814. [PMID: 29491372 PMCID: PMC5830488 DOI: 10.1038/s41598-018-22091-5] [Citation(s) in RCA: 4]
Abstract
The quantification of invasion and migration is an important aspect of cancer research, used both in the study of the molecular processes involved in this collection of diseases and in the evaluation of the efficacy of new potential treatments. The transwell assay, while one of the most widely used techniques for evaluating these characteristics, shows a high dependence on the operator's ability to correctly identify the cells and low protocol standardization. Here we present I-AbACUS, a software tool specifically designed to aid the analysis of transwell assays, which automatically and specifically recognizes cells in images of stained membranes and provides the user with a suggested cell count. A complete description of this instrument, together with its validation against the standard analysis technique for this assay, is presented. Furthermore, we show that I-AbACUS is versatile and able to process images containing cells with different morphologies, and that the obtained results are less dependent on the operator and their experience. We anticipate that this instrument, freely available under the GNU General Public License (GPL v2) at www.marilisacortesi.com as a standalone application, could significantly improve the quantification of invasion and migration of cancer cells.
Affiliation(s)
- Marilisa Cortesi
- Laboratory of Cellular and Molecular Engineering "S. Cavalcanti", Department of Electrical, Electronic and Information Engineering "G. Marconi" (DEI), University of Bologna, Cesena, Italy
- Estelle Llamosas
- Gynaecological Cancer Research Group, Lowy Cancer Research Centre and School of Women's and Children's Health, Faculty of Medicine, University of New South Wales, Sydney, Australia
- Claire E Henry
- Gynaecological Cancer Research Group, Lowy Cancer Research Centre and School of Women's and Children's Health, Faculty of Medicine, University of New South Wales, Sydney, Australia
- Raani-Yogeeta A Kumaran
- Gynaecological Cancer Research Group, Lowy Cancer Research Centre and School of Women's and Children's Health, Faculty of Medicine, University of New South Wales, Sydney, Australia
- Benedict Ng
- Adult Cancer Program, Lowy Cancer Research Center, Prince of Wales Clinical School, University of New South Wales, Sydney, Australia
- Janet Youkhana
- Adult Cancer Program, Lowy Cancer Research Center, Prince of Wales Clinical School, University of New South Wales, Sydney, Australia
- Caroline E Ford
- Gynaecological Cancer Research Group, Lowy Cancer Research Centre and School of Women's and Children's Health, Faculty of Medicine, University of New South Wales, Sydney, Australia
29
Liu C, Huang Y, Ozolek JA, Hanna MG, Singh R, Rohde GK. SetSVM: An Approach to Set Classification in Nuclei-Based Cancer Detection. IEEE J Biomed Health Inform 2018; 23:351-361. [PMID: 29994380 DOI: 10.1109/jbhi.2018.2803793] [Citation(s) in RCA: 12]
Abstract
Due to the importance of nuclear structure in cancer diagnosis, several predictive models have been described for diagnosing a wide variety of cancers based on nuclear morphology. In many computer-aided diagnosis (CAD) systems, cancer detection tasks can generally be formulated as set classification problems, which cannot be solved directly by classifying single instances. In this paper, we propose a novel set classification approach, SetSVM, to build a predictive model that considers any nuclei set as a whole without specific assumptions. SetSVM offers highly discriminative power in cancer detection challenges in the sense that it not only optimizes the classifier decision boundary but also transfers discriminative information to set representation learning. During model training, these two processes are unified in the support vector machine (SVM) maximum separation margin problem. Experimental results show that SetSVM provides significant improvements over five commonly used approaches in cancer detection tasks involving 260 patients in total across three different cancer types, namely thyroid cancer, liver cancer, and melanoma. In addition, we show that SetSVM enables visual interpretation of the discriminative nuclear characteristics representing the nuclei set. These features make SetSVM a potentially practical tool for building accurate and interpretable CAD systems for cancer detection.
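A simple way to see the "set classification" framing is to pool each variable-sized nuclei set into one fixed-length vector before a standard classifier. SetSVM instead learns the set representation jointly with the SVM margin; the mean/max pooling below is only a stand-in for that idea, and all names and numbers are illustrative:

```python
import numpy as np

def set_representation(instance_features):
    """Pool a variable-sized set of per-nucleus feature vectors into one
    fixed-length vector (mean and max pooling). SetSVM learns the set
    representation jointly with the SVM; simple pooling is shown here
    only to illustrate the set-to-vector step."""
    f = np.asarray(instance_features, dtype=float)
    return np.concatenate([f.mean(axis=0), f.max(axis=0)])

# Two toy "patients": nuclei sets of different sizes, 2 features per nucleus.
benign = [[1.0, 0.2], [1.1, 0.1], [0.9, 0.3]]
malignant = [[3.0, 2.1], [2.8, 1.9], [3.2, 2.0], [2.9, 2.2]]
x_b = set_representation(benign)     # 4-D vector regardless of set size
x_m = set_representation(malignant)  # comparable despite 4 vs 3 nuclei
```

Once every set maps to the same fixed dimension, any margin-based classifier can separate patients; SetSVM's contribution is making that mapping itself discriminative.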
30
Cheng J, Zhang J, Han Y, Wang X, Ye X, Meng Y, Parwani A, Han Z, Feng Q, Huang K. Integrative Analysis of Histopathological Images and Genomic Data Predicts Clear Cell Renal Cell Carcinoma Prognosis. Cancer Res 2017; 77:e91-e100. [PMID: 29092949 DOI: 10.1158/0008-5472.can-17-0313] [Citation(s) in RCA: 72]
Abstract
In cancer, both histopathologic images and genomic signatures are used for diagnosis, prognosis, and subtyping. However, combining histopathologic images with genomic data for predicting prognosis, as well as the relationships between them, has rarely been explored. In this study, we present an integrative genomics framework for constructing a prognostic model for clear cell renal cell carcinoma. We used patient data from The Cancer Genome Atlas (n = 410), extracting hundreds of cellular morphologic features from digitized whole-slide images and eigengenes from functional genomics data to predict patient outcome. The risk index generated by our model correlated strongly with survival, outperforming predictions based on considering morphologic features or eigengenes separately. The predicted risk index also effectively stratified patients in early-stage (stage I and stage II) tumors, whereas no significant survival difference was observed using staging alone. The prognostic value of our model was independent of other known clinical and molecular prognostic factors for patients with clear cell renal cell carcinoma. Overall, this workflow and the shared software code provide building blocks for applying similar approaches in other cancers. Cancer Res; 77(21); e91-100. ©2017 AACR.
Affiliation(s)
- Jun Cheng
- Guangdong Province Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Jie Zhang
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio; Department of Medicine, Indiana University School of Medicine, Indianapolis, Indiana
- Yatong Han
- College of Automation, Harbin Engineering University, Harbin, Heilongjiang, China
- Xusheng Wang
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio
- Xiufen Ye
- College of Automation, Harbin Engineering University, Harbin, Heilongjiang, China
- Yuebo Meng
- College of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an, China
- Anil Parwani
- Department of Pathology, The Ohio State University, Columbus, Ohio
- Zhi Han
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio; Department of Medicine, Indiana University School of Medicine, Indianapolis, Indiana; Department of Pathology, The Ohio State University, Columbus, Ohio
- Qianjin Feng
- Guangdong Province Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Kun Huang
- Department of Biomedical Informatics, The Ohio State University, Columbus, Ohio; Department of Medicine, Indiana University School of Medicine, Indianapolis, Indiana
31
Serin F, Erturkler M, Gul M. A novel overlapped nuclei splitting algorithm for histopathological images. Comput Methods Programs Biomed 2017; 151:57-70. [PMID: 28947006 DOI: 10.1016/j.cmpb.2017.08.010] [Citation(s) in RCA: 2]
Abstract
BACKGROUND AND OBJECTIVE: Nuclei segmentation is a common process for the quantitative analysis of histopathological images. However, this process generally results in overlapping nuclei due to the nature of the images, the sample preparation, staining, and image acquisition processes, as well as the insufficiency of 2D histopathological images to represent the 3D characteristics of tissues. We present a novel algorithm to split overlapped nuclei. METHODS: The histopathological images are initially segmented by the K-Means segmentation algorithm. Then, the nuclei cluster is converted to a binary image. Overlapping is detected by applying a threshold area value to the nuclei in the binary image. The splitting algorithm is applied to the overlapped nuclei. In the first stage of splitting, circles are drawn on the overlapped nuclei. The radius of the circles is calculated using the circle area formula, and each pixel coordinate of the overlapped nuclei is selected as a center coordinate for a circle. The pixels in the circle that contains the maximum number of pixels intersecting both the circle and the overlapped nuclei are removed from the overlapped nuclei, and the filled circle is labeled as a nucleus. RESULTS: The algorithm has been tested on histopathological images of healthy and damaged kidney tissues and compared with the results provided by an expert and three related studies. The results demonstrate that the proposed splitting algorithm can segment the overlapping nuclei with an accuracy of 84%. CONCLUSIONS: The study presents a novel algorithm for splitting the overlapped nuclei in histopathological images and provides more accurate cell counting in histopathological analysis. Furthermore, the proposed splitting algorithm has the potential to be used in different fields to split any overlapped circular patterns.
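The splitting stage described in Methods can be sketched directly: fix the radius from the circle area formula, try every blob pixel as a circle center, carve out the circle with maximal overlap, and repeat. A minimal numpy version follows; the stopping rule and the toy blob of two fused circles are assumptions made for the sketch:

```python
import numpy as np

def split_overlapped(mask, nucleus_area):
    """Split an overlapped blob in a binary mask into circular nuclei.

    mask: 2-D bool array holding one connected overlapped region.
    nucleus_area: expected area of a single nucleus, in pixels.
    Returns a label image where each extracted circle gets its own id.
    """
    r = int(round(np.sqrt(nucleus_area / np.pi)))  # circle area formula
    yy, xx = np.indices(mask.shape)
    remaining = mask.copy()
    labels = np.zeros(mask.shape, dtype=int)
    nucleus_id = 0
    # Keep carving out circles while enough of the blob remains.
    while remaining.sum() >= nucleus_area / 2:
        best_cover, best_center = -1, None
        for cy, cx in zip(*np.nonzero(remaining)):
            circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
            cover = np.count_nonzero(circle & remaining)
            if cover > best_cover:
                best_cover, best_center = cover, (cy, cx)
        cy, cx = best_center
        circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= r * r
        nucleus_id += 1
        labels[circle] = nucleus_id  # the filled circle becomes one nucleus
        remaining &= ~circle         # remove the covered pixels and repeat
    return labels

# Toy blob: two radius-6 circles whose centers are 10 px apart, so they fuse.
yy, xx = np.indices((30, 30))
c1 = (yy - 15) ** 2 + (xx - 10) ** 2 <= 36
c2 = (yy - 15) ** 2 + (xx - 20) ** 2 <= 36
mask = c1 | c2
labels = split_overlapped(mask, np.pi * 36)
```

Scanning every blob pixel as a candidate center is quadratic in blob size, so a production version would restrict candidates (e.g., to distance-transform peaks), but the brute-force form matches the description above.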
Affiliation(s)
- Faruk Serin
- Department of Computer Engineering, Faculty of Engineering, Munzur University, Tunceli, Turkey
- Metin Erturkler
- Department of Computer Engineering, Faculty of Engineering, Inonu University, Malatya, Turkey
- Mehmet Gul
- Department of Embryology and Histology, Faculty of Medicine, Inonu University, Malatya, Turkey
Collapse
|
32
Li Z, Zhang X, Müller H, Zhang S. Large-scale retrieval for medical image analytics: A comprehensive review. Med Image Anal 2017; 43:66-84. [PMID: 29031831] [DOI: 10.1016/j.media.2017.09.007]
Abstract
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, through which huge amounts of medical images have been produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning, and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval that can further improve the performance of medical image analysis.
Affiliation(s)
- Zhongyu Li, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Xiaofan Zhang, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Henning Müller, Information Systems Institute, HES-SO Valais, Sierre, Switzerland
- Shaoting Zhang, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
33
Ahmad J, Sajjad M, Mehmood I, Baik SW. SiNC: Saliency-injected neural codes for representation and efficient retrieval of medical radiographs. PLoS One 2017; 12:e0181707. [PMID: 28771497] [PMCID: PMC5542646] [DOI: 10.1371/journal.pone.0181707]
Abstract
Medical image collections contain a wealth of information that can assist radiologists and medical experts in diagnosis and disease detection, supporting well-informed decisions. However, this objective can only be realized if efficient access is provided to semantically relevant cases in ever-growing medical image repositories. In this paper, we present an efficient method for representing medical images by incorporating visual saliency and deep features obtained from a fine-tuned convolutional neural network (CNN) pre-trained on natural images. A saliency detector is employed to automatically identify regions of interest, such as tumors, fractures, and calcified spots, in images prior to feature extraction. Neuronal activation features, termed neural codes, from different CNN layers are comprehensively studied to identify the most appropriate features for representing radiographs. This study reveals that neural codes from the last fully connected layer of the fine-tuned CNN are the most suitable for representing medical images. The neural codes extracted from the entire image and from the salient part of the image are fused to obtain the saliency-injected neural codes (SiNC) descriptor, which is used for indexing and retrieval. Finally, locality-sensitive hashing techniques are applied to the SiNC descriptor to acquire short binary codes that allow efficient retrieval in large-scale image collections. Comprehensive experimental evaluations on the radiology image dataset reveal that the proposed framework achieves high retrieval accuracy and efficiency for scalable image retrieval applications and compares favorably with existing approaches.
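The fuse-then-hash pipeline of this abstract can be illustrated with a small sketch in which random vectors stand in for the CNN neural codes. The concatenation-based fusion and the random-hyperplane LSH shown here are common choices and are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sinc_descriptor(global_code, salient_code):
    """Fuse whole-image and salient-region neural codes into one
    descriptor (L2-normalised concatenation, one plausible fusion)."""
    fused = np.concatenate([global_code, salient_code])
    return fused / np.linalg.norm(fused)

def lsh_hash(descriptor, planes):
    """Random-hyperplane LSH: one bit per side of each hyperplane."""
    return (descriptor @ planes.T > 0).astype(np.uint8)

# stand-ins for fc-layer activations of an image and of its salient crop
g = rng.standard_normal(4096)
s = rng.standard_normal(4096)
desc = sinc_descriptor(g, s)

planes = rng.standard_normal((64, desc.size))   # 64-bit binary code
code = lsh_hash(desc, planes)
```

Hamming distance between such binary codes then serves as the fast retrieval similarity; nearby descriptors tend to share most of their bits.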
Affiliation(s)
- Jamil Ahmad, College of Software and Convergence Technology, Department of Software, Sejong University, Seoul, Republic of Korea
- Muhammad Sajjad, Digital Image Processing Lab, Department of Computer Science, Islamia College, Peshawar, Pakistan
- Irfan Mehmood, Department of Computer Science and Engineering, Sejong University, Seoul, Republic of Korea
- Sung Wook Baik, College of Software and Convergence Technology, Department of Software, Sejong University, Seoul, Republic of Korea
34
Song Y, Li Q, Huang H, Feng D, Chen M, Cai W. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification. IEEE Trans Med Imaging 2017; 36:1636-1649. [PMID: 28358678] [DOI: 10.1109/tmi.2017.2687466]
Abstract
Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method to reduce the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications, including the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and commonly used dimension reduction techniques.
35
Wan T, Zhang W, Zhu M, Chen J, Achim A, Qin Z. Automated mitosis detection in histopathology based on non-Gaussian modeling of complex wavelet coefficients. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.01.008]
36
Tan C, Li K, Yan Z, Yi J, Wu P, Yu HJ, Engelke K, Metaxas DN. Towards large-scale MR thigh image analysis via an integrated quantification framework. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.108]
37
Pan X, Li L, Yang H, Liu Z, Yang J, Zhao L, Fan Y. Accurate segmentation of nuclei in pathological images via sparse reconstruction and deep convolutional networks. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.08.103]
38
Xu Y, Shen F, Xu X, Gao L, Wang Y, Tan X. Large-scale image retrieval with supervised sparse hashing. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.109]
39
Wan T, Cao J, Chen J, Qin Z. Automated grading of breast cancer histopathology using cascaded ensemble with combination of multi-level image features. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.05.084]
40
Li Z, Metaxas DN, Lu A, Zhang S. Interactive Exploration for Continuously Expanding Neuron Databases. Methods 2017; 115:100-109. [DOI: 10.1016/j.ymeth.2017.02.005]
41
Jiang M, Zhang S, Huang J, Yang L, Metaxas DN. Scalable histopathological image analysis via supervised hashing with multiple features. Med Image Anal 2016; 34:3-12. [PMID: 27521299] [DOI: 10.1016/j.media.2016.07.011]
Abstract
Histopathology is crucial to the diagnosis of cancer, yet its interpretation is tedious and challenging. To facilitate this procedure, content-based image retrieval methods have been developed as case-based reasoning tools. In particular, with the rapid growth of digital histopathology, hashing-based retrieval approaches are gaining popularity for their exceptional efficiency and scalability. Nevertheless, few hashing-based histopathological image analysis methods perform feature fusion, despite the fact that it is a common practice for improving image retrieval performance. In response, we exploit joint kernel-based supervised hashing (JKSH) to integrate complementary features in a hashing framework. Specifically, hashing functions are designed based on linearly combined kernel functions associated with the individual features. Supervised information is incorporated to bridge the semantic gap between low-level features and high-level diagnoses. An alternating optimization method is utilized to learn the kernel combination and the hashing functions. The obtained hashing functions compress multiple high-dimensional features into tens of binary bits, enabling fast retrieval from a large database. Our approach is extensively validated on 3121 breast-tissue histopathological images by distinguishing between actionable and benign cases. It achieves 88.1% retrieval precision and 91.3% classification accuracy within a 16.5 ms query time, comparing favorably with traditional methods.
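The linearly combined kernel at the heart of the JKSH idea can be sketched as follows. The toy features, the fixed kernel weights `mu`, and the random projections standing in for the learned, supervised hash functions are all assumptions for illustration; the paper learns both the weights and the hash functions by alternating optimization.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, gamma):
    """RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# two complementary feature types for the same 100 images (toy stand-ins,
# e.g. texture and colour descriptors of different dimensionality)
F1 = rng.standard_normal((100, 16))
F2 = rng.standard_normal((100, 32))

mu = np.array([0.7, 0.3])   # kernel combination weights (assumed, not learned here)

# linearly combined kernel fusing both feature types
K = mu[0] * rbf(F1, F1, 0.1) + mu[1] * rbf(F2, F2, 0.05)

# hashing step: project kernel rows and threshold at the column mean,
# giving a 32-bit binary code per image (random projections stand in
# for the supervised hash functions learned in the paper)
W = rng.standard_normal((K.shape[0], 32))
P = K @ W
bits = (P > P.mean(axis=0)).astype(np.uint8)
```

Retrieval then compares the compact `bits` codes by Hamming distance rather than the original high-dimensional features.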
Affiliation(s)
- Menglin Jiang, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Shaoting Zhang, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Junzhou Huang, Department of Computer Science and Engineering, University of Texas at Arlington, Arlington, TX 76019, USA
- Lin Yang, Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Dimitris N Metaxas, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
42
Detection of lobular structures in normal breast tissue. Comput Biol Med 2016; 74:91-102. [DOI: 10.1016/j.compbiomed.2016.05.004]
43
Zhang S, Metaxas D. Large-scale medical image analytics: Recent methodologies, applications and future directions. Med Image Anal 2016; 33:98-101. [PMID: 27503077] [DOI: 10.1016/j.media.2016.06.010]
Abstract
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative, large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and cope with the size, quality, and variety of the medical images and their associated metadata in a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and can enable novel methods of analysis at much larger scales in an efficient, integrated fashion.
Affiliation(s)
- Shaoting Zhang, Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Dimitris Metaxas, Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
44
Sparks R, Madabhushi A. Out-of-Sample Extrapolation utilizing Semi-Supervised Manifold Learning (OSE-SSL): Content Based Image Retrieval for Histopathology Images. Sci Rep 2016; 6:27306. [PMID: 27264985] [PMCID: PMC4893667] [DOI: 10.1038/srep27306]
Abstract
Content-based image retrieval (CBIR) retrieves the database images most similar to a query image by (1) extracting quantitative image descriptors and (2) calculating the similarity between the database and query image descriptors. Recently, manifold learning (ML) has been used to perform CBIR in a low-dimensional representation of the high-dimensional image descriptor space to avoid the curse of dimensionality. ML schemes are computationally expensive, requiring an eigenvalue decomposition (EVD) for every new query image to learn its low-dimensional representation. We present out-of-sample extrapolation utilizing semi-supervised ML (OSE-SSL) to learn the low-dimensional representation without recomputing the EVD for each query image. OSE-SSL incorporates semantic information, in the form of partial class labels, into an ML scheme such that the low-dimensional representation co-localizes semantically similar images. In the context of prostate histopathology, gland morphology is an integral component of the Gleason score, which enables discrimination between degrees of prostate cancer aggressiveness. Images are represented by shape features extracted from the prostate gland. CBIR with OSE-SSL for prostate histology obtained from 58 patient studies yielded an area under the precision-recall curve (AUPRC) of 0.53 ± 0.03, compared with an AUPRC of 0.44 ± 0.01 for CBIR with Principal Component Analysis (PCA) used to learn the low-dimensional space.
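The key trick of embedding a query without recomputing the EVD can be illustrated with a standard Nyström-style out-of-sample extension. OSE-SSL additionally injects semi-supervised label information, which this sketch omits; the RBF kernel and the synthetic descriptors are assumptions for illustration.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))      # training image descriptors (synthetic)

# one EVD of the training kernel matrix, performed offline
K = rbf(X, X)
vals, vecs = np.linalg.eigh(K)
idx = np.argsort(vals)[::-1][:3]      # keep the top-3 components
lam, V = vals[idx], vecs[:, idx]
embed_train = V * np.sqrt(lam)        # low-dimensional training embedding

def embed_query(x):
    """Nystrom out-of-sample extension: embed a new query using the
    stored eigenpairs instead of recomputing the EVD."""
    k = rbf(x[None, :], X)[0]         # kernel values between query and training set
    return (V.T @ k) / np.sqrt(lam)
```

As a sanity check, extending a query identical to a training image reproduces that image's training embedding, since `V.T @ K[:, i] = lam * V[i, :]` for an eigenvector matrix `V` of `K`.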
Affiliation(s)
- Rachel Sparks, Centre for Medical Image Computing, University College London, London, UK
- Anant Madabhushi, Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
45
Xing F, Yang L. Robust Nucleus/Cell Detection and Segmentation in Digital Pathology and Microscopy Images: A Comprehensive Review. IEEE Rev Biomed Eng 2016; 9:234-63. [PMID: 26742143] [PMCID: PMC5233461] [DOI: 10.1109/rbme.2016.2515127]
Abstract
Digital pathology and microscopy image analysis is widely used for comprehensive studies of cell morphology and tissue structure. Manual assessment is labor intensive and prone to interobserver variation. Computer-aided methods, which can significantly improve objectivity and reproducibility, have attracted a great deal of interest in the recent literature. Within the pipeline of building a computer-aided diagnosis system, nucleus or cell detection and segmentation play a very important role in describing molecular morphological information. In the past few decades, many efforts have been devoted to automated nucleus/cell detection and segmentation. In this review, we provide a comprehensive summary of recent state-of-the-art nucleus/cell segmentation approaches for different types of microscopy images, including bright-field, phase-contrast, differential interference contrast, fluorescence, and electron microscopy. In addition, we discuss the challenges faced by current methods and potential future work on nucleus/cell detection and segmentation.