1
Al-Kadi OS, Di Ieva A. Fractal-Based Analysis of Histological Features of Brain Tumors. Advances in Neurobiology 2024; 36:501-524. [PMID: 38468050 DOI: 10.1007/978-3-031-47606-8_26]
Abstract
The structural complexity of brain tumor tissue represents a major challenge for effective histopathological diagnosis. Tumor vasculature is known to be heterogeneous, and mixtures of patterns are usually present. Therefore, extracting key descriptive features for accurate quantification is not a straightforward task. Several steps are involved in the texture analysis process where tissue heterogeneity contributes to the variability of the results. One of the interesting aspects of the brain lies in its fractal nature. Many regions within the brain tissue yield similar statistical properties at different scales of magnification. Fractal-based analysis of the histological features of brain tumors can reveal the underlying complexity of tissue structure and angiostructure, also providing an indication of tissue abnormality development. It can further be used to quantify the chaotic signature of disease to distinguish between different temporal tumor stages and histopathological grades. The main focus of this chapter is improving the subtype classification of brain meningiomas from histopathological images. Meningioma tissue texture exhibits a wide range of histological patterns whereby a single slide may show a combination of multiple patterns. Distinctive fractal patterns quantified in a multiresolution manner allow a better representation of spatial relationships. Fractal features extracted from textural tissue patterns can be useful in characterizing meningioma tumors in terms of subtype classification, a challenging problem compared to histological grading, and furthermore can provide an objective measure for quantifying subtle features within subtypes that are hard to discriminate.
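A commonly used fractal descriptor in this setting is the box-counting dimension. The sketch below is a rough illustration of that measure on a binarized tissue mask, not the authors' implementation; all data are synthetic.

```python
# Illustrative sketch (not the chapter's code): estimating the box-counting
# fractal dimension of a 2D binary mask with NumPy.
import numpy as np

def box_counting_dimension(mask: np.ndarray, box_sizes=(2, 4, 8, 16, 32, 64)) -> float:
    """Estimate the Minkowski-Bouligand (box-counting) dimension of a binary mask."""
    counts = []
    for size in box_sizes:
        # Trim the mask so it tiles evenly into size x size boxes.
        h = (mask.shape[0] // size) * size
        w = (mask.shape[1] // size) * size
        boxes = mask[:h, :w].reshape(h // size, size, w // size, size)
        # Count boxes containing at least one foreground pixel.
        counts.append(max(boxes.any(axis=(1, 3)).sum(), 1))
    # Slope of log(count) versus log(1/size) gives the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Example: a dense random pattern has a dimension close to 2.
rng = np.random.default_rng(0)
mask = rng.random((512, 512)) > 0.5
print(round(box_counting_dimension(mask), 2))
```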
Affiliation(s)
- Omar S Al-Kadi
- Artificial Intelligence Department, King Abdullah II School for Information Technology, University of Jordan, Amman, Jordan.
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab & Macquarie Neurosurgery, Macquarie Medical School, Faculty of Medicine, Human and Health Sciences, Macquarie University, Sydney, NSW, Australia
2
Mohanty S, Shivanna DB, Rao RS, Astekar M, Chandrashekar C, Radhakrishnan R, Sanjeevareddygari S, Kotrashetti V, Kumar P. Building Automation Pipeline for Diagnostic Classification of Sporadic Odontogenic Keratocysts and Non-Keratocysts Using Whole-Slide Images. Diagnostics (Basel) 2023; 13:3384. [PMID: 37958281 PMCID: PMC10648794 DOI: 10.3390/diagnostics13213384]
Abstract
The microscopic diagnostic differentiation of odontogenic cysts from other cysts is intricate and may cause perplexity for both clinicians and pathologists. Of particular interest is the odontogenic keratocyst (OKC), a developmental cyst with unique histopathological and clinical characteristics. Nevertheless, what distinguishes this cyst is its aggressive nature and high tendency for recurrence. Clinicians encounter challenges in dealing with this frequently encountered jaw lesion, as there is no consensus on surgical treatment. Therefore, the accurate and early diagnosis of such cysts will benefit clinicians in terms of treatment management and spare subjects from the mental agony of suffering from aggressive OKCs, which impact their quality of life. The objective of this research is to develop an automated OKC diagnostic system that can function as a decision support tool for pathologists, whether they are working locally or remotely. This system will provide them with additional data and insights to enhance their decision-making abilities. This research aims to provide an automation pipeline to classify whole-slide images of OKCs and non-keratocysts (non-KCs: dentigerous and radicular cysts). OKC diagnosis and prognosis using the histopathological analysis of tissues using whole-slide images (WSIs) with a deep-learning approach is an emerging research area. WSIs have the unique advantage of magnifying tissues with high resolution without losing information. The contribution of this research is a novel, deep-learning-based, and efficient algorithm that reduces the trainable parameters and, in turn, the memory footprint. This is achieved using principal component analysis (PCA) and the ReliefF feature selection algorithm (ReliefF) in a convolutional neural network (CNN) named P-C-ReliefF. The proposed model reduces the trainable parameters compared to standard CNN, achieving 97% classification accuracy.
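As a hedged illustration of the general idea of reducing trainable parameters through dimensionality reduction and feature selection (not the published P-C-ReliefF code), the sketch below compresses deep features with PCA and then applies a filter-style selector before a light classifier. ReliefF itself is not in scikit-learn, so mutual information is used here as a stand-in ranking criterion; all data are synthetic.

```python
# Conceptual sketch only: PCA compression + feature ranking before classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 1024))   # stand-in for pooled CNN patch features
y = rng.integers(0, 2, size=200)   # 0 = non-keratocyst, 1 = OKC (toy labels)

pipeline = Pipeline([
    ("pca", PCA(n_components=64)),                       # reduce dimensionality / parameters
    ("select", SelectKBest(mutual_info_classif, k=16)),  # keep the most informative components
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)
print("training accuracy:", pipeline.score(X, y))
```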
Affiliation(s)
- Samahit Mohanty
- Department of Computer Science and Engineering, M S Ramaiah University of Applied Sciences, Bengaluru 560054, India;
- Divya B. Shivanna
- Department of Computer Science and Engineering, M S Ramaiah University of Applied Sciences, Bengaluru 560054, India;
- Roopa S. Rao
- Department of Oral Pathology and Microbiology, Faculty of Dental Sciences, M S Ramaiah University of Applied Sciences, Bengaluru 560054, India;
- Madhusudan Astekar
- Department of Oral Pathology, Institute of Dental Sciences, Bareilly 243006, India;
- Chetana Chandrashekar
- Department of Oral & Maxillofacial Pathology & Microbiology, Manipal College of Dental Sciences, Manipal 576104, India; (C.C.); (R.R.)
- Raghu Radhakrishnan
- Department of Oral & Maxillofacial Pathology & Microbiology, Manipal College of Dental Sciences, Manipal 576104, India; (C.C.); (R.R.)
- Vijayalakshmi Kotrashetti
- Department of Oral & Maxillofacial Pathology & Microbiology, Maratha Mandal’s Nathajirao G Halgekar, Institute of Dental Science & Research Centre, Belgaum 590010, India;
- Prashant Kumar
- Department of Oral & Maxillofacial Pathology, Nijalingappa Institute of Dental Science & Research, Gulbarga 585105, India;
3
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177]
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
4
Wang H, Xie M, Chen X, Zhu J, Zhang L, Ding H, Pan Z, He L. Radiomics analysis of contrast-enhanced computed tomography in predicting the International Neuroblastoma Pathology Classification in neuroblastoma. Insights Imaging 2023; 14:106. [PMID: 37316589 DOI: 10.1186/s13244-023-01418-5]
Abstract
PURPOSE To predict the International Neuroblastoma Pathology Classification (INPC) in neuroblastoma using a computed tomography (CT)-based radiomics approach. METHODS We enrolled 297 patients with neuroblastoma retrospectively and divided them into a training group (n = 208) and a testing group (n = 89). To balance the classes in the training group, the Synthetic Minority Over-sampling Technique (SMOTE) was applied. A logistic regression radiomics model based on the radiomics features after dimensionality reduction was then constructed and validated in both the training and testing groups. To evaluate the diagnostic performance of the radiomics model, the receiver operating characteristic curve and calibration curve were utilized. Moreover, decision curve analysis was employed to assess the net benefit of the radiomics model at different high-risk thresholds. RESULTS Seventeen radiomics features were used to construct the radiomics model. In the training group, the radiomics model achieved an area under the curve (AUC), accuracy, sensitivity, and specificity of 0.851 (95% confidence interval (CI) 0.805-0.897), 0.770, 0.694, and 0.847, respectively. In the testing group, the radiomics model achieved an AUC, accuracy, sensitivity, and specificity of 0.816 (95% CI 0.725-0.906), 0.787, 0.793, and 0.778, respectively. The calibration curve indicated that the radiomics model was well fitted in both the training and testing groups (p > 0.05). Decision curve analysis further confirmed that the radiomics model performed well at different high-risk thresholds. CONCLUSION Radiomics analysis of contrast-enhanced CT demonstrates favorable diagnostic capabilities in distinguishing the INPC subgroups of neuroblastoma. CRITICAL RELEVANCE STATEMENT Radiomics features of contrast-enhanced CT images correlate with the International Neuroblastoma Pathology Classification (INPC) of neuroblastoma.
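A minimal sketch of this type of pipeline (class balancing with SMOTE, logistic regression on selected radiomics features, ROC evaluation) is shown below; the feature values and labels are placeholders, not the study's data.

```python
# Hedged sketch of a SMOTE + logistic regression radiomics classifier.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 17))      # 17 selected radiomics features (placeholder values)
y = rng.integers(0, 2, size=297)    # favorable vs unfavorable INPC (toy labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Oversample the minority class in the training set only.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

model = LogisticRegression(max_iter=1000).fit(X_res, y_res)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```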
Affiliation(s)
- Haoru Wang
- Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China
- Mingye Xie
- Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China
- Xin Chen
- Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China
- Jin Zhu
- Department of Pathology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China
- Li Zhang
- Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China
- Hao Ding
- Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China
- Zhengxia Pan
- Department of Cardiothoracic Surgery, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China.
- Ling He
- Department of Radiology, Children's Hospital of Chongqing Medical University, National Clinical Research Center for Child Health and Disorders, Ministry of Education Key Laboratory of Child Development and Disorders, Chongqing Key Laboratory of Pediatrics, No. 136 Zhongshan Road 2, Yuzhong District, Chongqing, 400014, China.
5
Vu QD, Rajpoot K, Raza SEA, Rajpoot N. Handcrafted Histological Transformer (H2T): Unsupervised representation of whole slide images. Med Image Anal 2023; 85:102743. [PMID: 36702037 DOI: 10.1016/j.media.2023.102743]
Abstract
Diagnostic, prognostic and therapeutic decision-making of cancer in pathology clinics can now be carried out based on analysis of multi-gigapixel tissue images, also known as whole-slide images (WSIs). Recently, deep convolutional neural networks (CNNs) have been proposed to derive unsupervised WSI representations; these are attractive as they rely less on expert annotation which is cumbersome. However, a major trade-off is that higher predictive power generally comes at the cost of interpretability, posing a challenge to their clinical use where transparency in decision-making is generally expected. To address this challenge, we present a handcrafted framework based on deep CNN for constructing holistic WSI-level representations. Building on recent findings about the internal working of the Transformer in the domain of natural language processing, we break down its processes and handcraft them into a more transparent framework that we term as the Handcrafted Histological Transformer or H2T. Based on our experiments involving various datasets consisting of a total of 10,042 WSIs, the results demonstrate that H2T based holistic WSI-level representations offer competitive performance compared to recent state-of-the-art methods and can be readily utilized for various downstream analysis tasks. Finally, our results demonstrate that the H2T framework can be up to 14 times faster than the Transformer models.
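One ingredient of such handcrafted aggregation can be imitated, very loosely, by clustering patch-level features into prototypes and pooling each WSI against them; the sketch below is a conceptual analogy only, not the released H2T code, and all data are synthetic.

```python
# Conceptual sketch: prototype-based, fixed-length WSI representations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# patch_features[i]: (n_patches_i, d) deep features for WSI i (placeholder data)
patch_features = [rng.normal(size=(int(rng.integers(50, 100)), 128)) for _ in range(5)]

# 1) Learn prototypes from all patches across the cohort.
all_patches = np.vstack(patch_features)
prototypes = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_patches).cluster_centers_

def wsi_representation(patches: np.ndarray) -> np.ndarray:
    """Average the patch features assigned to each prototype and concatenate."""
    assignment = np.argmin(((patches[:, None, :] - prototypes[None]) ** 2).sum(-1), axis=1)
    pooled = [patches[assignment == k].mean(axis=0) if (assignment == k).any()
              else np.zeros(patches.shape[1]) for k in range(len(prototypes))]
    return np.concatenate(pooled)      # fixed-length, WSI-level descriptor

print(wsi_representation(patch_features[0]).shape)   # (8 * 128,)
```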
Affiliation(s)
- Quoc Dang Vu
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Kashif Rajpoot
- School of Computer Science, University of Birmingham, UK
- Shan E Ahmed Raza
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK; Department of Pathology, University Hospitals Coventry & Warwickshire, UK.
6
Das M, Dash R, Mishra SK. Automatic Detection of Oral Squamous Cell Carcinoma from Histopathological Images of Oral Mucosa Using Deep Convolutional Neural Network. International Journal of Environmental Research and Public Health 2023; 20:2131. [PMID: 36767498 PMCID: PMC9915186 DOI: 10.3390/ijerph20032131]
Abstract
Worldwide, oral cancer is the sixth most common type of cancer. India ranks second worldwide in the number of oral cancer patients, contributing almost one-third of the total count. Among several types of oral cancer, the most common and dominant one is oral squamous cell carcinoma (OSCC). The major causes of oral cancer are tobacco consumption, excessive alcohol consumption, poor oral hygiene, betel quid chewing, and viral infection (notably human papillomavirus). The early detection of OSCC, in its preliminary stage, improves the chances of better treatment and proper therapy. In this paper, the authors propose a convolutional neural network model for the automatic and early detection of OSCC; for experimental purposes, histopathological oral cancer images are considered. The proposed model is compared and analyzed with state-of-the-art deep learning models such as VGG16, VGG19, AlexNet, ResNet50, ResNet101, MobileNet and InceptionNet. The proposed model achieved a cross-validation accuracy of 97.82%, which indicates the suitability of the proposed approach for the automatic classification of oral cancer data.
Affiliation(s)
- Madhusmita Das
- Department of Computer Application, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar 751030, India
- Rasmita Dash
- Department of Computer Science and Engineering, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar 751030, India
- Sambit Kumar Mishra
- Department of Computer Science and Engineering, SRM University-AP, Guntur 522240, India
7
Fiz F, Bottoni G, Bini F, Cerroni F, Marinozzi F, Conte M, Treglia G, Morana G, Sorrentino S, Garaventa A, Siri G, Piccardo A. Prognostic value of texture analysis of the primary tumour in high-risk neuroblastoma: An 18F-DOPA PET study. Pediatr Blood Cancer 2022; 69:e29910. [PMID: 35920594 DOI: 10.1002/pbc.29910]
Abstract
PURPOSE To evaluate the prognostic value of texture analysis of the primary tumour with 18F-dihydroxyphenylalanine positron emission tomography/X-ray computed tomography (18F-DOPA PET/CT) in patients affected by high-risk neuroblastoma (HR-NBL). METHODS We retrospectively analysed 18 patients with HR-NBL, who had been prospectively enrolled in the course of a previous trial investigating the diagnostic role of 18F-DOPA PET/CT at the time of first onset. Texture analysis of the primary tumour was carried out on the PET images using LifeX. Conventional indices, histogram parameters, and grey level co-occurrence (GLCM), run-length (GLRLM), neighbouring difference (NGLDM) and zone-length (GLZLM) matrix parameters were extracted; their values were compared with the overall metastatic load, expressed by means of the whole-body metabolic burden (WBMB) score, and with progression-free/overall survival (PFS and OS). RESULTS There was a direct correlation between WBMB and radiomics parameters describing uptake intensity (SUVmean: p = .004) and voxel heterogeneity (entropy: p = .026; GLCM_Contrast: p = .001). Conversely, texture indices of homogeneity showed an inverse correlation with WBMB (energy: p = .026; GLCM_Homogeneity: p = .006). In the multivariate model, WBMB (p < .01) and the first standardised uptake value (SUV) quartile (p < .001) predicted PFS; OS was predicted by WBMB and N-myc proto-oncogene (MYCN) amplification (p < .05 for both). CONCLUSIONS Textural parameters describing heterogeneity and metabolic intensity of the primary HR-NBL are closely associated with its overall metastatic burden. In turn, the whole-body tumour load appears to be one of the most relevant predictors of progression-free and overall survival.
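The study computed these indices with LifeX on PET volumes; purely as an illustration of what GLCM statistics such as contrast, homogeneity, energy and entropy measure, the sketch below derives them for a 2D image with scikit-image (synthetic data, recent scikit-image API assumed).

```python
# Illustrative GLCM texture features on a quantized 2D image (not the study's pipeline).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)  # toy quantized uptake map

glcm = graycomatrix(image, distances=[1], angles=[0], levels=64, symmetric=True, normed=True)
p = glcm[:, :, 0, 0]  # normalized co-occurrence matrix for distance 1, angle 0

features = {
    "GLCM_Contrast": float(graycoprops(glcm, "contrast")[0, 0]),
    "GLCM_Homogeneity": float(graycoprops(glcm, "homogeneity")[0, 0]),
    "Energy": float(graycoprops(glcm, "energy")[0, 0]),
    # Entropy is not returned by graycoprops, so compute it from the matrix directly.
    "Entropy": float(-(p[p > 0] * np.log2(p[p > 0])).sum()),
}
print(features)
```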
Affiliation(s)
- Francesco Fiz
- Department of Nuclear Medicine, E.O. 'Ospedali Galliera', Genoa, Italy
- Gianluca Bottoni
- Department of Nuclear Medicine, E.O. 'Ospedali Galliera', Genoa, Italy
- Fabiano Bini
- Department of Mechanical and Aerospace Engineering, 'Sapienza' University of Rome, Rome, Italy
- Francesca Cerroni
- Department of Mechanical and Aerospace Engineering, 'Sapienza' University of Rome, Rome, Italy
- Franco Marinozzi
- Department of Mechanical and Aerospace Engineering, 'Sapienza' University of Rome, Rome, Italy
- Massimo Conte
- Oncology Unit, IRCCS Istituto Giannina Gaslini, Genoa, Italy
- Giorgio Treglia
- Clinic of Nuclear Medicine, Imaging Institute of Southern Switzerland, Ente Ospedaliero Cantonale, Bellinzona, Switzerland.,Faculty of Biomedical Sciences, Università della Svizzera italiana, Lugano, Switzerland.,Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
- Giovanni Morana
- Pediatric Neuroradiology Unit, IRCCS Istituto Giannina Gaslini, Genoa, Italy.,Department of Neurosciences, University of Turin, Turin, Italy
- Giacomo Siri
- Scientific Directorate, E.O. 'Ospedali Galliera', Genoa, Italy
- Arnoldo Piccardo
- Department of Nuclear Medicine, E.O. 'Ospedali Galliera', Genoa, Italy
8
Wang Z, Yu L, Ding X, Liao X, Wang L. Lymph Node Metastasis Prediction From Whole Slide Images With Transformer-Guided Multiinstance Learning and Knowledge Transfer. IEEE Transactions on Medical Imaging 2022; 41:2777-2787. [PMID: 35486559 DOI: 10.1109/tmi.2022.3171418]
Abstract
The gold standard for diagnosing lymph node metastasis of papillary thyroid carcinoma is to analyze the whole slide histopathological images (WSIs). Due to the large size of WSIs, recent computer-aided diagnosis approaches adopt the multi-instance learning (MIL) strategy and the key part is how to effectively aggregate the information of different instances (patches). In this paper, a novel transformer-guided framework is proposed to predict lymph node metastasis from WSIs, where we incorporate the transformer mechanism to improve the accuracy from three different aspects. First, we propose an effective transformer-based module for discriminative patch feature extraction, including a lightweight feature extractor with a pruned transformer (Tiny-ViT) and a clustering-based instance selection scheme. Next, we propose a new Transformer-MIL module to capture the relationship of different discriminative patches with sparse distribution on WSIs and better nonlinearly aggregate patch-level features into the slide-level prediction. Considering that the slide-level annotation is relatively limited to training a robust Transformer-MIL, we utilize the pathological relationship between the primary tumor and its lymph node metastasis and develop an effective attention-based mutual knowledge distillation (AMKD) paradigm. Experimental results on our collected WSI dataset demonstrate the efficiency of the proposed Transformer-MIL and attention-based knowledge distillation. Our method outperforms the state-of-the-art methods by over 2.72% in AUC (area under the curve).
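The following PyTorch sketch shows a generic attention-based MIL pooling layer of the kind such frameworks build on; it is schematic only and does not reproduce the Transformer-MIL module or the knowledge-distillation step described above. Dimensions and data are illustrative.

```python
# Generic attention-based multi-instance pooling over patch features (sketch).
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (n_patches, dim) features of one whole-slide image
        weights = torch.softmax(self.attn(patch_feats), dim=0)   # (n_patches, 1)
        slide_feat = (weights * patch_feats).sum(dim=0)          # weighted average -> (dim,)
        return self.classifier(slide_feat)                       # slide-level logits

model = AttentionMILPooling()
logits = model(torch.randn(500, 256))   # 500 patches from one WSI
print(logits.shape)                     # torch.Size([2])
```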
9
Liu Y, Jia Y, Hou C, Li N, Zhang N, Yan X, Yang L, Guo Y, Chen H, Li J, Hao Y, Liu J. Pathological prognosis classification of patients with neuroblastoma using computational pathology analysis. Comput Biol Med 2022; 149:105980. [PMID: 36001926 DOI: 10.1016/j.compbiomed.2022.105980]
Abstract
Neuroblastoma is the most common extracranial solid tumor in early childhood. The International Neuroblastoma Pathology Classification (INPC) is a commonly used classification system that provides clinicians with a reference for treatment stratification. However, given the complex and subjective assessment of the INPC, there will be inconsistencies in the analysis of the same patient by multiple pathologists. An automated, comprehensive and objective classification method is needed to identify different prognostic groups in patients with neuroblastoma. In this study, we collected 563 hematoxylin and eosin-stained histopathology whole-slide images from 107 patients with neuroblastoma who underwent surgical resection. We proposed a novel processing pipeline for nuclear segmentation, cell-level image feature extraction, and patient-level feature aggregation. A logistic regression model was built to classify patients with favorable histology (FH) and patients with unfavorable histology (UH). On the training/test dataset, patient-level nuclear morphological/intensity features combined with age correctly classified patients with a mean area under the receiver operating characteristic curve (AUC) of 0.946, a mean accuracy of 0.856, and a mean Matthews correlation coefficient (MCC) of 0.703. On the independent validation dataset, the classification model achieved a mean AUC of 0.938, a mean accuracy of 0.865 and a mean MCC of 0.630, showing good generalizability. Our results suggested that automatically derived image features could identify differences in nuclear morphology and intensity between different prognostic groups, which could provide a reference to pathologists and facilitate the evaluation of the pathological prognosis in patients with neuroblastoma.
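A simplified sketch of the patient-level step (aggregating nucleus-level features per patient, adding age, and fitting a logistic regression evaluated with AUC and MCC) is given below with synthetic data; it is not the authors' pipeline.

```python
# Hedged sketch: nucleus-level features -> patient-level aggregation -> logistic regression.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# One row per segmented nucleus: patient id plus morphology/intensity features (toy data).
cells = pd.DataFrame({
    "patient": rng.integers(0, 107, size=5000),
    "area": rng.normal(300, 50, size=5000),
    "intensity": rng.normal(120, 20, size=5000),
})
labels = pd.Series(rng.integers(0, 2, size=107), name="unfavorable")   # FH = 0 / UH = 1
age = pd.Series(rng.integers(1, 60, size=107), name="age_months")

# Aggregate nucleus-level features to the patient level (mean and spread), then add age.
patient_feats = cells.groupby("patient").agg(["mean", "std"])
patient_feats.columns = ["_".join(c) for c in patient_feats.columns]
X = patient_feats.join(age).values
y = labels.loc[patient_feats.index].values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]),
      "MCC:", matthews_corrcoef(y_te, clf.predict(X_te)))
```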
Affiliation(s)
- Yanfei Liu
- The Affiliated Children's Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710003, China
- Yuxia Jia
- Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi, 710126, China; International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
- Chongzhi Hou
- The Affiliated Children's Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710003, China
- Nan Li
- Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi, 710126, China; International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China
- Na Zhang
- The Affiliated Children's Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710003, China
- Xiaosong Yan
- The Affiliated Children's Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710003, China
- Li Yang
- Department of Pathology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shanxi, 710032, China
- Yong Guo
- Department of Pathology, Xijing Hospital, The Fourth Military Medical University, Xi'an, Shanxi, 710032, China
- Huangtao Chen
- Department of Neurosurgery, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710032, China
- Jun Li
- Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi, 710126, China; International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China.
- Yuewen Hao
- The Affiliated Children's Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710003, China.
- Jixin Liu
- The Affiliated Children's Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, 710003, China; Center for Brain Imaging, School of Life Science and Technology, Xidian University & Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi'an, Shaanxi, 710126, China; International Joint Research Center for Advanced Medical Imaging and Intelligent Diagnosis and Treatment & Xi'an Key Laboratory of Intelligent Sensing and Regulation of trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi'an, Shaanxi, 710126, China.
10
van der Kamp A, Waterlander TJ, de Bel T, van der Laak J, van den Heuvel-Eibrink MM, Mavinkurve-Groothuis AMC, de Krijger RR. Artificial Intelligence in Pediatric Pathology: The Extinction of a Medical Profession or the Key to a Bright Future? Pediatr Dev Pathol 2022; 25:380-387. [PMID: 35238696 DOI: 10.1177/10935266211059809]
Abstract
Artificial Intelligence (AI) has become of increasing interest over the past decade. While digital image analysis (DIA) is already being used in radiology, it is still in its infancy in pathology. One of the reasons is that large-scale digitization of glass slides has only recently become available. With the advent of digital slide scanners, that digitize glass slides into whole slide images, many labs are now in a transition phase towards digital pathology. However, only few departments worldwide are currently fully digital. Digital pathology provides the ability to annotate large datasets and train computers to develop and validate robust algorithms, similar to radiology. In this opinionated overview, we will give a brief introduction into AI in pathology, discuss the potential positive and negative implications and speculate about the future role of AI in the field of pediatric pathology.
Affiliation(s)
- Ananda van der Kamp
- 541199Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands
- Tomas J Waterlander
- 541199Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands
- Thomas de Bel
- Department of Pathology, 234134Radboud University Medical Center, Nijmegen, the Netherlands
- Jeroen van der Laak
- Department of Pathology, 234134Radboud University Medical Center, Nijmegen, the Netherlands.,Center for Medical Image Science and Visualization, 4566Linköping University, Linköping, Sweden
- Ronald R de Krijger
- 541199Princess Máxima Center for Pediatric Oncology, Utrecht, the Netherlands.,Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
11
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. [DOI: 10.1007/s10462-021-10121-0]
12
Automated Detection and Characterization of Colon Cancer with Deep Convolutional Neural Networks. Journal of Healthcare Engineering 2022; 2022:5269913. [PMID: 36704098 PMCID: PMC9873459 DOI: 10.1155/2022/5269913]
Abstract
Colon cancer is a major cause of illness and death. The conclusive diagnosis of colon cancer is made through histological examination. Convolutional neural networks are being used to analyze colon cancer via digital image processing with the introduction of whole-slide imaging. Accurate categorization of colon cancers is necessary for effective analysis. Our objective is to develop a system for detecting and classifying colon adenocarcinomas by applying a deep convolutional neural network (DCNN) model with some preprocessing techniques on digital histopathology images. Colon cancer remains a leading cause of cancer-related death, even though both traditional and modern methods are capable of comparing images that may encompass cancer regions of various sorts after reviewing a significant number of colon cancer images. The fundamental problem for colon histopathologists is differentiating benign from malignant disease, which involves several complicating factors. Cancer diagnosis can be automated through artificial intelligence (AI), enabling us to evaluate more patients in less time and at a lower cost. Modern deep learning (MDL) and digital image processing (DIP) approaches are used to accomplish this. The results indicate that the proposed architecture can classify cancer tissues with an accuracy of up to 99.80%. By implementing this approach, medical practitioners will have an automated and reliable system for detecting various forms of colon cancer. Moreover, CAD systems can be built in the near future to extract numerous features from colonoscopic images for use as a preprocessing module for colon cancer diagnosis.
13
Zhao L, Xu X, Hou R, Zhao W, Zhong H, Teng H, Han Y, Fu X, Sun J, Zhao J. Lung cancer subtype classification using histopathological images based on weakly supervised multi-instance learning. Phys Med Biol 2021; 66. [PMID: 34794136 DOI: 10.1088/1361-6560/ac3b32]
Abstract
Objective. Subtype classification plays a guiding role in the clinical diagnosis and treatment of non-small-cell lung cancer (NSCLC). However, due to the gigapixel size of whole slide images (WSIs) and the absence of definitive morphological features, most automatic subtype classification methods for NSCLC require manually delineating the regions of interest (ROIs) on WSIs. Approach. In this paper, a weakly supervised framework is proposed for accurate subtype classification while freeing pathologists from pixel-level annotation. With respect to the characteristics of histopathological images, we design a two-stage structure with ROI localization and subtype classification. We first develop a method called multi-resolution expectation-maximization convolutional neural network (MR-EM-CNN) to locate ROIs for subsequent subtype classification. The EM algorithm is introduced to select the discriminative image patches for training a patch-wise network, with only WSI-wise labels available. A multi-resolution mechanism is designed for fine localization, similar to the coarse-to-fine process of manual pathological analysis. In the second stage, we build a novel hierarchical attention multi-scale network (HMS) for subtype classification. HMS can capture multi-scale features flexibly, driven by the attention module, and implement hierarchical feature interaction. Results. Experimental results on the 1002-patient Cancer Genome Atlas dataset achieved an AUC of 0.9602 in the ROI localization and an AUC of 0.9671 for subtype classification. Significance. The proposed method shows superiority compared with other algorithms in the subtype classification of NSCLC. The proposed framework can also be extended to other classification tasks with WSIs.
Affiliation(s)
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Xiaowei Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Runping Hou
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China.,Department of radiation oncology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Wangyuan Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Hai Zhong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Haohua Teng
- Department of pathology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Yuchen Han
- Department of pathology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Xiaolong Fu
- Department of radiation oncology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
14
Bouvier C, Souedet N, Levy J, Jan C, You Z, Herard AS, Mergoil G, Rodriguez BH, Clouchoux C, Delzescaux T. Reduced and stable feature sets selection with random forest for neurons segmentation in histological images of macaque brain. Sci Rep 2021; 11:22973. [PMID: 34836996 PMCID: PMC8626511 DOI: 10.1038/s41598-021-02344-6]
Abstract
In preclinical research, histology images are produced using powerful optical microscopes to digitize entire sections at cell scale. Quantification of stained tissue relies on machine learning driven segmentation. However, such methods require additional information, or features, which increases the quantity of data to process. As a result, the number of features to deal with becomes a drawback when processing large series of massive histological images rapidly and robustly. Existing feature selection methods can reduce the amount of required information but the selected subsets lack reproducibility. We propose a novel methodology operating on high performance computing (HPC) infrastructures and aiming at finding small and stable sets of features for fast and robust segmentation of high-resolution histological images. This selection has two steps: (1) selection at the scale of feature families (an intermediate pool of features, between feature spaces and individual features) and (2) feature selection performed on the pre-selected feature families. We show that the selected sets of features are stable for two different neuron stainings. To test different configurations, one of these datasets is a mono-subject dataset and the other is a multi-subject dataset. Furthermore, the feature selection results in a significant reduction of computation time and memory cost. This methodology will allow exhaustive histological studies at a high-resolution scale on HPC infrastructures for both preclinical and clinical research.
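The general idea of scoring feature families rather than individual features can be sketched with a random forest as follows; the grouping of columns into families and all data are assumptions made for illustration, not the published selection procedure.

```python
# Hedged sketch: family-level feature selection with random forest importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 24))                 # pixel/superpixel descriptors (toy data)
y = rng.integers(0, 2, size=2000)               # neuron vs background (toy labels)

# Assumed grouping of columns into feature families (e.g., colour, texture, gradient).
families = {"colour": [0, 1, 2, 3], "texture": list(range(4, 16)), "gradient": list(range(16, 24))}

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
family_scores = {name: rf.feature_importances_[cols].sum() for name, cols in families.items()}

# Keep the top-scoring families; individual features could then be re-ranked inside them.
kept = sorted(family_scores, key=family_scores.get, reverse=True)[:2]
kept_cols = [c for name in kept for c in families[name]]
print("kept families:", kept, "-> reduced feature count:", len(kept_cols))
```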
Affiliation(s)
- C Bouvier
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Witsee, Paris, France
- N Souedet
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- J Levy
- Service de Médecine Physique Et de Réadaptation - APHP Hôpital Raymond Poincaré, Garches, France
- UMR 1179, Handicap Neuromusculaire - INSERM-UVSQ, Montigny le Bretonneux, France
- C Jan
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Z You
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Shaanxi Key Laboratory for Network Computing and Security Technology, School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China
- A-S Herard
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- C Clouchoux
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France
- Witsee, Paris, France
- T Delzescaux
- CEA, CNRS, MIRCen, Laboratoire Des Maladies Neurodégénératives, Université Paris-Saclay, Fontenay-aux-Roses, France.
15
Su R, Liu X, Jin Q, Liu X, Wei L. Identification of glioblastoma molecular subtype and prognosis based on deep MRI features. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107490]
16
Shifat-E-Rabbi M, Yin X, Rubaiyat AHM, Li S, Kolouri S, Aldroubi A, Nichols JM, Rohde GK. Radon Cumulative Distribution Transform Subspace Modeling for Image Classification. Journal of Mathematical Imaging and Vision 2021; 63:1185-1203. [PMID: 35464640 PMCID: PMC9032314 DOI: 10.1007/s10851-021-01052-0]
Abstract
We present a new supervised image classification method applicable to a broad class of image deformation models. The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data, whose mathematical properties are exploited to express the image data in a form that is more suitable for machine learning. While certain operations such as translation, scaling, and higher-order transformations are challenging to model in native image space, we show the R-CDT can capture some of these variations and thus render the associated image classification problems easier to solve. The method - utilizing a nearest-subspace algorithm in the R-CDT space - is simple to implement, non-iterative, has no hyper-parameters to tune, is computationally efficient, label efficient, and provides competitive accuracies to state-of-the-art neural networks for many types of classification problems. In addition to the test accuracy performances, we show improvements (with respect to neural network-based methods) in terms of computational efficiency (it can be implemented without the use of GPUs), number of training samples needed for training, as well as out-of-distribution generalization. The Python code for reproducing our results is available at [1].
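The nearest-subspace decision rule itself is straightforward to sketch; the example below (synthetic data, with the R-CDT transform omitted) builds an orthonormal basis per class with the SVD and assigns a sample to the class whose subspace reconstructs it best.

```python
# Illustrative nearest-subspace classifier (the R-CDT step is not shown here).
import numpy as np

def fit_class_subspaces(X: np.ndarray, y: np.ndarray, rank: int = 10) -> dict:
    """Return an orthonormal basis (columns of U) for each class."""
    bases = {}
    for label in np.unique(y):
        U, _, _ = np.linalg.svd(X[y == label].T, full_matrices=False)
        bases[label] = U[:, :rank]
    return bases

def predict(bases: dict, x: np.ndarray):
    # Assign x to the class whose subspace reconstructs it with the smallest error.
    errors = {label: np.linalg.norm(x - B @ (B.T @ x)) for label, B in bases.items()}
    return min(errors, key=errors.get)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, size=(50, 64)),      # class 0 samples (toy data)
               rng.normal(4, 1, size=(50, 64))])      # class 1 samples (toy data)
y = np.repeat([0, 1], 50)
bases = fit_class_subspaces(X, y)
print(predict(bases, X[0]), predict(bases, X[-1]))    # one sample from each class
```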
Affiliation(s)
- Shiying Li
- Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA
- Soheil Kolouri
- Department of Computer Science, Vanderbilt University, Nashville, TN 37212, USA
- Akram Aldroubi
- Department of Mathematics, Vanderbilt University, Nashville, TN 37212, USA
- Gustavo K. Rohde
- Department of Biomedical Engineering and the Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22908, USA
17
Wang D, Liu C, Wang X, Liu X, Lan C, Zhao P, Cho WC, Graeber MB, Liu Y. Automated Machine-Learning Framework Integrating Histopathological and Radiological Information for Predicting IDH1 Mutation Status in Glioma. Frontiers in Bioinformatics 2021; 1:718697. [PMID: 36303770 PMCID: PMC9581043 DOI: 10.3389/fbinf.2021.718697]
Abstract
Diffuse gliomas are the most common malignant primary brain tumors. Identification of isocitrate dehydrogenase 1 (IDH1) mutations aids the diagnostic classification of these tumors and the prediction of their clinical outcomes. While histology continues to play a key role in frozen section diagnosis, as a diagnostic reference and as a method for monitoring disease progression, recent research has demonstrated the ability of multi-parametric magnetic resonance imaging (MRI) sequences to predict IDH genotypes. In this paper, we aim to improve the prediction accuracy of IDH1 genotypes by integrating multi-modal imaging information from digitized histopathological data derived from routine histological slide scans and MRI sequences including T1-contrast (T1) and fluid-attenuated inversion recovery (T2-FLAIR) imaging. In this research, we have established an automated framework to process, analyze and integrate the histopathological and radiological information from high-resolution pathology slides and multi-sequence MRI scans. Our machine-learning framework comprehensively computed multi-level information, including molecular-level, cellular-level, and texture-level information, relevant to predicting IDH genotypes. Firstly, an automated pre-processing step was developed to select the regions of interest (ROIs) from pathology slides. Secondly, to interactively fuse the multimodal complementary information, comprehensive feature information was extracted from the pathology ROIs and the segmented tumor regions (enhanced tumor, edema and non-enhanced tumor) from the MRI sequences. Thirdly, a Random Forest (RF)-based algorithm was employed to identify and quantitatively characterize features of histopathological and radiological origin, respectively. Finally, we integrated the multi-modal imaging features with a machine-learning algorithm and tested the performance of the framework for IDH1 genotyping; we also provided visual and statistical explanations to support the understanding of the prediction outcomes. The training and testing experiments on 217 pathologically verified, IDH1-genotyped glioma cases from multiple sources validated that our fully automated machine-learning model predicted IDH1 genotypes with greater accuracy and reliability than models that were based on radiological imaging data only. The accuracy of IDH1 genotype prediction was 0.90 compared to 0.82 for the radiomics-only model. Thus, the integration of multi-parametric imaging features for automated analysis of cross-modal biomedical data improved the prediction accuracy of glioma IDH1 genotypes.
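A highly simplified sketch of the final fusion step (concatenating pathology-derived and MRI-derived feature vectors and training a random forest on IDH1 labels) is given below; the feature counts and data are placeholders, not the framework's actual inputs.

```python
# Hedged sketch: early fusion of pathology and MRI features for IDH1 prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
pathology_feats = rng.normal(size=(217, 40))   # cellular/texture features from slide ROIs (toy)
mri_feats = rng.normal(size=(217, 30))         # radiomic features from T1-contrast / T2-FLAIR regions (toy)
idh1_mutant = rng.integers(0, 2, size=217)     # toy labels

X = np.hstack([pathology_feats, mri_feats])    # concatenate the two modalities
rf = RandomForestClassifier(n_estimators=300, random_state=0)
print("cross-validated accuracy:", cross_val_score(rf, X, idh1_mutant, cv=5).mean())
```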
Affiliation(s)
- Dingqian Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Cuicui Liu
- Department of Neurology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Xuejun Liu
- Department of Radiology, Hospital Affiliated to Qingdao University, Qingdao, China
- Chuanjin Lan
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
- Peng Zhao
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
- William C. Cho
- Department of Clinical Oncology, Queen Elizabeth Hospital, Kowloon, Hong Kong, SAR China
- Manuel B. Graeber
- Ken Parker Brain Tumor Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Yingchao Liu
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Department of Neurosurgery, Shandong Provincial Hospital Affiliated to Shandong University, Jinan, China
18
Sharma A, Tarbox L, Kurc T, Bona J, Smith K, Kathiravelu P, Bremer E, Saltz JH, Prior F. PRISM: A Platform for Imaging in Precision Medicine. JCO Clin Cancer Inform 2021; 4:491-499. [PMID: 32479186 PMCID: PMC7328100 DOI: 10.1200/cci.20.00001]
Abstract
PURPOSE Precision medicine requires an understanding of individual variability, which can only be acquired from large data collections such as those supported by the Cancer Imaging Archive (TCIA). We have undertaken a program to extend the types of data TCIA can support. This, in turn, will enable TCIA to play a key role in precision medicine research by collecting and disseminating high-quality, state-of-the-art, quantitative imaging data that meet the evolving needs of the cancer research community. METHODS A modular technology platform is presented that would allow existing data resources, such as TCIA, to evolve into a comprehensive data resource that meets the needs of users engaged in translational research for imaging-based precision medicine. This Platform for Imaging in Precision Medicine (PRISM) helps streamline the deployment and improve TCIA's efficiency and sustainability. More importantly, its inherent modular architecture facilitates a piecemeal adoption by other data repositories. RESULTS PRISM includes services for managing radiology and pathology images and features and associated clinical data. A semantic layer is being built to help users explore diverse collections and pool data sets to create specialized cohorts. PRISM includes tools for image curation and de-identification. It includes image visualization and feature exploration tools. The entire platform is distributed as a series of containerized microservices with representational state transfer interfaces. CONCLUSION PRISM is helping modernize, scale, and sustain the technology stack that powers TCIA. Repositories can take advantage of individual PRISM services such as de-identification and quality control. PRISM is helping scale image informatics for cancer research at a time when the size, complexity, and demands to integrate image data with other precision medicine data-intensive commons are mounting.
Affiliation(s)
- Lawrence Tarbox
- University of Arkansas for Medical Sciences, Little Rock, AR
- Jonathan Bona
- University of Arkansas for Medical Sciences, Little Rock, AR
- Kirk Smith
- University of Arkansas for Medical Sciences, Little Rock, AR
- Fred Prior
- University of Arkansas for Medical Sciences, Little Rock, AR
19
Mi W, Li J, Guo Y, Ren X, Liang Z, Zhang T, Zou H. Deep Learning-Based Multi-Class Classification of Breast Digital Pathology Images. Cancer Manag Res 2021; 13:4605-4617. [PMID: 34140807 PMCID: PMC8203273 DOI: 10.2147/cmar.s312608]
Abstract
Introduction Breast cancer, one of the most common health threats to females worldwide, has always been a crucial topic in the medical field. With the rapid development of digital pathology, many scholars have used AI-based systems to classify breast cancer pathological images. However, most existing studies only stayed on the binary classification of breast lesions (normal vs tumor or benign vs malignant), far from meeting the clinical demand. Therefore, we established a multi-class classification system of breast digital pathology images based on AI, which is more clinically practical than the binary classification system. Methods In this paper, we adopted a two-stage architecture based on deep learning method and machine learning method for the multi-class classification (normal tissue, benign lesion, ductal carcinoma in situ, and invasive carcinoma) of breast digital pathological images. Results The proposed approach achieved an overall accuracy of 86.67% at patch-level. At WSI-level, the overall accuracies of our classification system were 88.16% on validation data and 90.43% on test data. Additionally, we used two public datasets, the BreakHis and BACH, for independent verification. The accuracies our model obtained on these two datasets were comparable to related publications. Furthermore, our model could achieve accuracies of 85.19% on multi-classification and 96.30% on binary classification (non-malignant vs malignant) using pathology images of frozen sections, which was proven to have good generalizability. Then, we used t-SNE for visualization of patch classification efficiency. Finally, we analyzed morphological characteristics of patches learned by the model. Conclusion The proposed two-stage model could be effectively applied to the multi-class classification task of breast pathology images and could be a very useful tool for assisting pathologists in diagnosing breast cancer.
Affiliation(s)
- Weiming Mi
- Department of Automation, School of Information Science and Technology, Tsinghua University, Beijing, Peoples Republic of China.,Beijing National Research Center for Information Science and Technology, Beijing, Peoples Republic of China
- Junjie Li
- Molecular Pathology Research Center, Department of Pathology, Peking Union Medical College Hospital (PUMCH), Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, Peoples Republic of China
- Yucheng Guo
- Tsimage Medical Technology, Yantian Modern Industry Service Center, Shenzhen, Peoples Republic of China
- Xinyu Ren
- Molecular Pathology Research Center, Department of Pathology, Peking Union Medical College Hospital (PUMCH), Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, Peoples Republic of China
- Zhiyong Liang
- Molecular Pathology Research Center, Department of Pathology, Peking Union Medical College Hospital (PUMCH), Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, Peoples Republic of China
- Tao Zhang
- Department of Automation, School of Information Science and Technology, Tsinghua University, Beijing, Peoples Republic of China.,Beijing National Research Center for Information Science and Technology, Beijing, Peoples Republic of China
- Hao Zou
- Tsimage Medical Technology, Yantian Modern Industry Service Center, Shenzhen, Peoples Republic of China.,Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, Peoples Republic of China
20
Cicalese PA, Mobiny A, Shahmoradi Z, Yi X, Mohan C, Van Nguyen H. Kidney Level Lupus Nephritis Classification Using Uncertainty Guided Bayesian Convolutional Neural Networks. IEEE J Biomed Health Inform 2021; 25:315-324. [PMID: 33206612 DOI: 10.1109/jbhi.2020.3039162]
Abstract
The kidney biopsy based diagnosis of Lupus Nephritis (LN) is characterized by low inter-observer agreement, with misdiagnosis being associated with increased patient morbidity and mortality. Although various Computer Aided Diagnosis (CAD) systems have been developed for other nephrohistopathological applications, little has been done to accurately classify kidneys based on their kidney level Lupus Glomerulonephritis (LGN) scores. The successful implementation of CAD systems has also been hindered by the diagnosing physician's perceived classifier strengths and weaknesses, which has been shown to have a negative effect on patient outcomes. We propose an Uncertainty-Guided Bayesian Classification (UGBC) scheme that is designed to accurately classify control, class I/II, and class III/IV LGN (3 class) at both the glomerular-level classification task (26,634 segmented glomerulus images) and the kidney-level classification task (87 MRL/lpr mouse kidney sections). Data annotation was performed using a high throughput, bulk labeling scheme that is designed to take advantage of Deep Neural Network's (or DNNs) resistance to label noise. Our augmented UGBC scheme achieved a 94.5% weighted glomerular-level accuracy while achieving a weighted kidney-level accuracy of 96.6%, improving upon the standard Convolutional Neural Network (CNN) architecture by 11.8% and 3.5% respectively.
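Monte-Carlo dropout is one common, approximate route to the kind of predictive uncertainty a Bayesian CNN provides; the PyTorch sketch below illustrates that generic mechanism only and is not the authors' UGBC implementation. The network, dimensions and data are arbitrary.

```python
# Generic Monte-Carlo dropout sketch for per-image predictive uncertainty.
import torch
import torch.nn as nn

class SmallDropoutCNN(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.3), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    model.train()  # keep dropout active at inference time
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)   # predictive mean and per-class spread (uncertainty)

model = SmallDropoutCNN()
mean_probs, uncertainty = predict_with_uncertainty(model, torch.randn(4, 3, 64, 64))
print(mean_probs.shape, uncertainty.shape)   # torch.Size([4, 3]) twice
```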
Collapse
|
21
|
Tellez D, Litjens G, van der Laak J, Ciompi F. Neural Image Compression for Gigapixel Histopathology Image Analysis. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2021; 43:567-578. [PMID: 31442971 DOI: 10.1109/tpami.2019.2936841] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
We propose Neural Image Compression (NIC), a two-step method to build convolutional neural networks for gigapixel image analysis solely using weak image-level labels. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, retaining high-level information while suppressing pixel-level noise. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations. We compared several encoding strategies, namely reconstruction error minimization, contrastive training and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. We found that NIC can successfully exploit visual cues associated with image-level labels, integrating both global and local visual information. Furthermore, we visualized the regions of the input gigapixel images that the CNN attended to and confirmed that they overlapped with annotations from human experts.
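The compression-then-classification workflow can be sketched roughly as follows, assuming a toy patch encoder and a small CNN head; the network sizes, tile size and code length are arbitrary stand-ins for the published NIC design.

```python
# Sketch of the neural-image-compression idea: a patch encoder maps each
# tile of a gigapixel slide to a short code; the codes are re-assembled on
# the tile grid into a compact "image" on which a small CNN predicts the
# slide-level label. Encoder/classifier sizes here are arbitrary assumptions.
import torch
import torch.nn as nn

PATCH, CODE = 128, 16          # tile size and embedding length (assumed)

encoder = nn.Sequential(        # stand-in for the unsupervised encoder
    nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, CODE))

classifier = nn.Sequential(     # CNN operating on the compressed grid
    nn.Conv2d(CODE, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

def compress(slide, patch=PATCH):
    """slide: (3, H, W) tensor; returns (CODE, H//patch, W//patch) grid."""
    _, H, W = slide.shape
    rows = []
    for i in range(0, H - patch + 1, patch):
        row = [encoder(slide[:, i:i+patch, j:j+patch].unsqueeze(0)).squeeze(0)
               for j in range(0, W - patch + 1, patch)]
        rows.append(torch.stack(row, dim=1))          # (CODE, n_cols)
    return torch.stack(rows, dim=1)                   # (CODE, n_rows, n_cols)

slide = torch.rand(3, 1024, 1024)                     # toy stand-in slide
grid = compress(slide)
logits = classifier(grid.unsqueeze(0))                # weak slide-level label head
print(grid.shape, logits.shape)
```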
Collapse
|
22
|
Zeng H, Chen L, Huang Y, Luo Y, Ma X. Integrative Models of Histopathological Image Features and Omics Data Predict Survival in Head and Neck Squamous Cell Carcinoma. Front Cell Dev Biol 2020; 8:553099. [PMID: 33195188 PMCID: PMC7658095 DOI: 10.3389/fcell.2020.553099] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Accepted: 10/08/2020] [Indexed: 02/05/2023] Open
Abstract
Background Both histopathological image features and genomics data are associated with the survival outcomes of cancer patients. However, integrating features of histopathological images, genomics and other omics to improve prognosis prediction has not been reported in head and neck squamous cell carcinoma (HNSCC). Methods A dataset of 216 HNSCC patients was derived from The Cancer Genome Atlas (TCGA) with information on clinical characteristics, genetic mutation, RNA sequencing, protein expression and histopathological images. Patients were randomly assigned to training (n = 108) or validation (n = 108) sets. We extracted 593 quantitative image features and used a random forest algorithm with 10-fold cross-validation to build prognostic models for overall survival (OS) in the training set, then compared the area under the time-dependent receiver operating characteristic curve (AUC) in the validation set. Results In the validation set, histopathological image features had significant predictive value for OS (5-year AUC = 0.784). The histopathology + omics models showed better predictive performance than genomics, transcriptomics or proteomics alone. Moreover, the multi-omics model incorporating image features, genomics, transcriptomics and proteomics reached the maximal 1-, 3-, and 5-year AUCs of 0.871, 0.908, and 0.929, with the most significant survival difference (HR = 10.66, 95% CI: 5.06–26.8, p < 0.001). Decision curve analysis also revealed a better net benefit for the multi-omics model. Conclusion Histopathological images can provide complementary features to improve prognostic performance for HNSCC patients. The integrative model of histopathological image features and omics data might serve as an effective tool for survival prediction and risk stratification in clinical practice.
Collapse
Affiliation(s)
- Hao Zeng
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University Collaborative Innovation Center, Chengdu, China
| | - Linyan Chen
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University Collaborative Innovation Center, Chengdu, China
| | - Yeqian Huang
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Yuling Luo
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Xuelei Ma
- State Key Laboratory of Biotherapy, Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University Collaborative Innovation Center, Chengdu, China
| |
Collapse
|
23
|
Yin PN, Kc K, Wei S, Yu Q, Li R, Haake AR, Miyamoto H, Cui F. Histopathological distinction of non-invasive and invasive bladder cancers using machine learning approaches. BMC Med Inform Decis Mak 2020; 20:162. [PMID: 32680493 PMCID: PMC7367328 DOI: 10.1186/s12911-020-01185-z] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Accepted: 07/13/2020] [Indexed: 01/18/2023] Open
Abstract
Background One of the most challenging tasks in bladder cancer diagnosis is to histologically differentiate two early stages, non-invasive Ta and superficially invasive T1, the latter of which is associated with a significantly higher risk of disease progression. Indeed, in a considerable number of cases, Ta and T1 tumors look very similar under the microscope, making the distinction very difficult even for experienced pathologists. Thus, there is an urgent need for a machine learning (ML)-based system to assist in distinguishing between the two stages of bladder cancer. Methods A total of 1177 images of bladder tumor tissues stained with hematoxylin and eosin were collected by pathologists at the University of Rochester Medical Center, which included 460 non-invasive (stage Ta) and 717 invasive (stage T1) tumors. Automatic pipelines were developed to extract features for three invasive patterns characteristic of T1-stage bladder cancer (i.e., desmoplastic reaction, retraction artifact, and abundant pinker cytoplasm), using the image processing software ImageJ and CellProfiler. Features extracted from the images were analyzed by a suite of machine learning approaches. Results We extracted nearly 700 features from the Ta and T1 tumor images. Unsupervised clustering analysis failed to distinguish hematoxylin and eosin images of Ta vs. T1 tumors. With a reduced set of features, we successfully distinguished the 1177 Ta or T1 images with an accuracy of 91–96% using six supervised learning methods. By contrast, convolutional neural network (CNN) models that automatically extract features from images produced an accuracy of 84%, indicating that feature extraction driven by domain knowledge outperforms CNN-based automatic feature extraction. Further analysis revealed that desmoplastic reaction was more important than the other two patterns, and that the number and size of tumor cell nuclei were the most predictive features. Conclusions We provide an ML-empowered, feature-centered, and interpretable diagnostic system to facilitate the accurate staging of Ta and T1 disease, which has the potential to be applied to other types of cancer.
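The feature-table workflow described here (handcrafted measurements followed by a suite of supervised learners) can be sketched with scikit-learn as below; the synthetic features merely stand in for the ImageJ/CellProfiler measurements, and the listed models are generic examples rather than the exact six classifiers used.

```python
# Minimal sketch of the feature-table workflow: morphological measurements
# per image (e.g. nucleus counts/sizes, desmoplasia scores) are compared
# across several supervised classifiers with cross-validation. The synthetic
# features below only stand in for the ImageJ/CellProfiler measurements.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_ta, n_t1 = 460, 717                      # class sizes from the abstract
X = np.vstack([rng.normal(0.0, 1.0, size=(n_ta, 12)),
               rng.normal(0.8, 1.0, size=(n_t1, 12))])   # 12 toy features
y = np.array([0] * n_ta + [1] * n_t1)     # 0 = Ta (non-invasive), 1 = T1

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "linear SVM": SVC(kernel="linear"),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name:>18}: {acc.mean():.3f} +/- {acc.std():.3f}")
```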
Collapse
Affiliation(s)
- Peng-Nien Yin
- Thomas H. Gosnell School of Life Sciences, Rochester Institute of Technology, 1 Lomb Memorial Drive, Rochester, NY, 14623, USA
| | - Kishan Kc
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, 20 Lomb Memorial Drive, Rochester, NY, 14623, USA
| | - Shishi Wei
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, 20 Lomb Memorial Drive, Rochester, NY, 14623, USA
| | - Qi Yu
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, 20 Lomb Memorial Drive, Rochester, NY, 14623, USA
| | - Rui Li
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, 20 Lomb Memorial Drive, Rochester, NY, 14623, USA
| | - Anne R Haake
- Golisano College of Computing and Information Sciences, Rochester Institute of Technology, 20 Lomb Memorial Drive, Rochester, NY, 14623, USA
| | - Hiroshi Miyamoto
- Department of Pathology and Laboratory Medicine, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, NY, 14642, USA.
| | - Feng Cui
- Thomas H. Gosnell School of Life Sciences, Rochester Institute of Technology, 1 Lomb Memorial Drive, Rochester, NY, 14623, USA.
| |
Collapse
|
24
|
Casanova R, Leblond AL, Wu C, Haberecker M, Burger IA, Soltermann A. Enhanced prognostic stratification of neoadjuvant treated lung squamous cell carcinoma by computationally-guided tumor regression scoring. Lung Cancer 2020; 147:49-55. [PMID: 32673826 DOI: 10.1016/j.lungcan.2020.07.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 06/19/2020] [Accepted: 07/02/2020] [Indexed: 11/15/2022]
Abstract
INTRODUCTION The amount of residual tumor burden after neoadjuvant chemotherapy is an important prognosticator, but for non-small cell lung carcinoma (NSCLC), no official regression scoring system has yet been established. Computationally derived histological regression scores could provide unbiased and quantitative readouts to complement the clinical assessment of treatment response. METHODS Histopathologic tumor regression was microscopically assessed on whole cases in a neoadjuvant chemotherapy-treated cohort (NAC, n = 55 patients) of lung squamous cell carcinomas (LSCC). For each patient, the slide showing the least pathologic regression was selected for subsequent computational analysis, and histological features were quantified: percentage of vital tumor cells (cTu.Percentage), total surface covered by vital tumor cells (cTu.Area), area of the largest vital tumor fragment (cTu.Size.max), and total number of vital tumor fragments (cTu.Fragments). A chemo-naïve LSCC cohort (CN, n = 104) was used for reference. For 23 of the 55 patients, [18F]-Fluorodeoxyglucose (FDG) PET/CT measurements of maximum standard uptake value (SUVmax), background subtracted lesion activity (BSL) and background subtracted volume (BSV) were correlated with pathologic regression. Survival analysis was carried out using Cox regression, and receiver operating characteristic (ROC) curve analysis used a 3-year cutoff. RESULTS All computational regression parameters correlated significantly with relative changes of BSV FDG PET/CT values after neoadjuvant chemotherapy. ROC curve analysis of the histological parameters of NAC patients showed that cTu.Percentage was the most accurate prognosticator of overall survival (ROC curve AUC = 0.77, p-value = 0.001, Cox regression HR = 3.6, p = 0.001, variable cutoff ≤ 30%). CONCLUSIONS This study demonstrates the prognostic relevance of computer-derived histopathologic scores. Additionally, the analysis carried out on the slides displaying the least pathologic regression correlated with the overall pathologic response and with PET/CT values. This might improve the objective histopathologic assessment of tumor response in the neoadjuvant setting.
Collapse
Affiliation(s)
- Ruben Casanova
- Institute of Pathology and Molecular Pathology, University Hospital Zurich, Switzerland.
| | - Anne-Laure Leblond
- Institute of Pathology and Molecular Pathology, University Hospital Zurich, Switzerland
| | - Chengguang Wu
- Institute of Pathology and Molecular Pathology, University Hospital Zurich, Switzerland
| | - Martina Haberecker
- Institute of Pathology and Molecular Pathology, University Hospital Zurich, Switzerland
| | - Irene A Burger
- Department of Nuclear Medicine, University Hospital Zurich, Switzerland
| | | |
Collapse
|
25
|
Shifat-E-Rabbi M, Yin X, Fitzgerald CE, Rohde GK. Cell Image Classification: A Comparative Overview. Cytometry A 2020; 97:347-362. [PMID: 32040260 DOI: 10.1002/cyto.a.23984] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Revised: 10/18/2019] [Accepted: 01/18/2020] [Indexed: 12/13/2022]
Abstract
Cell image classification methods are currently being used in numerous applications in cell biology and medicine. Applications include understanding the effects of genes and drugs in screening experiments, understanding the role and subcellular localization of different proteins, as well as diagnosis and prognosis of cancer from images acquired using cytological and histological techniques. This article reviews the three main approaches most often used for cell image classification: numerical feature extraction, end-to-end classification with neural networks (NNs), and transport-based morphometry (TBM). In addition, we provide comparisons on four different cell imaging datasets to highlight the relative strength of each method. The results computed using four publicly available datasets show that numerical features tend to carry the best discriminative information for most of the classification tasks. Results also show that NN-based methods produce state-of-the-art results on the dataset that contains a relatively large number of training samples. Data augmentation or the choice of a more recently reported architecture does not necessarily improve the classification performance of NNs on datasets with a limited number of training samples. If understanding and visualization are desired, TBM methods can offer the ability to invert classification functions and thus can aid in the interpretation of results. These and other comparison outcomes are discussed with the aim of clarifying the advantages and disadvantages of each method. © 2020 International Society for Advancement of Cytometry.
Collapse
Affiliation(s)
- Mohammad Shifat-E-Rabbi
- Imaging and Data Science Lab, Charlottesville, Virginia, 22903
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia, 22903
| | - Xuwang Yin
- Imaging and Data Science Lab, Charlottesville, Virginia, 22903
- Department of Electrical & Computer Engineering, University of Virginia, Charlottesville, Virginia, 22903
| | - Cailey E Fitzgerald
- Imaging and Data Science Lab, Charlottesville, Virginia, 22903
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia, 22903
| | - Gustavo K Rohde
- Imaging and Data Science Lab, Charlottesville, Virginia, 22903
- Department of Biomedical Engineering, University of Virginia, Charlottesville, Virginia, 22903
- Department of Electrical & Computer Engineering, University of Virginia, Charlottesville, Virginia, 22903
| |
Collapse
|
26
|
Liao H, Xiong T, Peng J, Xu L, Liao M, Zhang Z, Wu Z, Yuan K, Zeng Y. Classification and Prognosis Prediction from Histopathological Images of Hepatocellular Carcinoma by a Fully Automated Pipeline Based on Machine Learning. Ann Surg Oncol 2020; 27:2359-2369. [PMID: 31916093 DOI: 10.1245/s10434-019-08190-1] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Indexed: 02/05/2023]
Abstract
OBJECTIVE The aim of this study was to develop quantitative feature-based models from histopathological images to distinguish hepatocellular carcinoma (HCC) from adjacent normal tissue and predict the prognosis of HCC patients after surgical resection. METHODS A fully automated pipeline was constructed using computational approaches to analyze the quantitative features of histopathological slides of HCC patients, in which the features were extracted from the hematoxylin and eosin (H&E)-stained whole-slide images of HCC patients from The Cancer Genome Atlas and from tissue microarray images from West China Hospital. The extracted features were used to train statistical models that classify tissue slides and predict patients' survival outcomes by machine-learning methods. RESULTS A total of 1733 quantitative image features were extracted from each histopathological slide. The diagnostic classifier based on 31 features was able to successfully distinguish HCC from adjacent normal tissues in both the test [area under the receiver operating characteristic curve (AUC) 0.988] and external validation sets (AUC 0.886). The random-forest prognostic model using 46 features was able to significantly stratify patients in each set into longer- or shorter-term survival groups according to their assigned risk scores. Moreover, the prognostic model we constructed showed predictive accuracy comparable to that of TNM staging systems in predicting patients' survival at different time points after surgery. CONCLUSIONS Our findings suggest that machine-learning models derived from image features can assist clinicians in HCC diagnosis and prognosis prediction after hepatectomy.
Collapse
Affiliation(s)
- Haotian Liao
- Department of Liver Surgery and Liver Transplantation, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University and Collaborative Innovation Center of Biotherapy, Chengdu, China
| | - Tianyuan Xiong
- Department of Cardiology, West China Hospital, Sichuan University, Chengdu, China
| | - Jiajie Peng
- School of Computer Science, Northwestern Polytechnical University, Xi'an, China
| | - Lin Xu
- Department of Liver Surgery and Liver Transplantation, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University and Collaborative Innovation Center of Biotherapy, Chengdu, China
| | - Mingheng Liao
- Department of Liver Surgery and Liver Transplantation, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University and Collaborative Innovation Center of Biotherapy, Chengdu, China
| | - Zhen Zhang
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
| | - Zhenru Wu
- Laboratory of Pathology, Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
| | - Kefei Yuan
- Department of Liver Surgery and Liver Transplantation, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University and Collaborative Innovation Center of Biotherapy, Chengdu, China.
| | - Yong Zeng
- Department of Liver Surgery and Liver Transplantation, State Key Laboratory of Biotherapy and Cancer Center, West China Hospital, Sichuan University and Collaborative Innovation Center of Biotherapy, Chengdu, China.
| |
Collapse
|
28
|
Li Q, Wang X, Liang F, Xiao G. A BAYESIAN MARK INTERACTION MODEL FOR ANALYSIS OF TUMOR PATHOLOGY IMAGES. Ann Appl Stat 2019; 13:1708-1732. [PMID: 34349870 PMCID: PMC8330435 DOI: 10.1214/19-aoas1254] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
With the advance of imaging technology, digital pathology imaging of tumor tissue slides is becoming a routine clinical procedure for cancer diagnosis. This process produces massive imaging data that capture histological details in high resolution. Recent developments in deep-learning methods have enabled us to identify and classify individual cells from digital pathology images at large scale. Reliable statistical approaches to model the spatial pattern of cells can provide new insight into tumor progression and shed light on the biological mechanisms of cancer. We consider the problem of modeling spatial correlations among three commonly seen cells observed in tumor pathology images. A novel geostatistical marking model with interpretable underlying parameters is proposed in a Bayesian framework. We use auxiliary variable MCMC algorithms to sample from the posterior distribution with an intractable normalizing constant. We demonstrate how this model-based analysis can lead to sharper inferences than ordinary exploratory analyses, by means of application to three benchmark datasets and a case study on the pathology images of 188 lung cancer patients. The case study shows that the spatial correlation between tumor and stromal cells predicts patient prognosis. This statistical methodology not only presents a new model for characterizing spatial correlations in a multitype spatial point pattern conditioning on the locations of the points, but also provides a new perspective for understanding the role of cell-cell interactions in cancer progression.
Collapse
|
29
|
Wang S, Zhu Y, Yu L, Chen H, Lin H, Wan X, Fan X, Heng PA. RMDL: Recalibrated multi-instance deep learning for whole slide gastric image classification. Med Image Anal 2019; 58:101549. [PMID: 31499320 DOI: 10.1016/j.media.2019.101549] [Citation(s) in RCA: 84] [Impact Index Per Article: 16.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 08/24/2019] [Accepted: 08/29/2019] [Indexed: 12/11/2022]
Abstract
The whole slide histopathology images (WSIs) play a critical role in gastric cancer diagnosis. However, due to the large scale of WSIs and the various sizes of abnormal areas, selecting informative regions and analyzing them is quite challenging during the automatic diagnosis process. Multi-instance learning based on the most discriminative instances can be of great benefit for whole slide gastric image diagnosis. In this paper, we design a recalibrated multi-instance deep learning method (RMDL) to address this challenging problem. We first select the discriminative instances and then utilize these instances to diagnose diseases based on the proposed RMDL approach. The designed RMDL network is capable of capturing instance-wise dependencies and recalibrating instance features according to the importance coefficient learned from the fused features. Furthermore, we build a large whole-slide gastric histopathology image dataset with detailed pixel-level annotations. Experimental results on the constructed gastric dataset demonstrate a significant improvement in accuracy for our proposed framework compared with other state-of-the-art multi-instance learning methods. Moreover, our method is general and can be extended to diagnosis tasks for other cancer types based on WSIs.
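A minimal sketch of recalibrated multi-instance pooling, assuming a generic gated-attention formulation rather than the exact RMDL module: instance features are weighted by learned importance coefficients before being pooled into a slide-level representation.

```python
# Sketch of recalibrated multi-instance pooling: instance features from
# selected discriminative patches are weighted by learned coefficients
# before being pooled into a slide-level representation. This attention
# pooling is a generic stand-in for the published RMDL module.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                        # bag: (n_instances, feat_dim)
        w = torch.softmax(self.attn(bag), dim=0)   # importance coefficients
        slide_feat = (w * bag).sum(dim=0)          # recalibrated pooling
        return self.head(slide_feat), w.squeeze(-1)

bag = torch.randn(50, 256)                  # 50 patch features for one slide
model = AttentionMIL()
logits, weights = model(bag)
print(logits.shape, weights.shape)          # torch.Size([2]) torch.Size([50])
```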
Collapse
Affiliation(s)
- Shujun Wang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Yaxi Zhu
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, China
| | - Lequan Yu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Hao Chen
- Imsight Medical Technology Co., Ltd., China.
| | - Huangjing Lin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Imsight Medical Technology Co., Ltd., China
| | - Xiangbo Wan
- Department of Radiation Oncology, The Sixth Affiliated Hospital of Sun Yat-sen University, China
| | - Xinjuan Fan
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, China.
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
30
|
Taveira LFR, Kurc T, Melo ACMA, Kong J, Bremer E, Saltz JH, Teodoro G. Multi-objective Parameter Auto-tuning for Tissue Image Segmentation Workflows. J Digit Imaging 2019; 32:521-533. [PMID: 30402669 PMCID: PMC6499855 DOI: 10.1007/s10278-018-0138-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022] Open
Abstract
We propose a software platform that integrates methods and tools for multi-objective parameter auto-tuning in tissue image segmentation workflows. The goal of our work is to provide an approach for improving the accuracy of nucleus/cell segmentation pipelines by tuning their input parameters. The shape, size, and texture features of nuclei in tissue are important biomarkers for disease prognosis, and accurate computation of these features depends on accurate delineation of boundaries of nuclei. Input parameters in many nucleus segmentation workflows affect segmentation accuracy and have to be tuned for optimal performance. This is a time-consuming and computationally expensive process; automating this step facilitates more robust image segmentation workflows and enables more efficient application of image analysis in large image datasets. Our software platform adjusts the parameters of a nuclear segmentation algorithm to maximize the quality of image segmentation results while minimizing the execution time. It implements several optimization methods to search the parameter space efficiently. In addition, the methodology is developed to execute on high-performance computing systems to reduce the execution time of the parameter tuning phase. These capabilities are packaged in a Docker container for easy deployment and can be used through a friendly interface extension in 3D Slicer. Our results using three real-world image segmentation workflows demonstrate that the proposed solution is able to (1) search a small fraction (about 100 points) of the parameter space, which contains billions to trillions of points, and improve the quality of segmentation output by 1.20×, 1.29×, and 1.29×, on average; (2) decrease the execution time of a segmentation workflow by up to 11.79× while improving output quality; and (3) effectively use parallel systems to accelerate parameter tuning and segmentation phases.
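The multi-objective flavour of the tuning problem can be illustrated with a toy random search that scores each candidate parameter set on segmentation quality and runtime and keeps the Pareto-optimal set; the objective function below is a synthetic stand-in for running a real segmentation workflow.

```python
# Sketch of multi-objective parameter tuning: candidate parameter sets are
# scored on segmentation quality (to maximise) and runtime (to minimise),
# and the Pareto-optimal subset is reported. The toy objective below stands
# in for actually running a nucleus-segmentation workflow.
import numpy as np

rng = np.random.default_rng(0)

def evaluate(params):
    """Hypothetical workflow run: returns (dice, seconds) for a parameter set."""
    thr, min_size = params
    dice = 0.7 + 0.2 * np.exp(-((thr - 0.55) ** 2) / 0.02) - 0.0005 * min_size
    seconds = 5.0 + 40.0 * thr + 0.01 * min_size + rng.normal(0, 0.5)
    return dice, seconds

candidates = np.column_stack([rng.uniform(0.3, 0.9, 100),   # threshold
                              rng.uniform(10, 200, 100)])   # min object size
scores = np.array([evaluate(p) for p in candidates])        # (dice, time)

def pareto_front(scores):
    """Keep points not dominated by any other (higher dice, lower time)."""
    keep = []
    for i, (d, t) in enumerate(scores):
        dominated = np.any((scores[:, 0] >= d) & (scores[:, 1] <= t) &
                           ((scores[:, 0] > d) | (scores[:, 1] < t)))
        if not dominated:
            keep.append(i)
    return keep

for i in pareto_front(scores):
    print(f"threshold={candidates[i,0]:.2f} min_size={candidates[i,1]:5.1f} "
          f"dice={scores[i,0]:.3f} time={scores[i,1]:5.1f}s")
```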
Collapse
Affiliation(s)
- Luis F R Taveira
- Department of Computer Science, University of Brasília, Brasília, Brazil
| | - Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, TN, USA
| | - Alba C M A Melo
- Department of Computer Science, University of Brasília, Brasília, Brazil
| | - Jun Kong
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
- Department of Biomedical Engineering, Emory - Georgia Institute of Technology, Atlanta, GA, USA
- Department of Mathematics and Statistics, Georgia State University, Atlanta, GA, USA
| | - Erich Bremer
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - Joel H Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
| | - George Teodoro
- Department of Computer Science, University of Brasília, Brasília, Brazil.
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA.
| |
Collapse
|
31
|
Sari CT, Gunduz-Demir C. Unsupervised Feature Extraction via Deep Learning for Histopathological Classification of Colon Tissue Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1139-1149. [PMID: 30403624 DOI: 10.1109/tmi.2018.2879369] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Histopathological examination is today's gold standard for cancer diagnosis. However, this task is time consuming and prone to errors as it requires a detailed visual inspection and interpretation by a pathologist. Digital pathology aims at alleviating these problems by providing computerized methods that quantitatively analyze digitized histopathological tissue images. The performance of these methods mainly relies on the features that they use, and thus their success strictly depends on the ability of these features to quantify the histopathology domain. With this motivation, this paper presents a new unsupervised feature extractor for effective representation and classification of histopathological tissue images. This feature extractor makes three main contributions: First, it proposes to identify salient subregions in an image, based on domain-specific prior knowledge, and to quantify the image by employing only the characteristics of these subregions instead of considering the characteristics of all image locations. Second, it introduces a new deep learning-based technique that quantizes the salient subregions by extracting a set of features directly learned on image data and uses the distribution of these quantizations for image representation and classification. To this end, the proposed deep learning-based technique constructs a deep belief network of restricted Boltzmann machines (RBMs), defines the activation values of the hidden unit nodes in the final RBM as the features, and learns the quantizations by clustering these features in an unsupervised way. Third, this extractor is the first example of successfully using restricted Boltzmann machines in the domain of histopathological image analysis. Our experiments on microscopic colon tissue images reveal that the proposed feature extractor obtains more accurate classification results than its counterparts.
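A simplified sketch of the quantisation idea, assuming a single RBM in place of the full deep belief network: hidden-unit activations of salient subregions are clustered, and each image is represented by the histogram of its subregions' cluster labels.

```python
# Sketch of the unsupervised pipeline: hidden-unit activations of an RBM
# computed on small salient subregions are clustered (quantised), and each
# tissue image is represented by the histogram of its subregions' cluster
# labels. Patch extraction and the deep-belief stacking are simplified here
# to a single RBM for illustration.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy salient subregions: 500 patches of 8x8 "pixels" scaled to [0, 1].
patches = rng.random((500, 64))

rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20,
                   random_state=0)
hidden = rbm.fit_transform(patches)          # hidden-unit activations = features

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0)
labels = kmeans.fit_predict(hidden)          # unsupervised quantisation

def image_histogram(patch_labels, n_clusters=16):
    """Bag-of-quantisations representation of one tissue image."""
    counts = np.bincount(patch_labels, minlength=n_clusters)
    return counts / counts.sum()

# Pretend the first 100 patches belong to one image:
print(image_histogram(labels[:100]))
```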
Collapse
|
32
|
Zhong T, Wu M, Ma S. Examination of Independent Prognostic Power of Gene Expressions and Histopathological Imaging Features in Cancer. Cancers (Basel) 2019; 11:E361. [PMID: 30871256 PMCID: PMC6468814 DOI: 10.3390/cancers11030361] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Revised: 03/04/2019] [Accepted: 03/10/2019] [Indexed: 12/26/2022] Open
Abstract
Cancer prognosis is of essential interest, and extensive research has been conducted searching for biomarkers with prognostic power. Recent studies have shown that both omics profiles and histopathological imaging features have prognostic power. There are also studies exploring the integration of the two types of measurements for prognosis modeling. However, there is a lack of studies rigorously examining whether omics measurements have independent prognostic power conditional on histopathological imaging features, and vice versa. In this article, we adopt a rigorous statistical testing framework and test whether an individual gene expression measurement can improve prognosis modeling conditional on high-dimensional imaging features; a parallel analysis is conducted reversing the roles of gene expressions and imaging features. In the analysis of The Cancer Genome Atlas (TCGA) lung adenocarcinoma and liver hepatocellular carcinoma data, it is found that multiple individual genes, conditional on imaging features, can lead to significant improvement in prognosis modeling; however, individual imaging features, conditional on gene expressions, offer only limited prognostic power. Being among the first to examine this independent prognostic power, this study may assist in better understanding the "connectedness" between omics profiles and histopathological imaging features and provide important insights for data integration in cancer modeling.
Collapse
Affiliation(s)
- Tingyan Zhong
- SJTU-Yale Joint Center for Biostatistics, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai 200240, China.
| | - Mengyun Wu
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai 200433, China.
| | - Shuangge Ma
- Department of Biostatistics, Yale University, New Haven, CT 06520, USA.
| |
Collapse
|
33
|
Hou L, Nguyen V, Kanevsky AB, Samaras D, Kurc TM, Zhao T, Gupta RR, Gao Y, Chen W, Foran D, Saltz JH. Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images. PATTERN RECOGNITION 2019; 86:188-200. [PMID: 30631215 PMCID: PMC6322841 DOI: 10.1016/j.patcog.2018.09.007] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects and encodes nuclei in image patches in tissue images into sparse feature maps that encode both the location and appearance of nuclei. A primary contribution of our work is the development of an unsupervised detection network by using the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we are able to achieve comparable performance with only 5% of the fully-supervised annotation cost.
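The sparsity mechanism can be illustrated with a minimal convolutional autoencoder whose encoder feature maps are penalised with an L1 term; the architecture and penalty weight are assumptions, not the published CAE.

```python
# Sketch of a sparse convolutional autoencoder: the encoder produces feature
# maps whose activations are pushed toward zero with an L1 penalty, so only
# a few spatial locations (candidate nuclei) stay active; the decoder
# reconstructs the patch from those sparse maps. Sizes and the penalty
# weight are illustrative assumptions.
import torch
import torch.nn as nn

class SparseCAE(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)                  # sparse feature maps
        return self.decoder(z), z

model = SparseCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
sparsity_weight = 1e-3                       # assumed penalty strength

x = torch.rand(8, 3, 64, 64)                 # toy batch of tissue patches
for step in range(5):
    recon, z = model(x)
    loss = mse(recon, x) + sparsity_weight * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"step {step}: loss={loss.item():.4f}")
```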
Collapse
Affiliation(s)
- Le Hou
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Vu Nguyen
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Ariel B Kanevsky
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Montreal Institute for Learning Algorithms, University of Montreal, Montreal, Canada
| | - Dimitris Samaras
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Tahsin M Kurc
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Oak Ridge National Laboratory, Oak Ridge, TN, USA
| | - Tianhao Zhao
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
| | - Rajarsi R Gupta
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
| | - Yi Gao
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China
| | - Wenjin Chen
- Center for Biomedical Imaging & Informatics, Rutgers, the State University of New Jersey,New Brunswick, NJ, USA
- Rutgers Cancer Institute of New Jersey, Rutgers, the State University of New Jersey, NJ, USA
| | - David Foran
- Center for Biomedical Imaging & Informatics, Rutgers, the State University of New Jersey,New Brunswick, NJ, USA
- Rutgers Cancer Institute of New Jersey, Rutgers, the State University of New Jersey, NJ, USA
- Div. of Medical Informatics, Rutgers-Robert Wood Johnson Medical School, Piscataway Township, NJ, USA
| | - Joel H Saltz
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
- Cancer Center, Stony Brook University Hospital, Stony Brook, NY, USA
| |
Collapse
|
34
|
Lichtblau D, Stoean C. Cancer diagnosis through a tandem of classifiers for digitized histopathological slides. PLoS One 2019; 14:e0209274. [PMID: 30650087 PMCID: PMC6334911 DOI: 10.1371/journal.pone.0209274] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2018] [Accepted: 12/03/2018] [Indexed: 11/18/2022] Open
Abstract
The current research study is concerned with the automated differentiation of histopathological slides of colon tissues with respect to four classes (healthy tissue and cancerous tissue of grades 1, 2, or 3) through an optimized ensemble of predictors. Six distinct classifiers with prediction accuracies ranging from 87% to 95% are considered for the task. The proposed method of combining them takes into account the probabilities assigned by the individual classifiers for each sample to each of the four classes, optimizes weights for each technique by differential evolution, and attains an accuracy that is significantly better than the individual results. Moreover, a degree of confidence is defined that allows the pathologists to separate the data into two distinct sets: one that is correctly classified with a high level of confidence and the rest, which would need their further attention. The tandem is also validated on other benchmark datasets. The proposed methodology proves to be efficient in improving the classification accuracy of each algorithm taken separately and performs reasonably well on other datasets, even with default weights. In addition, by establishing a degree of confidence the method becomes more viable for use by actual practitioners.
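The weight-optimisation step can be sketched with SciPy's differential evolution: given each base classifier's class-probability matrix, the search looks for the weights of the averaged probabilities that maximise accuracy. The simulated classifiers below stand in for the six real ones.

```python
# Sketch of the ensemble-weighting idea: each base classifier outputs class
# probabilities per sample; differential evolution searches for per-classifier
# weights that maximise the accuracy of the weighted probability average.
# The simulated probability tensors stand in for the real classifiers.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_samples, n_classes, n_clf = 200, 4, 3
y = rng.integers(0, n_classes, size=n_samples)

def fake_classifier(acc):
    """Probability matrix that is correct with probability `acc`."""
    probs = rng.random((n_samples, n_classes))
    correct = rng.random(n_samples) < acc
    probs[np.arange(n_samples)[correct], y[correct]] += 2.0
    return probs / probs.sum(axis=1, keepdims=True)

clf_probs = np.stack([fake_classifier(a) for a in (0.87, 0.90, 0.95)])

def neg_accuracy(weights):
    w = np.asarray(weights) / (np.sum(weights) + 1e-12)
    combined = np.tensordot(w, clf_probs, axes=1)      # weighted average
    return -np.mean(np.argmax(combined, axis=1) == y)

result = differential_evolution(neg_accuracy, bounds=[(0, 1)] * n_clf, seed=0,
                                maxiter=50, tol=1e-6)
print("best weights:", np.round(result.x / result.x.sum(), 3),
      "ensemble accuracy:", -result.fun)
```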
Collapse
Affiliation(s)
| | - Catalin Stoean
- Faculty of Sciences, University of Craiova, Craiova, Romania
| |
Collapse
|
35
|
Wang X, Wang D, Yao Z, Xin B, Wang B, Lan C, Qin Y, Xu S, He D, Liu Y. Machine Learning Models for Multiparametric Glioma Grading With Quantitative Result Interpretations. Front Neurosci 2019; 12:1046. [PMID: 30686996 PMCID: PMC6337068 DOI: 10.3389/fnins.2018.01046] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2018] [Accepted: 12/24/2018] [Indexed: 12/11/2022] Open
Abstract
Gliomas are the most common primary malignant brain tumors in adults. Accurate grading is crucial as therapeutic strategies are often disparate for different grades and may influence patient prognosis. This study aims to provide an automated glioma grading platform on the basis of machine learning models. In this paper, we investigate the contributions of multiple parameters from multimodal data, including imaging parameters or features from whole-slide images (WSIs) and the proliferation marker Ki-67, to automated brain tumor grading. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. On the basis of machine learning models, our platform classifies gliomas into grades II, III, and IV. Furthermore, we quantitatively interpret and reveal the important parameters contributing to grading with the Local Interpretable Model-Agnostic Explanations (LIME) algorithm. The quantitative analysis and explanation may assist clinicians in better understanding the disease and accordingly choosing optimal treatments to improve clinical outcomes. The performance of our grading model was evaluated with cross-validation, which randomly divided the patients into non-overlapping training and testing sets and repeatedly validated the model on the different testing sets. The primary results indicated that this modular platform approach achieved the highest grading accuracy of 0.90 ± 0.04 with the support vector machine (SVM) algorithm, with grading accuracies of 0.91 ± 0.08, 0.90 ± 0.08, and 0.90 ± 0.07 for grade II, III, and IV gliomas, respectively.
Collapse
Affiliation(s)
- Xiuying Wang
- School of Information Technologies, The University of Sydney, Sydney, NSW, Australia
| | - Dingqian Wang
- School of Information Technologies, The University of Sydney, Sydney, NSW, Australia
| | - Zhigang Yao
- Department of Pathology, Provincial Hospital Affiliated to Shandong University, Jinan, China
| | - Bowen Xin
- School of Information Technologies, The University of Sydney, Sydney, NSW, Australia
| | - Bao Wang
- School of Medicine, Shandong University, Jinan, China
| | - Chuanjin Lan
- School of Medicine, Shandong University, Jinan, China
| | - Yejun Qin
- Department of Pathology, Provincial Hospital Affiliated to Shandong University, Jinan, China
| | - Shangchen Xu
- Department of Neurosurgery, Provincial Hospital Affiliated to Shandong University, Jinan, China
| | - Dazhong He
- School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China
| | - Yingchao Liu
- Department of Neurosurgery, Provincial Hospital Affiliated to Shandong University, Jinan, China
| |
Collapse
|
36
|
Hernandez-Cabronero M, Sanchez V, Blanes I, Auli-Llinas F, Marcellin MW, Serra-Sagrista J. Mosaic-Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:21-32. [PMID: 29994394 DOI: 10.1109/tmi.2018.2852685] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though the state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes.
Collapse
|
37
|
Nondestructive Identification of Salmon Adulteration with Water Based on Hyperspectral Data. J FOOD QUALITY 2018. [DOI: 10.1155/2018/1809297] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
For the identification of salmon adulteration with water injection, a nondestructive identification method based on hyperspectral images was proposed. The hyperspectral images of salmon fillets in the visible and near-infrared range (390–1050 nm) were obtained with a hyperspectral imaging system. The original hyperspectral data were processed through principal-component analysis (PCA). According to the image quality and PCA parameters, the second principal-component (PC2) image was selected as the feature image, and the wavelengths corresponding to the local extremum values of the feature image weighting coefficients were extracted as feature wavelengths, namely 454.9, 512.3, and 569.1 nm. On this basis, the color combined with spectra at feature wavelengths, texture combined with spectra at feature wavelengths, and color-texture combined with spectra at feature wavelengths were independently set as the input for modeling of salmon adulteration identification based on the self-organizing feature map (SOM) network. The distances between neighboring neurons and the feature weights of the models were analyzed to realize the visualization of identification results. The results showed that the SOM-based model, with texture-color combined with fusion features of spectra at feature wavelengths as the input, had the best performance, with an identification accuracy as high as 96.7%.
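The wavelength-selection step (local extrema of the PC2 weighting coefficients) can be sketched as follows with simulated spectra; band positions and the extremum window are illustrative assumptions.

```python
# Sketch of the wavelength-selection step: pixel spectra are decomposed
# with PCA, and the wavelengths at which the second principal component's
# loading (weighting coefficient) has local extrema are taken as feature
# wavelengths. Spectra and band positions below are simulated.
import numpy as np
from sklearn.decomposition import PCA
from scipy.signal import argrelextrema

rng = np.random.default_rng(0)
wavelengths = np.linspace(390, 1050, 120)            # nm, as in the abstract
spectra = rng.random((5000, wavelengths.size))       # toy pixel spectra

pca = PCA(n_components=3)
pca.fit(spectra)
pc2_loading = pca.components_[1]                     # weighting coefficients of PC2

maxima = argrelextrema(pc2_loading, np.greater, order=5)[0]
minima = argrelextrema(pc2_loading, np.less, order=5)[0]
feature_idx = np.sort(np.concatenate([maxima, minima]))
print("candidate feature wavelengths (nm):", np.round(wavelengths[feature_idx], 1))
```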
Collapse
|
38
|
Gecer B, Aksoy S, Mercan E, Shapiro LG, Weaver DL, Elmore JG. Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks. PATTERN RECOGNITION 2018; 84:345-356. [PMID: 30679879 PMCID: PMC6342566 DOI: 10.1016/j.patcog.2018.07.022] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained from consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, and invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both saliency detector and classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.
Collapse
Affiliation(s)
- Baris Gecer
- Department of Computer Engineering, Bilkent University, Ankara, 06800, Turkey
| | - Selim Aksoy
- Department of Computer Engineering, Bilkent University, Ankara, 06800, Turkey
| | - Ezgi Mercan
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
| | - Linda G. Shapiro
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA 98195, USA
| | - Donald L. Weaver
- Department of Pathology, University of Vermont, Burlington, VT 05405, USA
| | - Joann G. Elmore
- Department of Medicine, University of Washington, Seattle, WA 98195, USA
| |
Collapse
|
39
|
CNN cascades for segmenting sparse objects in gigapixel whole slide images. Comput Med Imaging Graph 2018; 71:40-48. [PMID: 30472409 DOI: 10.1016/j.compmedimag.2018.11.002] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2017] [Revised: 11/12/2018] [Accepted: 11/13/2018] [Indexed: 01/26/2023]
Abstract
Due to the increasing availability of whole slide scanners facilitating digitization of histopathological tissue, large amounts of digital image data are being generated. Accordingly, there is a strong demand for the development of computer-based image analysis systems. Here, we address application scenarios in histopathology consisting of sparse, small objects-of-interest occurring in large gigapixel images. To tackle the resulting challenges, we propose two different CNN cascade approaches, which are subsequently applied to segment the glomeruli in whole slide images of the kidney and compared with conventional fully-convolutional networks. To facilitate unbiased evaluation, eight-fold cross-validation is performed and means and standard deviations are reported. Overall, with the best performing cascade approach, single CNNs are outperformed and a pixel-level Dice similarity coefficient of 0.90 is obtained (precision: 0.89, recall: 0.92). Combined with qualitative and further object-level analyses, the obtained results are assessed as excellent, also in comparison with previous approaches. We can state that one of the proposed cascade networks in particular proved to be a highly powerful tool, providing the best segmentation accuracies while keeping the computing time at the lowest level. This work facilitates accurate automated segmentation of renal whole slide images, which consequently allows fully-automated big-data analyses for the assessment of medical treatments. Furthermore, this approach can also easily be adapted to other similar biomedical application scenarios.
Collapse
|
40
|
Gheisari S, Catchpoole DR, Charlton A, Melegh Z, Gradhand E, Kennedy PJ. Computer Aided Classification of Neuroblastoma Histological Images Using Scale Invariant Feature Transform with Feature Encoding. Diagnostics (Basel) 2018; 8:diagnostics8030056. [PMID: 30154334 PMCID: PMC6165255 DOI: 10.3390/diagnostics8030056] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2018] [Revised: 08/15/2018] [Accepted: 08/23/2018] [Indexed: 11/16/2022] Open
Abstract
Neuroblastoma is the most common extracranial solid malignancy in early childhood. Optimal management of neuroblastoma depends on many factors, including histopathological classification. Although histopathological study is considered the gold standard for classification of neuroblastoma histological images, computers can help to extract many more features, some of which may not be recognizable by human eyes. This paper proposes a combination of the Scale Invariant Feature Transform with a feature encoding algorithm to extract highly discriminative features. The distinctive image features are then classified by a Support Vector Machine classifier into five clinically relevant classes. The advantage of our model is that it extracts features that are more robust to scale variation compared to the Patched Completed Local Binary Pattern and Completed Local Binary Pattern methods. We gathered a database of 1043 histologic images of neuroblastic tumours classified into five subtypes. Our approach identified features that outperformed the state-of-the-art on both our neuroblastoma dataset and a benchmark breast cancer dataset. Our method shows promise for the classification of neuroblastoma histological images.
Collapse
Affiliation(s)
- Soheila Gheisari
- Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia.
| | - Daniel R Catchpoole
- The Tumour Bank, The Children's Cancer Research Unit, The Kids Research Institute, The Children's Hospital at Westmead, Locked Bag 4001, Westmead, NSW 2145, Australia.
| | - Amanda Charlton
- Department of Histopathology, Auckland City Hospital, Auckland 1023, New Zealand.
- Department of Molecular Medicine and Pathology, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand.
| | - Zsombor Melegh
- Department of Pathology, Southmead Hospital, Bristol BS10 5NB, UK.
| | - Elise Gradhand
- Department of Cellular Pathology, Pathology Science Building, Southmead Hospital, Bristol BS10 5NB, UK.
| | - Paul J Kennedy
- Centre for Artificial Intelligence, Faculty of Engineering and Information Technology, University of Technology Sydney, Ultimo, NSW 2007, Australia.
| |
Collapse
|
41
|
Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. JOURNAL OF HEALTHCARE ENGINEERING 2018; 2018:8961781. [PMID: 30034677 PMCID: PMC6033298 DOI: 10.1155/2018/8961781] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Revised: 05/14/2018] [Accepted: 05/27/2018] [Indexed: 02/06/2023]
Abstract
Deep learning using convolutional neural networks (CNNs) is a distinguished tool for many image classification tasks. Due to its outstanding robustness and generalization, it is also expected to play a key role in facilitating advanced computer-aided diagnosis (CAD) for pathology images. However, the shortage of well-annotated pathology image data for training deep neural networks has become a major issue at present because of the high cost of annotation based on the pathologist's professional observation. Faced with this problem, transfer learning techniques are generally used to reinforce the capacity of deep neural networks. In order to further boost the performance of state-of-the-art deep neural networks and alleviate the insufficiency of well-annotated data, this paper presents a novel stepwise fine-tuning-based deep learning scheme for gastric pathology image classification and establishes a new type of target-correlative intermediate dataset. Our proposed scheme is deemed capable of making the deep neural network imitate the pathologist's perception process and acquire pathology-related knowledge in advance, but with very limited extra cost in data annotation. The experiments are conducted with both well-annotated gastric pathology data and the proposed target-correlative intermediate data on several state-of-the-art deep neural networks. The results consistently demonstrate the feasibility and superiority of our proposed scheme for boosting classification performance.
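A rough sketch of stepwise fine-tuning, assuming a toy network and toy datasets: the head is trained first on an intermediate dataset with earlier layers frozen, and layers are then progressively unfrozen before fine-tuning on the target pathology data.

```python
# Sketch of stepwise fine-tuning: the network is first trained on an
# intermediate, target-correlative dataset with early layers frozen, then
# progressively unfrozen and trained on the final gastric pathology data.
# The toy model and the two "datasets" are placeholders for the real ones.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # stage 0
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # stage 1
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))      # head

def set_trainable(module_indices, trainable):
    for idx in module_indices:
        for p in model[idx].parameters():
            p.requires_grad = trainable

def train_step(x, y):
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.Adam(params, lr=1e-4)
    loss = nn.CrossEntropyLoss()(model(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

x_inter, y_inter = torch.rand(8, 3, 64, 64), torch.randint(0, 2, (8,))
x_final, y_final = torch.rand(8, 3, 64, 64), torch.randint(0, 2, (8,))

# Step 1: train only the head on the intermediate dataset.
set_trainable(range(0, 6), False)
print("head only:", train_step(x_inter, y_inter))

# Step 2: unfreeze the last convolutional stage as well.
set_trainable(range(3, 6), True)
print("stage 1 + head:", train_step(x_inter, y_inter))

# Step 3: unfreeze everything and fine-tune on the target pathology data.
set_trainable(range(0, 6), True)
print("full model on target data:", train_step(x_final, y_final))
```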
Collapse
Affiliation(s)
- Jia Qu
- Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
| | - Nobuyuki Hiruta
- Department of Surgical Pathology, Toho University Sakura Medical Center, Sakura 285-8741, Japan
| | - Kensuke Terai
- Department of Surgical Pathology, Toho University Sakura Medical Center, Sakura 285-8741, Japan
| | - Hirokazu Nosato
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
| | - Masahiro Murakawa
- Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
| | - Hidenori Sakanashi
- Department of Intelligent Interaction Technologies, University of Tsukuba, Tsukuba 305-8573, Japan
- Artificial Intelligence Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
| |
Collapse
|
42
|
Xu H, Lu C, Berendt R, Jha N, Mandal M. Automated analysis and classification of melanocytic tumor on skin whole slide images. Comput Med Imaging Graph 2018; 66:124-134. [DOI: 10.1016/j.compmedimag.2018.01.008] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Revised: 12/24/2017] [Accepted: 01/18/2018] [Indexed: 10/18/2022]
|
43
|
High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: Application to invasive breast cancer detection. PLoS One 2018; 13:e0196828. [PMID: 29795581 PMCID: PMC5967747 DOI: 10.1371/journal.pone.0196828] [Citation(s) in RCA: 64] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 04/22/2018] [Indexed: 12/30/2022] Open
Abstract
Precise detection of invasive cancer on whole-slide images (WSI) is a critical first step in the digital pathology tasks of diagnosis and grading. The convolutional neural network (CNN) is the most popular representation learning method for computer vision tasks and has been successfully applied in digital pathology, including tumor and mitosis detection. However, CNNs are typically only tenable with relatively small image sizes (200 × 200 pixels). Only recently have fully convolutional networks (FCNs) become able to deal with larger image sizes (500 × 500 pixels) for semantic segmentation. Hence, the direct application of CNNs to WSI is not computationally feasible because, for a WSI, a CNN would require billions or trillions of parameters. To alleviate this issue, this paper presents a novel method, High-throughput Adaptive Sampling for whole-slide Histopathology Image analysis (HASHI), which involves: i) a new efficient adaptive sampling method based on probability gradient and quasi-Monte Carlo sampling, and ii) a powerful representation learning classifier based on CNNs. We applied HASHI to automated detection of invasive breast cancer on WSI. HASHI was trained and validated using three different data cohorts involving nearly 500 cases and then independently tested on 195 studies from The Cancer Genome Atlas. The results show that (1) the adaptive sampling method is an effective strategy to deal with WSI without compromising prediction accuracy, obtaining results comparable to dense sampling (∼6 million samples in 24 hours) with far fewer samples (∼2,000 samples in 1 minute), and (2) on an independent test dataset, HASHI is effective and robust to data from multiple sites, scanners, and platforms, achieving an average Dice coefficient of 76%.
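The gradient-guided adaptive sampling idea can be sketched as follows, with a synthetic probability field standing in for the patch CNN: sparse predictions are interpolated, and new sample locations are drawn preferentially where the interpolated probability changes fastest.

```python
# Sketch of gradient-guided adaptive sampling: an initial (quasi-)random set
# of locations is classified, the sparse probabilities are interpolated to a
# coarse grid, and new samples are drawn preferentially where the interpolated
# probability changes fastest (high gradient = likely tumor boundary).
# The "classifier" here is a synthetic probability field, not a CNN.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

def classify(points):
    """Stand-in for the patch CNN: probability high inside a disc 'tumor'."""
    d = np.linalg.norm(points - np.array([0.6, 0.4]), axis=1)
    return 1.0 / (1.0 + np.exp((d - 0.25) / 0.03))

samples = rng.random((200, 2))                     # initial random sites
probs = classify(samples)

gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])

for it in range(3):
    field = griddata(samples, probs, (gx, gy), method="linear", fill_value=0.5)
    grad_mag = np.hypot(*np.gradient(field))
    weights = grad_mag.ravel() + 1e-6
    idx = rng.choice(len(grid_pts), size=200, p=weights / weights.sum())
    new_pts = grid_pts[idx] + rng.normal(0, 0.005, size=(200, 2))
    samples = np.vstack([samples, new_pts])
    probs = np.concatenate([probs, classify(new_pts)])
    print(f"iteration {it}: {len(samples)} samples, "
          f"max gradient {grad_mag.max():.2f}")
```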
Collapse
|
44
|
Sahran S, Albashish D, Abdullah A, Shukor NA, Hayati Md Pauzi S. Absolute cosine-based SVM-RFE feature selection method for prostate histopathological grading. Artif Intell Med 2018; 87:78-90. [PMID: 29680688 DOI: 10.1016/j.artmed.2018.04.002] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2017] [Revised: 04/02/2018] [Accepted: 04/07/2018] [Indexed: 01/09/2023]
Abstract
OBJECTIVE Feature selection (FS) methods are widely used in grading and diagnosing prostate histopathological images. In this context, FS is based on the texture features obtained from the lumen, nuclei, cytoplasm and stroma, all of which are important tissue components. However, it is difficult to represent the high-dimensional textures of these tissue components. To solve this problem, we propose a new FS method that enables the selection of features with minimal redundancy across the tissue components. METHODOLOGY We categorise tissue images based on the texture of individual tissue components by constructing a single classifier per component, and also build an ensemble learning model by merging the outputs of the individual classifiers. Another issue that arises is overfitting due to the high-dimensional texture of individual tissue components. We propose a new FS method, SVM-RFE(AC), that integrates a Support Vector Machine-Recursive Feature Elimination (SVM-RFE) embedded procedure with an absolute cosine (AC) filter method, preventing redundancy among the features selected by SVM-RFE while requiring no optimised classifier in the AC step. RESULTS We conducted experiments on H&E histopathological prostate and colon cancer images with respect to three prostate classifications, namely benign vs. grade 3, benign vs. grade 4 and grade 3 vs. grade 4. The colon benchmark dataset requires a distinction between grades 1 and 2, which are the most difficult cases to distinguish in the colon domain. The results obtained by both the single and ensemble classification models (the latter using the product rule as its merging method) confirm that the proposed SVM-RFE(AC) is superior to the other SVM- and SVM-RFE-based methods. CONCLUSION We developed an FS method based on SVM-RFE and AC and showed that its use enables the identification of the most crucial texture features of each tissue component. It thereby makes possible the distinction between multiple Gleason grades (e.g. grade 3 vs. grade 4), and its performance is far superior to that of other reported FS methods.
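As a rough illustration of how recursive feature elimination can be combined with an absolute-cosine redundancy filter, the sketch below alternates an SVM-RFE elimination step with removal of features nearly collinear with the current top-ranked feature. The data, threshold, and ranking details are assumptions made for the example and do not reproduce the authors' exact SVM-RFE(AC) procedure.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def abs_cosine(a, b):
    return abs(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def svm_rfe_ac(X, y, n_select=10, ac_threshold=0.95):
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_select:
        svm = LinearSVC(C=1.0, max_iter=10000).fit(X[:, remaining], y)
        order = np.argsort(np.abs(svm.coef_).ravel())    # ascending importance
        best = remaining[order[-1]]                      # current top-ranked feature
        # RFE step: mark the least important feature for elimination.
        candidates = [remaining[order[0]]]
        # AC step: also mark features nearly collinear with the top-ranked feature.
        for pos in order[1:-1]:
            f = remaining[pos]
            if abs_cosine(X[:, f], X[:, best]) > ac_threshold:
                candidates.append(f)
        # Never shrink below the requested number of features.
        for f in candidates[: len(remaining) - n_select]:
            remaining.remove(f)
    return remaining

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           n_redundant=12, random_state=0)
X = StandardScaler().fit_transform(X)
print("selected feature indices:", svm_rfe_ac(X, y, n_select=10))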
Collapse
Affiliation(s)
- Shahnorbanun Sahran
- Pattern Recognition Research Group, Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, University Kebangsaan Malaysia, 43600 Bangi, Malaysia.
| | - Dheeb Albashish
- Computer Science Department, Prince Abdullah Bin Ghazi Faculty of Information Technology, Al-Balqa Applied University, Jordan.
| | - Azizi Abdullah
- Pattern Recognition Research Group, Center for Artificial Intelligence Technology, Faculty of Information Science and Technology, University Kebangsaan Malaysia, 43600 Bangi, Malaysia.
| | - Nordashima Abd Shukor
- Department of Pathology, University Kebangsaan Malaysia Medical Center, 56000 Batu 9 Cheras, Malaysia.
| | - Suria Hayati Md Pauzi
- Department of Pathology, University Kebangsaan Malaysia Medical Center, 56000 Batu 9 Cheras, Malaysia.
| |
Collapse
|
45
|
Abstract
Predicting the expected outcome of patients diagnosed with cancer is a critical step in treatment. Advances in genomic and imaging technologies provide physicians with vast amounts of data, yet prognostication remains largely subjective, leading to suboptimal clinical management. We developed a computational approach based on deep learning to predict the overall survival of patients diagnosed with brain tumors from microscopic images of tissue biopsies and genomic biomarkers. This method uses adaptive feedback to simultaneously learn the visual patterns and molecular biomarkers associated with patient outcomes. Our approach surpasses the prognostic accuracy of human experts using the current clinical standard for classifying brain tumors and presents an innovative approach for objective, accurate, and integrated prediction of patient outcomes. Cancer histology reflects underlying molecular processes and disease progression and contains rich phenotypic information that is predictive of patient outcomes. In this study, we show a computational approach for learning patient outcomes from digital pathology images using deep learning to combine the power of adaptive machine learning algorithms with traditional survival models. We illustrate how these survival convolutional neural networks (SCNNs) can integrate information from both histology images and genomic biomarkers into a single unified framework to predict time-to-event outcomes and show prediction accuracy that surpasses the current clinical paradigm for predicting the overall survival of patients diagnosed with glioma. We use statistical sampling techniques to address challenges in learning survival from histology images, including tumor heterogeneity and the need for large training cohorts. We also provide insights into the prediction mechanisms of SCNNs, using heat map visualization to show that SCNNs recognize important structures, like microvascular proliferation, that are related to prognosis and that are used by pathologists in grading. These results highlight the emerging role of deep learning in precision medicine and suggest an expanding utility for computational analysis of histology in the future practice of pathology.
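Survival networks of this kind are commonly trained by minimizing the negative Cox partial log-likelihood over batches of patients; the short sketch below evaluates that loss on synthetic risk scores and follow-up times. It is a generic illustration of the time-to-event objective, not the authors' SCNN code, and ties between event times are ignored.

import numpy as np

def neg_cox_partial_log_likelihood(risk_scores, times, events):
    # risk_scores: model outputs (higher = higher hazard); times: follow-up times;
    # events: 1 if the event was observed, 0 if censored. Tied times are not handled.
    order = np.argsort(-times)                   # sort by decreasing follow-up time
    risk, ev = risk_scores[order], events[order]
    # Cumulative log-sum-exp over each patient's risk set (everyone still at risk).
    log_risk_set = np.logaddexp.accumulate(risk)
    # Negative partial log-likelihood, averaged over the observed events.
    return -np.sum((risk - log_risk_set) * ev) / max(ev.sum(), 1)

rng = np.random.default_rng(0)
scores = rng.normal(size=8)                      # e.g. outputs of a CNN risk head
times = rng.exponential(scale=24.0, size=8)      # follow-up in months
events = rng.integers(0, 2, size=8)              # 1 = event observed, 0 = censored
print(neg_cox_partial_log_likelihood(scores, times, events))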
Collapse
|
46
|
Dooley AE, Tong L, Deshpande SR, Wang MD. Prediction of Heart Transplant Rejection Using Histopathological Whole-Slide Imaging. IEEE-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS 2018; 2018:10.1109/bhi.2018.8333416. [PMID: 32551442 PMCID: PMC7302110 DOI: 10.1109/bhi.2018.8333416] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Endomyocardial biopsies are the current gold standard for monitoring heart transplant patients for signs of cardiac allograft rejection. Manually analyzing the acquired tissue samples can be costly, time-consuming, and subjective. Computer-aided diagnosis using digitized whole-slide images has been used to classify the presence and grade of diseases such as brain tumors and breast cancer, and we expect it can be used to predict cardiac allograft rejection. In this paper, we first create a pipeline to normalize histopathological whole-slide images of endomyocardial biopsies and extract pixel-level and object-level features from them. Then, we develop a two-stage classification algorithm in which we first cluster individual tiles and then use the frequency of tiles in each cluster to classify each whole-slide image. Our results show that the addition of an unsupervised clustering step leads to higher classification accuracy and highlight the importance of object-level features grounded in the pathophysiology of rejection. Future expansion of this study includes the development of a multiclass classification pipeline for the subtypes and grades of cardiac allograft rejection.
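The two-stage idea, clustering individual tiles and then classifying each slide from its cluster-frequency histogram, can be sketched as follows. The features, labels, and classifiers here are synthetic placeholders chosen for brevity, not the pipeline used in the paper.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_slides, tiles_per_slide, n_features, n_clusters = 20, 50, 16, 8

# Assumed done upstream: pixel- and object-level features for every tile.
tile_features = rng.normal(size=(n_slides * tiles_per_slide, n_features))
slide_labels = rng.integers(0, 2, size=n_slides)       # 0 = no rejection, 1 = rejection

# Stage 1: unsupervised clustering of individual tiles.
kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(tile_features)
tile_clusters = kmeans.labels_.reshape(n_slides, tiles_per_slide)

# Stage 2: represent each whole-slide image by its cluster-frequency histogram
# and classify the slide from that histogram.
histograms = np.stack([np.bincount(row, minlength=n_clusters) / tiles_per_slide
                       for row in tile_clusters])
clf = LogisticRegression(max_iter=1000).fit(histograms, slide_labels)
print("training accuracy on synthetic data:", clf.score(histograms, slide_labels))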
Collapse
Affiliation(s)
- Adrienne E. Dooley
- Dept. of Biomedical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Li Tong
- Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
| | | | - May D. Wang
- Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332, USA
| |
Collapse
|
47
|
Which Way Round? A Study on the Performance of Stain-Translation for Segmenting Arbitrarily Dyed Histological Images. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2018 2018. [DOI: 10.1007/978-3-030-00934-2_19] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/09/2023]
|
48
|
Brahmaiah Naik J, Srinivasarao C, Babu Kande G. Local vector pattern with global index angles for a content‐based image retrieval system. J Assoc Inf Sci Technol 2017. [DOI: 10.1002/asi.23907] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Affiliation(s)
- Jatothu Brahmaiah Naik
- Research Scholar, JNTUK Kakinada, Andhra Pradesh, India
- Assistant Professor, Department of Electronics & Communication Engineering, Vignan's Lara Institute of Technology & Science, Andhra Pradesh, India
| | - Chanamallu Srinivasarao
- Professor, ECE Department, JNTUK University College of Engineering, Vizianagaram, Andhra Pradesh, India
| | - Giri Babu Kande
- Professor & HoD, ECE Department, Vasireddy Venkatadri Institute of Technology, Nambur, Guntur (Dt), Andhra Pradesh, India
| |
Collapse
|
49
|
Nalisnik M, Amgad M, Lee S, Halani SH, Velazquez Vega JE, Brat DJ, Gutman DA, Cooper LAD. Interactive phenotyping of large-scale histology imaging data with HistomicsML. Sci Rep 2017; 7:14588. [PMID: 29109450 PMCID: PMC5674015 DOI: 10.1038/s41598-017-15092-3] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Accepted: 10/20/2017] [Indexed: 11/09/2022] Open
Abstract
Whole-slide imaging of histologic sections captures tissue microenvironments and cytologic details in expansive high-resolution images. These images can be mined to extract quantitative features that describe tissues, yielding measurements for hundreds of millions of histologic objects. A central challenge in utilizing these data is enabling investigators to train and evaluate classification rules for identifying objects related to processes like angiogenesis or immune response. In this paper we describe HistomicsML, an interactive machine-learning system for digital pathology imaging datasets. This framework uses active learning to direct user feedback, making classifier training efficient and scalable in datasets containing 10^8 or more histologic objects. We demonstrate how this system can be used to phenotype microvascular structures in gliomas to predict survival, and to explore the molecular pathways associated with these phenotypes. Our approach enables researchers to unlock phenotypic information from digital pathology datasets to investigate prognostic image biomarkers and genotype-phenotype associations.
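The core active-learning loop behind such interactive phenotyping can be sketched with uncertainty sampling: train on the labels collected so far, query the objects the classifier is least certain about, and repeat. The example below simulates the human reviewer with ground-truth labels and uses a random forest purely for illustration; HistomicsML's actual implementation differs.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y_true = make_classification(n_samples=5000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))   # small initial seed set
unlabeled = sorted(set(range(len(X))) - set(labeled))

for round_ in range(10):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[labeled], y_true[labeled])
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)        # low max-probability = most uncertain
    query = np.argsort(-uncertainty)[:20]        # ask the "reviewer" about these objects
    newly_labeled = [unlabeled[i] for i in query]
    labeled.extend(newly_labeled)                # reviewer labels come from y_true here
    unlabeled = [i for i in unlabeled if i not in set(newly_labeled)]
    print(f"round {round_}: {len(labeled)} labels, "
          f"accuracy on the rest = {clf.score(X[unlabeled], y_true[unlabeled]):.3f}")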
Collapse
Affiliation(s)
- Michael Nalisnik
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, USA
| | - Mohamed Amgad
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, USA
| | - Sanghoon Lee
- Department of Neurology, Emory University School of Medicine, Atlanta, USA
| | | | | | - Daniel J Brat
- Department of Pathology & Laboratory Medicine, Emory University School of Medicine, Atlanta, USA
- Winship Cancer Institute, Emory University, Atlanta, USA
| | - David A Gutman
- Department of Neurology, Emory University School of Medicine, Atlanta, USA
| | - Lee A D Cooper
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, USA
- Winship Cancer Institute, Emory University, Atlanta, USA
- Department of Biomedical Engineering, Georgia Institute of Technology/Emory University School of Medicine, Atlanta, GA, USA
| |
Collapse
|
50
|
Reis S, Gazinska P, Hipwell JH, Mertzanidou T, Naidoo K, Williams N, Pinder S, Hawkes DJ. Automated Classification of Breast Cancer Stroma Maturity From Histological Images. IEEE Trans Biomed Eng 2017; 64:2344-2352. [PMID: 28186876 DOI: 10.1109/tbme.2017.2665602] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVE The tumor microenvironment plays a crucial role in regulating tumor progression by a number of different mechanisms; in particular, the remodeling of collagen fibers in tumor-associated stroma has been reported to be related to patient survival. The underlying motivation of this work is that remodeling of collagen fibers gives rise to observable patterns in hematoxylin and eosin (H&E) stained slides from clinical cases of invasive breast carcinoma, which the pathologist can label as mature or immature stroma. The aim of this paper is to categorise and automatically classify stromal regions according to their maturity and to show that this classification agrees with that of skilled observers, hence providing a repeatable and quantitative measure for prognostic studies. METHODS We use multiscale basic image features and local binary patterns, in combination with a random decision trees classifier, for the classification of breast cancer stroma regions of interest (ROIs). RESULTS We present results from a cohort of 55 patients with analysis of 169 ROIs. Our multiscale approach achieved a classification accuracy of 84%. CONCLUSION This work demonstrates the ability of texture-based image analysis to differentiate breast cancer stroma maturity in clinically acquired H&E-stained slides at least as well as skilled observers.
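A texture pipeline in this spirit, local binary pattern histograms fed to a random-forest classifier, can be sketched briefly. The ROIs below are synthetic stand-ins for mature and immature stroma, and the feature set omits the multiscale basic image features used in the paper.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(gray_roi, radius=3, n_points=24):
    # Uniform LBP histogram for one grayscale ROI with intensities in [0, 1].
    img = (gray_roi * 255).astype(np.uint8)
    lbp = local_binary_pattern(img, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=60)
# Synthetic stand-ins: smoother texture for one class, noisier texture for the other.
rois = [np.clip(rng.normal(0.5, 0.02 if lab == 0 else 0.15, size=(64, 64)), 0, 1)
        for lab in labels]
X = np.stack([lbp_histogram(r) for r in rois])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:40], labels[:40])
print("held-out accuracy on synthetic ROIs:", clf.score(X[40:], labels[40:]))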
Collapse
|