1. Ortiz S, Rojas-Valenzuela I, Rojas F, Valenzuela O, Herrera LJ, Rojas I. Novel methodology for detecting and localizing cancer area in histopathological images based on overlapping patches. Comput Biol Med 2024; 168:107713. PMID: 38000243. DOI: 10.1016/j.compbiomed.2023.107713.
Abstract
Cancer is one of the most important pathologies in the world: it causes millions of deaths, and in most cases a cure is limited. Because rapid spread is one of its most important features, many efforts focus on early-stage detection and localization. Medicine has made numerous advances in recent decades with the help of artificial intelligence (AI), reducing costs and saving time. In this paper, deep learning (DL) models are used to present a novel method for detecting and localizing cancerous zones in whole slide images (WSIs), using tissue patch overlap to improve performance. A novel overlapping methodology is proposed and discussed, together with different alternatives for evaluating the labels of patches that overlap the same zone, to improve detection performance. The goal is to strengthen the labeling of different areas of an image by testing multiple overlapping patches. The results show that the proposed method improves on the traditional framework and provides a different approach to cancer detection. The proposed method, based on applying 3×3, stride-2 average pooling filters to overlapping patch labels, corrects 12.9% of misclassified patches on the HUP dataset and 15.8% on the CINIJ dataset. In addition, a filter is implemented to correct isolated patches that were also misclassified. Finally, a study of the CNN decision threshold analyzes the impact of the threshold value on model accuracy. Altering the decision threshold, together with the isolated-patch filter and the proposed overlapping-patch method, corrects about 20% of the patches mislabeled by the traditional method. As a whole, the proposed method achieves an accuracy of 94.6%. The code is available at https://github.com/sergioortiz26/Cancer_overlapping_filter_WSI_images.
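The abstract's core operation, average-pooling the labels of overlapping patches and re-thresholding, could be sketched as follows. This is a minimal illustration: only the 3×3 window and stride of 2 come from the abstract; the grid layout, probability labels, and 0.5 threshold are assumptions.

```python
import numpy as np

def pool_patch_labels(label_grid, k=3, stride=2):
    """Average-pool a 2-D grid of per-patch cancer probabilities.

    Sliding a k x k window with the given stride aggregates the labels
    of overlapping patches that cover the same tissue zone.
    """
    h, w = label_grid.shape
    out_h = (h - k) // stride + 1
    out_w = (w - k) // stride + 1
    pooled = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            win = label_grid[i*stride:i*stride+k, j*stride:j*stride+k]
            pooled[i, j] = win.mean()
    return pooled

# A single mislabeled patch (0) surrounded by cancerous neighbours (1)
grid = np.ones((5, 5))
grid[2, 2] = 0.0
pooled = pool_patch_labels(grid)            # shape (2, 2)
corrected = (pooled >= 0.5).astype(int)     # re-threshold pooled scores
```

Because every pooled window averages the outlier with its neighbours, the isolated mislabeled patch no longer flips the zone-level decision.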
Affiliation(s)
- Sergio Ortiz
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Ignacio Rojas-Valenzuela
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Fernando Rojas
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Olga Valenzuela
- Department of Applied Mathematics, University of Granada, Facultad de Ciencias, Avenida de la Fuente Nueva S/N CP:18071 Granada, Spain
- Luis Javier Herrera
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
- Ignacio Rojas
- Department of Computer Architecture and Technology, University of Granada, E.T.S. de Ingenierías Informática y de Telecomunicación, C/ Periodista Daniel Saucedo Aranda S/N CP:18071 Granada, Spain
2. Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. PMID: 37802010. DOI: 10.1016/j.media.2023.102969.
Abstract
Deep neural networks have achieved excellent cell or nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain for real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data are limited, a setting that previous work has seldom explored for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist with generator learning. We explore both single-directional and bidirectional task-augmented GANs for domain adaptation. Then, we further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting. We examine source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets, which are acquired with different imaging modalities, staining protocols and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance when compared with the reference baseline, and it is superior to or on par with fully supervised models that are trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
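The stochastic augmentation module described above operates inside the GAN's training loop. As a rough, framework-free sketch of the kind of per-batch transformations involved (brightness jitter and integer translation are assumptions, not the paper's augmentation set, and the differentiability needed for generator gradients is omitted here):

```python
import numpy as np

def stochastic_augment(batch, rng, brightness=0.2, max_shift=2):
    """Randomly augment a batch of images (N, H, W, C) before the discriminator.

    Applies a random brightness offset per image plus a random integer
    translation. In the paper the analogous module is differentiable so
    gradients flow back to the generator; this numpy sketch only shows
    the transformations themselves.
    """
    out = batch.copy()
    n = out.shape[0]
    # Per-image brightness jitter, broadcast over H, W, C
    out += rng.uniform(-brightness, brightness, size=(n, 1, 1, 1))
    # Per-image circular translation of up to max_shift pixels
    for i in range(n):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        out[i] = np.roll(out[i], (int(dy), int(dx)), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
real = rng.uniform(0, 1, (4, 8, 8, 3))
aug = stochastic_augment(real, rng)
```

Applying the same random augmentation to both real and generated images is what lets the discriminator see more varied inputs without the generator learning to produce augmentation artifacts.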
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Xinyi Yang
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
- Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
3. Fang K, Li J, Zhang Q, Xu Y, Ma S. Pathological imaging-assisted cancer gene-environment interaction analysis. Biometrics 2023; 79:3883-3894. PMID: 37132273. PMCID: PMC10622332. DOI: 10.1111/biom.13873.
Abstract
Gene-environment (G-E) interactions have important implications for cancer outcomes and phenotypes beyond the main G and E effects. Compared to main-effect-only analysis, G-E interaction analysis suffers more severely from a lack of information caused by higher dimensionality, weaker signals, and other factors. It is also uniquely challenged by the "main effects, interactions" variable selection hierarchy. Efforts have been made to bring in additional information to assist cancer G-E interaction analysis. In this study, we take a strategy different from the existing literature and borrow information from pathological imaging data. Such data are a "byproduct" of biopsy, are broadly available at low cost, and have been shown in recent studies to be informative for modeling prognosis and other cancer outcomes/phenotypes. Building on penalization, we develop an assisted estimation and variable selection approach for G-E interaction analysis. The approach is intuitive, can be effectively realized, and has competitive performance in simulation. We further analyze The Cancer Genome Atlas (TCGA) data on lung adenocarcinoma (LUAD). The outcome of interest is overall survival, and for G variables, we analyze gene expressions. Assisted by pathological imaging data, our G-E interaction analysis leads to different findings with competitive prediction performance and stability.
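A bare-bones illustration of penalized G×E interaction modeling on simulated data: main G columns, main E columns, and all G×E product columns enter a lasso. This is a plain-lasso sketch only; it ignores the "main effects, interactions" selection hierarchy and the imaging-assisted estimation that are the paper's actual contributions, and all dimensions and coefficients below are invented.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p_g, p_e = 120, 20, 3
G = rng.normal(size=(n, p_g))      # gene expressions
E = rng.normal(size=(n, p_e))      # environmental exposures

# Design matrix: main G, main E, then all G x E interaction columns
GE = np.hstack([G[:, [j]] * E for j in range(p_g)])
X = np.hstack([G, E, GE])          # shape (n, p_g + p_e + p_g * p_e)

# Ground truth: one main G effect, one main E effect, one interaction
y = (2.0 * G[:, 0] + 1.5 * E[:, 0] + 3.0 * G[:, 0] * E[:, 0]
     + rng.normal(scale=0.5, size=n))

# L1 penalization sparsifies the high-dimensional interaction space
model = Lasso(alpha=0.1).fit(X, y)
n_selected = int((model.coef_ != 0).sum())
```

The G0×E0 interaction occupies column `p_g + p_e + 0 = 23`, so recovering a large coefficient there means the penalized fit found the simulated interaction.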
Affiliation(s)
- Kuangnan Fang
- Department of Statistics and Data Science, School of Economics, Xiamen University, Xiamen, China
- Jingmao Li
- Department of Statistics and Data Science, School of Economics, Xiamen University, Xiamen, China
- Qingzhao Zhang
- Department of Statistics and Data Science, School of Economics, Xiamen University, Xiamen, China
- The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, China
- Yaqing Xu
- School of Public Health, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shuangge Ma
- Department of Biostatistics, Yale School of Public Health, New Haven, USA
4. Romero-Arias JR, González-Castro CA, Ramírez-Santiago G. A multiscale model of the role of microenvironmental factors in cell segregation and heterogeneity in breast cancer development. PLoS Comput Biol 2023; 19:e1011673. PMID: 37992135. DOI: 10.1371/journal.pcbi.1011673.
Abstract
We analyzed a quantitative multiscale model that describes the epigenetic dynamics during the growth and evolution of an avascular tumor. A gene regulatory network (GRN) formed by a set of ten genes that are believed to play an important role in breast cancer development was kinetically coupled to the microenvironmental agents: glucose, estrogens, and oxygen. The dynamics of spontaneous mutations was described by a Yule-Furry master equation whose solution represents the probability that a given cell in the tissue undergoes a certain number of mutations at a given time. We assumed that the mutation rate is modified by a spatial gradient of nutrients. The tumor mass was simulated by means of cellular automata supplemented with a set of reaction-diffusion equations that described the transport of microenvironmental agents. By analyzing the epigenetic state space described by the GRN dynamics, we found three attractors that were identified with cellular epigenetic states: normal, precancer, and cancer. For two-dimensional (2D) and three-dimensional (3D) tumors we calculated the spatial distribution of the following quantities: (i) number of mutations, (ii) mutations of each gene, and (iii) phenotypes. Using estrogen as the principal microenvironmental agent that regulates the cell proliferation process, we obtained tumor shapes for different values of estrogen consumption and supply rates. It was found that the majority of mutations occurred in cells located close to the 2D tumor perimeter or close to the 3D tumor surface. It was also found that the occurrence of different phenotypes in the tumor is controlled by estrogen concentration levels, since these can change the individual cell threshold and gene expression levels. All results were consistently observed for 2D and 3D tumors.
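The Yule-Furry master equation mentioned above has a well-known closed-form solution for a pure-birth process started from a single cell: the event count at time t is geometrically distributed. A quick numerical check (the rate and time values are arbitrary, and the paper's coupling of the mutation rate to nutrient gradients is omitted):

```python
import math

def yule_furry_pmf(n, mu, t):
    """P(N(t) = n) for a Yule-Furry pure-birth process with N(0) = 1.

    The master-equation solution is geometric with success probability
    p = exp(-mu * t), i.e. n - 1 events have accumulated by time t.
    """
    p = math.exp(-mu * t)
    return p * (1.0 - p) ** (n - 1)

# Sanity checks: probabilities sum to 1, and E[N(t)] = exp(mu * t)
total = sum(yule_furry_pmf(n, mu=0.5, t=2.0) for n in range(1, 200))
mean = sum(n * yule_furry_pmf(n, mu=0.5, t=2.0) for n in range(1, 2000))
```

With mu * t = 1, the mean should converge to e ≈ 2.718, matching the exponential growth expected of a pure-birth process.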
Affiliation(s)
- J Roberto Romero-Arias
- Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
5. Salvi M, Molinari F, Ciccarelli M, Testi R, Taraglio S, Imperiale D. Quantitative analysis of prion disease using an AI-powered digital pathology framework. Sci Rep 2023; 13:17759. PMID: 37853094. PMCID: PMC10584956. DOI: 10.1038/s41598-023-44782-4.
Abstract
Prion disease is a fatal neurodegenerative disorder characterized by accumulation of an abnormal prion protein (PrPSc) in the central nervous system. To identify PrPSc aggregates for diagnostic purposes, pathologists use immunohistochemical staining of prion protein antibodies on tissue samples. With digital pathology, artificial intelligence can now analyze stained slides. In this study, we developed an automated pipeline for the identification of PrPSc aggregates in tissue samples from the cerebellar and occipital cortex. To the best of our knowledge, this is the first framework to evaluate PrPSc deposition in digital images. We used two strategies: a deep learning segmentation approach using a vision transformer, and a machine learning classification approach with traditional classifiers. Our method was developed and tested on 64 whole slide images from 41 patients definitively diagnosed with prion disease. The results of our study demonstrated that our proposed framework can accurately classify WSIs from a blind test set. Moreover, it can quantify PrPSc distribution and localization throughout the brain. This could potentially be extended to evaluate protein expression in other neurodegenerative diseases like Alzheimer's and Parkinson's. Overall, our pipeline highlights the potential of AI-assisted pathology to provide valuable insights, leading to improved diagnostic accuracy and efficiency.
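Turning per-patch predictions into a slide-level call, as the framework above does when classifying WSIs, can be done with many aggregation rules. One hypothetical rule is thresholding the fraction of positive tissue; the function name, 0.5 patch threshold, and 10% burden cutoff below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def slide_label(patch_probs, threshold=0.5, min_positive_frac=0.1):
    """Aggregate per-patch PrPSc probabilities into a slide-level call.

    A slide is flagged positive when at least min_positive_frac of its
    patches exceed the patch-level decision threshold. The positive
    fraction doubles as a crude measure of deposition burden.
    """
    positive = np.asarray(patch_probs) >= threshold
    burden = float(positive.mean())     # fraction of positive tissue
    label = "positive" if burden >= min_positive_frac else "negative"
    return label, burden

label, burden = slide_label([0.9, 0.8, 0.2, 0.1, 0.95, 0.05, 0.7, 0.3])
```

Reporting the burden alongside the label is what enables the kind of distribution/localization quantification the abstract describes.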
Affiliation(s)
- Massimo Salvi
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Filippo Molinari
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Mario Ciccarelli
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129, Turin, Italy
- Roberto Testi
- SC Medicina Legale, ASL Città di Torino, Turin, Italy
- Daniele Imperiale
- SC Neurologia Ospedale Maria Vittoria & Centro Diagnosi Osservazione Malattie Prioniche, ASL Città di Torino, Turin, Italy
6. Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. PMID: 37928897. PMCID: PMC10622844. DOI: 10.1016/j.jpi.2023.100335.
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating storing, viewing, processing, and sharing digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology applications, such as automated image analysis, to extract diagnostic information from WSI for improving pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features have diverse applications in several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
7. Madusanka N, Jayalath P, Fernando D, Yasakethu L, Lee BI. Impact of H&E Stain Normalization on Deep Learning Models in Cancer Image Classification: Performance, Complexity, and Trade-Offs. Cancers (Basel) 2023; 15:4144. PMID: 37627172. PMCID: PMC10452714. DOI: 10.3390/cancers15164144.
Abstract
Accurate classification of cancer images plays a crucial role in diagnosis and treatment planning. Deep learning (DL) models have shown promise in achieving high accuracy, but their performance can be influenced by variations in Hematoxylin and Eosin (H&E) staining techniques. In this study, we investigate the impact of H&E stain normalization on the performance of DL models in cancer image classification. We evaluate the performance of VGG19, VGG16, ResNet50, MobileNet, Xception, and InceptionV3 on a dataset of H&E-stained cancer images. Our findings reveal that while VGG16 exhibits strong performance, VGG19 and ResNet50 demonstrate limitations in this context. Notably, stain normalization techniques significantly improve the performance of less complex models such as MobileNet and Xception. These models emerge as competitive alternatives with lower computational complexity, lower resource requirements, and higher efficiency. The results highlight the importance of optimizing less complex models through stain normalization to achieve accurate and reliable cancer image classification. This research holds tremendous potential for advancing the development of computationally efficient cancer classification systems, ultimately benefiting cancer diagnosis and treatment.
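Stain normalization methods vary; a common baseline is Reinhard-style statistics matching, where each channel of the source image is shifted and scaled to the reference image's mean and standard deviation. The sketch below is an assumption about one such technique, not the specific normalization the study used, and production code would usually apply it in the LAB colour space rather than raw channels.

```python
import numpy as np

def reinhard_normalize(src, ref):
    """Match per-channel mean/std of a source image to a reference image.

    Operates on float arrays of shape (H, W, 3). This is the core of
    Reinhard-style colour normalization, shown on raw channels.
    """
    src = src.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        sd = s_sd if s_sd > 0 else 1.0     # guard against flat channels
        out[..., c] = (src[..., c] - s_mu) / sd * r_sd + r_mu
    return out

rng = np.random.default_rng(0)
src = rng.uniform(0, 255, (32, 32, 3))      # stand-in "source" tile
ref = rng.uniform(100, 150, (32, 32, 3))    # stand-in "reference" tile
norm = reinhard_normalize(src, ref)
```

After normalization every channel of `norm` carries the reference tile's colour statistics, which is exactly the invariance that helps smaller CNNs cope with staining variation.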
Affiliation(s)
- Nuwan Madusanka
- Digital Healthcare Research Center, Pukyong National University, Busan 48513, Republic of Korea
- Pramudini Jayalath
- Institute of Biochemistry, Faculty of Mathematics and Natural Science, University of Cologne, 50923 Cologne, Germany
- Dileepa Fernando
- School of Computer Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Lasith Yasakethu
- Department of Software Engineering, Sri Lanka Technological Campus (SLTC), Padukka 10500, Sri Lanka
- Byeong-Il Lee
- Digital Healthcare Research Center, Pukyong National University, Busan 48513, Republic of Korea
- Division of Smart Healthcare, College of Information Technology and Convergence, Pukyong National University, Busan 48513, Republic of Korea
- Department of Industry 4.0 Convergence Bionics Engineering, Pukyoung National University, Busan 48513, Republic of Korea
8. Aziz MT, Mahmud SMH, Elahe MF, Jahan H, Rahman MH, Nandi D, Smirani LK, Ahmed K, Bui FM, Moni MA. A Novel Hybrid Approach for Classifying Osteosarcoma Using Deep Feature Extraction and Multilayer Perceptron. Diagnostics (Basel) 2023; 13:2106. PMID: 37371001. DOI: 10.3390/diagnostics13122106.
Abstract
Osteosarcoma is the most common type of bone cancer and tends to occur in teenagers and young adults. Due to crowded context, inter-class similarity, inter-class variation, and noise in H&E-stained (hematoxylin and eosin stain) histology tissue, pathologists frequently face difficulty in osteosarcoma tumor classification. In this paper, we introduce a hybrid framework for improving the classification of three osteosarcoma tissue types (nontumor, necrosis, and viable tumor) by merging different CNN-based architectures with a multilayer perceptron (MLP) algorithm on a whole slide image (WSI) dataset. We performed various kinds of preprocessing on the WSI images. Then, five pre-trained CNN models were trained with multiple parameter settings to extract insightful features via transfer learning, where convolution combined with pooling was utilized as a feature extractor. For feature selection, a decision-tree-based RFE was designed to recursively eliminate less significant features to improve model generalization for accurate prediction; here, a decision tree was used as the estimator for selecting features. Finally, a modified MLP classifier was employed for binary and multiclass osteosarcoma classification under five-fold cross-validation (CV) to assess the robustness of our proposed hybrid model. Moreover, the feature selection criteria were analyzed to select the optimal one based on execution time and accuracy. The proposed model achieved an accuracy of 95.2% for multiclass classification and 99.4% for binary classification. Experimental findings indicate that our proposed model significantly outperforms existing methods; therefore, this model could support doctors in osteosarcoma diagnosis in clinics. In addition, our proposed model is integrated into a web application using the FastAPI web framework to provide real-time prediction.
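The "decision-tree-based RFE, then MLP" stage of the pipeline maps directly onto scikit-learn primitives. The sketch below uses synthetic features as a stand-in for the CNN-extracted ones; all sizes (64 input features, 16 selected, one 32-unit hidden layer) are illustrative choices, not the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for deep CNN features: 200 samples x 64 features, 3 classes
# (mirroring the nontumor / necrosis / viable-tumor setting)
X, y = make_classification(n_samples=200, n_features=64, n_informative=10,
                           n_classes=3, random_state=0)

# Decision-tree-based RFE: recursively drop the least important features,
# using the tree's feature importances as the elimination criterion
selector = RFE(estimator=DecisionTreeClassifier(random_state=0),
               n_features_to_select=16, step=4)
X_sel = selector.fit_transform(X, y)

# MLP classifier on the selected feature subset
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X_sel, y)
train_acc = clf.score(X_sel, y)
```

`selector.support_` records which feature columns survived elimination, which is how one would trace selected deep features back to the extractor.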
Affiliation(s)
- Md Tarek Aziz
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- S M Hasan Mahmud
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Computer Science, American International University-Bangladesh (AIUB), 408/1, Kuratoli, Khilkhet, Dhaka 1229, Bangladesh
- Md Fazla Elahe
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Software Engineering, Daffodil International University, Daffodil Smart City (DSC), Savar, Dhaka 1216, Bangladesh
- Hosney Jahan
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Computer Science & Engineering (CSE), Military Institute of Science and Technology (MIST), Mirpur Cantonment, Dhaka 1216, Bangladesh
- Md Habibur Rahman
- Centre for Advanced Machine Learning and Applications (CAMLAs), Bashundhara R/A, Dhaka 1229, Bangladesh
- Department of Computer Science and Engineering, Islamic University, Kushtia 7003, Bangladesh
- Dip Nandi
- Department of Computer Science, American International University-Bangladesh (AIUB), 408/1, Kuratoli, Khilkhet, Dhaka 1229, Bangladesh
- Lassaad K Smirani
- The Deanship of Information Technology and E-learning, Umm Al-Qura University, Mecca 24382, Saudi Arabia
- Kawsar Ahmed
- Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
- Group of Biophotomatiχ, Department of Information and Communication Technology (ICT), Mawlana Bhashani Science and Technology University (MBSTU), Tangail 1902, Bangladesh
- Francis M Bui
- Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
- Mohammad Ali Moni
- Artificial Intelligence & Digital Health, School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St. Lucia, QLD 4072, Australia
9. Callewaert B, Gsell W, Himmelreich U, Jones EAV. Q-VAT: Quantitative Vascular Analysis Tool. Front Cardiovasc Med 2023; 10:1147462. PMID: 37332588. PMCID: PMC10272742. DOI: 10.3389/fcvm.2023.1147462.
Abstract
As our imaging capabilities increase, so does our need for appropriate image quantification tools. The Quantitative Vascular Analysis Tool (Q-VAT) is open-source software, written for Fiji (ImageJ), that performs automated analysis and quantification on large two-dimensional images of whole tissue sections. Importantly, it allows vessel measurements to be separated by diameter, so the macro- and microvasculature can be quantified separately. To enable analysis of entire tissue sections on regular laboratory computers, the vascular network of large samples is analyzed in a tile-wise manner, significantly reducing labor and bypassing several limitations of manual quantification. Double- or triple-stained slides can be analyzed, with quantification of the percentage of vessels where the stainings overlap. To demonstrate its versatility, we applied Q-VAT to obtain morphological read-outs of the vascular network in microscopy images of whole-mount immuno-stained sections of various mouse tissues.
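Separating macro- from microvasculature by diameter reduces to a cutoff on per-vessel measurements. The sketch below is purely illustrative: Q-VAT itself is a Fiji/ImageJ tool, and both the 10 µm cutoff and the example diameters are invented.

```python
import numpy as np

def split_by_diameter(diameters_um, cutoff_um=10.0):
    """Split vessel diameter measurements into micro- and macrovasculature.

    Vessels strictly below the cutoff are counted as microvasculature;
    the rest as macrovasculature. The cutoff is a user choice.
    """
    d = np.asarray(diameters_um, dtype=float)
    micro = d[d < cutoff_um]
    macro = d[d >= cutoff_um]
    return micro, macro

micro, macro = split_by_diameter([3.2, 7.5, 9.9, 12.0, 48.0])
```

Quantifying the two populations separately is what lets capillary density and large-vessel morphology be reported as independent read-outs.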
Affiliation(s)
- Bram Callewaert
- Center for Molecular and Vascular Biology (CMVB), Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Biomedical MRI Unit, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Willy Gsell
- Biomedical MRI Unit, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Uwe Himmelreich
- Biomedical MRI Unit, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
- Elizabeth A. V. Jones
- Center for Molecular and Vascular Biology (CMVB), Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- School for Cardiovascular Diseases (CARIM), Department of Cardiology, Maastricht University, Maastricht, Netherlands
10. Cieslak C, Mitteldorf C, Krömer-Olbrisch T, Kempf W, Stadler R. QuPath Analysis for CD30+ Cutaneous T-Cell Lymphoma. Am J Dermatopathol 2023; 45:93-98. PMID: 36669072. DOI: 10.1097/dad.0000000000002330.
Abstract
BACKGROUND: Mycosis fungoides is the most common subtype of cutaneous T-cell lymphoma, whose CD30-expressing (cluster of differentiation 30) subtype can now be treated with the CD30 antibody-drug conjugate brentuximab vedotin. Diagnosis is based on immunohistochemical (IHC) staining followed by manual assessment by pathologists, which is inherently subjective. QuPath, an open-source software for digital pathology image analysis, offers an objective alternative. METHODS: Ten samples from mycosis fungoides patients with CD30 expression at different stages were stained for CD3 and CD30 by IHC, scanned, and quantitatively analyzed using QuPath (version 2.1). Each slide was independently assessed by 3 board-certified dermatopathologists. RESULTS: Individual estimates for CD30+/CD3+ cells varied among the histopathologists (mean coefficient of variation, 0.46; range, 0-0.78). QuPath analysis showed excellent separation between cells staining positively for CD3 and CD30 IHC and other cells and tissue structures, and the results correlated strongly with the mean estimates of the 3 histopathologists (Pearson r = 0.93). CONCLUSIONS: The results show high interobserver variability in the evaluation of IHC markers, whereas quantitative image analysis offers a significant advantage for comparison. This is relevant not only for clinical routine but especially for therapeutic studies addressing targeted molecules.
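The two statistics in the abstract, per-sample coefficient of variation across observers and the Pearson correlation between observer means and the automated read-out, are simple to compute. The numbers below are invented for illustration; they are not the study's data.

```python
import numpy as np

def coeff_of_variation(x):
    """Coefficient of variation (std / mean) of one sample's estimates."""
    x = np.asarray(x, dtype=float)
    return float(x.std() / x.mean())

# Hypothetical CD30+/CD3+ ratio estimates: 3 pathologists x 4 samples
estimates = np.array([[0.30, 0.10, 0.55, 0.20],
                      [0.45, 0.12, 0.40, 0.35],
                      [0.25, 0.08, 0.60, 0.15]])
qupath = np.array([0.33, 0.10, 0.52, 0.24])  # hypothetical automated read-outs

# Interobserver variability per sample, and agreement with automation
cv_per_sample = [coeff_of_variation(estimates[:, j]) for j in range(4)]
r = np.corrcoef(estimates.mean(axis=0), qupath)[0, 1]
```

A high per-sample CV alongside a high r is exactly the pattern reported: observers disagree with each other, yet their average tracks the automated count.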
Affiliation(s)
- Cassandra Cieslak
- University Clinic for Dermatology, Johannes Wesling Medical Centre, Minden, Germany
- University Hospital of Ruhr-University, Bochum, Germany
- Christina Mitteldorf
- Department of Dermatology, Venereology and Allergology, University Medical Center Göttingen, Göttingen, Germany
- Tanja Krömer-Olbrisch
- University Clinic for Dermatology, Johannes Wesling Medical Centre, Minden, Germany
- University Hospital of Ruhr-University, Bochum, Germany
- Werner Kempf
- Kempf und Pfaltz Histologische Diagnostik, Zurich, Switzerland
- Rudolf Stadler
- University Clinic for Dermatology, Johannes Wesling Medical Centre, Minden, Germany
- University Hospital of Ruhr-University, Bochum, Germany
11. Leng H, Deng R, Asad Z, Womick RM, Yang H, Wan L, Huo Y. An Accelerated Pipeline for Multi-label Renal Pathology Image Segmentation at the Whole Slide Image Level. Proc SPIE Int Soc Opt Eng 2023; 12471:124710Q. PMID: 38606193. PMCID: PMC11008744. DOI: 10.1117/12.2653651.
Abstract
Deep-learning techniques have been widely used to alleviate the labour-intensive and time-consuming manual annotation required for pixel-level tissue characterization. Our previous study introduced an efficient single dynamic network, Omni-Seg, that achieved multi-class multi-scale pathological segmentation with less computational complexity. However, the patch-wise segmentation paradigm still applies to Omni-Seg, and the pipeline is time-consuming when providing segmentation for Whole Slide Images (WSIs). In this paper, we propose an enhanced version of the Omni-Seg pipeline that reduces repetitive computing processes and utilizes a GPU to accelerate the model's prediction, for both better model performance and faster speed. Our proposed method's contribution is two-fold: (1) a Docker image is released for end-to-end slide-wise multi-tissue segmentation of WSIs; and (2) the pipeline is deployed on a GPU to accelerate the prediction, achieving better segmentation quality in less time. The proposed accelerated implementation reduced the average processing time (at the testing stage) of a standard needle biopsy WSI from 2.3 hours to 22 minutes, using 35 WSIs from the Kidney Tissue Atlas (KPMP) datasets. The source code and the Docker image have been made publicly available at https://github.com/ddrrnn123/Omni-Seg.
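Slide-wise processing of the kind the pipeline above accelerates starts from tile traversal: generating the coordinates of every patch covering a WSI. This is a generic sketch; Omni-Seg's actual tile size, overlap, and GPU batching are not shown, and the dimensions below are arbitrary.

```python
def iter_tiles(width, height, tile, overlap=0):
    """Yield (x, y, w, h) tile coordinates covering a width x height slide.

    Tiles advance by (tile - overlap) pixels; edge tiles are clipped to
    the slide boundary, so the whole slide is covered exactly once.
    """
    step = tile - overlap
    for y in range(0, height, step):
        for x in range(0, width, step):
            yield x, y, min(tile, width - x), min(tile, height - y)

# Example: a 1000 x 600 region with 512 px tiles and 64 px overlap
tiles = list(iter_tiles(1000, 600, tile=512, overlap=64))
```

Collecting the coordinates up front is what makes GPU acceleration straightforward: tiles can be batched and dispatched without re-reading the slide for each prediction.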
Affiliation(s)
- Haoju Leng
- Department of Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
- Ruining Deng
- Department of Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
- Zuhayr Asad
- College of Arts and Science, Vanderbilt University, Nashville, TN, 37235 USA
- R. Michael Womick
- Department of Computer Science, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27514, USA
- Haichun Yang
- Department of Pathology, Vanderbilt University Medical Center, Nashville, TN 37215, USA
- Lipeng Wan
- Department of Computer Science, Georgia State University, Atlanta, GA, 30302 USA
- Yuankai Huo
- Department of Computer Science, Vanderbilt University, Nashville, TN, 37235 USA
12
Dual Consistency Semi-supervised Nuclei Detection via Global Regularization and Local Adversarial Learning. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.075]
13
Li L, Liang Y, Shao M, Lu S, Liao S, Ouyang D. Self-supervised learning-based Multi-Scale feature Fusion Network for survival analysis from whole slide images. Comput Biol Med 2023; 153:106482. [PMID: 36586231] [DOI: 10.1016/j.compbiomed.2022.106482]
Abstract
Understanding prognosis and mortality is critical for evaluating the treatment plan of patients. Advances in digital pathology and deep learning techniques have made it practical to perform survival analysis on whole slide images (WSIs). Current methods are usually based on a multi-stage framework that includes patch sampling, feature extraction, and prediction. However, the random patch sampling strategy is highly unstable and prone to sampling non-ROI regions. Feature extraction typically relies on hand-crafted features or convolutional neural networks (CNNs) pre-trained on ImageNet, where artificial error or domain gaps may affect survival prediction performance. Moreover, the limited information carried by locally sampled patches creates a bottleneck for prediction effectiveness. To address these challenges, we propose a novel patch sampling strategy based on image information entropy and construct a Multi-Scale feature Fusion Network (MSFN) built on a self-supervised feature extractor. Specifically, we adopt image information entropy as a criterion to select representative patches, thereby avoiding the noise introduced by randomly sampling blank regions. Meanwhile, we pretrain the feature extractor with a self-supervised learning mechanism to improve the efficiency of feature extraction. Furthermore, a global-local feature fusion prediction network based on an attention mechanism is constructed to improve survival prediction on WSIs with comprehensive multi-scale information representation. The proposed method is validated by extensive experiments and achieves competitive results on two of the most popular WSI survival analysis datasets, TCGA-GBM and TCGA-LUSC. Code and trained models are available at: https://github.com/Mercuriiio/MSFN.
Affiliation(s)
- Le Li
- Faculty of Innovation Engineering, Macau University of Science and Technology, 999078, Macao Special Administrative Region of China
- Yong Liang
- Peng Cheng Laboratory, Shenzhen, 518055, China
- Mingwen Shao
- College of Computer Science and Technology, China University of Petroleum, Qingdao 266580, China
- Shanghui Lu
- Faculty of Innovation Engineering, Macau University of Science and Technology, 999078, Macao Special Administrative Region of China
- Shuilin Liao
- Faculty of Innovation Engineering, Macau University of Science and Technology, 999078, Macao Special Administrative Region of China
- Dong Ouyang
- Faculty of Innovation Engineering, Macau University of Science and Technology, 999078, Macao Special Administrative Region of China
14
Parwani AV, Patel A, Zhou M, Cheville JC, Tizhoosh H, Humphrey P, Reuter VE, True LD. An update on computational pathology tools for genitourinary pathology practice: A review paper from the Genitourinary Pathology Society (GUPS). J Pathol Inform 2023; 14:100177. [PMID: 36654741] [PMCID: PMC9841212] [DOI: 10.1016/j.jpi.2022.100177]
Abstract
Machine learning has been leveraged for image analysis applications throughout a multitude of subspecialties. This position paper provides a perspective on the evolutionary trajectory of practical deep learning tools for genitourinary pathology through evaluating the most recent iterations of such algorithmic devices. Deep learning tools for genitourinary pathology demonstrate potential to enhance prognostic and predictive capacity for tumor assessment including grading, staging, and subtype identification, yet limitations in data availability, regulation, and standardization have stymied their implementation.
Affiliation(s)
- Anil V. Parwani
- The Ohio State University, Columbus, Ohio, USA
- Corresponding author.
- Ankush Patel
- The Ohio State University, 2441 60th Ave SE, Mercer Island, Washington 98040, USA
- Ming Zhou
- Tufts University, Medford, Massachusetts, USA
15
Ren M, Zhang Q, Zhang S, Zhong T, Huang J, Ma S. Hierarchical cancer heterogeneity analysis based on histopathological imaging features. Biometrics 2022; 78:1579-1591. [PMID: 34390584] [PMCID: PMC8995088] [DOI: 10.1111/biom.13544]
Abstract
In cancer research, supervised heterogeneity analysis has important implications. Such analysis has traditionally been based on clinical/demographic/molecular variables. Recently, histopathological imaging features, which are generated as a byproduct of biopsy, have been shown to be effective for modeling cancer outcomes, and a handful of supervised heterogeneity analyses have been conducted based on such features. There are two types of histopathological imaging features, extracted based on specific biological knowledge and using automated image processing software, respectively. Using both types of histopathological imaging features, our goal is to conduct the first supervised cancer heterogeneity analysis that satisfies a hierarchical structure. That is, the first type of imaging features defines a rough structure, and the second type defines a nested and more refined structure. A penalization approach is developed, which has been motivated by but differs significantly from penalized fusion and sparse group penalization. It has satisfactory statistical and numerical properties. In the analysis of lung adenocarcinoma data, it identifies a heterogeneity structure significantly different from the alternatives and has satisfactory prediction and stability performance.
Affiliation(s)
- Mingyang Ren
- School of Mathematics Sciences, University of Chinese Academy of Sciences, Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
- Qingzhao Zhang
- MOE Key Laboratory of Economics, Department of Statistics, School of Economics, The Wang Yanan Institute for Studies in Economics and Fujian Key Lab of Statistics, Xiamen University, Xiamen, China
- Sanguo Zhang
- School of Mathematics Sciences, University of Chinese Academy of Sciences, Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, Beijing, China
- Tingyan Zhong
- SJTU-Yale Joint Center for Biostatistics, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Jian Huang
- Department of Statistics and Actuarial Science, University of Iowa, Iowa City, Iowa, USA
- Shuangge Ma
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
16
Patel AU, Mohanty SK, Parwani AV. Applications of Digital and Computational Pathology and Artificial Intelligence in Genitourinary Pathology Diagnostics. Surg Pathol Clin 2022; 15:759-785. [PMID: 36344188] [DOI: 10.1016/j.path.2022.08.001]
Abstract
As machine learning (ML) solutions for genitourinary pathology image analysis are fostered by a progressively digitized laboratory landscape, these integrable modalities usher in a revolution in histopathological diagnosis. As technology advances, limitations stymying clinical artificial intelligence (AI) will not be extinguished without thorough validation and interrogation of ML tools by pathologists and regulatory bodies alike. ML solutions deployed in clinical settings for applications in prostate pathology yield promising results. Recent breakthroughs in clinical artificial intelligence for genitourinary pathology demonstrate unprecedented generalizability, heralding prospects for a future in which AI-driven assistive solutions may be seen as laboratory faculty, rather than novelty.
Affiliation(s)
- Ankush Uresh Patel
- Department of Laboratory Medicine and Pathology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Sambit K Mohanty
- Surgical and Molecular Pathology, Advanced Medical Research Institute, Plot No. 1, Near Jayadev Vatika Park, Khandagiri, Bhubaneswar, Odisha 751019. https://twitter.com/SAMBITKMohanty1
- Anil V Parwani
- Department of Pathology, The Ohio State University, Cooperative Human Tissue Network (CHTN) Midwestern Division Polaris Innovation Centre, 2001 Polaris Parkway Suite 1000, Columbus, OH 43240, USA
17
Zhou W, Deng Z, Liu Y, Shen H, Deng H, Xiao H. Global Research Trends of Artificial Intelligence on Histopathological Images: A 20-Year Bibliometric Analysis. Int J Environ Res Public Health 2022; 19:11597. [PMID: 36141871] [PMCID: PMC9517580] [DOI: 10.3390/ijerph191811597]
Abstract
Cancer has become a major threat to global health care. With the development of computer science, artificial intelligence (AI) has been widely applied to histopathological image (HI) analysis. This study analyzed publications on AI in HI from 2001 to 2021 by bibliometrics, exploring the research status and potential popular directions in the future. A total of 2844 publications from the Web of Science Core Collection were included in the bibliometric analysis. The country/region, institution, author, journal, keyword, and references were analyzed using VOSviewer and CiteSpace. The results showed that the number of publications has grown rapidly in the last five years. The USA is the most productive and influential country with 937 publications and 23,010 citations, and most of the authors and institutions with higher numbers of publications and citations are from the USA. Keyword analysis showed that breast cancer, prostate cancer, colorectal cancer, and lung cancer are the tumor types of greatest concern. Co-citation analysis showed that classification and nucleus segmentation are the main research directions of AI-based HI studies. Transfer learning and self-supervised learning in HI are on the rise. This study performed the first bibliometric analysis of AI in HI across multiple indicators, providing insights for researchers to identify key cancer types and understand the research trends of AI application in HI.
Affiliation(s)
- Wentong Zhou
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Ziheng Deng
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Yong Liu
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
- Hui Shen
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University School, New Orleans, LA 70112, USA
- Hongwen Deng
- Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, School of Medicine, Tulane University School, New Orleans, LA 70112, USA
- Hongmei Xiao
- Center for System Biology, Data Sciences, and Reproductive Health, School of Basic Medical Science, Central South University, Changsha 410031, China
18
Sun B, Laberiano-Fernández C, Salazar-Alejo R, Zhang J, Rendon JLS, Lee J, Soto LMS, Wistuba II, Parra ER. Impact of Region-of-Interest Size on Immune Profiling Using Multiplex Immunofluorescence Tyramide Signal Amplification for Paraffin-Embedded Tumor Tissues. Pathobiology 2022; 90:1-12. [PMID: 35609532] [PMCID: PMC9684353] [DOI: 10.1159/000523751]
Abstract
INTRODUCTION Representative regions of interest (ROIs) from whole slide images (WSIs) are currently used to study immune markers by multiplex immunofluorescence (mIF) and single immunohistochemistry (IHC). However, the area that must be analyzed to be representative of the entire tumor in a WSI has not been defined. METHODS We labeled tumor-associated immune cells by mIF and single IHC in separate cohorts of non-small cell lung cancer (NSCLC) samples and analyzed them across the whole tumor area as well as with different numbers of ROIs, to determine how much area is needed to represent the entire tumor. RESULTS For mIF using the InForm software and ROIs of 0.33 mm2 each, cell density data from five randomly selected ROIs were enough to achieve, in 90% of our samples, a Spearman correlation coefficient above 0.9; for single IHC using the ScanScope toolbox from Aperio and ROIs of 1 mm2 each, a correlation above 0.9 was likewise achieved with five ROIs in a similar cohort. Additionally, each cell phenotype in mIF influenced differently the correlation between the areas analyzed by the ROIs and the WSI. Tumor tissue with high heterogeneity in intratumoral epithelial and immune cell phenotype, quality, and spatial distribution needs more analyzed area to better represent the whole tumor. CONCLUSION We found that an area of at least 1.65 mm2 is enough to represent the entire tumor area in most of our NSCLC samples using mIF.
Affiliation(s)
- Baohua Sun
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Caddie Laberiano-Fernández
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ruth Salazar-Alejo
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jiexin Zhang
- Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jose Luis Solorzano Rendon
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jack Lee
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Luisa Maren Solis Soto
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Ignacio Ivan Wistuba
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Edwin Roger Parra
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
19
Lin A, Qi C, Li M, Guan R, Imyanitov EN, Mitiushkina NV, Cheng Q, Liu Z, Wang X, Lyu Q, Zhang J, Luo P. Deep Learning Analysis of the Adipose Tissue and the Prediction of Prognosis in Colorectal Cancer. Front Nutr 2022; 9:869263. [PMID: 35634419] [PMCID: PMC9131178] [DOI: 10.3389/fnut.2022.869263]
Abstract
Research has shown that the lipid microenvironment surrounding colorectal cancer (CRC) is closely associated with the occurrence, development, and metastasis of CRC. Using pathological images from the National Center for Tumor Diseases (NCT), the University Medical Center Mannheim (UMM) database, and the ImageNet data set, a VGG19 model was pre-trained. A deep convolutional neural network (CNN), VGG19CRC, was then trained by transfer learning. With the VGG19CRC model, adipose tissue scores were calculated for TCGA-CRC hematoxylin and eosin (H&E) images and for images from patients at Zhujiang Hospital of Southern Medical University and the First People's Hospital of Chenzhou. Kaplan-Meier (KM) analysis was used to compare the overall survival (OS) of patients. The XCell and MCP-Counter algorithms were used to evaluate the immune cell scores of the patients. Gene set enrichment analysis (GSEA) and single-sample GSEA (ssGSEA) were used to analyze upregulated and downregulated pathways. In TCGA-CRC, patients with high-adipocyte (high-ADI) CRC had significantly shorter OS than those with low-ADI CRC. In a validation cohort from Zhujiang Hospital of Southern Medical University (Local-CRC1), patients with high-ADI CRC had worse OS than those with low-ADI CRC. In another validation cohort from the First People's Hospital of Chenzhou (Local-CRC2), patients with low-ADI CRC had significantly longer OS than patients with high-ADI CRC. We developed a deep convolutional network to segment various tissues from pathological H&E images of CRC and automatically quantify ADI. This allowed us to further analyze and predict the survival of CRC patients according to information from their segmented pathological tissue images, such as tissue components and the tumor microenvironment.
Affiliation(s)
- Anqi Lin
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Chang Qi
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Mujiao Li
- College of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Rui Guan
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Evgeny N. Imyanitov
- Department of Tumor Growth Biology, N.N. Petrov Institute of Oncology, St. Petersburg, Russia
- Natalia V. Mitiushkina
- Department of Tumor Growth Biology, N.N. Petrov Institute of Oncology, St. Petersburg, Russia
- Quan Cheng
- Department of Neurosurgery, Xiangya Hospital, Central South University, Changsha, China
- Zaoqu Liu
- Department of Interventional Radiology, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Xiaojun Wang
- First People's Hospital of Chenzhou City, Chenzhou, China
- Qingwen Lyu
- Department of Information, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- *Correspondence: Qingwen Lyu
- Jian Zhang
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Peng Luo
- Department of Oncology, Zhujiang Hospital, Southern Medical University, Guangzhou, China
20
Alom Z, Asari VK, Parwani A, Taha TM. Microscopic nuclei classification, segmentation, and detection with improved deep convolutional neural networks (DCNN). Diagn Pathol 2022; 17:38. [PMID: 35436941] [PMCID: PMC9017017] [DOI: 10.1186/s13000-022-01189-5]
Abstract
Background Nuclei classification, segmentation, and detection from pathological images are challenging tasks due to cellular heterogeneity in Whole Slide Images (WSIs). Methods In this work, we propose advanced DCNN models for nuclei classification, segmentation, and detection tasks. The Densely Connected Neural Network (DCNN) and Densely Connected Recurrent Convolutional Network (DCRN) models are applied for the nuclei classification tasks. The Recurrent Residual U-Net (R2U-Net) and the R2U-Net-based regression model named the University of Dayton Net (UD-Net) are applied for nuclei segmentation and detection tasks, respectively. The experiments are conducted on publicly available datasets, including the Routine Colon Cancer (RCC) dataset for classification and detection and the Nuclei Segmentation Challenge 2018 dataset for segmentation tasks. The experimental results were evaluated with a five-fold cross-validation method, and the average testing results are compared against existing approaches in terms of precision, recall, Dice Coefficient (DC), Mean Squared Error (MSE), F1-score, and overall testing accuracy at the pixel and cell levels. Results The results demonstrate around 2.6% and 1.7% higher performance in terms of F1-score for the nuclei classification and detection tasks, respectively, when compared to a recently published DCNN-based method. Also, for nuclei segmentation, the R2U-Net shows around 91.90% average testing accuracy in terms of DC, which is around 1.54% higher than the U-Net model. Conclusion The proposed methods demonstrate robustness with better quantitative and qualitative results in three different tasks for analyzing WSIs.
Affiliation(s)
- Zahangir Alom
- Department of Pathology, St. Jude Children's Research Hospital, Memphis, TN, USA
- Vijayan K Asari
- Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
- Anil Parwani
- Department of Pathology, The Ohio State University, Columbus, OH, USA
- Tarek M Taha
- Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
21
Privat-Maldonado A, Verloy R, Cardenas Delahoz E, Lin A, Vanlanduit S, Smits E, Bogaerts A. Cold Atmospheric Plasma Does Not Affect Stellate Cells Phenotype in Pancreatic Cancer Tissue in Ovo. Int J Mol Sci 2022; 23:1954. [PMID: 35216069] [PMCID: PMC8878510] [DOI: 10.3390/ijms23041954]
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is a challenging neoplastic disease, mainly due to the development of resistance to radio- and chemotherapy. Cold atmospheric plasma (CAP) is an alternative technology that can eliminate cancer cells through oxidative damage, as shown in vitro, in ovo, and in vivo. However, how CAP affects pancreatic stellate cells (PSCs), key players in the invasion and metastasis of PDAC, is poorly understood. This study aims to determine the effect of an anti-PDAC CAP treatment on PSC tissue developed in ovo using mono- and co-cultures of RLT-PSC (PSCs) and Mia PaCa-2 cells (PDAC). We measured tissue reduction upon CAP treatment and mRNA expression of PSC activation markers and extracellular matrix (ECM) remodelling factors via qRT-PCR. Protein expression of selected markers was confirmed via immunohistochemistry. CAP inhibited growth in Mia PaCa-2 and co-cultured tissue, but its effectiveness was reduced in the latter, which correlates with reduced Ki67 levels. CAP did not alter the mRNA expression of PSC activation and ECM remodelling markers. No changes in MMP2 and MMP9 expression were observed in RLT-PSCs, but small changes were observed in Mia PaCa-2 cells. Our findings support the ability of CAP to eliminate PDAC cells without altering the PSCs.
Affiliation(s)
- Angela Privat-Maldonado
- PLASMANT, Chemistry Department, Faculty of Sciences, University of Antwerp, 2610 Antwerp, Belgium
- Solid Tumor Immunology Group, Center for Oncological Research, Integrated Personalized and Precision Oncology Network, Department of Molecular Imaging, Pathology, Radiotherapy and Oncology, University of Antwerp, 2610 Antwerp, Belgium
- Correspondence: ; Tel.: +32-3265-25-76
- Ruben Verloy
- PLASMANT, Chemistry Department, Faculty of Sciences, University of Antwerp, 2610 Antwerp, Belgium
- Solid Tumor Immunology Group, Center for Oncological Research, Integrated Personalized and Precision Oncology Network, Department of Molecular Imaging, Pathology, Radiotherapy and Oncology, University of Antwerp, 2610 Antwerp, Belgium
- Edgar Cardenas Delahoz
- Industrial Vision Lab InViLab, Faculty of Applied Engineering, University of Antwerp, 2610 Antwerp, Belgium
- Abraham Lin
- PLASMANT, Chemistry Department, Faculty of Sciences, University of Antwerp, 2610 Antwerp, Belgium
- Solid Tumor Immunology Group, Center for Oncological Research, Integrated Personalized and Precision Oncology Network, Department of Molecular Imaging, Pathology, Radiotherapy and Oncology, University of Antwerp, 2610 Antwerp, Belgium
- Steve Vanlanduit
- Industrial Vision Lab InViLab, Faculty of Applied Engineering, University of Antwerp, 2610 Antwerp, Belgium
- Evelien Smits
- Solid Tumor Immunology Group, Center for Oncological Research, Integrated Personalized and Precision Oncology Network, Department of Molecular Imaging, Pathology, Radiotherapy and Oncology, University of Antwerp, 2610 Antwerp, Belgium
- Annemie Bogaerts
- PLASMANT, Chemistry Department, Faculty of Sciences, University of Antwerp, 2610 Antwerp, Belgium
22
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. [DOI: 10.1007/s10462-021-10121-0]
23
Ruusuvuori P, Valkonen M, Kartasalo K, Valkonen M, Visakorpi T, Nykter M, Latonen L. Spatial analysis of histology in 3D: quantification and visualization of organ and tumor level tissue environment. Heliyon 2022; 8:e08762. [PMID: 35128089] [PMCID: PMC8800033] [DOI: 10.1016/j.heliyon.2022.e08762]
Abstract
Histological changes in tissue are of primary importance in pathological research and diagnosis. Automated histological analysis requires the ability to computationally separate pathological alterations from normal tissue. Conventional histopathological assessments are performed on individual tissue sections, leading to the loss of the three-dimensional context of the tissue. Yet, tissue context and spatial determinants are critical in several pathologies, such as in understanding the growth patterns of cancer in its local environment. Here, we develop computational methods for visualization and quantitative assessment of histopathological alterations in three dimensions. First, we reconstruct a 3D representation of the whole organ from serially sectioned tissue. Then, we proceed to analyze the histological characteristics and regions of interest in 3D. As our example cases, we use whole slide images representing hematoxylin-eosin stained whole mouse prostates in a Pten+/- mouse prostate tumor model. We show that quantitative assessment of tumor sizes, shapes, and separation between spatial locations within the organ enables characterizing and grouping tumors. Further, we show that 3D visualization of tissue with computationally quantified features provides an intuitive way to observe tissue pathology. Our results underline the heterogeneity in composition and cellular organization within individual tumors. As an example, we show how prostate tumors have nuclear density gradients indicating directions of tumor growth and reflecting varying pressure from the surrounding tissue. The methods presented here are applicable to any tissue and different types of pathologies. This work provides a proof-of-principle for gaining a comprehensive view from histology by studying it quantitatively in 3D.
Affiliation(s)
- Pekka Ruusuvuori
- Institute of Biomedicine, University of Turku, Turku, Finland
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Masi Valkonen
- Institute of Biomedicine, University of Turku, Turku, Finland
- Kimmo Kartasalo
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Mira Valkonen
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Tapio Visakorpi
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Tays Cancer Center, Tampere University Hospital, Tampere, Finland
- Fimlab Laboratories Ltd, Tampere University Hospital, Tampere, Finland
- Matti Nykter
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Tays Cancer Center, Tampere University Hospital, Tampere, Finland
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
24
Liang H, Cheng Z, Zhong H, Qu A, Chen L. A region-based convolutional network for nuclei detection and segmentation in microscopy images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103276]
25
Wang NC, Kaplan J, Lee J, Hodgin J, Udager A, Rao A. Stress Testing Pathology Models with Generated Artifacts. J Pathol Inform 2021; 12:54. [PMID: 35070483] [PMCID: PMC8721870] [DOI: 10.4103/jpi.jpi_6_21]
Abstract
BACKGROUND: Machine learning models provide significant opportunities for improvement in health care, but their "black-box" nature poses many risks.
METHODS: We built a custom Python module as part of a framework for generating artifacts that are meant to be tunable and describable to allow for future testing needs. We analyzed a previously published digital pathology classification model and an internally developed kidney tissue segmentation model, testing the effects of a variety of generated artifacts. The artifacts simulated were bubbles, tissue folds, uneven illumination, marker lines, uneven sectioning, altered staining, and tissue tears.
RESULTS: We found some performance degradation on tiles with artifacts, particularly with altered stains but also with marker lines, tissue folds, and uneven sectioning. We also found that the response of deep learning models to artifacts could be nonlinear.
CONCLUSIONS: Generated artifacts can provide a useful tool for testing and building trust in machine learning models by revealing where these models might fail.
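The artifact classes listed above can be synthesized procedurally on image tiles. A minimal numpy-only sketch of one such class (a pen-marker line) is shown below; the `add_marker_line` helper is a hypothetical stand-in for illustration, not a function from the authors' module:

```python
import numpy as np

def add_marker_line(tile, start, end, width=3, color=(20, 30, 160)):
    """Draw a synthetic pen-marker line onto an RGB image tile.

    A crude stand-in for one artifact class used to stress-test pathology
    models; `start` and `end` are (row, col) endpoints of the line.
    """
    out = tile.copy()
    n = int(np.hypot(end[0] - start[0], end[1] - start[1])) + 1
    rows = np.linspace(start[0], end[0], n).round().astype(int)
    cols = np.linspace(start[1], end[1], n).round().astype(int)
    half = width // 2
    for r, c in zip(rows, cols):
        r0, r1 = max(r - half, 0), min(r + half + 1, out.shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, out.shape[1])
        out[r0:r1, c0:c1] = color  # overwrite tissue pixels with "marker ink"
    return out

tile = np.full((64, 64, 3), 230, dtype=np.uint8)  # mock bright H&E-like tile
stressed = add_marker_line(tile, (5, 5), (60, 60))
```

Because the artifact parameters (endpoints, width, color) are explicit, such generators are tunable and describable in the sense the abstract emphasizes, and model performance can be compared on clean versus stressed copies of the same tiles.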
Affiliation(s)
- Nicholas Chandler Wang
  - Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Jeremy Kaplan
  - Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Joonsang Lee
  - Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Jeffrey Hodgin
  - Department of Pathology, University of Michigan Medical School, Ann Arbor, MI, USA
- Aaron Udager
  - Department of Pathology, University of Michigan Medical School, Ann Arbor, MI, USA
- Arvind Rao
  - Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
26
Jose L, Liu S, Russo C, Nadort A, Ieva AD. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098 PMCID: PMC8609288 DOI: 10.4103/jpi.jpi_103_20] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 03/03/2021] [Accepted: 04/23/2021] [Indexed: 12/13/2022] Open
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems, such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on the subject of H&E stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology, with the objective of triggering new research on the application of generative models in digital pathology informatics.
Affiliation(s)
- Laya Jose
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
  - ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
  - Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort
  - ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
  - Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva
  - Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
27
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE Trans Med Imaging 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
28
How the variability between computer-assisted analysis procedures evaluating immune markers can influence patients' outcome prediction. Histochem Cell Biol 2021; 156:461-478. [PMID: 34383240 DOI: 10.1007/s00418-021-02022-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/02/2021] [Indexed: 10/20/2022]
Abstract
Differences between computer-assisted image analysis (CAI) algorithms may cause discrepancies in the identification of immunohistochemically stained immune biomarkers in biopsies of breast cancer patients. These discrepancies have implications for their association with disease outcome. This study aims to compare three CAI procedures (A, B, and C) for measuring positively stained marker areas in post-neoadjuvant chemotherapy biopsies of patients with triple-negative breast cancer (TNBC) and to explore the differences in their performance in determining the potential association with relapse in these patients. A total of 3304 digital images of biopsy tissue obtained from 118 TNBC patients were stained for seven immune markers using immunohistochemistry (CD4, CD8, FOXP3, CD21, CD1a, CD83, HLA-DR) and analyzed with procedures A, B, and C. The three methods measure positive marker pixels within the total tissue area. The extent of agreement between paired CAI procedures was assessed, together with a principal component analysis (PCA) and Cox multivariate analysis. Comparisons of paired procedures showed close agreement for most of the immune markers at low concentration. The probability of differences between the paired procedures B/C and B/A was generally higher than that observed for C/A. The principal component analysis, largely based on data from CD8, CD1a, and HLA-DR, identified two groups of patients with a significantly lower probability of relapse than the others. The multivariate regression models showed similarities in the factors associated with relapse for procedures A and C, as opposed to those obtained with procedure B. General agreement among the results of CAI procedures does not guarantee that the same predictive breast cancer markers are consistently identified. These results highlight the importance of developing additional strategies to improve the sensitivity of CAI procedures.
29
Zinchuk V, Grossenbacher-Zinchuk O. Machine Learning for Analysis of Microscopy Images: A Practical Guide. Curr Protoc Cell Biol 2020; 86:e101. [PMID: 31904918 DOI: 10.1002/cpcb.101] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
The explosive growth of machine learning has provided scientists with insights into data in ways unattainable using prior research techniques. It has allowed the detection of biological features that were previously unrecognized and overlooked. However, because machine-learning methodology originates from informatics, many cell biology labs have experienced difficulties in implementing this approach. In this article, we target the rapidly expanding audience of cell and molecular biologists interested in exploiting machine learning for analysis of their research. We discuss the advantages of employing machine learning with microscopy approaches and describe the machine-learning pipeline. We also give practical guidelines for building models of cell behavior using machine learning. We conclude with an overview of the tools required for model creation, and share advice on their use. © 2020 by John Wiley & Sons, Inc.
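The machine-learning pipeline the guide describes (features in, preprocessing, model fitting, evaluation) can be sketched in a few lines of scikit-learn. The data here is a synthetic stand-in for per-cell morphology features, and the feature names and threshold rule are assumptions made purely for illustration:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for per-cell measurements (area, intensity, shape, ...)
X = rng.normal(size=(200, 8))
# Mock two-class cell phenotype driven by the first two features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Scaling + classifier chained into one estimator, as in a typical pipeline
model = make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)  # held-out accuracy, the evaluation step
```

Swapping the classifier or adding feature-selection steps only changes the `make_pipeline` call, which is why pipeline objects are a convenient backbone for the model-building workflow the article recommends.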
Affiliation(s)
- Vadim Zinchuk
  - Department of Neurobiology and Anatomy, Kochi University Faculty of Medicine, Kochi, Japan
30
Kobayashi S, Saltz JH, Yang VW. State of machine and deep learning in histopathological applications in digestive diseases. World J Gastroenterol 2021; 27:2545-2575. [PMID: 34092975 PMCID: PMC8160628 DOI: 10.3748/wjg.v27.i20.2545] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/27/2021] [Accepted: 04/29/2021] [Indexed: 02/06/2023] Open
Abstract
Machine learning (ML)- and deep learning (DL)-based imaging modalities have exhibited the capacity to handle extremely high dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, this capacity is especially well leveraged by histopathological images, which capture cellular and structural features in high-resolution, microscopic detail. These methodologies have already demonstrated promising performance in a variety of applications, including disease classification, cancer grading, structure and cellular localization, and prognostic prediction. A wide range of pathologies requiring histopathological evaluation exist in gastroenterology and hepatology, marking these disciplines as prime targets for the integration of these technologies. Gastroenterologists have also already been primed to consider the impact of these algorithms, as the development of real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for the future integration of these methods and for driving interdisciplinary collaborations on emerging studies. To provide an overview of the application of these methodologies to gastrointestinal and hepatological histopathological slides, this review discusses general ML and DL concepts, introduces recent and emerging literature using these methods, and covers challenges to further advancing the field.
Affiliation(s)
- Soma Kobayashi
  - Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Joel H Saltz
  - Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Vincent W Yang
  - Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
  - Department of Physiology and Biophysics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
31
Im S, Hyeon J, Rha E, Lee J, Choi HJ, Jung Y, Kim TJ. Classification of Diffuse Glioma Subtype from Clinical-Grade Pathological Images Using Deep Transfer Learning. Sensors 2021; 21:3500. [PMID: 34067934 PMCID: PMC8156672 DOI: 10.3390/s21103500] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 05/06/2021] [Accepted: 05/14/2021] [Indexed: 11/16/2022]
Abstract
Diffuse gliomas are the most common primary brain tumors, and they vary considerably in their morphology, location, genetic alterations, and response to therapy. In 2016, the World Health Organization (WHO) provided new guidelines for making an integrated diagnosis of diffuse gliomas that incorporates both morphologic and molecular features. In this study, we demonstrate how deep learning approaches can be used for automatic classification of glioma subtypes and grading using whole-slide images obtained from routine clinical practice. A deep transfer learning method using the ResNet50V2 model was trained to classify subtypes and grades of diffuse gliomas according to the WHO’s new 2016 classification. The balanced accuracy of the diffuse glioma subtype classification model with majority voting was 0.8727. These results highlight an emerging role for deep learning in the future practice of pathologic diagnosis.
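The majority voting mentioned above — aggregating per-tile predictions into one slide-level call — can be sketched in pure Python. This is a generic illustration of the voting scheme, not the authors' code, and the subtype labels are example values only:

```python
from collections import Counter

def majority_vote(tile_predictions):
    """Aggregate per-tile subtype predictions into one slide-level label.

    Ties are broken by whichever label Counter.most_common lists first;
    a real system might instead fall back to average class probabilities.
    """
    counts = Counter(tile_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical per-tile CNN outputs for one whole-slide image
tiles = ["oligodendroglioma", "astrocytoma", "oligodendroglioma",
         "oligodendroglioma", "glioblastoma"]
slide_label = majority_vote(tiles)  # → "oligodendroglioma"
```

Voting makes the slide-level decision robust to individual misclassified tiles, which is why tile-then-vote pipelines are common for whole-slide image classification.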
Affiliation(s)
- Sanghyuk Im
  - Department of Neurosurgery, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Jonghwan Hyeon
  - School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Eunyoung Rha
  - Department of Plastic and Reconstructive Surgery, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Janghyeon Lee
  - School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Ho-Jin Choi
  - School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Yuchae Jung
  - School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Tae-Jung Kim
  - Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
  - Correspondence: Tel. +82-2-3779-2157
32
Homeyer A, Lotz J, Schwen LO, Weiss N, Romberg D, Höfener H, Zerbe N, Hufnagl P. Artificial Intelligence in Pathology: From Prototype to Product. J Pathol Inform 2021; 12:13. [PMID: 34012717 PMCID: PMC8112352 DOI: 10.4103/jpi.jpi_84_20] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 12/28/2020] [Accepted: 01/18/2021] [Indexed: 12/13/2022] Open
Abstract
Modern image analysis techniques based on artificial intelligence (AI) have great potential to improve the quality and efficiency of diagnostic procedures in pathology and to detect novel biomarkers. Despite thousands of published research papers on applications of AI in pathology, hardly any research implementations have matured into commercial products for routine use. Bringing an AI solution for pathology to market poses significant technological, business, and regulatory challenges. In this paper, we provide a comprehensive overview and advice on how to meet these challenges. We outline how research prototypes can be turned into a product-ready state and integrated into the IT infrastructure of clinical laboratories. We also discuss business models for profitable AI solutions and reimbursement options for computer assistance in pathology. Moreover, we explain how to obtain regulatory approval so that AI solutions can be launched as in vitro diagnostic medical devices. Thus, this paper offers computer scientists, software companies, and pathologists a road map for transforming prototypes of AI solutions into commercial products.
Affiliation(s)
- Norman Zerbe
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Pathology, Berlin, Germany
  - HTW University of Applied Sciences Berlin, Berlin, Germany
- Peter Hufnagl
  - Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Pathology, Berlin, Germany
  - HTW University of Applied Sciences Berlin, Berlin, Germany
33
Hoque MZ, Keskinarkaus A, Nyberg P, Seppänen T. Retinex model based stain normalization technique for whole slide image analysis. Comput Med Imaging Graph 2021; 90:101901. [PMID: 33862354 DOI: 10.1016/j.compmedimag.2021.101901] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 02/28/2021] [Accepted: 03/06/2021] [Indexed: 10/21/2022]
Abstract
Medical imaging provides the means for diagnosing many of the medical phenomena currently studied in clinical medicine and pathology. Variations in color and intensity in stained histological slides affect the quantitative analysis of histopathological images. Moreover, stain normalization that uses color to classify pixels into different stain components is challenging. Staining also suffers from variability, which complicates the automation of tissue area segmentation under different stains and the analysis of whole slide images. We have developed a Retinex model based stain normalization technique for tissue area segmentation from stained tissue images, quantifying the individual stain components of the histochemical stains for the ideal removal of variability. The performance was experimentally compared to reference methods on an organotypic carcinoma model based on myoma tissue, and our method consistently had the smallest standard deviation, skewness value, and coefficient of variation in normalized median intensity measurements. Our method also achieved better quality in terms of the Quaternion Structure Similarity Index Metric (QSSIM), Structural Similarity Index Metric (SSIM), and Pearson Correlation Coefficient (PCC), improving robustness against variability and reproducibility. The proposed method could potentially be used in the development of novel research and diagnostic tools, with the potential to improve the accuracy and consistency of computer aided diagnosis in biobank applications.
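The core Retinex idea underlying such normalization is separating slowly varying illumination from reflectance. A minimal sketch of classic single-scale Retinex (log image minus log of a Gaussian-smoothed illumination estimate) is shown below; it is the textbook decomposition, not the authors' exact model, and the sigma value is an arbitrary assumption:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(channel, sigma=30.0, eps=1e-6):
    """Classic single-scale Retinex: reflectance ~ log(I) - log(I * G_sigma).

    The Gaussian blur estimates the slowly varying illumination/stain
    intensity field; subtracting its log leaves the local reflectance
    detail, which is less sensitive to staining variability.
    """
    channel = channel.astype(np.float64) + eps
    illumination = gaussian_filter(channel, sigma=sigma) + eps
    return np.log(channel) - np.log(illumination)

# Mock stained channel with an uneven top-to-bottom illumination gradient
rows = np.linspace(0.2, 1.0, 128)
img = np.outer(rows, np.ones(128)) * 200.0
reflectance = single_scale_retinex(img)
```

Because the smooth gradient is absorbed into the illumination estimate, the reflectance output is far flatter than the input, which is the property stain-normalization pipelines exploit before segmentation.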
Affiliation(s)
- Md Ziaul Hoque
  - Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland
  - Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Anja Keskinarkaus
  - Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland
  - Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Pia Nyberg
  - Biobank Borealis of Northern Finland, Oulu University Hospital, Finland
  - Translational & Cancer Research Unit, Medical Research Center Oulu, Faculty of Medicine, University of Oulu, Finland
- Tapio Seppänen
  - Physiological Signal Analysis Group, Center for Machine Vision and Signal Analysis, University of Oulu, Finland
  - Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
34
Joint analysis of expression levels and histological images identifies genes associated with tissue morphology. Nat Commun 2021; 12:1609. [PMID: 33707455 PMCID: PMC7952575 DOI: 10.1038/s41467-021-21727-x] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2017] [Accepted: 02/05/2021] [Indexed: 01/01/2023] Open
Abstract
Histopathological images are used to characterize complex phenotypes such as tumor stage. Our goal is to associate features of stained tissue images with high-dimensional genomic markers. We use convolutional autoencoders and sparse canonical correlation analysis (CCA) on paired histological images and bulk gene expression to identify subsets of genes whose expression levels in a tissue sample correlate with subsets of morphological features from the corresponding sample image. We apply our approach, ImageCCA, to two TCGA data sets, and find gene sets associated with the structure of the extracellular matrix and cell wall infrastructure, implicating uncharacterized genes in extracellular processes. We find sets of genes associated with specific cell types, including neuronal cells and cells of the immune system. We apply ImageCCA to the GTEx v6 data, and find image features that capture population variation in thyroid and in colon tissues associated with genetic variants (image morphology QTLs, or imQTLs), suggesting that genetic variation regulates population variation in tissue morphological traits.
35
Zormpas-Petridis K, Noguera R, Ivankovic DK, Roxanis I, Jamin Y, Yuan Y. SuperHistopath: A Deep Learning Pipeline for Mapping Tumor Heterogeneity on Low-Resolution Whole-Slide Digital Histopathology Images. Front Oncol 2021; 10:586292. [PMID: 33552964 PMCID: PMC7855703 DOI: 10.3389/fonc.2020.586292] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2020] [Accepted: 11/30/2020] [Indexed: 12/27/2022] Open
Abstract
The high computational cost associated with digital pathology image analysis approaches is a challenge to their translation into the routine pathology clinic. Here, we propose a computationally efficient framework (SuperHistopath) designed to map global context features reflecting the rich morphological heterogeneity of tumors. SuperHistopath efficiently combines i) a segmentation approach using the simple linear iterative clustering (SLIC) superpixel algorithm, applied directly to whole-slide images at low resolution (5x magnification) to adhere to region boundaries and form homogeneous spatial units at the tissue level, followed by ii) classification of the superpixels using a convolutional neural network (CNN). To demonstrate how versatile SuperHistopath is in accomplishing histopathology tasks, we classified tumor tissue, stroma, necrosis, lymphocyte clusters, differentiating regions, fat, hemorrhage, and normal tissue in 127 melanomas, 23 triple-negative breast cancers, and 73 samples from transgenic mouse models of high-risk childhood neuroblastoma with high accuracy (98.8%, 93.1%, and 98.3%, respectively). Furthermore, SuperHistopath enabled the discovery of significant differences in tumor phenotype between neuroblastoma mouse models emulating genomic variants of high-risk disease, and the stratification of melanoma patients (a high ratio of lymphocyte-to-tumor superpixels (p = 0.015) and a low stroma-to-tumor ratio (p = 0.028) were associated with a favorable prognosis). Finally, SuperHistopath is efficient for annotation of ground-truth datasets (as there is no need for boundary delineation), training, and application (~5 min for classifying a whole-slide image and as low as ~30 min for network training). These attributes make SuperHistopath particularly attractive for research on rich datasets and could also facilitate its adoption in the clinic to accelerate pathologist workflow with the quantification of phenotypes and predictive/prognostic markers.
Affiliation(s)
- Rosa Noguera
  - Department of Pathology, Medical School, University of Valencia-INCLIVA Biomedical Health Research Institute, Valencia, Spain
  - Low Prevalence Tumors, Centro de Investigación Biomédica en Red de Cáncer (CIBERONC), Instituto de Salud Carlos III, Madrid, Spain
- Ioannis Roxanis
  - Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, United Kingdom
- Yann Jamin
  - Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, United Kingdom
- Yinyin Yuan
  - Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
36
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
37
Fu H, Mi W, Pan B, Guo Y, Li J, Xu R, Zheng J, Zou C, Zhang T, Liang Z, Zou J, Zou H. Automatic Pancreatic Ductal Adenocarcinoma Detection in Whole Slide Images Using Deep Convolutional Neural Networks. Front Oncol 2021; 11:665929. [PMID: 34249702 PMCID: PMC8267174 DOI: 10.3389/fonc.2021.665929] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 06/10/2021] [Indexed: 01/11/2023] Open
Abstract
Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest cancer types worldwide, with the lowest 5-year survival rate among all kinds of cancers. Histopathology image analysis is considered a gold standard for PDAC detection and diagnosis. However, the manual diagnosis used in current clinical practice is a tedious and time-consuming task and diagnosis concordance can be low. With the development of digital imaging and machine learning, several scholars have proposed PDAC analysis approaches based on feature extraction methods that rely on field knowledge. However, feature-based classification methods are applicable only to a specific problem and lack versatility, so that the deep-learning method is becoming a vital alternative to feature extraction. This paper proposes the first deep convolutional neural network architecture for classifying and segmenting pancreatic histopathological images on a relatively large WSI dataset. Our automatic patch-level approach achieved 95.3% classification accuracy and the WSI-level approach achieved 100%. Additionally, we visualized the classification and segmentation outcomes of histopathological images to determine which areas of an image are more important for PDAC identification. Experimental results demonstrate that our proposed model can effectively diagnose PDAC using histopathological images, which illustrates the potential of this practical application.
Affiliation(s)
- Hao Fu
  - Department of Automation, School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Weiming Mi
  - Department of Automation, School of Information Science and Technology, Tsinghua University, Beijing, China
- Boju Pan
  - Molecular Pathology Research Center, Department of Pathology, Peking Union Medical College Hospital (PUMCH), Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Yucheng Guo
  - Yihai Center, Tsimage Medical Technology, Shenzhen, China
  - Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
- Junjie Li
  - Molecular Pathology Research Center, Department of Pathology, Peking Union Medical College Hospital (PUMCH), Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Rongyan Xu
  - Shanghai Chenshan Plant Science Research Center, Chinese Academy of Sciences, Shanghai, China
- Jie Zheng
  - Yihai Center, Tsimage Medical Technology, Shenzhen, China
  - Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
- Chunli Zou
  - Yihai Center, Tsimage Medical Technology, Shenzhen, China
  - Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
- Tao Zhang
  - Department of Automation, School of Information Science and Technology, Tsinghua University, Beijing, China
- Zhiyong Liang
  - Molecular Pathology Research Center, Department of Pathology, Peking Union Medical College Hospital (PUMCH), Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
  - Correspondence: Zhiyong Liang; Hao Zou; Junzhong Zou
- Junzhong Zou
  - Department of Automation, School of Information Science and Engineering, East China University of Science and Technology, Shanghai, China
- Hao Zou
  - Yihai Center, Tsimage Medical Technology, Shenzhen, China
  - Center for Intelligent Medical Imaging & Health, Research Institute of Tsinghua University in Shenzhen, Shenzhen, China
38
Characterizing Immune Responses in Whole Slide Images of Cancer With Digital Pathology and Pathomics. Curr Pathobiol Rep 2020. [DOI: 10.1007/s40139-020-00217-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Purpose of Review
Our goal is to show how readily available Pathomics tissue analytics can be used to study tumor immune interactions in cancer. We provide a brief overview of how Pathomics complements traditional histopathologic examination of cancer tissue samples. We highlight a novel Pathomics application, Tumor-TILs, that quantitatively measures and generates maps of tumor infiltrating lymphocytes in breast, pancreatic, and lung cancer by leveraging deep learning computer vision applications to perform automated analyses of whole slide images.
Recent Findings
Tumor-TIL maps have been generated to analyze WSIs from thousands of cases of breast, pancreatic, and lung cancer. We report the availability of these tools in an effort to promote collaborative research and motivate future development of ensemble Pathomics applications to discover novel biomarkers and perform a wide range of correlative clinicopathologic research in cancer immunopathology and beyond.
Summary
Tumor immune interactions in cancer are a fascinating aspect of cancer pathobiology with particular significance due to the emergence of immunotherapy. We present simple yet specialized Pathomics methods that serve as powerful clinical research tools and potential standalone clinical screening tests to predict clinical outcomes and treatment responses for precision medicine applications in immunotherapy.
39
Xu H, Park S, Hwang TH. Computerized Classification of Prostate Cancer Gleason Scores from Whole Slide Images. IEEE/ACM Trans Comput Biol Bioinform 2020; 17:1871-1882. [PMID: 31536012 DOI: 10.1109/tcbb.2019.2941195] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Indexed: 06/10/2023]
Abstract
Histological Gleason grading of tumor patterns is one of the most powerful prognostic predictors in prostate cancer. However, manual analysis and grading performed by pathologists are typically subjective and time-consuming. In this paper, we present an automatic technique for Gleason grading of prostate cancer from H&E stained whole slide pathology images using a set of novel completed and statistical local binary pattern (CSLBP) descriptors. First, the technique divides the whole slide image (WSI) into a set of small image tiles, from which salient tumor tiles with high nuclei densities are selected for analysis. The CSLBP texture features, which encode pixel intensity variations from circularly surrounding neighborhoods, are extracted from salient image tiles to characterize different Gleason patterns. Finally, the CSLBP texture features computed from all tiles are integrated and used by a multi-class support vector machine (SVM) that assigns patient slides Gleason scores such as 6, 7, or ≥ 8. Experiments performed on 312 patient cases selected from The Cancer Genome Atlas (TCGA) achieved superior performance over state-of-the-art texture descriptors and baseline methods, including deep learning models, for prostate cancer Gleason grading.
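As a rough, self-contained illustration of the texture-descriptor family that CSLBP belongs to (not the paper's CSLBP variant itself; the 3×3 patch below is hypothetical), a classic 8-neighbor local binary pattern code can be computed as:

```python
# Minimal sketch of a classic 8-neighbor local binary pattern (LBP) code
# for the center pixel of a 3x3 grayscale patch. This illustrates the
# general texture-descriptor family; the CSLBP descriptors in the cited
# paper add completed/statistical components not shown here.

def lbp_code(patch):
    """patch: 3x3 list of lists of grayscale intensities."""
    center = patch[1][1]
    # Clockwise neighbor order starting at the top-left pixel.
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:        # threshold each neighbor against the center
            code |= 1 << bit
    return code

patch = [[10, 20, 30],
         [40, 25, 50],
         [60, 70, 80]]
print(lbp_code(patch))  # -> 252 (six of eight neighbors >= center)
```

A histogram of such per-pixel codes over a tile then serves as that tile's texture feature vector.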
40
Patil SM, Tong L, Wang MD. Generating Region of Interests for Invasive Breast Cancer in Histopathological Whole-Slide-Image. Proc COMPSAC 2020; 2020:723-728. [PMID: 33029594 DOI: 10.1109/compsac48688.2020.0-174] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Indexed: 11/09/2022]
Abstract
The detection of regions of interest (ROIs) on whole slide images (WSIs) is one of the primary steps in computer-aided cancer diagnosis and grading. Early and accurate identification of invasive cancer regions in WSIs is critical to improving breast cancer diagnosis and, in turn, patient survival rates. However, invasive cancer ROI segmentation is a challenging task on WSIs because of the low contrast of invasive cancer cells and their high similarity in appearance to non-invasive regions. In this paper, we propose a CNN-based architecture for generating ROIs through segmentation. The network tackles the constraints of data-driven learning and of working with very low-resolution WSI data in the detection of invasive breast cancer. Our proposed approach is based on transfer learning and the use of dilated convolutions. We propose a heavily modified U-Net-based auto-encoder, which takes as input an entire WSI at a resolution of 320×320. The network was trained on low-resolution WSIs from four different data cohorts and has been tested for inter- as well as intra-dataset variance. The proposed architecture shows significant improvements in accuracy for the detection of invasive breast cancer regions.
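The dilated convolutions mentioned above enlarge a filter's receptive field without adding weights by sampling the input at a stride set by the dilation rate. A minimal 1-D sketch (the signal and kernel are hypothetical; real networks use 2-D convolutions):

```python
# Sketch of a 1-D dilated convolution, the mechanism behind the dilated
# 2-D convolutions mentioned above: a dilation rate d samples the input
# every d positions, enlarging the receptive field with no extra weights.

def dilated_conv1d(signal, kernel, dilation):
    span = (len(kernel) - 1) * dilation + 1   # effective receptive field
    out = []
    for start in range(len(signal) - span + 1):
        acc = 0
        for k, w in enumerate(kernel):
            acc += w * signal[start + k * dilation]
        out.append(acc)
    return out

signal = [1, 2, 3, 4, 5, 6]
print(dilated_conv1d(signal, [1, 1, 1], 1))  # -> [6, 9, 12, 15]  (ordinary 3-tap sum)
print(dilated_conv1d(signal, [1, 1, 1], 2))  # -> [9, 12]  (same kernel, wider field)
```

With dilation 2, the same 3-tap kernel covers a span of 5 input samples, which is why stacked dilated layers can see large contexts cheaply.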
Affiliation(s)
- Li Tong: Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332
- May D Wang: Dept. of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332
41
Li S, Shi H, Sui D, Hao A, Qin H. A Novel Pathological Images and Genomic Data Fusion Framework for Breast Cancer Survival Prediction. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1384-1387. [PMID: 33018247 DOI: 10.1109/embc44109.2020.9176360] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Indexed: 12/24/2022]
Abstract
Survival analysis is a valid solution for cancer treatment and outcome evaluation. Due to the wide application of medical imaging and genome technology, computer-aided survival analysis has become a popular and promising area, from which relatively satisfactory results can be obtained. Although there are already some impressive technologies in this field, most of them make recommendations using single-source medical data and have not combined multi-level, multi-source data efficiently. In this paper, we propose a novel pathological image and gene expression data fusion framework to perform survival prediction. Different from previous methods, our framework extracts correlated multi-scale deep features from whole slide images (WSIs) and dimensionality-reduced gene expression data, respectively, for joint survival analysis. The experimental results demonstrate that the integrated multi-level image and genome features achieve higher prediction accuracy compared with single-source features.
42
Zhang S, Fan Y, Zhong T, Ma S. Histopathological imaging features- versus molecular measurements-based cancer prognosis modeling. Sci Rep 2020; 10:15030. [PMID: 32929170 PMCID: PMC7490375 DOI: 10.1038/s41598-020-72201-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 06/01/2020] [Accepted: 08/27/2020] [Indexed: 02/07/2023] Open
Abstract
For lung and many other cancers, prognosis is essentially important, and extensive modeling has been carried out. Cancer is a genetic disease. In the past two decades, diverse molecular data (such as gene expressions and DNA mutations) have been analyzed in prognosis modeling. More recently, histopathological imaging data, a "byproduct" of biopsy, has been suggested as informative for prognosis. In this article, with the TCGA LUAD and LUSC data, we examine and directly compare modeling lung cancer overall survival using gene expressions versus histopathological imaging features. High-dimensional penalization methods are adopted for estimation and variable selection. Our findings include that gene expressions have slightly better prognostic performance, and that most of the gene expressions are weakly correlated with imaging features. This study may provide additional insight into utilizing the two types of important data in cancer prognosis modeling and into lung cancer overall survival.
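The high-dimensional penalization methods referred to above are typified by the lasso. A minimal coordinate-descent sketch (assuming features are standardized so each column has unit mean squared norm; the toy data are hypothetical):

```python
# Minimal lasso sketch via cyclic coordinate descent, illustrating the
# kind of penalized estimation and variable selection mentioned above.
# Assumes each column of X is standardized so (1/n) * sum(x_ij^2) == 1;
# the design and responses below are hypothetical toy data.

def soft_threshold(z, lam):
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso(X, y, lam, iters=100):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # correlation of feature j with the partial residual (excluding j)
            rho = 0.0
            for i in range(n):
                pred = sum(X[i][k] * beta[k] for k in range(p) if k != j)
                rho += X[i][j] * (y[i] - pred)
            beta[j] = soft_threshold(rho / n, lam)
    return beta

# Two orthogonal standardized columns: strong signal on the first,
# weak signal on the second.
X = [[1.0, 1.0],
     [1.0, -1.0],
     [-1.0, 1.0],
     [-1.0, -1.0]]
y = [2.1, 1.9, -1.9, -2.1]
print(lasso(X, y, lam=0.5))  # weak coefficient is shrunk to exactly 0
```

The soft-thresholding step is what performs variable selection: coefficients whose signal falls below the penalty are set exactly to zero.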
Affiliation(s)
- Sanguo Zhang: School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China
- Yu Fan: School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China; Department of Biostatistics, Yale School of Public Health, New Haven, CT, 06520, USA
- Tingyan Zhong: SJTU-Yale Joint Center for Biostatistics, Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; Department of Biostatistics, Yale School of Public Health, New Haven, CT, 06520, USA
- Shuangge Ma: Department of Biostatistics, Yale School of Public Health, New Haven, CT, 06520, USA
43
Fassler DJ, Abousamra S, Gupta R, Chen C, Zhao M, Paredes D, Batool SA, Knudsen BS, Escobar-Hoyos L, Shroyer KR, Samaras D, Kurc T, Saltz J. Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images. Diagn Pathol 2020; 15:100. [PMID: 32723384 PMCID: PMC7385962 DOI: 10.1186/s13000-020-01003-0] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Received: 11/16/2019] [Accepted: 07/12/2020] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of the unique colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible method to evaluate such slides is to apply digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available methods to evaluate more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case application for our methods is a study that investigates tumor immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel. METHODS Six different colored chromogens were utilized to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE, a deep autoencoder that segments stained objects based on color; (2) U-Net, a convolutional neural network (CNN) trained to segment cells based on color, texture, and shape; and (3) ensemble methods that employ both ColorAE and U-Net, collectively referred to as ColorAE:U-Net. We assessed the performance of our methods using: structural similarity and DICE score to evaluate the segmentation results of ColorAE against traditional color deconvolution; and F1 score, sensitivity, positive predictive value, and DICE score to evaluate the predictions from ColorAE, U-Net, and ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used the prediction results for spatial analysis (nearest neighbor).
RESULTS We observed that (1) the performance of ColorAE is comparable to traditional color deconvolution for single-stain IHC images (note: traditional color deconvolution cannot be used for mIHC); (2) ColorAE and U-Net are complementary methods that detect six different classes of cells with comparable performance; (3) combining ColorAE and U-Net into ensemble methods outperforms either ColorAE or U-Net alone; and (4) ColorAE:U-Net ensemble methods can be employed for detailed analysis of the tumor microenvironment (TME). We developed a suite of scalable deep learning methods to analyze six distinctly labeled cell populations in mIHC WSIs. We evaluated our methods and found that they reliably detected and classified cells in the PDAC tumor microenvironment. We also present a use case, wherein we apply the ColorAE:U-Net ensemble method across three mIHC WSIs and use the predictions to quantify all stained cell populations and perform nearest neighbor spatial analysis. Thus, we provide proof of concept that these methods can be employed to quantitatively describe the spatial distribution of immune cells within the tumor microenvironment. These complementary deep learning methods are readily deployable for use in clinical research studies.
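The DICE score used above compares two binary segmentation masks as twice their overlap divided by their combined size. A small sketch on hypothetical masks:

```python
# Dice similarity coefficient between two binary segmentation masks, the
# metric used above to score predictions against pathologist ground truth.
# Masks here are flat lists of 0/1 values (hypothetical toy data).

def dice(pred, truth):
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0            # two empty masks agree perfectly
    return 2.0 * intersection / total

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
print(dice(pred, truth))      # 2*2 / (3+3) -> 0.666...
```

A score of 1.0 means the masks coincide exactly; 0.0 means no overlap at all.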
Affiliation(s)
- Danielle J Fassler: Department of Pathology, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Shahira Abousamra: Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- Rajarsi Gupta: Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Chao Chen: Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Maozheng Zhao: Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- David Paredes: Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- Syeda Areeha Batool: Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Beatrice S Knudsen: Department of Pathology, University of Utah, 2000 Circle of Hope, Salt Lake City, UT, 84112, USA
- Luisa Escobar-Hoyos: Department of Pathology, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA; Department of Therapeutic Radiology, Yale University, 15 York Street, New Haven, CT, 06513, USA
- Kenneth R Shroyer: Department of Pathology, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Dimitris Samaras: Department of Computer Science, Stony Brook University, 100 Nicolls Rd, Stony Brook, 11794, USA
- Tahsin Kurc: Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
- Joel Saltz: Department of Biomedical Informatics, Stony Brook University Renaissance School of Medicine, 101 Nicolls Rd, Stony Brook, 11794, USA
44
Comparative Analysis of Rhino-Cytological Specimens with Image Analysis and Deep Learning Techniques. Electronics 2020. [DOI: 10.3390/electronics9060952] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Indexed: 11/16/2022]
Abstract
Cytological study of the nasal mucosa (also known as rhino-cytology) represents an important diagnostic aid that allows highlighting of the presence of some types of rhinitis through the analysis of cellular features visible under a microscope. Nowadays, the automated detection and classification of cells benefit from the capacity of deep learning techniques to process digital images of the cytological preparation. Even though the results of such automatic systems need to be validated by a specialized rhino-cytologist, this technology represents valid support that aims at increasing the accuracy of the analysis while reducing the required time and effort. The quality of the rhino-cytological preparation, which is clearly important for the microscope observation phase, is also fundamental for the automatic classification process. In fact, the slide-preparing technique turns out to be a crucial factor among the multiple ones that may modify the morphological and chromatic characteristics of the cells. This paper aims to investigate the possible differences between the direct smear (SM) and cytological centrifugation (CYT) slide-preparation techniques, in order to preserve image quality during the observation and cell classification phases in rhino-cytology. Firstly, a comparative study based on image analysis techniques has been carried out. The extraction of densitometric and morphometric features has made it possible to quantify and describe the spatial distribution of the cells in the field images observed under the microscope. Statistical analysis of the distribution of these features has been used to evaluate the degree of similarity between images acquired from SM and CYT slides. The results prove an important difference in the observation process of the cells prepared with the above-mentioned techniques, with reference to cell density and spatial distribution: the analysis of CYT slides has been more difficult than that of the SM ones due to the spatial distribution of the cells, which results in a lower cell density than on the SM slides. As a marginal part of this study, a performance assessment of the computer-aided diagnosis (CAD) system called Rhino-cyt has also been carried out on both groups of slide image types.
45
Zhong QZ, Long LH, Liu A, Li CM, Xiu X, Hou XY, Wu QH, Gao H, Xu YG, Zhao T, Wang D, Lin HL, Sha XY, Wang WH, Chen M, Li GF. Radiomics of Multiparametric MRI to Predict Biochemical Recurrence of Localized Prostate Cancer After Radiation Therapy. Front Oncol 2020; 10:731. [PMID: 32477949 PMCID: PMC7235325 DOI: 10.3389/fonc.2020.00731] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Received: 02/18/2020] [Accepted: 04/16/2020] [Indexed: 12/12/2022] Open
Abstract
Background: To identify multiparametric magnetic resonance imaging (mp-MRI)-based radiomics features as prognostic factors in patients with localized prostate cancer after radiotherapy. Methods: From 2011 to 2016, a total of 91 consecutive patients with T1-4N0M0 prostate cancer were identified and divided into two cohorts for an adaptive boosting (AdaBoost) model (training cohort: n = 73; test cohort: n = 18). All patients were treated with neoadjuvant endocrine therapy followed by radiotherapy. The optimal feature set, identified through an Inception-ResNet v2 network, consisted of a combination of T1, T2, and diffusion-weighted imaging (DWI) MR series. Through a Wilcoxon signed-rank test, a total of 45 distinct signatures were extracted from 1,536 radiomics features and used in our AdaBoost model. Results: Among 91 patients, 29 (32%) were classified as biochemical recurrence (BCR) and 62 (68%) as non-BCR. Once trained, the model demonstrated a predictive classification accuracy of 50.0% and 86.1%, respectively, for the BCR and non-BCR groups on our test samples. The overall classification accuracy of the test cohort was 74.1%. The highest classification accuracy was 77.8% across three-fold cross-validation. The areas under the receiver operating characteristic (ROC) curve (AUC) for the training and test cohorts were 0.99 and 0.73, respectively. Conclusion: This work demonstrated the potential of multiparametric MRI-based radiomics to predict BCR in localized prostate cancer patients. This analysis provided additional prognostic factors based on routine MR images and holds the potential to contribute to precision medicine and inform treatment management.
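The reported AUC values can be read as the probability that a randomly chosen recurrence case scores higher than a randomly chosen non-recurrence case. A minimal pairwise sketch (the scores and labels below are hypothetical):

```python
# AUC of the ROC curve computed from its probabilistic interpretation:
# the fraction of positive/negative pairs in which the positive case
# receives the higher score (ties count half). Data are hypothetical.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.4, 0.5, 0.6, 0.2]
labels = [1,   1,   0,   1,   0]
print(roc_auc(scores, labels))   # 5 of 6 pairs ranked correctly -> 0.8333...
```

An AUC of 0.5 corresponds to random ranking, and 1.0 to perfect separation of the two groups.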
Affiliation(s)
- Qiu-Zi Zhong: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Liu-Hua Long: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education / Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- An Liu: Department of Radiation Oncology, City of Hope Medical Center, Duarte, CA, United States
- Chun-Mei Li: Department of Radiology, Beijing Hospital, National Center of Gerontology, Beijing, China
- Xia Xiu: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Xiu-Yu Hou: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Qin-Hong Wu: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Hong Gao: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Yong-Gang Xu: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Ting Zhao: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Dan Wang: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Hai-Lei Lin: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Xiang-Yan Sha: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
- Wei-Hu Wang: Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education / Beijing), Department of Radiation Oncology, Peking University Cancer Hospital and Institute, Beijing, China
- Min Chen: Department of Radiology, Beijing Hospital, National Center of Gerontology, Beijing, China
- Gao-Feng Li: Department of Radiation Oncology, National Center of Gerontology, Beijing Hospital, Beijing, China
46
Yu KH, Wang F, Berry GJ, Ré C, Altman RB, Snyder M, Kohane IS. Classifying non-small cell lung cancer types and transcriptomic subtypes using convolutional neural networks. J Am Med Inform Assoc 2020; 27:757-769. [PMID: 32364237 PMCID: PMC7309263 DOI: 10.1093/jamia/ocz230] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Received: 08/27/2019] [Revised: 11/22/2019] [Accepted: 03/05/2020] [Indexed: 12/26/2022] Open
Abstract
OBJECTIVE Non-small cell lung cancer is a leading cause of cancer death worldwide, and histopathological evaluation plays the primary role in its diagnosis. However, the morphological patterns associated with the molecular subtypes have not been systematically studied. To bridge this gap, we developed a quantitative histopathology analytic framework to identify the types and gene expression subtypes of non-small cell lung cancer objectively. MATERIALS AND METHODS We processed whole-slide histopathology images of lung adenocarcinoma (n = 427) and lung squamous cell carcinoma patients (n = 457) in the Cancer Genome Atlas. We built convolutional neural networks to classify histopathology images, evaluated their performance by the areas under the receiver-operating characteristic curves (AUCs), and validated the results in an independent cohort (n = 125). RESULTS To establish neural networks for quantitative image analyses, we first built convolutional neural network models to identify tumor regions from adjacent dense benign tissues (AUCs > 0.935) and recapitulated expert pathologists' diagnosis (AUCs > 0.877), with the results validated in an independent cohort (AUCs = 0.726-0.864). We further demonstrated that quantitative histopathology morphology features identified the major transcriptomic subtypes of both adenocarcinoma and squamous cell carcinoma (P < .01). DISCUSSION Our study is the first to classify the transcriptomic subtypes of non-small cell lung cancer using fully automated machine learning methods. Our approach does not rely on prior pathology knowledge and can discover novel clinically relevant histopathology patterns objectively. The developed procedure is generalizable to other tumor types or diseases.
Affiliation(s)
- Kun-Hsing Yu: Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, USA
- Feiran Wang: Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Gerald J Berry: Department of Pathology, Stanford University, Stanford, California, USA
- Christopher Ré: Department of Computer Science, Stanford University, Stanford, California, USA
- Russ B Altman: Biomedical Informatics Program, Stanford University, Stanford, California, USA; Department of Bioengineering, Stanford University, Stanford, California, USA; Department of Genetics, Stanford University, Stanford, California, USA
- Michael Snyder: Department of Genetics, Stanford University, Stanford, California, USA
- Isaac S Kohane: Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, USA
47
Kurc T, Bakas S, Ren X, Bagari A, Momeni A, Huang Y, Zhang L, Kumar A, Thibault M, Qi Q, Wang Q, Kori A, Gevaert O, Zhang Y, Shen D, Khened M, Ding X, Krishnamurthi G, Kalpathy-Cramer J, Davis J, Zhao T, Gupta R, Saltz J, Farahani K. Segmentation and Classification in Digital Pathology for Glioma Research: Challenges and Deep Learning Approaches. Front Neurosci 2020; 14:27. [PMID: 32153349 PMCID: PMC7046596 DOI: 10.3389/fnins.2020.00027] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Received: 08/29/2019] [Accepted: 01/10/2020] [Indexed: 12/12/2022] Open
Abstract
Biomedical imaging is an important source of information in cancer research. Characterizations of cancer morphology at onset, progression, and in response to treatment provide complementary information to that gleaned from genomics and clinical data. Accurate extraction and classification of both visual and latent image features is an increasingly complex challenge due to the increased complexity and resolution of biomedical image data. In this paper, we present four deep learning-based image analysis methods from the Computational Precision Medicine (CPM) satellite event of the 21st International Medical Image Computing and Computer Assisted Intervention (MICCAI 2018) conference. One method is a segmentation method designed to segment nuclei in whole slide tissue images (WSIs) of adult diffuse glioma cases. It achieved a Dice similarity coefficient of 0.868 with the CPM challenge datasets. Three methods are classification methods developed to categorize adult diffuse glioma cases into oligodendroglioma and astrocytoma classes using radiographic and histologic image data. These methods achieved accuracy values of 0.75, 0.80, and 0.90, measured as the ratio of the number of correct classifications to the number of total cases, with the challenge datasets. The evaluations of the four methods indicate that (1) carefully constructed deep learning algorithms are able to produce high accuracy in the analysis of biomedical image data and (2) the combination of radiographic with histologic image information improves classification performance.
Affiliation(s)
- Tahsin Kurc: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Spyridon Bakas: Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Xuhua Ren: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Aditya Bagari: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Alexandre Momeni: Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Yue Huang: School of Informatics, Xiamen University, Xiamen, China
- Lichi Zhang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ashish Kumar: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Marc Thibault: Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Qi Qi: School of Informatics, Xiamen University, Xiamen, China
- Qian Wang: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Avinash Kori: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Olivier Gevaert: Department of Medicine and Biomedical Data Science, Stanford University, Stanford, CA, United States
- Yunlong Zhang: School of Informatics, Xiamen University, Xiamen, China
- Dinggang Shen: Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Mahendra Khened: Department of Engineering Design, Indian Institute of Technology Madras, Chennai, India
- Xinghao Ding: School of Informatics, Xiamen University, Xiamen, China
- Jayashree Kalpathy-Cramer: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- James Davis: Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Tianhao Zhao: Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Rajarsi Gupta: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States; Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Joel Saltz: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Keyvan Farahani: Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, United States
48
Barsoum I, Tawedrous E, Faragalla H, Yousef GM. Histo-genomics: digital pathology at the forefront of precision medicine. Diagnosis (Berl) 2020; 6:203-212. [PMID: 30827078 DOI: 10.1515/dx-2018-0064] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Received: 07/25/2018] [Accepted: 09/28/2018] [Indexed: 12/26/2022]
Abstract
The toughest challenge OMICs face is that they provide extremely high molecular resolution but poor spatial information. Understanding the cellular/histological context of the overwhelming genetic data is critical for a full understanding of the clinical behavior of a malignant tumor. Digital pathology can add an extra layer of information to help visualize, in a spatial and microenvironmental context, the molecular information of cancer. Thus, histo-genomics provides a unique chance for data integration. In the era of precision medicine, a four-dimensional (4D) (temporal/spatial) analysis of cancer aided by digital pathology can be a critical step toward understanding the evolution/progression of different cancers and consequently tailoring individual treatment plans. For instance, the integration of molecular biomarker expression into a three-dimensional (3D) image of a digitally scanned tumor can offer a better understanding of its subtype, behavior, host immune response, and prognosis. Using advanced digital image analysis, a larger spectrum of parameters can be analyzed as potential predictors of clinical behavior. Correlation between morphological features and host immune response can also be performed, with therapeutic implications. Radio-histomics, or the interface of radiological images and histology, is another exciting emerging field, which encompasses the integration of radiological imaging with digital pathological images, genomics, and clinical data to portray a more holistic approach to understanding and treating disease. These advances in digital slide scanning are not without technical challenges, which will be addressed carefully in this review, with a quick peek at its future.
Affiliation(s)
- Ivraym Barsoum
- Department of Pathology and Molecular Medicine, Faculty of Health Sciences, Queen's University, Kingston, Ontario, Canada
- Eriny Tawedrous
- Department of Laboratory Medicine, and the Keenan Research Centre for Biomedical Science at the Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada
- Hala Faragalla
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada
- George M Yousef
- Department of Laboratory Medicine, and the Keenan Research Centre for Biomedical Science at the Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada; Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, 555 University Avenue, Toronto, ON M5G 1X8, Canada
|
49
|
Barreiros W, Moreira J, Kurc T, Kong J, Melo AC, Saltz JH, Teodoro G. Optimizing parameter sensitivity analysis of large-scale microscopy image analysis workflows with multilevel computation reuse. CONCURRENCY AND COMPUTATION : PRACTICE & EXPERIENCE 2020; 32:e5403. [PMID: 32669980 PMCID: PMC7363336 DOI: 10.1002/cpe.5403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 05/18/2019] [Indexed: 06/11/2023]
Abstract
Parameter sensitivity analysis (SA) is an effective tool to gain knowledge about complex analysis applications and assess the variability in their analysis results. However, it is an expensive process, as it requires executing the target application multiple times with a large number of different input parameter values. In this work, we propose optimizations to reduce the overall computation cost of SA in the context of analysis applications that segment high-resolution slide tissue images, i.e., images with resolutions of 100k × 100k pixels. Two cost-cutting techniques are combined to efficiently execute SA: the use of distributed hybrid systems for parallel execution, and computation reuse at multiple levels of an analysis pipeline to reduce the amount of computation. These techniques were evaluated using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. Our parallel execution method attained an efficiency of over 90% on 256 nodes. The hybrid execution on the CPU and Intel Phi improved the performance by 2×. Multilevel computation reuse led to performance gains of over 2.9×.
Affiliation(s)
- Willian Barreiros
- Department of Computer Science, University of Brasília, Brasília, Brazil
- Jeremias Moreira
- Department of Computer Science, University of Brasília, Brasília, Brazil
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Scientific Data Group, Oak Ridge National Laboratory, Oak Ridge, Tennessee
- Jun Kong
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia
- Department of Computer Science, Emory University, Atlanta, Georgia
- Department of Mathematics and Statistics, Georgia State University, Atlanta, Georgia
- Alba C.M.A. Melo
- Department of Computer Science, University of Brasília, Brasília, Brazil
- Joel H. Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- George Teodoro
- Department of Computer Science, University of Brasília, Brasília, Brazil
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, New York
- Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte, Brazil
|
50
|
Hannig J, Schäfer H, Ackermann J, Hebel M, Schäfer T, Döring C, Hartmann S, Hansmann ML, Koch I. Bioinformatics analysis of whole slide images reveals significant neighborhood preferences of tumor cells in Hodgkin lymphoma. PLoS Comput Biol 2020; 16:e1007516. [PMID: 31961873 PMCID: PMC6999891 DOI: 10.1371/journal.pcbi.1007516] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Revised: 02/04/2020] [Accepted: 10/29/2019] [Indexed: 11/25/2022] Open
Abstract
In pathology, tissue images are evaluated using a light microscope, relying on the expertise and experience of pathologists. There is a great need for computational methods to quantify and standardize histological observations, and computational quantification methods are becoming essential for evaluating tissue images. In particular, the distribution of tumor cells and their microenvironment are of special interest. Here, we systematically investigated tumor cell properties and their spatial neighborhood relations by a new application of statistical analysis to whole slide images of Hodgkin lymphoma, a tumor arising in lymph nodes, and of inflamed lymph nodes (lymphadenitis). We considered properties of more than 400,000 immunohistochemically stained, CD30-positive cells in 35 whole slide images of tissue sections from subtypes of classical Hodgkin lymphoma, nodular sclerosis and mixed cellularity, as well as from lymphadenitis. We found that cells of specific morphology exhibited significantly favored and disfavored spatial neighborhood relations with cells of other morphologies. This information is important for evaluating differences between lymph nodes infiltrated by tumor cells (Hodgkin lymphoma) and inflamed lymph nodes, with respect to the neighborhood relations and sizes of cells. The quantification of neighborhood relations revealed new insights into the relations of CD30-positive cells across different diagnostic cases. The approach is general and can easily be applied to whole slide image analysis of other tumor types. In pathology, histological diagnosis is still challenging, in particular for tumor diseases. Pathologists diagnose the disease and its stage of development on the basis of evaluation and interpretation of images of tissue sections. The quantification of experimental data to support diagnostic and prognostic decisions, applying bioinformatics methods, is therefore an important issue.
Here, we introduce a new, general approach to analyze tissue images of tumor and non-tumor patients and to evaluate the distribution of tumor cells in the tissue. Moreover, we consider neighborhood relations between immunostained cells of different cell morphology. We focus on a special type of lymph node tumor, the Hodgkin lymphoma, exploring the two main subtypes of classical Hodgkin lymphoma, nodular sclerosis and mixed cellularity, and the non-tumor case, lymphadenitis, an inflammation of the lymph node. We considered more than 400,000 cells immunohistochemically stained with CD30 in 35 whole slide images of tissue sections. We found that cells of specific morphology exhibited significant relations to cells of certain morphology as spatial nearest neighbors. We could show different neighborhood patterns of CD30-positive cells between tumor and non-tumor cases. The approach is general and can easily be applied to other tumor types.
Affiliation(s)
- Jennifer Hannig
- KITE - Kompetenzzentrum für Informationstechnologie, Technische Hochschule Mittelhessen, Friedberg, Germany
- Hendrik Schäfer
- Molecular Bioinformatics, Institute of Computer Science, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Jörg Ackermann
- Molecular Bioinformatics, Institute of Computer Science, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Marie Hebel
- Institute of Biochemistry II, Johann Wolfgang Goethe-University, University Hospital Frankfurt am Main, Frankfurt am Main, Germany
- Tim Schäfer
- Department of Child and Adolescent Psychiatry, University Hospital Frankfurt am Main, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Claudia Döring
- Dr. Senckenberg Institute of Pathology, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Sylvia Hartmann
- Dr. Senckenberg Institute of Pathology, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Martin-Leo Hansmann
- Consultation and reference center for lymph node pathology at Dr. Senckenberg Institute of Pathology, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
- Ina Koch
- Molecular Bioinformatics, Institute of Computer Science, Johann Wolfgang Goethe-University, Frankfurt am Main, Germany
|