1. Ferrero A, Ghelichkhan E, Manoochehri H, Ho MM, Albertson DJ, Brintz BJ, Tasdizen T, Whitaker RT, Knudsen BS. HistoEM: A Pathologist-Guided and Explainable Workflow Using Histogram Embedding for Gland Classification. Mod Pathol 2024;37:100447. [PMID: 38369187] [DOI: 10.1016/j.modpat.2024.100447]
Abstract
Pathologists have, over several decades, developed criteria for diagnosing and grading prostate cancer. However, this knowledge has not, so far, been incorporated into the design of convolutional neural networks (CNNs) for prostate cancer detection and grading. Further, it is not known whether the features learned by machine-learning algorithms coincide with the diagnostic features used by pathologists. We propose a framework that compels algorithms to learn the cellular and subcellular differences between benign and cancerous prostate glands in digital slides of hematoxylin and eosin-stained tissue sections. After accurate gland segmentation and exclusion of the stroma, the central component of the pipeline, named HistoEM, utilizes a histogram embedding of features from the latent space of the CNN encoder. Each gland is represented by 128 feature-wise histograms that provide the input to a second network for benign-versus-cancer classification of the whole gland. Cancer glands are further processed by a U-Net-structured network to separate low-grade from high-grade cancer. Our model demonstrates performance comparable to other state-of-the-art prostate cancer grading models with gland-level resolution. To understand the features learned by HistoEM, we first rank features by the distance between their benign and cancer histograms and visualize the tissue origins of the two most important features. A heatmap of pixel activation by each feature is generated using Grad-CAM and overlaid on nuclear segmentation outlines. We conclude that HistoEM, like pathologists, uses nuclear features for the detection of prostate cancer. Altogether, this novel approach can be broadly deployed to visualize computer-learned features in histopathology images.
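As a rough illustration of the histogram-embedding idea (not the authors' code; the 128-channel count comes from the abstract, while the bin count, value range, and function name are assumptions), each gland can be summarized by per-feature histograms of its pixel-level encoder activations:

```python
import numpy as np

def histogram_embedding(features, n_bins=10, value_range=(0.0, 1.0)):
    """Summarize a gland's pixel-level CNN features as per-feature histograms.

    features: (n_pixels, n_features) array of encoder activations for one gland.
    Returns a (n_features, n_bins) array of normalized histograms.
    """
    n_pixels, n_features = features.shape
    hists = np.empty((n_features, n_bins))
    for f in range(n_features):
        counts, _ = np.histogram(features[:, f], bins=n_bins, range=value_range)
        hists[f] = counts / n_pixels  # normalize so glands of any size are comparable
    return hists

# Toy gland: 500 "pixels", 128 feature channels as in the paper's encoder.
rng = np.random.default_rng(0)
gland = rng.random((500, 128))
embedding = histogram_embedding(gland)
```

The resulting fixed-size descriptor can then feed a small classifier, and per-feature histogram distances between the benign and cancer classes give the feature ranking described above.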
Affiliation(s)
- Alessandro Ferrero
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Elham Ghelichkhan
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Hamid Manoochehri
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Man Minh Ho
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Tolga Tasdizen
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
2. Meng X, Zou T. Clinical applications of graph neural networks in computational histopathology: A review. Comput Biol Med 2023;164:107201. [PMID: 37517325] [DOI: 10.1016/j.compbiomed.2023.107201]
Abstract
Pathological examination is the optimal approach for diagnosing cancer, and advances in digital imaging technologies have spurred the emergence of computational histopathology, which aims to assist in clinical tasks through image processing and analysis. Early work analyzed histopathology images by extracting handcrafted mathematical features, but the performance of these models was unsatisfactory. With the development of artificial intelligence (AI), traditional machine learning methods were applied in this field; although performance improved, problems remained, such as poor model generalization and tedious manual feature extraction. The subsequent introduction of deep learning techniques effectively addressed these problems, yet models based on traditional convolutional architectures still could not adequately capture the contextual information and deep biological features in histopathology images. Graphs, by virtue of their structure, are highly suitable for feature extraction from tissue histopathology images and have achieved promising performance in numerous studies. In this article, we review existing graph-based methods in computational histopathology and propose a novel, more comprehensive graph construction approach. Additionally, we categorize the methods and techniques in computational histopathology according to different learning paradigms and summarize the common clinical applications of graph-based methods. Finally, we discuss the core concepts in this field and highlight current challenges and future research directions.
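A minimal sketch of the most common graph construction in this literature, a k-nearest-neighbour cell graph over nucleus centroids (the function name and parameters are illustrative, not taken from the review):

```python
import numpy as np

def knn_cell_graph(centroids, k=4):
    """Build a k-nearest-neighbour cell graph from nucleus centroids.

    centroids: (n, 2) array of (x, y) positions.
    Returns a symmetric (n, n) 0/1 adjacency matrix with no self-loops.
    """
    n = len(centroids)
    diff = centroids[:, None, :] - centroids[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)        # exclude self-edges
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        nearest = np.argsort(dist[i])[:k]
        adj[i, nearest] = 1
    return np.maximum(adj, adj.T)         # symmetrize: GNNs expect undirected edges

# Two spatial clusters of three nuclei each.
pts = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]], dtype=float)
A = knn_cell_graph(pts, k=2)
```

Node features (nucleus morphology or patch embeddings) attached to this adjacency matrix form the input a graph neural network operates on.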
Affiliation(s)
- Xiangyan Meng
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China.
- Tonghui Zou
- Xi'an Technological University, Xi'an, Shaanxi, 710021, China.
3. Bokhorst JM, Nagtegaal ID, Fraggetta F, Vatrano S, Mesker W, Vieth M, van der Laak J, Ciompi F. Deep learning for multi-class semantic segmentation enables colorectal cancer detection and classification in digital pathology images. Sci Rep 2023;13:8398. [PMID: 37225743] [PMCID: PMC10209185] [DOI: 10.1038/s41598-023-35491-z]
Abstract
In colorectal cancer (CRC), artificial intelligence (AI) can alleviate the laborious task of characterizing and reporting on resected biopsies, including polyps, whose numbers are increasing as a result of the CRC population screening programs ongoing in many countries. Here, we present an approach to address two major challenges in the automated assessment of CRC histopathology whole-slide images. We present an AI-based method to segment multiple tissue compartments in the H&E-stained whole-slide image, which provides a different, more perceptible picture of tissue morphology and composition. We test and compare a panel of state-of-the-art loss functions available for segmentation models, and provide indications about their use in histopathology image segmentation, based on the analysis of (a) a multi-centric cohort of CRC cases from five medical centers in the Netherlands and Germany, and (b) two publicly available datasets on segmentation in CRC. We used the best-performing AI model as the basis for a computer-aided diagnosis system that classifies colon biopsies into four main pathologically relevant categories, and we report its performance on an independent cohort of more than 1000 patients. The results show that, with a good segmentation network as a base, a tool can be developed to support pathologists in the risk stratification of colorectal cancer patients, among other possible uses. The segmentation model is available for research use at https://grand-challenge.org/algorithms/colon-tissue-segmentation/.
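One representative member of such a segmentation-loss panel is the soft Dice loss; the sketch below is a generic single-class formulation, not the paper's implementation:

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for one class channel.

    probs:  (H, W) predicted foreground probabilities in [0, 1].
    target: (H, W) binary ground-truth mask.
    Returns 1 - Dice, so 0 means perfect overlap.
    """
    intersection = (probs * target).sum()
    denom = probs.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

# Prediction covers the true pixel plus one false positive: Dice = 2/3.
pred = np.array([[1.0, 1.0], [0.0, 0.0]])
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
loss = soft_dice_loss(pred, mask)
```

In multi-class settings the loss is typically averaged over class channels, which is one of the design choices such comparisons evaluate.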
Affiliation(s)
- John-Melle Bokhorst
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Iris D Nagtegaal
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Filippo Fraggetta
- Pathology Unit, Gravina Hospital, Caltagirone, Italy
- Simona Vatrano
- Pathology Unit, Gravina Hospital, Caltagirone, Italy
- Wilma Mesker
- Leids Universitair Medisch Centrum, Leiden, The Netherlands
- Michael Vieth
- Klinikum Bayreuth, Friedrich-Alexander-University Erlangen-Nuremberg, Bayreuth, Germany
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
4. Rubinstein JC, Foroughi Pour A, Zhou J, Sheridan TB, White BS, Chuang JH. Deep learning image analysis quantifies tumor heterogeneity and identifies microsatellite instability in colon cancer. J Surg Oncol 2023;127:426-433. [PMID: 36251352] [DOI: 10.1002/jso.27118]
Abstract
BACKGROUND AND OBJECTIVES Deep learning utilizing convolutional neural networks (CNNs) applied to hematoxylin & eosin (H&E)-stained slides numerically encodes histomorphological tumor features. Tumor heterogeneity is an emerging biomarker in colon cancer that is captured by these features, whereas microsatellite instability (MSI) is an established biomarker traditionally assessed by immunohistochemistry or polymerase chain reaction. METHODS H&E-stained slides from The Cancer Genome Atlas (TCGA) colon cohort are passed through the CNN. The resulting imaging features are used to cluster morphologically similar slide regions. Tile-level pairwise similarities are calculated and used to generate a tumor heterogeneity score (THS). Patient-level THS is then correlated with TCGA-reported biomarkers, including MSI status. RESULTS H&E-stained images from 313 patients generated 534,771 tiles. Deep learning automatically identified and annotated cells by type and clustered morphologically similar slide regions. MSI-high tumors demonstrated significantly higher THS than MSS/MSI-low tumors (p < 0.001). THS was higher in MLH1-silent versus non-silent tumors (p < 0.001). The sequencing-derived MSIsensor score also correlated with THS (r = 0.51, p < 0.0001). CONCLUSIONS Deep learning provides spatially resolved visualization of imaging-derived biomarkers and automated quantification of tumor heterogeneity. Our novel THS correlates with MSI status, indicating that with expanded training sets, translational tools could be developed that predict MSI status from H&E-stained images alone.
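The abstract does not reproduce the exact THS formula; the toy score below only illustrates the general idea of averaging pairwise tile dissimilarities, with cosine distance as an assumed similarity measure:

```python
import numpy as np

def heterogeneity_score(tile_features):
    """Toy tumor-heterogeneity score: mean pairwise cosine distance between
    tile-level CNN feature vectors (the paper's actual THS definition may differ).

    tile_features: (n_tiles, n_features) array.
    """
    X = tile_features / np.linalg.norm(tile_features, axis=1, keepdims=True)
    sim = X @ X.T                        # pairwise cosine similarities
    n = len(X)
    iu = np.triu_indices(n, k=1)         # count each unordered tile pair once
    return float(np.mean(1.0 - sim[iu]))

homogeneous = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])  # identical tiles
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])        # two morphologies
```

A slide whose tiles all look alike scores near zero, while a slide mixing distinct morphologies scores higher, mirroring the MSI-high versus MSS/MSI-low contrast reported above.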
Affiliation(s)
- Jill C Rubinstein
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
- University of Connecticut School of Medicine, Farmington, Connecticut, USA
- Hartford Healthcare, Hartford, Connecticut, USA
- Ali Foroughi Pour
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
- Jie Zhou
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
- Todd B Sheridan
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
- Hartford Healthcare, Hartford, Connecticut, USA
- Brian S White
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
- Jeffrey H Chuang
- The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut, USA
5. Ben Hamida A, Devanne M, Weber J, Truntzer C, Derangère V, Ghiringhelli F, Forestier G, Wemmert C. Weakly supervised learning using attention gates for colon cancer histopathological image segmentation. Artif Intell Med 2022;133:102407. [PMID: 36328667] [DOI: 10.1016/j.artmed.2022.102407]
Abstract
Recently, artificial intelligence, namely deep learning, has revolutionized a wide range of domains and applications, and digital pathology has come to play a major role in the diagnosis and prognosis of tumors. However, the characteristics of whole slide images (WSIs), namely their gigapixel size, high resolution, and the shortage of richly labeled samples, have hindered the efficiency of classical machine learning methods, which also generalize poorly to different tasks and data contents. Given the success of deep learning in large-scale applications, we resort to such models for histopathological image segmentation. First, we review and compare the classical UNet and Att-UNet models for colon cancer WSI segmentation in a sparsely annotated data scenario. Then, we introduce novel enhanced variants of the Att-UNet in which different schemes are proposed for the skip connections and the positions of the spatial attention gates in the network. Spatial attention gates assist the training process and enable the model to avoid learning irrelevant features. Alternating the presence of such modules, as in our Alter-AttUNet model, adds robustness and yields better segmentation results. To cope with the lack of richly annotated data in our AiCOLO colon cancer dataset, we suggest a multi-step training strategy that also addresses sparse WSI annotations and class imbalance. All proposed methods outperform state-of-the-art approaches, but Alter-AttUNet offers the best compromise between accuracy and a lightweight network, achieving 95.88% accuracy on our sparsely annotated AiCOLO colon cancer dataset. Finally, to evaluate and validate our proposed architectures, we resort to publicly available WSI data: the NCT-CRC-HE-100K, CRC-5000, and Warwick colon cancer histopathological datasets, reaching accuracies of 99.65%, 99.73%, and 79.03%, respectively. A comparison with state-of-the-art approaches surveys the key solutions for histopathological image segmentation.
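The additive spatial attention gate at the heart of Att-UNet-style models can be sketched as follows (a NumPy illustration with assumed shapes and random weights, not the authors' network code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, W_x, W_g, psi):
    """Additive spatial attention gate, as used in Att-UNet variants.

    skip: (H, W, C) skip-connection features; gate: (H, W, C) gating signal
    (assumed already upsampled to the same resolution).
    W_x, W_g: (C, C_int) projection matrices; psi: (C_int,) scoring vector.
    Returns the skip features reweighted by a per-pixel attention map.
    """
    q = np.maximum(skip @ W_x + gate @ W_g, 0.0)  # ReLU of summed projections
    alpha = sigmoid(q @ psi)                      # (H, W) attention coefficients
    return skip * alpha[..., None], alpha

rng = np.random.default_rng(1)
skip = rng.normal(size=(4, 4, 8))
gate = rng.normal(size=(4, 4, 8))
W_x = rng.normal(size=(8, 4))
W_g = rng.normal(size=(8, 4))
psi = rng.normal(size=(4,))
gated, alpha = attention_gate(skip, gate, W_x, W_g, psi)
```

In a trained network the coefficients suppress skip-connection activations at pixels irrelevant to the target, which is the "avoid irrelevant feature learning" effect described above.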
Affiliation(s)
- M Devanne
- IRIMAS, University of Haute-Alsace, France
- J Weber
- IRIMAS, University of Haute-Alsace, France
- C Truntzer
- Platform of Transform in Biological Oncology, Dijon, France
- V Derangère
- Platform of Transform in Biological Oncology, Dijon, France
- F Ghiringhelli
- Platform of Transform in Biological Oncology, Dijon, France
- C Wemmert
- ICube, University of Strasbourg, France
6. Mayer RS, Gretser S, Heckmann LE, Ziegler PK, Walter B, Reis H, Bankov K, Becker S, Triesch J, Wild PJ, Flinner N. How to learn with intentional mistakes: NoisyEnsembles to overcome poor tissue quality for deep learning in computational pathology. Front Med (Lausanne) 2022;9:959068. [PMID: 36106328] [PMCID: PMC9464871] [DOI: 10.3389/fmed.2022.959068]
Abstract
There is considerable recent interest in computational pathology, with many algorithms introduced to detect, for example, cancer lesions or molecular features. However, a large gap remains between artificial intelligence (AI) technology and practice, since only a small fraction of these applications is used in routine diagnostics. The main problems are the transferability of convolutional neural network (CNN) models to data from other sources and the identification of uncertain predictions; the role of tissue quality itself is also largely unknown. Here, we demonstrated that samples of the TCGA ovarian cancer (TCGA-OV) dataset from different tissue sources have different quality characteristics and that CNN performance is linked to this property. CNNs performed best on high-quality data. Quality control tools were partially able to identify low-quality tiles, but their use did not increase the performance of the trained CNNs. Furthermore, we trained NoisyEnsembles by introducing label noise during training. These NoisyEnsembles improved CNN performance on low-quality, unknown datasets. Moreover, performance increases as the ensemble becomes more consistent, suggesting that incorrect predictions could be discarded efficiently to avoid wrong diagnostic decisions.
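The "intentional mistakes" step, injecting label noise before training each ensemble member, can be sketched as follows (an illustrative helper, not the authors' code; the noise rate and class count are arbitrary):

```python
import numpy as np

def inject_label_noise(labels, noise_rate, n_classes, rng):
    """Flip a fraction of training labels to random *other* classes.

    Each NoisyEnsemble member would be trained on its own noisy copy,
    so members disagree more on samples the data cannot support.
    """
    labels = labels.copy()
    n_flip = int(round(noise_rate * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    for i in idx:
        others = [c for c in range(n_classes) if c != labels[i]]
        labels[i] = rng.choice(others)
    return labels

rng = np.random.default_rng(0)
y = np.zeros(100, dtype=int)                    # 100 samples, all class 0
y_noisy = inject_label_noise(y, noise_rate=0.2, n_classes=2, rng=rng)
```

At prediction time, agreement across members then serves as the consistency signal used to discard uncertain cases.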
Affiliation(s)
- Robin S. Mayer
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Steffen Gretser
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Lara E. Heckmann
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Paul K. Ziegler
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Britta Walter
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Henning Reis
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Katrin Bankov
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Sven Becker
- Department of Gynecology and Obstetrics, University Hospital Frankfurt, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany
- Peter J. Wild
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany
- Wildlab, University Hospital Frankfurt MVZ GmbH, Frankfurt am Main, Germany
- Frankfurt Cancer Institute (FCI), Frankfurt am Main, Germany
- University Cancer Center (UCT) Frankfurt-Marburg, Frankfurt am Main, Germany
- Nadine Flinner
- Dr. Senckenberg Institute of Pathology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Germany
- Frankfurt Cancer Institute (FCI), Frankfurt am Main, Germany
- University Cancer Center (UCT) Frankfurt-Marburg, Frankfurt am Main, Germany
7. Meirelles ALS, Kurc T, Kong J, Ferreira R, Saltz JH, Teodoro G. Building Efficient CNN Architectures for Histopathology Images Analysis: A Case-Study in Tumor-Infiltrating Lymphocytes Classification. Front Med (Lausanne) 2022;9:894430. [PMID: 35712087] [PMCID: PMC9197439] [DOI: 10.3389/fmed.2022.894430]
Abstract
Background Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they are computationally very demanding. The aim of our study is to reduce their computational cost to enable their use with large tissue image datasets. Methods We propose a method called Network Auto-Reduction (NAR) that simplifies a Convolutional Neural Network (CNN) to minimize the computational cost of inference. NAR performs a compound scaling in which the width, depth, and resolution dimensions of the network are reduced together to maintain a balance among them in the resulting simplified network. We compare our method with a state-of-the-art solution called ResRep. The evaluation is carried out with popular CNN architectures and a real-world application that identifies distributions of tumor-infiltrating lymphocytes in tissue images. Results The experimental results show that both ResRep and NAR are able to generate simplified, more efficient versions of ResNet50 V2. The simplified versions by ResRep and NAR require 1.32× and 3.26× fewer floating-point operations (FLOPs), respectively, than the original network without a loss in classification power as measured by the Area Under the Curve (AUC) metric. When applied to a deeper and more computationally expensive network, Inception V4, NAR is able to generate a version that requires 4× fewer FLOPs than the original with the same AUC performance. Conclusions NAR achieves substantial reductions in the execution cost of two popular CNN architectures with small or no loss in model accuracy. Such cost savings can significantly improve the use of deep learning methods in digital pathology: they enable studies with larger tissue image datasets and facilitate the use of less expensive, more accessible graphics processing units (GPUs), thus reducing the computing costs of a study.
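The compound-scaling idea run in reverse can be sketched as follows; the scaling constants and the rough FLOPs model are illustrative assumptions (in the EfficientNet style), not NAR's actual values:

```python
def compound_reduce(depth, width, resolution, alpha, beta, gamma, phi):
    """Shrink depth, width, and input resolution together by a negative
    exponent phi, keeping the three dimensions balanced."""
    return (max(1, round(depth * alpha ** phi)),
            max(1, round(width * beta ** phi)),
            max(1, round(resolution * gamma ** phi)))

def relative_flops(depth, width, resolution):
    """FLOPs of a conv net grow roughly as depth * width^2 * resolution^2."""
    return depth * width ** 2 * resolution ** 2

# Shrink a ResNet-like configuration (50 blocks, 64 base channels, 224 px input).
d, w, r = compound_reduce(50, 64, 224, alpha=1.2, beta=1.1, gamma=1.15, phi=-2)
saving = relative_flops(50, 64, 224) / relative_flops(d, w, r)
```

Because FLOPs scale quadratically in width and resolution, a modest joint reduction yields a several-fold cost saving, the effect the abstract quantifies as 3.26× and 4×.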
Affiliation(s)
- Tahsin Kurc
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY, United States
- Jun Kong
- Department of Mathematics and Statistics and Computer Science, Georgia State University, Atlanta, GA, United States
- Renato Ferreira
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Joel H. Saltz
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY, United States
- George Teodoro
- Department of Computer Science, Universidade de Brasília, Brasília, Brazil
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
8. Prabhu S, Prasad K, Robles-Kelly A, Lu X. AI-based carcinoma detection and classification using histopathological images: A systematic review. Comput Biol Med 2022;142:105209. [DOI: 10.1016/j.compbiomed.2022.105209]
9. Wan C, Shao Y, Wang C, Jing J, Yang W. A Novel System for Measuring Pterygium's Progress Using Deep Learning. Front Med (Lausanne) 2022;9:819971. [PMID: 35237630] [PMCID: PMC8882585] [DOI: 10.3389/fmed.2022.819971]
Abstract
Pterygium is a common ocular surface disease. When a pterygium significantly invades the cornea, it limits eye movement and impairs vision, requiring surgical removal. It is medically recognized that when the width of the pterygium invading the cornea is >3 mm, the patient can be treated with surgical resection. Accordingly, this study proposes a system for diagnosing and measuring the pathological progress of pterygium using deep learning methods, aiming to assist doctors in designing pterygium surgical treatment strategies. The proposed system needs only the patient's anterior segment images as input to automatically and efficiently measure the width of the pterygium invading the cornea and report the patient's pterygium symptom status. The system consists of three modules: a cornea segmentation module, a pterygium segmentation module, and a measurement module. Both segmentation modules use convolutional neural networks. In the pterygium segmentation module, to accommodate the diversity of pterygium shapes and sizes, an improved U-Net++ model is proposed that adds an attention gate before each up-sampling layer. The attention gates extract information related to the target, so that the model pays more attention to the shape and size of the pterygium. The measurement module measures the width and area of the pterygium invading the cornea and classifies the pterygium symptom status. The effectiveness of the proposed system is verified using datasets collected from the ocular surface diseases center at the Affiliated Eye Hospital of Nanjing Medical University. The Dice coefficients of the cornea and pterygium segmentation modules are 0.9620 and 0.9020, respectively, and the Kappa consistency coefficient between the system's final measurements and the doctor's visual inspection is 0.918, indicating that the system has practical clinical value.
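How the measurement module might derive the invasion width from the two segmentation masks can be sketched as below. This is a hypothetical simplification (the 3 mm rule is the clinical threshold quoted above; the mask layout, pixel scale, and horizontal-extent measurement are all assumptions):

```python
import numpy as np

def invasion_width_mm(cornea_mask, pterygium_mask, mm_per_pixel):
    """Isolate pterygium tissue lying inside the cornea and take its
    greatest horizontal extent as the invasion width. Widths > 3 mm
    flag a surgical candidate, per the clinical rule in the abstract."""
    overlap = cornea_mask & pterygium_mask
    if not overlap.any():
        return 0.0, False
    cols = np.where(overlap.any(axis=0))[0]   # columns touched by the invasion
    width = float((cols.max() - cols.min() + 1) * mm_per_pixel)
    return width, bool(width > 3.0)

# Toy 10x10 image: cornea occupies the right side, pterygium grows in from the left.
cornea = np.zeros((10, 10), dtype=bool)
cornea[:, 4:] = True
pterygium = np.zeros((10, 10), dtype=bool)
pterygium[4:6, 0:8] = True
width, surgical = invasion_width_mm(cornea, pterygium, mm_per_pixel=1.0)
```

With real images the mm-per-pixel scale would come from calibration of the anterior segment camera.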
Affiliation(s)
- Cheng Wan
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Yiwei Shao
- College of Electronic Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Chenghu Wang
- Department of Ophthalmology, Nanjing Lishui Hospital of Traditional Chinese Medicine, Nanjing, China
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Jiaona Jing
- Department of Ophthalmology, Children's Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Laboratory of Artificial Intelligence and Bigdata in Ophthalmology, The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
10. Pati P, Jaume G, Foncubierta-Rodríguez A, Feroce F, Anniciello AM, Scognamiglio G, Brancati N, Fiche M, Dubruc E, Riccio D, Di Bonito M, De Pietro G, Botti G, Thiran JP, Frucci M, Goksel O, Gabrani M. Hierarchical graph representations in digital pathology. Med Image Anal 2021;75:102264. [PMID: 34781160] [DOI: 10.1016/j.media.2021.102264]
Abstract
Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens highly depend on the phenotype and topological distribution of the constituting histological entities. Thus, adequate tissue representations that encode histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs, capturing the cell microenvironment, to depict the tissue. These allow graph theory and machine learning to map the tissue representation to tissue functionality and quantify their relationship. Though cellular information is crucial, it alone is insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens that models these hierarchical compositions, encoding histological entities as well as their intra- and inter-level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity-graph and map the tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message-passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of haematoxylin & eosin-stained breast tumor regions of interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches. Through comparative assessment and ablation studies, our proposed method yields superior classification results compared with alternative methods as well as individual pathologists. The code, data, and models can be accessed at https://github.com/histocartography/hact-net.
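One simplified ingredient of such a hierarchy, pooling cell-node features into their enclosing tissue regions so the coarse level can reason over aggregated fine-level information, can be sketched as follows (illustrative, not the HACT-Net code):

```python
import numpy as np

def cell_to_tissue_pooling(cell_features, assignment):
    """Mean-pool cell-node features into tissue-region nodes.

    cell_features: (n_cells, d) array of fine-level node features.
    assignment:    (n_cells,) tissue-region index for each cell.
    Returns (n_tissues, d) coarse-level node features.
    """
    n_tissues = int(assignment.max()) + 1
    d = cell_features.shape[1]
    pooled = np.zeros((n_tissues, d))
    counts = np.bincount(assignment, minlength=n_tissues)
    np.add.at(pooled, assignment, cell_features)  # scatter-add cells into regions
    return pooled / counts[:, None]

# Three cells: the first two belong to tissue region 0, the third to region 1.
cells = np.array([[1.0, 0.0], [3.0, 0.0], [0.0, 2.0]])
regions = np.array([0, 0, 1])
tissue_nodes = cell_to_tissue_pooling(cells, regions)
```

In the full model, message passing runs within each level and across the cell-to-tissue assignment, rather than in a single pooling step.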
Affiliation(s)
- Pushpak Pati
- IBM Zurich Research Lab, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland.
- Guillaume Jaume
- IBM Zurich Research Lab, Zurich, Switzerland; Signal Processing Laboratory 5, EPFL, Lausanne, Switzerland
- Florinda Feroce
- National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy
- Nadia Brancati
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Maryse Fiche
- Aurigen - Centre de Pathologie, Lausanne, Switzerland
- Daniel Riccio
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Giuseppe De Pietro
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Gerardo Botti
- National Cancer Institute - IRCCS-Fondazione Pascale, Naples, Italy
- Maria Frucci
- Institute for High Performance Computing and Networking - CNR, Naples, Italy
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
11. Shan D, Zheng J, Klimowicz A, Panzenbeck M, Liu Z, Feng D. Deep learning for discovering pathological continuum of crypts and evaluating therapeutic effects: An implication for in vivo preclinical study. PLoS One 2021;16:e0252429. [PMID: 34125849] [PMCID: PMC8202954] [DOI: 10.1371/journal.pone.0252429]
Abstract
Applying deep learning to the field of preclinical in vivo studies is a new and exciting prospect with the potential to unlock decades' worth of underutilized data. As a proof of concept, we performed a feasibility study on a colitis model treated with sulfasalazine, a drug used in the therapeutic care of inflammatory bowel disease. We aimed to evaluate the colonic mucosa improvement associated with the recovery response of the crypts, a complex histologic structure reflecting tissue homeostasis and repair in response to inflammation. Our approach requires robust image segmentation of objects of interest from whole slide images, a composite low-dimensional representation of the typical or novel morphological variants of the segmented objects, and exploration of image features of significance for biology and treatment efficacy. Both interpretable features (e.g., counts, area, distance, and angle) and statistical texture features calculated using Gray-Level Co-Occurrence Matrices (GLCMs) are shown to be significant in the analysis. Ultimately, this analytic framework of supervised image segmentation, unsupervised learning, and feature analysis can be applied generally to preclinical data. We hope our report will inspire more efforts to utilize deep learning in preclinical in vivo studies and ultimately make the field more innovative and efficient.
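The GLCM texture statistics mentioned above can be computed as follows (a minimal NumPy sketch for a single pixel offset; production code would typically use a library routine such as scikit-image's graycomatrix):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy).

    image: 2-D array of integer gray levels in [0, levels).
    Returns the (levels, levels) joint probability of gray-level pairs.
    """
    h, w = image.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_contrast(P):
    """Contrast statistic: expected squared gray-level difference of pairs."""
    levels = P.shape[0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    return float(((i - j) ** 2 * P).sum())

flat = np.zeros((4, 4), dtype=int)        # uniform texture: zero contrast
stripes = np.tile([0, 3], (4, 2))         # alternating columns: high contrast
```

A uniform region yields zero contrast, while alternating extreme gray levels maximize it, which is what makes such statistics useful descriptors of crypt texture.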
Affiliation(s)
- Dechao Shan
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Jie Zheng
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Alexander Klimowicz
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Mark Panzenbeck
- Immunology and Respiratory Disease Research, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Zheng Liu
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
- Di Feng
- Global Computational Biology and Digital Sciences, Boehringer Ingelheim Pharmaceuticals, Ridgefield, Connecticut, United States of America
|
12
|
Kobayashi S, Saltz JH, Yang VW. State of machine and deep learning in histopathological applications in digestive diseases. World J Gastroenterol 2021; 27:2545-2575. [PMID: 34092975] [PMCID: PMC8160628] [DOI: 10.3748/wjg.v27.i20.2545]
Abstract
Machine learning (ML)- and deep learning (DL)-based imaging modalities have exhibited the capacity to handle extremely high dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, this capacity can be especially leveraged by application to histopathological images, which capture cellular and structural features with their high-resolution, microscopic perspectives. Already, these methodologies have demonstrated promising performance in a variety of applications such as disease classification, cancer grading, structural and cellular localization, and prognostic prediction. A wide range of pathologies requiring histopathological evaluation exist in gastroenterology and hepatology, making these disciplines prime targets for the integration of these technologies. Gastroenterologists have also already been primed to consider the impact of these algorithms, as development of real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for future integration of these methods and for driving interdisciplinary collaborations on emerging studies. To provide an overview of the application of these methodologies to gastrointestinal and hepatological histopathological slides, this review will discuss general ML and DL concepts, introduce recent and emerging literature using these methods, and cover challenges to be addressed to further advance the field.
Affiliation(s)
- Soma Kobayashi
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Joel H Saltz
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Vincent W Yang
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Department of Physiology and Biophysics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States

13
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856] [PMCID: PMC8217246] [DOI: 10.1016/j.ejmp.2021.05.003]
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions, and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings, and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA

14
Shi G, Xiao L, Chen Y, Zhou SK. Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Med Image Anal 2021; 70:101979. [PMID: 33636451] [DOI: 10.1016/j.media.2021.101979]
Abstract
Annotating multiple organs in medical images is both costly and time-consuming; therefore, existing multi-organ datasets with labels are often low in sample size and mostly partially labeled, that is, a dataset has a few organs labeled but not all. In this paper, we investigate how to learn a single multi-organ segmentation network from a union of such datasets. To this end, we propose two novel loss functions designed specifically for this scenario: (i) the marginal loss and (ii) the exclusion loss. Because the background label of a partially labeled image is, in fact, a 'merged' label of all unlabeled organs and the 'true' background (in the sense of full labels), the probability of this 'merged' background label is a marginal probability, obtained by summing the relevant probabilities before merging. This marginal probability can be plugged into any existing loss function (such as cross-entropy loss, Dice loss, etc.) to form a marginal loss. Leveraging the fact that the organs are non-overlapping, we propose the exclusion loss to gauge the dissimilarity between labeled organs and the estimated segmentation of unlabeled organs. Experiments on a union of five benchmark datasets for multi-organ segmentation of the liver, spleen, left and right kidneys, and pancreas demonstrate that our newly proposed loss functions bring a conspicuous performance improvement to state-of-the-art methods without introducing any extra computation.
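To make the marginal-probability construction described in this abstract concrete, here is a minimal NumPy sketch for a single pixel (an illustration only, not the paper's implementation; the function name `marginal_nll` and the toy class layout are assumptions):

```python
import numpy as np

def marginal_nll(probs, label, labeled_organs):
    """Marginal cross-entropy for one pixel of a partially labeled image.

    probs: softmax output over all classes [background, organ1, ..., organK].
    label: the annotated class; 0 marks the 'merged' background, i.e. true
           background plus every organ that was not annotated in this dataset.
    labeled_organs: indices of the organs annotated in this dataset.
    """
    if label in labeled_organs:
        p = probs[label]  # annotated organ: ordinary cross-entropy term
    else:
        # Merged background: sum the background probability and the
        # probabilities of all unlabeled organs before taking the log.
        unlabeled = [c for c in range(len(probs))
                     if c != 0 and c not in labeled_organs]
        p = probs[0] + sum(probs[c] for c in unlabeled)
    return -np.log(p)

# 4 classes: background, liver, spleen, pancreas; only liver is labeled.
probs = np.array([0.5, 0.1, 0.3, 0.1])
# A 'background' pixel credits the merged mass 0.5 + 0.3 + 0.1 = 0.9,
# so the network is not penalized for predicting an unlabeled organ there.
print(marginal_nll(probs, 0, labeled_organs={1}))  # -log(0.9), about 0.105
```

The same summed probability can equally be fed into a Dice loss; the key point is that the merge happens on probabilities before the loss is applied.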
Affiliation(s)
- Gonglei Shi
- Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Computer Science and Engineering, Southeast University, Nanjing, 210000, China
- Li Xiao
- Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- Yang Chen
- School of Computer Science and Engineering, Southeast University, Nanjing, 210000, China
- S Kevin Zhou
- Medical Imaging, Robotics, Analytic Computing Laboratory & Engineering (MIRACLE), Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China; School of Biomedical Engineering & Suzhou Institute For Advanced Research, University of Science and Technology, Suzhou, 215123, China

15
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129]
16
Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003]
17
Duong MT, Rauschecker AM, Mohan S. Diverse Applications of Artificial Intelligence in Neuroradiology. Neuroimaging Clin N Am 2020; 30:505-516. [PMID: 33039000] [DOI: 10.1016/j.nic.2020.07.003]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) hold promise to augment neuroimaging diagnosis for patients with brain tumors and stroke. Here, the authors review the diverse landscape of emerging neuroimaging applications of AI, including workflow optimization, lesion segmentation, and precision education. Given the many modalities used in diagnosing neurologic diseases, AI may be deployed to integrate across modalities (MR imaging, computed tomography, PET, electroencephalography, clinical and laboratory findings), facilitate crosstalk among specialists, and potentially improve diagnosis in patients with trauma, multiple sclerosis, epilepsy, and neurodegeneration. Together, there are myriad applications of AI for neuroradiology.
Affiliation(s)
- Michael Tran Duong
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA. https://twitter.com/MichaelDuongMD
- Andreas M Rauschecker
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, 513 Parnassus Avenue, Room S-261, San Francisco, CA 94143, USA. https://twitter.com/DrDreMDPhD
- Suyash Mohan
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA

18
Swiderska-Chadaj Z, de Bel T, Blanchet L, Baidoshvili A, Vossen D, van der Laak J, Litjens G. Impact of rescanning and normalization on convolutional neural network performance in multi-center, whole-slide classification of prostate cancer. Sci Rep 2020; 10:14398. [PMID: 32873856] [PMCID: PMC7462850] [DOI: 10.1038/s41598-020-71420-0]
Abstract
Algorithms can improve the objectivity and efficiency of histopathologic slide analysis. In this paper, we investigated the impact of scanning systems (scanners) and cycle-GAN-based normalization on algorithm performance by comparing different deep learning models, specifically U-Net, DenseNet, and EfficientNet, for automatically detecting prostate cancer in whole-slide images (WSIs). Models were developed on a multi-center cohort of 582 WSIs and subsequently evaluated on two independent test sets of 85 and 50 WSIs, respectively, to show the robustness of the proposed method to differing staining protocols and scanner types. We also investigated normalization as a pre-processing step using two techniques: the whole-slide image color standardizer (WSICS) algorithm and a cycle-GAN-based method. On the two independent test sets, we obtained AUCs of 0.92 and 0.83, respectively. After rescanning, the AUCs improved to 0.91/0.88, and after style normalization to 0.98/0.97. In the future, our algorithm could be used to automatically pre-screen prostate biopsies to alleviate the workload of pathologists.
Affiliation(s)
- Thomas de Bel
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Lionel Blanchet
- Digital and Computational Pathology, Philips, Best, The Netherlands
- Alexi Baidoshvili
- Laboratorium Pathologie Oost-Nederland, LabPON, Hengelo, The Netherlands
- Dirk Vossen
- Digital and Computational Pathology, Philips, Best, The Netherlands
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands

19
Thakur N, Yoon H, Chong Y. Current Trends of Artificial Intelligence for Colorectal Cancer Pathology Image Analysis: A Systematic Review. Cancers (Basel) 2020; 12:E1884. [PMID: 32668721] [PMCID: PMC7408874] [DOI: 10.3390/cancers12071884]
Abstract
Colorectal cancer (CRC) is one of the most common cancers requiring early pathologic diagnosis using colonoscopy biopsy samples. Recently, artificial intelligence (AI) has made significant progress and shown promising results in the field of medicine despite several limitations. We performed a systematic review of AI use in CRC pathology image analysis to visualize the state-of-the-art. Studies published between January 2000 and January 2020 were searched in major online databases including MEDLINE (PubMed, Cochrane Library, and EMBASE). Query terms included "colorectal neoplasm," "histology," and "artificial intelligence." Of 9000 identified studies, only 30 studies consisting of 40 models were selected for review. The algorithm features of the models were gland segmentation (n = 25, 62%), tumor classification (n = 8, 20%), tumor microenvironment characterization (n = 4, 10%), and prognosis prediction (n = 3, 8%). Only 20 gland segmentation models met the criteria for quantitative analysis, and the model proposed by Ding et al. (2019) performed the best. Studies with other features were in the elementary stage, although most showed impressive results. Overall, the state-of-the-art is promising for CRC pathological analysis. However, datasets in most studies had relatively limited scale and quality for clinical application of this technique. Future studies with larger datasets and high-quality annotations are required for routine practice-level validation.
Affiliation(s)
- Nishant Thakur
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea
- Hongjun Yoon
- AI Lab, Deepnoid, #1305 E&C Venture Dream Tower 2, 55, Digital-ro 33-Gil, Guro-gu, Seoul 06216, Korea
- Yosep Chong
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Korea